How insurers can tackle the cyber insurance risk of deepfakes
A legal expert is urging insurers to keep abreast of the risks and liabilities associated with one of the most rapidly evolving and complex forms of cyber crime — deepfakes.
“They certainly should be paying attention to this. While the use of a deepfake alone may or may not implicate insurance issues, either for an insurance company or for a purchaser of insurance, deepfakes implicate a lot of other issues and, in our view, can give rise to potential liability,” said Andy Moss, a partner in the Insurance Recovery Group at Reed Smith.
He noted that the risk is particularly “great” for businesses that provide professional services, such as law firms, doctors’ offices or advertising firms. However, he emphasized that insurers should be careful not to “overreact” to these concerns. Instead, he suggested they discuss coverage options with their clients, including commercial general liability, fiduciary liability and cyber insurance coverage.
What is a deepfake?
A deepfake is an image, video or audio recording that imitates a real or fictional person using sophisticated artificial intelligence. Deepfakes can appear quite realistic and can be difficult to distinguish from the real thing.
There are legitimate uses for this kind of technology, such as in marketing. However, “bad actors” can create deepfakes of an actual person without that person’s permission, or use the technology to conduct illegal, fraudulent or otherwise misleading activity.
“I think companies have to be very careful with it. As part of their risk management, they should understand and evaluate whether their insurance may potentially respond to particular risks they might face. If not, they should identify those gaps, and our advice would be to seek to fill those gaps,” Moss said.
Risks and liabilities
Moss noted that risks related to the use of technology are not new, but have “only gotten worse” with continued advancements and as more businesses switch to electronic platforms for billing and other operations.
He said businesses using deepfakes risk:
- Having their reputation harmed, both internally and in the public’s view
- Potentially violating consumer protection laws and/or intellectual property rights
- Potentially infringing upon an individual’s privacy rights, leading to claims of defamation or libel
At the same time, these businesses and those that create deepfake software can be held liable if that technology is misused.
“The potential for misuse raises questions about whether companies are adequately protecting against the risk of claims against their senior management or their boards of directors over maintaining and enacting prudent safeguards, both against ways they might fall victim to people using deepfakes and against the use of similar technology within the company,” Moss said.
Coverage gaps
For corporate policyholders, it’s crucial to think about what kinds of risks deepfakes may pose from either authorized or unauthorized use, Moss added. He suggested they assess their coverage to determine whether there are gaps that need to be filled, either by endorsing current coverage or by adding new coverage.
He said there are several existing lines of insurance products that could potentially cover these risks, such as:
- Commercial general liability insurance
- Fiduciary liability insurance
- Cyber insurance
“The one type that comes to mind first, which is probably the most ubiquitous type of insurance that most businesses carry, is commercial general liability insurance. This typically protects you against liability claims for bodily injury, property damage or what’s called personal and advertising injury — and the use of deepfakes implicates the personal and advertising injury portion of this insurance,” Moss explained.
General liability insurance could also cover liability related to invasion of the right of privacy, he said, for instance in a case where a company uses deepfake technology to depict the likeness or voice of an individual, such as a celebrity, without permission.
In cases where use of a deepfake results in a security breach, Moss said fiduciary liability or cyber insurance may also provide adequate coverage. Which applies depends on the circumstances: cyber insurance typically covers a variety of computer crimes, while fiduciary liability coverage could apply if a company is held liable for failing to have adequate protections against deepfakes in place.
“Most businesses are not required to have these types of insurance, and the way that the insurance is structured in a particular program or the terms and conditions can vary from business to business,” Moss emphasized.
‘Don’t overreact’
Moss acknowledged that some insurers may consider excluding coverage for particular cyber risks such as deepfakes. However, he strongly urged against doing so.
“Rather than overreacting and trying to exclude this stuff, they should be focusing on real solutions. It seems to me like the major gap that should be filled would be to clarify that their coverage does extend to these types of risks, and that the types of claims that arise shouldn’t be excluded by the fact that some kind of AI tool was used to propagate them,” he said.
Reed Smith LLP is an international law firm founded in 1877. Headquartered in Pittsburgh, Pennsylvania, it has more than 30 offices in the United States, Europe, Asia and the Middle East.
© Entire contents copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.
Rayne Morgan is a journalist, copywriter and editor with over 10 years' combined experience in digital content and print media.