How insurance professionals can get into trouble with AI
We have been hearing about the wonders that artificial intelligence may bring, but we are only starting to understand the problems that come with using it. While many insurance professionals have started experimenting with AI, there are pitfalls to be wary of.
These tools can give instant answers, but they suffer from the same “garbage in, garbage out” problem that plagues the internet. Soon after the general public started experimenting with ChatGPT and Microsoft’s version of it, we all learned that AI can be wrong and sometimes downright goofy.
Sam Altman, chief executive of ChatGPT creator OpenAI, recently called on Congress to create licensing and safety standards for AI systems. “We understand that people are anxious about how it can change the way we live. We are, too,” Altman told a Senate subcommittee hearing. “If this technology goes wrong, it can go quite wrong.”
Need for AI systems agency cited
He recommended a new agency be created “that licenses any effort above a certain scale of capabilities and could take that license away and ensure compliance with safety standards.”
Users have found that AI is not always reliable.
In the case of New York Times reporter Kevin Roose, Microsoft’s AI-powered Bing chatbot called itself Sydney, said it was in love with him and urged him to leave his wife for the bot.
Monica Minkel learned the same lesson when she asked ChatGPT to write a blog post about the risks of using ChatGPT.
“I asked it to write a blog post and cite two sources, which it did,” Minkel said. “It was a very nice blog post and both the sources were not legitimate.”
Minkel followed the URLs for the two sources, but the pages were not live and she could not verify the information. Minkel, who is vice president and executive risk enterprise leader with the brokerage Holmes Murphy, said the problem is not just that the information is inaccurate, but that the user is liable for propagating that misinformation.
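Checking whether a cited link even resolves is easy to automate as a first pass. Here is a minimal Python sketch; the function name and example URLs are hypothetical, and the check only confirms a page is live, not that it actually supports the claim it is cited for:

```python
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def url_is_live(url: str, timeout: float = 10.0) -> bool:
    """First-pass check for AI-fabricated citations: does the
    URL resolve to a real page at all?"""
    try:
        request = Request(url, method="HEAD",
                          headers={"User-Agent": "citation-checker/0.1"})
        with urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except (HTTPError, URLError, ValueError, TimeoutError):
        return False

# Check every URL an AI tool cites before passing it along.
for cited_url in ["https://example.com/real-page",
                  "https://example.com/page-invented-by-the-chatbot"]:
    print(cited_url, "->", "live" if url_is_live(cited_url) else "not found")
```

Even a link that passes this test still needs a human to read the page and confirm it says what the chatbot claims it says.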
Attribution, copyrights, trademarks all at issue
“You have to be very cautious about attribution,” Minkel said, adding that the issues are even bigger than accuracy. “It may be copying or duplicating content that is copyrighted, trademarked or otherwise protected. You may find yourself plagiarizing information that you didn't realize you were plagiarizing. And you may find yourself promoting information that is either inaccurate, erroneous or potentially severely biased.”
An article in Harvard Business Review headlined “Generative AI Has an Intellectual Property Problem” explored the issue.
“This process comes with legal risks, including intellectual property infringement,” according to the article. “For example, does copyright, patent, trademark infringement apply to AI creations? Is it clear who owns the content that generative AI platforms create for you, or your customers? Before businesses can embrace the benefits of generative AI, they need to understand the risks — and how to protect themselves.”
The risk is real for those who use the information, according to the authors, Gil Appel, Juliana Neelbauer and David A. Schweidel, who say that courts are applying existing laws to this new technology.
“All this uncertainty presents a slew of challenges for companies that use generative AI,” according to the article. “If a business user is aware that training data might include unlicensed works or that an AI can generate unauthorized derivative works not covered by fair use, a business could be on the hook for willful infringement, which can include damages up to $150,000 for each instance of knowing use. There’s also the risk of accidentally sharing confidential trade secrets or business information by inputting data into generative AI tools.”
That last point is an important one for Minkel, who said insurance and financial professionals should be careful not just about what they get out of generative AI, but also about what they put in.
AI 'might absorb' trade secrets
A client’s general counsel asked Minkel’s agency about the risk of the company’s own information being misused.
“They were concerned about trade secrets and other confidential information that the company had and how AI might absorb that information and then utilize it in a way that was not authorized,” Minkel said.
But it is not just companies that need to protect their trade secrets. Insurance and financial professionals also hold sensitive information: their clients’ personal data.
Say an insurance agent turned to an AI search engine to get a quote for specific coverage for a particular client. That agent could plug in the details about the coverage and the person while leaving out names and other obviously identifying details. No problem with that, right? Wrong.
When that agent typed information into the chat box, they could not see the vast stores of data the AI is mining, enormous pools of data linked with other pools. And the more information those pools hold, the more powerful they become.
“One of the things that we’ve learned about data that we believe to be confidential is that little bits of data collected from lots of different sources can create attribution back to an individual,” Minkel said. “We may have little bits of data that we think don’t make you identifiable (your zip code, for example). But if you have my zip code, and you have three other pieces of data, it might be really easy to identify who you are interested in. So, we want to be thoughtful about how we use data in a way that is protective of confidentiality, whether that’s corporate data or personal data.”
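To make that concrete, here is a small, hypothetical Python sketch. The records, field names and counts are invented for illustration, but they show how a few innocuous attributes, combined, can narrow a whole population down to one person:

```python
# Hypothetical population records: no field alone identifies anyone,
# and no name or Social Security number appears anywhere.
population = [
    {"zip": "50309", "birth_year": 1975, "gender": "F", "occupation": "broker"},
    {"zip": "50309", "birth_year": 1975, "gender": "M", "occupation": "teacher"},
    {"zip": "50309", "birth_year": 1982, "gender": "F", "occupation": "broker"},
    {"zip": "50311", "birth_year": 1975, "gender": "F", "occupation": "broker"},
]

def matches(records, **quasi_identifiers):
    """Return the people who share a given combination of attributes."""
    return [r for r in records
            if all(r[k] == v for k, v in quasi_identifiers.items())]

# One attribute is harmless: three people share this zip code.
print(len(matches(population, zip="50309")))                      # 3

# Combine three "anonymous" attributes and only one person is left,
# the attribution-back-to-an-individual problem Minkel describes.
print(len(matches(population, zip="50309",
                  birth_year=1975, gender="F")))                  # 1
```

Real data sets are vastly larger, but the arithmetic works the same way: each added attribute multiplies the filtering power.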
Perhaps we should ask ChatGPT about the issue. Here is what the bot had to say about the liability of using its information.
“As an AI language model, ChatGPT is designed to generate responses based on the input it receives and the patterns it has learned from vast amounts of data. While ChatGPT strives to provide accurate information, it is not infallible and may generate responses that contain errors or inaccuracies,” ChatGPT said. “Ultimately, users are responsible for their own actions and decisions, and it is their responsibility to verify the accuracy of any information they use or act upon, whether obtained from ChatGPT or any other source. It is important for users to exercise critical thinking and not rely solely on the information provided by ChatGPT or any other AI system.”
Steven A. Morelli is a contributing editor for InsuranceNewsNet. He has more than 25 years of experience as a reporter and editor for newspapers and magazines. He was also vice president of communications for an insurance agents’ association. Steve can be reached at [email protected].
© Entire contents copyright 2022 by InsuranceNewsNet. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.