Actuarial businesses should implement their own responsible governance frameworks when using artificial intelligence, suggested a panel of experts at the American Academy of Actuaries 2023 Annual Meeting.
Panelists noted that while some frameworks exist, formal regulation is not likely to be forthcoming for some time and will likely change frequently once established.
As such, Charles Lee, legal lead at Leap, McKinsey & Company's business-building practice, said it is important that actuaries “adopt responsible AI principles from the start.”
“The regulations are always going to sort of lag premier technology, and this is such a fast-moving technology that these regulations are going to be way behind,” Lee said.
“I think, as responsible members of the industry and leaders, a lot of this is going to have to come internally from within the private sector.”
Other experts on the panel included James Lynch, MAAA, FCAS, James Lynch Casualty Actuary Services; Doug McElhaney, McKinsey & Company partner and global leader of AI practitioners; and Arthur da Silva, vice president of actuarial at Slope Software.
The experts collectively said the key factors that must be governed in the use of AI in actuarial practice, as well as within the insurance industry at large, include avoiding discrimination, using AI ethically and exercising diligence in fact-checking.
Lee noted that governance of AI’s use in the United States has typically consisted of “piecemeal state regulations,” with the exception of a recent Biden administration AI executive order and a Colorado draft artificial intelligence regulation.
The executive order seeks to establish consumer rights and protections by governing the way businesses use AI.
Colorado’s regulation creates requirements governing the way external consumer data and information sources are utilized in the life insurance industry. It was scheduled to take effect Nov. 14.
“This Colorado regulation is a good touch point because it will probably lead the way and we’ll see a lot of other states take similar tacks,” Lee said.
Lynch similarly suggested that organizations take cues from some of the concerns regulators typically have for the insurance industry as they seek to create frameworks to ensure responsible use of AI.
“In general, regulators step in where there’s a problem that exists. That problem has not manifested itself yet for generative AI, but I think we can take some cues from other types of artificial intelligence in general,” he said.
Experts agreed that unfair discrimination in insurance is one of the major issues a responsible AI framework should seek to avoid.
“Insurance regulators have generally been most concerned about fairness, particularly the idea of unfair discrimination slipping in under the radar screen unbeknownst and unintended by any of the parties involved,” Lynch noted.
He suggested actuarial businesses start there when considering what kind of regulations to adopt.
Lee added that the question of what counts as unfair discrimination and how generative AI comes to that conclusion will also be major concerns in its application in actuarial practice.
Additionally, panelists emphasized that human oversight and ethical use are central to deploying AI responsibly.
Lynch noted that this due diligence should be performed not just for the sake of responsible use but also to avoid liabilities.
He gave examples such as medical and legal malpractice, as well as product liability claims if a business professional uses generative AI and something goes wrong.
Establishing a responsible AI framework
Experts agreed that organizations should establish teams to ensure responsible use of AI in insurance even in the absence of formal regulations.
“If I were a person who was in a key position at an insurance company, I would have a team of people that would be going through every form and looking at how generative AI could impact it,” Lynch said.
“It’s new and we don’t know all of the things it’s going to do, but you have to take that kind of a diligent look. If you don’t take that diligent look, you may also be opening yourself up down the road to a directors and officers claim.”
Lee also noted that the Biden administration’s executive order likewise recommends a team be established for this purpose.
“You probably need a cross-functional team… Key statisticians, data scientists, leadership stakeholders, legal and risk, there’s a cross-functional group that needs to come up with what this framework is,” he said.
He suggested that teams should establish policies, best practices and tools to ensure: human-centric AI development and deployment; fair, trustworthy and inclusive AI; transparent and explainable AI; robust data protection, privacy and security measures; and ongoing monitoring and evaluation of AI systems.
Additionally, he recommended routine audits of what AI tools exist and how they are being used.
“Especially if they’re external, if they’re commercial, because they’re going to be subject to some of this regulatory inquiry,” Lee said.
The American Academy of Actuaries is a nonprofit organization that aims to provide support to U.S.-based actuaries. It was founded in 1965 and today has more than 19,500 members.