AI-enabled cybercrime becoming more effective, insurance experts say
With the booming advent of large language model (LLM) platforms and generative AI systems, the task of policing cybercrimes has suddenly gotten more difficult and is likely to get even more challenging, insurance experts say.
It’s not all bad news. Artificial intelligence and platforms like ChatGPT are bringing new efficiencies to the industry, dramatically speeding up claims processing, among other things. IBM Research says it is using generative models to write high-quality software code faster, discover new molecules, and train trustworthy conversational chatbots grounded on enterprise data. It is even using generative AI to create synthetic data to build more robust and trustworthy AI models and to stand in for real data protected by privacy and copyright laws.
“Identifying suspicious patterns and recognizing subsequent fraudulent activities takes humans much longer, but AI can analyze these data points and identify unusual activity in seconds,” said Blair Cohen, founder and president of AuthenticID. “By harnessing its power and using it to authenticate activity, the insurance sector can stay ahead of bad actors.”
For example, Cohen said, AI-driven fraud detection systems have cut fraud investigation time by 70% while simultaneously improving detection accuracy by 90%. This ability to quickly identify and prevent fraudulent claims makes AI an essential part of insurance companies’ fraud prevention initiatives.
Clearly, AI is revolutionizing business and industry. But the crooks have already caught on. As fast as an AI program can write new computer code to safeguard against the latest cyber threat, a criminal organization can use AI to write code to break it. Already, the dark web is replete with offers to sell LLMs that can create sophisticated malware and phishing schemes.
Cyber crime enhanced by AI
“AI has the ability to improve the effectiveness of certain types of cyber attacks,” said Carolyn Boris, vice president of personal risk services at Chubb, the world's largest publicly traded property and casualty insurance company. “Those phishing emails that we’re all used to getting and clearly look suspicious are now going to look very legitimate.”
Moreover, AI-bolstered models can instantly scan the web for personal information on their targets, vastly increasing the apparent legitimacy of an email. Ironically, AI chatbots can produce authentic-looking emails, accurately modeled after a human sender, that appear more legitimate than those composed by humans, which frequently contain misspellings and poor grammar that can easily be spotted and rejected.
And it’s not just email. AI programs can incorporate audio and video that can accurately mimic a person’s voice or image to help convince someone they are communicating with an actual, and appropriate, person.
Criminals are 'one step ahead'
“It's going to be very challenging because the criminals are always moving one step ahead of us,” said Gilad Perry, co-founder of Toronto-based Armour Cybersecurity. “The things that we teach machine learning to do to protect against cyber theft is usually in hindsight, after an incident has occurred. We need to teach them how to detect the behavior and repair it as it’s happening.”
Chubb is considered a leader in the cybersecurity insurance sector, having offered personal cyber coverage since 2017. Chubb’s Boris said high net worth individuals are obviously prime targets for cyber criminals and are also frequently the most vulnerable.
“It’s important for the individual to be practicing safe data hygiene, but it’s really important that anyone who is acting on their behalf or has access to passwords or accounts is also practicing good cyber hygiene,” she said.
Boris said the most basic protection methods – updating operating system software, frequently changing passwords, and relying on two-factor authentication, among other things – are often overlooked by customers.
“I think people are definitely more aware of using passwords that can’t be linked to say, a family pet, or an important date such as their birthday,” she said. “I think in the past two years, it was people who weren't necessarily well aware of them that fell victim. But I think people are definitely taking advantage of the technology to help to protect themselves.”
Businesses, on the other hand, have been slow to adopt strong data protection methods, said Perry, whose company mostly serves small to mid-sized businesses, which are frequent targets for ransomware attacks. While the financial services sector seems to be catching on, manufacturing and health care companies are still behind the curve in terms of proper cyber protection, he said.
“It's still hard to convince people of the risks,” he said. “And I’m always surprised at the level of unpreparedness [of] some companies … to this day in spite of all the publicity and stories about cyber criminals and incidents.”
AuthenticID’s Cohen says insurers should create a “biometric watchlist” to identify even the most rapid, prolific, and sophisticated bad actors.
“By analyzing data from across the globe in real-time, a biometric bad actor watchlist enables insurers to identify instances of synthetic identity fraud, where fraudsters are manipulating personally identifiable information or stealing it from real individuals, to detect and flag suspicious or inconsistent activity.”
According to IBM’s latest data breach report, the average cost of a ransomware breach was $4.54 million in 2022 – a figure that does not include the cost of the ransom itself, but rather combines the many other factors that play into ransomware recovery. Firms that suffered ‘destructive’ attacks, in which cybercriminals sought to use malware to destroy data, saw even higher expenses, at $5.12 million. According to the Acronis 2022 End-of-Year Cyberthreats Report, the average total cost of a data breach in the United States was nearly $9.5 million.
Perry tells one story of a company CEO who suggested that instead of investing in upgraded security software and equipment, the company should just set aside $10 million to pay in the event of a ransomware attack.
“I can’t believe I’m having conversations like that,” he said.
Doug Bailey is a journalist and freelance writer who lives outside of Boston. He can be reached at [email protected].
© Entire contents copyright 2023 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.