Can ChatGPT pass procurement?
By Alan Ringvald
Marisa, an entry-level accountant at a Fortune 500 company, found herself drowning in tedious tasks over the weekend. Frustrated with the soul-sucking work of manually reviewing and categorizing expenses and transactions, she took matters into her own hands and built an AI wrapper to ease her workload. This wrapper, a thin layer of software built around a generative AI model such as ChatGPT, proved capable of handling these tasks efficiently.
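An AI wrapper of the kind described might look something like the minimal sketch below. Every name here is hypothetical, and the `call_model` argument stands in for a real API call to a hosted model; the point is that the wrapper is just prompt construction and output validation around the model.

```python
# Minimal sketch of an "AI wrapper": thin code around a generative model
# that turns free-form transaction descriptions into expense categories.
# call_model is a hypothetical stand-in for a real model API call.

CATEGORIES = ["travel", "meals", "software", "office supplies", "other"]

def build_prompt(description: str) -> str:
    """Embed the transaction in an instruction the model can follow."""
    return (
        "Classify this expense into exactly one of: "
        + ", ".join(CATEGORIES)
        + f".\nExpense: {description}\nCategory:"
    )

def parse_category(raw_reply: str) -> str:
    """Normalize the model's free-text reply to a known category."""
    reply = raw_reply.strip().lower()
    for category in CATEGORIES:
        if category in reply:
            return category
    return "other"

def categorize(description: str, call_model) -> str:
    """The wrapper itself: prompt in, validated category out."""
    return parse_category(call_model(build_prompt(description)))
```

In testing, `call_model` can be any function that returns a string, which is also what makes a wrapper like this so quick for one person to build over a weekend.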
Excited by her discovery, Marisa shared it with her boss, sparking serious conversations about scaling it across the entire organization. However, questions arose about the appropriateness of using her creation at work without proper authorization and clearance. Could Marisa face repercussions for using her AI tool, especially if her colleagues followed suit? Moreover, would her AI wrapper meet her company's procurement and information security standards?
Enterprises, particularly those in regulated sectors, find themselves at a crossroads regarding AI adoption. Although the benefits of integrating AI into various operations are evident, the associated risks raise valid concerns for large corporations.
The truth is, AI is still incredibly early. We're currently in a phase reminiscent of Mark Zuckerberg's famous mantra, "Move fast and break things" — in other words, get your product to market as quickly as possible and figure out the details later. Although that philosophy works beautifully in the consumer world, for heavily regulated, risk-averse industries such as insurance, banking and utilities, "moving fast and breaking things" can be a terrifying proposition.
The emergence of transformative AI models such as ChatGPT has revolutionized various sectors, offering unprecedented utility. However, significant drawbacks come along with the advantages: AI platforms often raise issues such as copyright infringement, hallucinations and inadequate security measures. In regulated industries such as insurance, banking and utilities, these deficiencies are particularly alarming.
Marisa's initiative in creating an AI wrapper to streamline expense categorization exemplifies innovation and huge potential value for her company. However, her unauthorized use of AI with sensitive company data raises pertinent concerns regarding data protection and information security. Without stringent encryption and controls, her actions may breach company policies and regulatory standards, inviting serious consequences.
Although AI tools such as ChatGPT present immense value, it's crucial for large, regulated industries to adopt a "safe AI" approach. This entails collaborating with vendors or internal data science and engineering groups who prioritize data security, offer transparency regarding training data sources, and facilitate user feedback mechanisms. Implementing AI effectively requires a focus on specific use cases tailored to user workflows and feedback loops.
Alan Ringvald is CEO of Relativity6. Contact him at [email protected].
© Entire contents copyright 2024 by InsuranceNewsNet.com Inc. All rights reserved. No part of this article may be reprinted without the expressed written consent from InsuranceNewsNet.com.