5 ways companies are incorporating AI ethics
As more companies adopt generative artificial intelligence models, AI ethics is becoming increasingly important. Ethical guidelines to ensure the transparent, fair, and safe use of AI are evolving across industries, albeit slowly compared with the fast-moving technology itself.
But thorny questions about equity and ethics may force companies to tap the brakes on development if they want to maintain consumer trust and buy-in.
Within the tech industry, ethical initiatives have been set back by a lack of resources and leadership support, according to an article presented at the 2023
While nearly 3 in 4 consumers say they trust organizations using GenAI in daily operations, confidence in AI varies by industry and function. Just over half of consumers trust AI to deliver educational resources and personalized recommendations, compared with fewer than a third who trust it for investment advice or self-driving cars. Consumers are open to AI-driven restaurant recommendations, but not, it seems, with their money or their lives.
Clear concerns persist around the broader use of a technology that has elevated scams and deepfakes to a new level. The
That puts the onus to set ethical guardrails upon companies and lawmakers. In
As other states evaluate similar legislation for consumer and employee protections, companies in particular have the in-the-weeds insight needed to address high-risk situations specific to their businesses. While consumers have set a high bar for companies' responsible use of AI, the
The reality is that the tension between proceeding cautiously to address ethical concerns and moving full speed ahead to capitalize on AI's competitive advantages will continue to play out in the coming years.
Drata analyzed current events to identify five ways companies are ethically incorporating artificial intelligence in the workplace.
Actively supporting a culture of ethical decision-making
AI initiatives within the financial services industry can speed up innovation, but companies must take care to protect the financial system and customer information from criminals. To that end,
Development of risk assessment frameworks
The
Specialized training in responsible AI usage
Communication of AI mission and values
Companies that develop a mission statement around their AI practices clearly communicate their values and priorities to employees, customers, and other stakeholders. Examples include
Implementing an AI ethics board
Companies can create AI ethics advisory boards to help identify and mitigate the ethical risks of AI tools, particularly systems that produce biased output because they were trained on biased or discriminatory data. SAP has had an