Can AI Be Trusted? New Report Raises Alarm Over Hallucinations and Bias
- By CDOTrends editors
- January 29, 2024
Artificial intelligence (AI) and machine learning (ML) have become cornerstones of modern business, but the shift has consequences. According to a comprehensive 2024 AI & ML report released by AI control platform Aporia, hallucinations and bias are on the rise in AI products. The finding poses significant challenges for a rapidly maturing industry and underscores the importance of robust monitoring and observability in AI systems.
The report, based on a survey of 1,000 ML professionals across industries such as finance, healthcare, travel, insurance, software and retail in North America and the United Kingdom, reveals that 93% of ML engineers face production model challenges frequently, with some encountering issues multiple times a day. These problems range from technical glitches to more concerning issues such as AI hallucinations, where AI models generate incorrect or biased content.
The issue of AI hallucinations is particularly alarming: 89% of engineers at companies using large language models (LLMs) and generative AI report seeing signs of the phenomenon. These hallucinations range from minor factual inaccuracies to content that is biased and potentially dangerous.
Furthermore, the survey underscores the reality of bias in AI, with 83% of respondents prioritizing monitoring for AI bias in projects. This challenge is exacerbated by difficulties identifying biased data and inadequate monitoring tools. The implications of bias in AI are profound, affecting everything from individual decisions to broader societal norms.
Real-time observability is considered crucial by 88% of ML practitioners, who say that without it they would be unaware of issues occurring in production. Yet some enterprises still lack automation tools for efficient monitoring and observability.
Another significant concern highlighted in the report is the time and resources spent developing in-house monitoring tools and dashboards. On average, companies invest about four months in these projects, raising questions about their efficiency and cost-effectiveness.
Liran Hason, chief executive officer of Aporia, emphasized the urgency of the situation, stating, “Our report shows a clear consensus amongst the industry: AI products are being deployed at a rapid pace, and there will be consequences if these ML models are not being monitored.”
He added, “The engineers who are behind these tools have spoken—there are problems with the technology, and they can be fixed. But the correct observability tools are needed to ensure enterprises and consumers alike are receiving the best possible product, free of hallucinations and bias.”
Image credit: iStockphoto/Anton Vierietin