Taming GenAI: Addressing the Enterprise Dilemma
- By Winston Thomas
- November 17, 2023
Generative AI (GenAI) has piqued the public imagination about AI’s potential. Not a week goes by without some mention of GenAI or its integration in the media.
For enterprises, the GenAI discussion is also evolving. Jay Jagadeesan, senior partner and the ASEAN business transformation services leader at IBM Consulting, cited research that showed many enterprises "are looking to expand organizational capabilities and drive a business rather than optimize their costs."
Speaking at the executive roundtable session "The GenAI Dilemma—Great Power Comes With Great Responsibility," he noted that this finding "is in stark contrast to what the typical priorities for organizations [were] just one and a half years ago, when cost optimization and customer and employee experience were top of the list."
As a result, enterprises are beginning to experiment with new use cases. Many are also looking to scale their models for different use cases.
Yet, while the enthusiasm is apparent, the path to enterprisewide usage is not. Standing in the way are worries about whether the AI models "are secure, safe, unbiased and non-discriminatory," commented Jagadeesan.
These concerns framed the executive roundtable session, moderated by Alex Carmichael, managing director at Promontory Financial Group. Below are some of the insights.
AI governance matters to all, not just some
Several participants, senior IT decision-makers at major companies, noted the lack of an overarching AI governance framework.
Without one, many worry that AI models may produce biased or discriminatory results that enterprises cannot afford. Carmichael, who characterized his role as "bringing a glass of cold water" to such discussions, noted that enterprises need to be conscious of the risks from the outset.
Some participants noted that an AI governance framework is essential, especially in providing guardrails for its development. Many in the financial services industry see this as an extension of the compliance framework. Others added that senior management must be directly involved to enforce the framework and its guardrails.
One representative explained that AI governance should involve all parts of an organization. He added that this is why many of their AI projects involve different business leaders, since the outcomes impact the entire company.
Stop paying lip service to Responsible AI
Many participants expressed worries about deepfakes and plagiarism. While consumers frown upon these, for enterprises the issue is one of brand trust.
For example, one participant noted that these issues may infringe on copyright laws, which are currently being debated. If those laws tighten, enterprises may face lawsuits, since generative AI creates new outputs from such data.
Another issue highlighted was personally identifiable information (PII) and privacy. Many enterprise AI models use such data to create personalized experiences but may be seen as infringing on individuals' privacy rights.
Carmichael noted that while the technology to flag privacy issues or detect deepfakes is getting better, it ultimately comes down to how enterprises address Responsible AI. He admitted that it is a subject only now beginning to be discussed in the public domain.
“We need to understand how are we going to provide an ethical usage of AI and how are we going to combat deep fakes in the market, not just in text or pictures, but also in voice and coding,” he said.
Some participants joined Carmichael to urge enterprises to start this discussion early within their companies. They noted that Responsible AI should be a significant focus as enterprises start their AI journeys.
As AI technology advances and the line between the original and the fake blurs, enterprises that already have a strong framework can adapt and pivot.
"Otherwise, we're going to end up in a situation where something will go wrong, and we won't even know where to begin to look at what went wrong," said Carmichael.
Make your AI model explainable
A key discussion centered on the importance of explainability: viewing AI models as more than a "black box" and understanding how their outcomes are created.
Participants noted that regulators are also beginning to see the importance of explainability. It is a massive concern in the legal and financial services industries.
One primary reason is AI hallucination, a key challenge of using GenAI models. Carmichael noted that such issues arise when these models generate answers to a prompt with no source data to back them up.
Technology vendors and the IT industry are creating techniques and practices to mitigate AI hallucinations. As the technology progresses, however, participants noted that companies must ensure their AI models remain explainable.
Human feedback loop
The above concerns about Responsible AI and explainability highlighted the need to keep humans in the loop.
Many participants noted that it is too early to remove humans from the training of AI models, even though GenAI models can potentially support self-service machine learning.
Carmichael also noted that many public-domain GenAI tools are based on models trained on data that may not be specific to an industry or may include biases. Keeping a human in the loop during training is vital to address these issues upfront and when scaling the AI model to different geographies.
Some participants noted the importance of having a ModelOps team to ensure that there is no model drift and that the model stays updated. This is essential when scaling the model and expanding its use cases.
No time to wait
While many participants noted these concerns are keeping them on the sidelines, Jagadeesan and Carmichael said that enterprises cannot afford to wait.
Carmichael noted that GenAI has ignited a new level of AI development. As the number of users multiplies, it also creates a new generation of future employees who expect, and work best with, GenAI tools in the enterprise.
This requires enterprises to start their GenAI journeys quickly: exploring use cases, deploying the proper frameworks, and putting the correct guardrails in place to steer their internal AI development.
"It truly needs you to bring the entire organization together to start to drive this sort of initiative. It's not okay to give it to the data analytics team; neither is waiting for [guardrail] development," said Carmichael.
Wait too long, and the gap between AI leaders and laggards will become too wide to close, while future employees move on to more AI-savvy work environments.
Winston Thomas is the editor-in-chief of CDOTrends. He's a singularity believer, a blockchain enthusiast, and believes we already live in a metaverse. You can reach him at [email protected].
Image credit: iStockphoto/wildpixel