Generative AI at DBS: An Insider’s Perspective
- By Paul Mah
- March 20, 2024
“Generative AI is a highly transformative technology that is still being explored, and there is virtually an unlimited amount of use cases you can build on top of it,” said Luis Carlos Cruz Huertas, the head of Automation, Infrastructure, and Analytics at DBS Bank.
Huertas was speaking at the Alibaba Cloud AI and Big Data Summit earlier this year, where he shared his thoughts about generative AI and how it can be deployed today.
It was an insightful session, given his experience deploying AI systems so that “data scientists can do their testing and exploration safely and securely”.
Harnessing the power of generative AI
According to Huertas, generative AI is currently being tested across multiple use cases at DBS, though many of them revolve around improving productivity. As you might expect, this diversity poses a number of challenges.
“When you have AI models this big, they can be used for many, many different purposes. So how do you provide guardrails to reduce hallucination?” he said.
“And we're going to have not one LLM, but potentially using 13, 14… 16 different models across the board. So how do you create a framework [to manage them]? This is quite important.”
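To make that concrete, here is a minimal Python sketch of what such a framework could look like. Every name in it (ModelClient, REGISTRY, check_grounding) is hypothetical, since DBS has not published its implementation; the idea is simply that each model sits behind the same interface, with guardrails enforced at that boundary rather than re-implemented per model.

```python
# A hypothetical sketch of a multi-model framework: applications call one
# interface, and guardrails apply uniformly at that boundary.
# All names and models here are illustrative stand-ins.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ModelClient:
    name: str
    generate: Callable[[str], str]  # prompt -> completion

# Registry of available models; in practice each entry would wrap a vendor SDK.
REGISTRY: Dict[str, ModelClient] = {
    "summarize": ModelClient("small-summarizer", lambda p: f"[summary of: {p}]"),
    "qa": ModelClient("large-general", lambda p: f"[answer to: {p}]"),
}

def check_grounding(answer: str, context: str) -> bool:
    """Toy guardrail: reject answers that share no words with the source context."""
    return bool(set(answer.lower().split()) & set(context.lower().split()))

def run(task: str, prompt: str, context: str = "") -> str:
    model = REGISTRY[task]
    answer = model.generate(prompt)
    if context and not check_grounding(answer, context):
        raise ValueError(f"{model.name}: answer not grounded in supplied context")
    return answer

print(run("summarize", "the quarterly results"))
```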
Even data privacy isn’t as straightforward a topic as it may appear at first blush.
“When we talk about risks and data privacy, we must first define what we are talking about. Are we talking about training our own model, or are we talking about data privacy when it comes to providing a safe and secure RAD [Rapid Application Development] framework, or to provide proper technology and data model governance?”
Some additional insights
Huertas issued a word of caution about building applications that are too tightly coupled to a specific AI model: the release of an updated model could change how the application behaves, necessitating additional work to restore it to its previous state.
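One common way to keep that coupling loose, sketched below with assumed names, is to have the application depend on an internal alias that resolves to a pinned, versioned model identifier. A model upgrade then becomes a deliberate one-line configuration change that can be reverted, rather than a silent shift in behavior.

```python
# Hypothetical sketch: the app asks for an internal alias, not a vendor model
# name, so upgrades happen in one reviewed config change and can be rolled back.

MODEL_ALIASES = {
    # internal alias -> pinned, versioned identifier (values are made up)
    "doc-summarizer": "vendor-model-v2.1",
}

def resolve(alias: str) -> str:
    """Translate an application-level alias into the currently pinned model."""
    return MODEL_ALIASES[alias]

assert resolve("doc-summarizer") == "vendor-model-v2.1"
```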
There is also a need to identify performance metrics that make sense, and to use them to track the application's performance before and after the release of a new LLM, he said.
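A minimal sketch of that before-and-after check might look like the following; the evaluation set, the stub models, and the crude exact-match metric are all stand-ins for whatever a team actually measures.

```python
# Hypothetical regression check: run a fixed evaluation set through the
# current model and a candidate release, then compare a single metric.

EVAL_SET = [
    ("What currency does Singapore use?", "singapore dollar"),
    ("2 + 2 =", "4"),
]

def exact_match(model, dataset) -> float:
    """Share of answers that contain the expected string (deliberately crude)."""
    hits = sum(expected in model(question).lower() for question, expected in dataset)
    return hits / len(dataset)

current = lambda q: "The Singapore dollar." if "currency" in q else "4"
candidate = lambda q: "SGD"  # a stand-in new release that behaves differently

print(f"before={exact_match(current, EVAL_SET):.2f}")   # 1.00
print(f"after={exact_match(candidate, EVAL_SET):.2f}")  # 0.00: regression caught
```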
Finally, small models can do wonders too. “Sometimes smaller models are very good at specific tasks. And this is why you have to be open about what models you offer… you might have a small model that can perform extremely well for specific users.”
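One way to act on that advice, sketched here with stub models, is a router that tries the cheap specialist first and escalates to a larger general-purpose model only when the specialist cannot handle the request.

```python
# Hypothetical "small model first" router: the specialist handles what it can
# and returns None otherwise, triggering escalation to the larger model.

def small_sentiment_model(text: str):
    """Cheap specialist: only answers sentiment requests."""
    if text.startswith("sentiment:"):
        return "positive"
    return None

def large_general_model(text: str) -> str:
    return f"[large-model answer to: {text}]"

def answer(text: str) -> str:
    result = small_sentiment_model(text)
    return result if result is not None else large_general_model(text)

print(answer("sentiment: great service!"))  # handled by the small model
print(answer("summarize this report"))      # escalates to the large model
```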
In closing, Huertas offered suggestions on choosing the optimal public cloud provider for organizations turning to generative AI.
“The guaranteed service delivery on GPUs to do exploration across use cases for LLM models and applications, it’s something critical… When you are evaluating cloud service providers, it’s not what they have in their service catalog, but it’s the resources that are truly available, any time and at any point.”
“When you are doing analytical compute, which is basically of an ephemeral nature, you need on-demand resources; that on-demand access is critical for the scientist to do data exploration,” he summed up.
Image credit: iStockphoto/a-image
Paul Mah
Paul Mah is the editor of DSAITrends, where he reports on the latest developments in data science and AI. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose.