Singapore Unveils Governance Framework, Testing Toolkit for GenAI
- By Paul Mah
- June 05, 2024
At the fourth annual Asia Tech x Singapore (ATxSG) conference held at Capella Singapore last week, the AI Verify Foundation and the Infocomm Media Development Authority (IMDA) launched AI Verify Project Moonshot and a governance framework for GenAI.
Project Moonshot is a testing toolkit designed to address security and safety challenges associated with the use of large language models (LLMs). On its part, the “Modal AI Governance Framework for Generative AI” paper sets forth a systematic and balanced approach to address GenAI concerns while continuing to facilitate innovation.
Project Moonshot
Designed to be easy to use and released as an open beta, Project Moonshot was billed as one of the world’s first open-sourced tools to bring red-teaming, benchmarking, and baseline testing together in an easy-to-use platform. It aims to provide intuitive results of the quality and safety of a model or application in an easily understood manner, even for non-technical users.
The toolkit was developed by working with partners such as DataRobot, IBM, Singtel, and Temasek to ensure that the tool is useful and aligned with industry needs, and underscores Singapore’s commitment to harnessing the power of the global open-source community in addressing AI risks.
“The provision of this new tool is significant as it aims to help developers and data scientists test their LLM applications against a baseline of risks, thereby accelerating the adoption of AI. We look forward to working closely with IMDA to develop appropriate open standards through our contributions,” said Anup Kumar, a Distinguished Engineer and the CTO of Data and AI at IBM Asia Pacific.
The AI Verify Foundation was launched in June last year to harness the collective power and contributions of the global open-source community to develop AI testing tools for the responsible use of AI.
As we reported then, the Foundation will help to foster an open-source community to contribute to AI testing frameworks, code base, standards, and best practices. Additionally, it seeks to create a neutral platform for open collaboration and idea sharing on testing and governing AI.
AI Framework
The Modal AI Governance Framework for Generative AI was developed in consultation with some 70 organizations. It identifies nine areas such as accountability, trusted data for AI training, and content provenance where governance of GenAI can be strengthened.
As its name suggests, it seeks to provide a model governance framework that businesses developing or deploying GenAI can adapt for use. Stakeholders are encouraged to view the issues set out in the GenAI Framework practically and holistically, instead of seeking a single intervention.
Businesses are advised to tailor the relevant good practices offered in the Framework, based on their unique characteristics such as the particular use case, nature of business, and associated risks related to the use of GenAI. It is worth noting that reliance on the Framework does not absolve a company from having to comply with applicable laws.
“The [Framework] sets forth a systematic and balanced approach to address GenAI concerns while facilitating innovation. It comprises nine dimensions to be looked at in totality, to foster a trusted ecosystem. Within these nine dimensions, the framework calls for all key stakeholders, including policymakers, industry, the research community, and the broader public, to collectively do their part,” wrote to the IMDA.
Minimizing the risks of AI
The topic of AI was featured heavily at the recent ATxSG conference. At the opening gala, President Tharman Shanmugaratnam gave a speech as the guest of honor, where he explained the need for AI regulation and his views on it.
He noted that getting AI right means getting the most amount of good while seeking to alleviate the worst of AI. This also means minimizing the risks, which presumably include governance regulations and the use of toolkits such as Project Moonshot to validate AI models.
“It's fair to say that the pace of advancement of AI and related technologies is far outstripping our public policy and regulatory responses... AI and related technologies around it are moving very fast.”
“You can't leave [AI] to the law of the jungle... you will otherwise simply be letting might be right. And we will be letting whatever player emerges the largest to dictate the norms.”
You can download the Model AI Governance Framework for Generative AI here (pdf).
Image credit: IMDA
Paul Mah
Paul Mah is the editor of DSAITrends, where he report on the latest developments in data science and AI. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose.