Making Singapore a Trusted, AI-Enabled Digital Economy

If data is the new oil, then what are its Exxon Valdez and Deepwater Horizon moments? As with environmental disasters, any major blunder involving the unethical use of data and AI will put the brands involved under extreme pressure from consumers and governments alike.

While Singapore has so far escaped major data and AI disasters, the proliferation of AI means that it’s only a matter of time. In 2018, an AI and ethics council initiated by the Singapore government set out to address three major risk categories for the AI-enabled digital economy envisioned for Singapore:

  • Technology risk: countering data misuse and rogue AI
  • Social risk: building trust between agencies, companies, employees, and customers
  • Economic and political risk: securing Singapore’s future in a digital economy

Ethics and social responsibility as core principles

The framework rests on two guiding principles. The first is to ensure that AI decision-making is explainable, transparent, and fair. Explainability, transparency, and fairness, described as "generally accepted AI principles," form the foundation of ethical AI use. Absent from the framework, however, is the notion of accountability. The second principle is that AI solutions should be human-centric and operate for the benefit of human beings. This ties AI ethics to the larger dimensions of corporate values, corporate social responsibility, and the corporate risk management framework.

A risk management approach for deploying AI at scale

In alignment with other global frameworks, the Singapore Model AI Governance Framework recommends a risk management approach to address the technology risk associated with AI. Ideally, AI risk would become a dimension of existing corporate risk management frameworks. Doing so elevates the risk beyond IT and individual business units to the corporate level, following in the footsteps of cybersecurity risk.

In particular, the framework recommends that organizations:

  • Set up AI governance structures and measures and link them to corporate structures.
  • Determine the appropriate level of human involvement in AI-augmented decision-making using a severity-probability matrix.
  • Use data and model governance for responsible AI operations.
  • Set up clear, aligned communication channels and interaction policies.
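The severity-probability assessment above can be sketched as a simple decision rule. The three human-oversight models named below (human-in-the-loop, human-over-the-loop, human-out-of-the-loop) come from the Model Framework itself; the numeric scales, thresholds, and function name are illustrative assumptions, not part of the framework:

```python
# Illustrative sketch only: mapping a severity-probability assessment of
# harm to a human-oversight model. The oversight terms are from the
# Singapore Model AI Governance Framework; the scoring is hypothetical.

def oversight_level(severity: int, probability: int) -> str:
    """severity and probability each rated 1 (low) to 3 (high)."""
    risk = severity * probability
    if risk >= 6:    # severe, likely harm: a human approves each decision
        return "human-in-the-loop"
    elif risk >= 3:  # moderate risk: a human monitors and can intervene
        return "human-over-the-loop"
    else:            # low risk: the system may decide autonomously
        return "human-out-of-the-loop"

# Example: a loan-approval model judged high severity (3) and medium
# probability (2) of harm would call for human-in-the-loop review.
print(oversight_level(3, 2))  # -> human-in-the-loop
```

The point of the matrix is proportionality: the greater the potential harm of a wrong decision, the more direct the human involvement the framework expects.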

Risk management and accountability chains for AI

The key task for organizations is to start early and build internal awareness of AI risk. Deploying AI-enabled decision processes at scale must be accompanied by investments in governance and risk management. Guidelines such as Singapore's Model AI Governance Framework offer nonbinding recommendations, but organizations must develop their capabilities internally. As it has evolved, the Model Framework has added a use case library as well as assessment tools, although adopting them may still challenge all but the largest organizations.

Forrester recommends that organizations start on the following activities:

  • Turn customer trust into a competitive advantage through fair, ethical, and accountable use of data and AI.
  • Align AI ethics with your corporate values and risk management frameworks.
  • Define your organization’s AI accountability chain, including external partners and providers.
  • Leverage the expertise of AI consultancies with strong capabilities in AI ethics and governance.

For further details on this issue, please review the materials released by the Singapore Personal Data Protection Commission (PDPC) in January 2020. The second edition of the Singapore Model AI Governance Framework can be accessed here (pdf), and the Implementation and Self-Assessment Guide for Organizations (ISAGO) is available here (pdf).

This post was written by Achim Granzen, a principal analyst at Forrester, and it originally appeared here.

The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends. Image credit: iStockphoto/orpheus26