According to predictions by PwC, artificial intelligence (AI) will add a staggering US$16 trillion to the global economy by 2030. To put things into perspective: the gross domestic products (GDP) of Singapore and Hong Kong, based on 2018 figures, are just below US$400 billion each. Even China’s GDP of US$13 trillion in 2018 is lower.
For all the rosy promises, however, a couple of reports this week on the state of AI may put a dampener on the next AI-touting startup.
The hype of AI
In a report in The Economist, Tom Gauld calls out what he sees as the technology hitting a wall. He concedes that modern AI endeavors are far more successful than in the past, with billions of users relying on AI in some shape or form, either as they interact with their smartphones or use various AI-powered internet services.
However, Gauld argues that it has not fundamentally changed how we do things. Predictions that AI would replace radiologists, or deliver self-driving cars, have been slow to materialize.
Of course, there is no question that machine learning – which is what most of us mean when we say “AI” – has experienced substantial progress. It excels at recognizing patterns in data, a capability that is useful everywhere: banks use it to assess credit risk, retail shops to enhance the customer experience through AI-powered facial recognition, and pharmaceutical companies in the search for better medication.
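To make the idea of “recognizing patterns in data” concrete, here is a minimal, hypothetical sketch of the kind of model involved: a hand-rolled perceptron that learns to separate good from bad credit risks. The features (normalized income, missed payments), labels, and figures are all invented for illustration and are not drawn from any real system.

```python
# A toy perceptron that "learns" to separate good from bad credit risks.
# All features and labels are made up for illustration only.

def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # misclassified: nudge the weights toward the label
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """Classify a new applicant: +1 = good risk, -1 = bad risk."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Features: (normalized income, missed payments); label +1 = good risk.
samples = [(0.9, 0), (0.8, 1), (0.2, 5), (0.3, 4)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(samples, labels)
print(predict(w, b, (0.85, 0)))  # a high-income, no-missed-payment applicant -> 1
```

The point of the sketch is that the model generalizes a pattern from examples rather than following hand-written rules – which is also why the quality of those examples matters so much, as the next section shows.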
Yet the hype over AI might have far exceeded the state of AI science, notes Gauld, who argues that many of the grandest claims made about AI have failed to become a reality. He wrote: “Confidence is wavering as researchers start to wonder whether the technology has hit a wall.”
When AI gets it wrong
With the proliferation of machine learning and predictive analysis comes the increased likelihood of AI getting things wrong. This can stem from poor data sources, discriminatory algorithms, or outright errors. As use skyrockets and mistakes abound, liabilities arising from the use of algorithmic decision-making are increasing, according to a report this week on Law.com.
For instance, automated tenant screening systems in the United States (U.S.) were found to be plagued by inaccuracies. As reported by The Markup, one system cast such a wide net for negative information that it lumped a hapless applicant together with at least four other people with similar names across the country – including an inmate currently in jail.
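How does a screening system conflate distinct people? One plausible mechanism, sketched below with entirely invented names and records, is matching on criteria that are far too loose – say, last name plus first initial – so that records belonging to strangers get attributed to an innocent applicant:

```python
# A hypothetical illustration of over-broad record matching. A screening
# system keyed only on (last name, first initial) will pull in records
# belonging to entirely different people. All names/records are invented.

def loose_key(name):
    """Reduce a full name to (last name, first initial) - far too coarse."""
    first, last = name.lower().split()
    return (last, first[0])

def screen(applicant, criminal_records):
    """Return every record whose loosely-keyed name matches the applicant."""
    key = loose_key(applicant)
    return [r for r in criminal_records if loose_key(r["name"]) == key]

records = [
    {"name": "James Smith", "state": "TX", "offense": "burglary"},
    {"name": "Jenna Smith", "state": "FL", "offense": "fraud"},
    {"name": "John Smith",  "state": "NY", "offense": "assault"},
]

# "Jane Smith" has no record at all, yet matches all three entries above.
hits = screen("Jane Smith", records)
print(len(hits))  # 3 false positives attributed to an innocent applicant
```

Real screening products are more sophisticated than this sketch, but the failure mode is the same: optimizing for recall of negative information at the expense of precision shifts the burden of proof onto the person being screened.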
While some have managed to convince potential landlords that they are trustworthy, it can be next to impossible to demonstrate that an automated system is producing erroneous results. And AI-based systems are increasingly used in areas such as recruitment and credit risk assessment, potentially magnifying the impact of mistakes.
Have a plan
For businesses relying on AI-based systems across their operations, the potential exists for millions of dollars in lawsuits and fines. From hackers attempting to manipulate the algorithm to algorithmic misbehavior due to problems in the training process, businesses need to come up with plans for when AI causes harm.
For now, authorities around the world are taking notice. In a document filed just last month, the U.S. Federal Trade Commission (FTC) wrote: “With the proliferation of machine learning and predictive analytics, the FTC should make use of its unfairness authority to tackle discriminatory algorithms and practices in the economy.”
Some, like the Monetary Authority of Singapore (MAS), are not waiting. As we reported earlier this month, the MAS is working with partners to develop a framework to ensure that AI-based systems employed to market financial products to customers are used appropriately.
This will see the development of quantifiable metrics that financial institutions can use to assess the extent to which the data, algorithms, and models used for credit scoring in unsecured lending meet the principles of fairness.
To be clear, AI is far too widely deployed today to just fizzle out. And many of the hiccups can be overcome with proper legislation and better algorithms. However, it may be worth noting that AI is not the be-all and end-all technology of the future that it was marketed to be.
Photo credit: iStockphoto/Maxwell Grover