“It’s not something you can program. You can’t put it into a chip.”
Much has changed since Arnie and his cyborg friends stormed LA in The Terminator. And while we’re yet to experience havoc quite at the levels seen in the 1984 hit, artificial intelligence is a reality in our lives today.
Autonomous vehicles were mooted as early as the 1920s and, hot on the heels of real-life ‘Iron Man’ Elon Musk, car manufacturers are working tirelessly to stamp their badge on the first mainstream models. And as tech giants like Uber and Baidu invest heavily in AI research, many proclaim that their cars are safer and more efficient than those driven by humans.
While Musk predicts self-driving cars will be widely available within a couple of years, several issues need addressing before consumers will take their eyes off the road: from the vehicles’ ability to navigate frenetic city centers to their capacity to make critical moral choices in potentially dangerous situations.
We’re also beginning to see the emergence of AI across the financial services industry. Chinese insurance providers are leading the field, with Zhong An one of the key success stories: it uses AI for real-time pricing of products, underwriting, fraud detection and customer service. It even offers a flight insurance product that can be purchased 15 minutes before a flight, priced using real-time flight and weather data and passenger information. Ping An, another Chinese insurance behemoth, has used its own ‘Brain’ platform to save USD 302 million (RMB 2 billion) in fraudulent claims, with an impressive 78 percent detection accuracy versus 21 percent ‘pre-Brain.’
But one vital question remains unanswered: if we entrust our cars and insurance policies to machines, who pays the price when it all goes wrong? As machines increasingly power our decisions and grow more independent, should humans shift their focus to understanding why machines take specific actions – and if so, do we become guardians overseeing these new robot decision makers?
These risks played out starkly, under the high-profile glare of the open road, in a recent accident involving an autonomous Uber vehicle. Although the case was settled, actual liability was unclear. A Stanford Law professor suggested that blame could lie with Uber for the safety driver’s negligence; that Uber could also be held accountable as the manufacturer under product liability, since the AI failed to identify the individual; and even that fault could rest with the pedestrian who was struck, who made the accident ‘unavoidable’ by walking where she did.
In insurance, there are emerging concerns too – that machines may inaccurately assess risk or process claims based on flawed training data or human bias in the models themselves. The Financial Stability Board flagged concerns over existing prejudices influencing the building of the models that power automated pricing, underwriting and claims processes. IBM echoed these sentiments, stating that it will become key to "…train the teams working with [AI] to understand bias, including implicit and unconscious bias, monitor for it, and know how to address it."
The EU is beginning to explore this, aiming to ensure that AI decisions are both explainable and regulated through its ‘AI for good for all’ framework. However, the unexpected outcomes arising from AI will likely go well beyond the EU’s framework, and as AI matures and machines become capable of creating and optimizing their own models, the finger of blame might eventually rest on the robots themselves. Once self-aware, or at least able to understand their actions, will they willingly trade their freedom to serve time for their crimes?
For all the advances in AI, it still feels like it will be some time before robots feel guilt or remorse, so society doesn’t need a dystopian cyborg jail just yet. And while punishing these machines might not be the answer, clipping their wings could hinder their output in the short term, forcing their creators and dependents to slow their pace of development.
It’s clear that legal policy must catch up – and that in sectors with frayed consumer trust, retrospective measures won’t be enough. As AI touches more households, the price we will pay if technological intervention goes awry will be severe. Musk has said this himself, stating: “AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”
AI will only work to its full potential with accurate data, bias-free models, and regulation robust enough to drive healthy outcomes – outcomes that will hopefully benefit everybody. However, getting there depends on collaboration between regulators, industry leaders and the data scientists behind these innovations. And it could be that these alliances create the dominant force we need to keep the robots in check – and ensure we don’t see orange-clad T-1000s in the near future!
Patrick Milburn, managing director, Hong Kong at Mezzo Labs contributed to this article.
The views and opinions expressed in this article are those of the author and do not necessarily reflect those of CDOTrends.