Opinion: Why AI Faces a Human Crisis

We are approaching the era of Strong AI, or artificial general intelligence (AGI): AI that can perform any intellectual task a human can. Futurist Ray Kurzweil predicts that human-like AI programs will pass the Turing Test by 2029.

Strong AI evokes fear. Many believe it will be an all-knowing, calculating, cold and methodical overmind. That thinking is largely a product of Hollywood: in WarGames, the AI played a game to win at all costs; in Terminator, Skynet preemptively eliminated unreliable humans; in I, Robot, the AI wanted to micro-manage humans for the sake of sustaining the planet; and in The Matrix, humankind's usefulness was reduced to that of a battery.

In reality, AI supplanting humanity should not be our biggest concern. Instead, we should look at how we are developing and training AI in the first place -- and that is far from a perfect science.

Below are the top issues from my perspective:

Data is Food

The more data you feed an AI system, the better it learns and performs. This is where firms like Alibaba, Baidu, Facebook, Google, IBM, and Tencent have an advantage: they own treasure troves of data. Startups, by contrast, need access to that data. They may have brilliant AI ideas and strong algorithms, but they need large amounts of data to help their AI learn. The problem only gets worse in areas where data is restricted, such as healthcare and the military, which can lead to biased or erroneous outcomes.

Learning Needs a Rethink

Many AI algorithms still rely on supervised and reinforcement learning techniques. Supervised learning requires vast amounts of labeled examples; reinforcement learning needs the AI to die a thousand deaths before it finds the best solution -- something human intuition lets us skip. It is why many of today's advanced AI programs excel at problems with large data sets yet cannot correctly identify a mechanical pencil 100% of the time.
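To make that contrast concrete, here is a toy sketch in Python. The "slot machine" task, payout numbers, and exploration rate are all invented for illustration; it is not drawn from any production system, but it shows how a reinforcement-style learner burns through thousands of trials to discover what a supervised learner would simply be told via labels.

```python
import random

# Invented toy example: an epsilon-greedy agent must pull each slot machine
# many times before its reward estimates converge -- the "thousand deaths"
# of reinforcement learning. A supervised learner would be handed the
# answers directly as labeled examples.

true_payouts = [0.2, 0.5, 0.8]      # hidden reward probability of each arm
estimates = [0.0, 0.0, 0.0]         # the agent's running estimate per arm
pulls = [0, 0, 0]

random.seed(0)
for step in range(5000):            # thousands of trials, most of them "wasted"
    if random.random() < 0.1:       # explore 10% of the time
        arm = random.randrange(3)
    else:                           # otherwise exploit the current best guess
        arm = max(range(3), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_payouts[arm] else 0.0
    pulls[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]   # incremental mean

print("estimated payouts:", [round(e, 2) for e in estimates])
```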

Biased Data

Data can be biased. It is why Microsoft's Tay spewed hate speech after learning from Twitter, and why Google Photos incorrectly classified African Americans as gorillas. So, who determines whether the data used to teach AI is unbiased? That is a difficult question to answer in a commercial lab. Here, hard de-biasing techniques are beginning to show promise.
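At its core, hard de-biasing (as proposed by Bolukbasi and colleagues for word embeddings) estimates a bias direction and projects it out of words that should be neutral. The four-dimensional vectors below are made-up stand-ins for real embeddings, which have hundreds of dimensions, so treat this as a sketch of the idea rather than a working system.

```python
import numpy as np

def debias(vector, bias_direction):
    """Remove the component of `vector` that lies along `bias_direction`."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vector - np.dot(vector, b) * b

# Hypothetical embeddings, invented for illustration.
he = np.array([1.0, 0.2, 0.0, 0.3])
she = np.array([-1.0, 0.2, 0.1, 0.3])
engineer = np.array([0.6, 0.5, 0.4, 0.1])   # a word that should be gender-neutral

bias_direction = he - she                    # crude single-pair estimate of the bias axis
print(debias(engineer, bias_direction))      # the bias component is projected out
```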

Adversarial Programming

How we see and interpret data is vastly different from how an AI program does, and that difference can be exploited. Ian Goodfellow, the inventor of Generative Adversarial Networks (GANs), showed that you can fool neural networks by mathematically altering images. The altered images may look nearly identical to humans, but a deep learning program may see them as entirely different. That can change the actions such a program decides to take -- say, when avoiding a car collision -- and raises the danger of AI systems being hijacked by hackers. AI programs need to defend against adversarial threats that humans shrug off without effort.
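The attack Goodfellow described, the fast gradient sign method, nudges every pixel slightly in the direction that most increases the model's error. The sketch below applies that idea to a made-up linear scorer rather than a real image network, so the model, loss, and numbers are assumptions for illustration only; real attacks compute the gradient through the full neural network.

```python
import numpy as np

def fgsm_perturb(image, weights, label, epsilon=0.01):
    """Return an adversarial copy of `image` against a toy linear scorer w.x."""
    score = np.dot(weights, image)
    # Gradient of a hinge-style loss with respect to the input pixels.
    grad = -label * weights if label * score < 1 else np.zeros_like(weights)
    return image + epsilon * np.sign(grad)   # tiny, near-invisible per-pixel change

rng = np.random.default_rng(0)
image = rng.random(784)                      # stand-in for a 28x28 grayscale image
weights = rng.standard_normal(784)           # stand-in for a trained model
adversarial = fgsm_perturb(image, weights, label=1)
print(np.abs(adversarial - image).max())     # each pixel shifts by at most 0.01
```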

Social Engineering and Humanities

The way we look at the world is not shaped by science alone. Art and literature play a significant role, as do our daily interactions with one another. Yet many AI development teams have minimal representation from these fields -- especially when AI development talent itself is difficult to find and expensive to retain. AI development needs to become multi-disciplinary, drawing in the humanities and the arts.

Thankfully, researchers are actively working on new techniques to address these problems. But it is essential that we tackle them before we allow more AI programs to run and assist in our lives. If they end up supplanting humans, it will not be their fault; it will always be ours.