The State of AI in 2020 and Beyond

AI investors Nathan Benaich and Ian Hogarth have just released their latest annual State of AI Report, a comprehensive look at the technology, capabilities, talent, and financing around artificial intelligence.

This year's State of AI Report 2020 comes with a whopping 177 slides packed with updates and insights. We highlight a small handful of the points that caught our eye.

Momentum in AI growing

Momentum in AI is growing, but in most cases behind closed doors. According to the report, a mere 15% of AI papers publish their code. There are various possible reasons for this, including the use of that code in proprietary applications: "For the biggest tech companies, their code is usually intertwined with proprietary scaling infrastructure that cannot be released."

Open research code matters for accountability, reproducibility, and driving progress in AI; closed-source AI, by contrast, can also lead to the centralization of AI talent. Notable organizations that did not publish all of their code include OpenAI and DeepMind.

Among those that publish or cite the framework they use, it appears that Facebook's PyTorch is fast outpacing Google's TensorFlow in research papers. This is noteworthy as a leading indicator of production use down the line. For now, TensorFlow, Caffe, and Caffe2 remain the workhorses for production AI.

Practical, visible real-world implementations of AI are still some way off, however, with self-driving car mileage staying "microscopic" in 2019. Moreover, nations are also passing laws to let them scrutinize foreign takeovers of AI companies.

Barriers to entry

It is probably easier to get started with AI today than it was a few short years ago, thanks to the availability of tools and the maturity of infrastructure. But if you are training a new model like GPT-3, you will probably find it hard to catch up.

As noted in a report on ZDNet, the cost of training OpenAI's GPT-3 could run into the millions. Indeed, with 175 billion parameters, experts peg the training budget for GPT-3 at a whopping USD 10 million.
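That USD 10 million figure can be sanity-checked with a simple compute-cost estimate. The token count, the 6-FLOPs-per-parameter-per-token heuristic, and the per-petaflop/s-day price below are illustrative assumptions for this sketch, not figures from the report:

```python
# Rough back-of-envelope estimate of GPT-3 training cost.
# All figures below are illustrative assumptions, not official numbers.

PARAMS = 175e9               # 175 billion parameters (from the report)
TOKENS = 300e9               # assumed number of training tokens
FLOPS_PER_PARAM_TOKEN = 6    # common heuristic: ~6 FLOPs per parameter per token

total_flops = FLOPS_PER_PARAM_TOKEN * PARAMS * TOKENS

# Convert to petaflop/s-days: 1 pfs-day = 1e15 FLOP/s sustained for 86,400 s
pfs_days = total_flops / (1e15 * 86_400)

# Assumed blended cloud price per petaflop/s-day (hardware, power, utilization)
USD_PER_PFS_DAY = 3_000

cost_usd = pfs_days * USD_PER_PFS_DAY
print(f"~{pfs_days:,.0f} petaflop/s-days, ~USD {cost_usd / 1e6:.0f} million")
```

Under these assumptions the estimate lands in the ballpark of USD 11 million, which is consistent with the order of magnitude the experts suggest.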

New, innovative approaches might well reduce this steep barrier to entry, however. For instance, London-based PolyAI produced and open-sourced a conversational AI model that outperforms Google's BERT model in conversational applications. Crucially, PolyAI's model requires just a fraction of the parameters to train, which translates directly into a significantly lower cost.

How did that happen? Speaking to ZDNet, Benaich and Hogarth believe it boils down to having a thorough understanding of a specific domain and good engineering rigor, instead of relying on sheer brute force. If anything, this is what will open the door to AI for more innovators, who can theoretically make further breakthroughs even in "tried and tested" areas.

Predictions about AI

In its concluding pages, the authors outline eight predictions for the next 12 months. Their 2019 report got four of its six predictions right, one wrong, and a tie on the final one. It will certainly be interesting to see how the latest predictions fare 12 months from now.

Here are three of them:

  • The race to build larger language models continues and we see the first 10-trillion parameter model.
  • A major corporate AI lab shuts down as its parent company changes strategy.
  • NVIDIA does not end up completing its acquisition of ARM.

You can download the full State of AI Report 2020 here.

Photo credit: iStockphoto/onlyyouqj