The pace of development in the field of artificial intelligence (AI) is picking up, as technology giants, research scientists, and enterprises jump into the field. And with increasingly powerful new AI hardware on the way, AI is here to stay.
The rapid-fire developments can be hard to keep track of, though. To help you along, here are some notable AI-centric developments and events from the past week.
Clearing traffic jams using AI
Hate traffic jams? Me too. It turns out an Israeli AI firm thinks it has solved the problem with AI. At the recent EcoMotion showcase in Tel Aviv, Intelligent Traffic Control (ITC) demonstrated an AI solution it developed to distill pesky jams into “mathematical” problems that are eminently solvable.
Deployed at two traffic junctions, the system brought about a 30 percent drop in congestion. It works by collecting real-time data from road cameras and sending instructions to traffic lights to smooth traffic flow as necessary.
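ITC has not published its algorithm, but adaptive signal control generally follows a sense-decide-actuate loop: read queue lengths from cameras, then reallocate green time toward the busiest approaches. A minimal sketch of that idea (all function names, thresholds, and data entirely hypothetical, not ITC's system):

```python
# Illustrative sketch of adaptive traffic-signal timing.
# NOT ITC's algorithm -- all names and numbers are hypothetical.

def plan_green_times(queues, cycle_s=90, min_green_s=10):
    """Split one signal cycle's green time across approaches,
    proportionally to their queue lengths, with a guaranteed minimum."""
    total = sum(queues.values())
    if total == 0:
        # No queues anywhere: share the cycle evenly.
        share = cycle_s / len(queues)
        return {approach: share for approach in queues}
    spare = cycle_s - min_green_s * len(queues)
    return {
        approach: min_green_s + spare * count / total
        for approach, count in queues.items()
    }

# Example: cameras report queued vehicles per approach at one junction.
queues = {"north": 12, "south": 4, "east": 2, "west": 2}
plan = plan_green_times(queues)
print({k: round(v, 1) for k, v in plan.items()})
# The busy northern approach gets the largest slice of the 90 s cycle.
```

A real deployment would of course re-plan continuously and coordinate neighboring junctions, which is exactly the scalability question raised below.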
"ITC managed to prove mathematically that many traffic jams can be prevented – if you intervene early enough,” said ITC co-founder and chief technology officer Dvir Kenig in an interview with AI-Monitor.
Of course, it is not clear whether traffic conditions at surrounding junctions are left worse off, or whether the system is scalable across an entire city. Still, the prospect of not being stuck in traffic, and the corresponding reduction in greenhouse gas emissions – ITC says the average driver spends three days a year stuck in traffic – probably makes this a solution worth considering for cities in Asia.
A call to data-centric AI
Implementing AI in your organization? To succeed, organizations need to systematically engineer the data used to build their AI systems, says AI pioneer Andrew Ng. Ng, a co-founder of Google Brain, has long argued that massive data sets are not needed for success.
Ng repeated his call at a recent conference hosted by MIT Technology Review, encouraging organizations to focus on high-quality, consistently labeled data to unlock the value of AI. However, he acknowledged that the concept of “data-centric AI” is still a new idea that is being discussed by experts and outlined some common challenges that must first be addressed.
For a start, ensuring quality labeling is easier said than done: even experts might disagree about the state of a product or the categorization of a manufacturing defect. This ambiguity can confuse an AI system trained on those labels.
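One common (if partial) remedy is to collect several labels per example, keep only the ones annotators broadly agree on, and route the rest to an expert for review. A minimal sketch of that consolidation step (the function and data are hypothetical, not from Ng's talk):

```python
# Hypothetical sketch: majority-vote label consolidation with an
# agreement threshold, to surface ambiguous examples for review.
from collections import Counter

def consolidate_labels(annotations, min_agreement=2 / 3):
    """Majority-vote each example's labels; mark the result as confident
    only when enough annotators agreed, else flag it for expert review."""
    results = {}
    for example_id, labels in annotations.items():
        top_label, votes = Counter(labels).most_common(1)[0]
        confident = votes / len(labels) >= min_agreement
        results[example_id] = (top_label, confident)
    return results

# Three inspectors label the same defect photos.
annotations = {
    "img_01": ["scratch", "scratch", "scratch"],    # unanimous
    "img_02": ["scratch", "dent", "discoloration"], # ambiguous -> review
}
print(consolidate_labels(annotations))
```

The flagged disagreements are often the most valuable output: they pinpoint where the labeling instructions themselves are ambiguous and need tightening.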
Moreover, certain sectors such as healthcare and manufacturing might not generate much usable data either. For instance, there might not be many X-rays of a rare medical condition, or a factory might only make a handful of defective units of a specific product – which can make it difficult to train an AI model the traditional way.
AI-powered hate speech
Bias, toxic language, and hate speech are known weaknesses of large language models, and active effort is needed to mitigate them. Indeed, OpenAI reportedly relied on human contractors to manually clean up responses from GPT-3, though it is not known whether this is a recurring arrangement.
But what if you train an AI model specifically to be a hate speech machine? That’s what AI researcher Yannic Kilcher did, training a model on 3.3 million threads from 4chan’s infamously toxic “Politically Incorrect” board.
Once trained, “GPT-4chan” was unleashed back onto the board via nine bots set to post over a 24-hour period. According to Kilcher, the bots made some 15,000 toxic posts, accounting for more than 10 percent of all new content on the board that day.
As if human trolls and conspiracy theorists were not enough, it turns out that AI can now be used to generate harmful content at a massive, sustained scale. Unsurprisingly, the move drew concern and criticism from ethicists and researchers in the field.
I think Os Keyes, a Ph.D. candidate at the University of Washington, summed it up best to Motherboard: “Some people just want to be edgy out of an insecure need for attention. Most of them use 4chan; some of them, it seems, build models from it.”
One more thing
Finally, who is liable for harmful outputs from AI systems? And what do you do when an AI system generates an untruth about you – such as when the Google search engine at one point erroneously suggested that security researcher Marcus Hutchins created the WannaCry virus?
In fact, Hutchins stopped the malware in its tracks when he spotted a hidden “kill switch” and registered the domain that triggered it. But the way the search engine presented results for his name alongside “WannaCry” made it easy to conclude that he wrote the malware – which has apparently led to an increase in hate mail.
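The kill switch itself was remarkably simple: WannaCry tried to contact a hardcoded, unregistered domain and continued spreading only if the lookup failed, so registering the domain effectively switched it off. A harmless sketch of that check (the domain below is a fake placeholder, not the real one):

```python
# Simplified, harmless illustration of a DNS-based kill-switch check.
# The domain is a deliberately unresolvable placeholder, NOT WannaCry's.
import socket

KILL_SWITCH_DOMAIN = "kill-switch-placeholder.invalid"

def kill_switch_active(domain):
    """Return True if the domain resolves, i.e. someone has registered it."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

if kill_switch_active(KILL_SWITCH_DOMAIN):
    print("Domain resolves -- the malware would exit here.")
else:
    print("Domain unreachable -- the malware would have kept spreading.")
```

By registering the real domain, Hutchins made the lookup succeed worldwide, halting infections – a one-line fix to a global outbreak.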
You can catch Hutchins complaining about it on TikTok. Food for thought indeed.
Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].
Image credit: iStockphoto/jiefeng jiang