Google Scrambles To Meet ChatGPT Threat
- By Paul Mah
- January 04, 2023
Google is scrambling in response to the recent launch of ChatGPT, an AI tool capable of generating human-like responses that are difficult to distinguish from those made by a real person.
ChatGPT, the latest evolution of OpenAI’s GPT-3, was released on 30 November to gather feedback from the public.
According to a report in the New York Times, which reviewed an internal memo and audio recording, Google CEO Sundar Pichai has called for a refocusing of efforts within the company to address the potential threat to its search engine business posed by ChatGPT.
The report claims that various teams have been directed to work on the development and launch of AI prototypes and products, and further suggests that Google will make a series of AI announcements in May.
Search engine of the future
As I wrote in December, we might stop using search engines directly in the future. Instead, AI agents would gather relevant information and links from them and intelligently summarize the results, giving users the precise information they need without having to wade through multiple pages of search results.
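To make the idea concrete, here is a minimal sketch of that pattern: a search backend supplies candidate pages, and a language model condenses them into a single, sourced answer. The `fetch_search_results` and `llm_summarize` helpers are hypothetical placeholders standing in for whatever search API and LLM endpoint one might actually use.

```python
# Sketch of AI-mediated search: retrieve results, then summarize them with an LLM.
# Both helpers are stand-ins; in practice they would wrap a real search API
# and a real large language model.

def fetch_search_results(query: str) -> list[dict]:
    # Placeholder: a real implementation would call a search API
    # and return ranked pages with titles, URLs, and snippets.
    return [
        {"title": "Example page", "url": "https://example.com", "snippet": "..."},
    ]

def llm_summarize(prompt: str) -> str:
    # Placeholder: a real implementation would send the prompt to an LLM.
    return "A short answer synthesized from the snippets, with source URLs."

def answer(query: str) -> str:
    # Gather candidate pages, then ask the model to distill them into one answer.
    results = fetch_search_results(query)
    context = "\n".join(
        f"- {r['title']} ({r['url']}): {r['snippet']}" for r in results
    )
    prompt = (
        "Answer the question using only these search results, citing URLs:\n"
        f"{context}\n\nQuestion: {query}"
    )
    return llm_summarize(prompt)

print(answer("How do search engines rank pages?"))
```

The point of the sketch is that the user sees only the final summary; the search results page, and the ads on it, never appear.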
This poses a threat to the search giant's business model, given that users clicking on Google's ads generated USD208 billion in 2021 – 81% of Alphabet's overall revenue.
Paul Buchheit, the computer engineer and entrepreneur who created Gmail and developed the original prototype of Google AdSense, recently tweeted about the potential paradigm shift, comparing the Internet search engine to the Yellow Pages of the past.
“The Yellow Pages used to be a great business, but then Google got so good that everyone stopped using the yellow pages. AI will do the same thing to web search,” Buchheit wrote.
To be clear, ChatGPT suffers from various weaknesses. For one, it cannot fact-check what it says and cannot distinguish between a verified fact and misinformation. It is also prone to “hallucinations”, made-up responses generated out of thin air. The danger lies in its plausibility.
Even Sam Altman, CEO of OpenAI, cautioned against using ChatGPT for crucial work. He wrote in a tweet: “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it's a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.”
In my own use of ChatGPT, I have found that while its responses are well written, its answers to specific queries are often repetitive and limited. AI models also suffer from inherent fragility, which makes them susceptible to failing in completely unexpected ways.
Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].
Image credit: iStockphoto/Zephyr18