Meta AI’s New Chatbot Goes ‘Bad’ in Days
- By DSAITrends editors
- August 10, 2022
Meta AI has built and unveiled BlenderBot 3, a 175-billion-parameter chatbot that it has made publicly available, complete with model weights, code, datasets, and model cards.
“BlenderBot 3 delivers superior performance because it’s built from Meta AI’s publicly available OPT-175B language model — approximately 58 times the size of BlenderBot 2,” said Meta in an announcement on Friday.
Unlike its predecessor, BlenderBot 3 can search the Internet to chat about almost any topic. Moreover, it can learn and improve its skills and safety through natural conversations and feedback from people in the real world. By contrast, Meta claims, most datasets are collected through research studies that “can’t reflect the diversity of the real world”.
On this front, Meta incorporated two recently developed machine learning techniques to build conversational models that learn from interactions and feedback, and devised new methods to keep learning while resisting users who attempt to trick the bot into unhelpful or toxic responses.
“[Not] all people who use chatbots or give feedback are well-intentioned. Therefore, we developed new learning algorithms that aim to distinguish between helpful responses and harmful examples,” explained Meta.
The model looks at a user’s behavior across entire conversations to decide whether to trust that user, and will either filter out or down-weight feedback it deems suspicious.
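The trust-based weighting Meta describes could be sketched roughly like this. This is a minimal illustration only; the class, scoring function, and threshold below are assumptions for the sake of the example, not Meta's published algorithm:

```python
# Hypothetical sketch of trust-based feedback weighting. Names,
# smoothing, and the threshold are illustrative assumptions, not
# Meta's actual implementation.
from dataclasses import dataclass


@dataclass
class UserTrust:
    helpful: int = 0   # feedback judged consistent and helpful
    harmful: int = 0   # feedback flagged as trolling or toxic

    def score(self) -> float:
        # Laplace-smoothed fraction of helpful feedback across all of
        # this user's conversations; new users start near 0.5.
        return (self.helpful + 1) / (self.helpful + self.harmful + 2)


def weight_feedback(trust: UserTrust, threshold: float = 0.3) -> float:
    """Return a training weight for a user's feedback: 0.0 (filtered
    out) if trust falls below the threshold, otherwise down-weighted
    in proportion to the trust score."""
    s = trust.score()
    return 0.0 if s < threshold else s
```

Under this sketch, a user whose feedback has repeatedly been flagged as harmful is filtered out entirely, while a mostly-helpful user's feedback is kept but scaled by their trust score.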
Oops! I did it again
As part of Meta’s efforts to improve BlenderBot 3, a live demo was put online to demonstrate how it can converse naturally with humans while also providing the feedback it needs to improve.
The initial announcement and live demo took place last Friday. Within days, things had gone bad, much like Microsoft’s ill-fated Tay.
According to Mashable, the bot has already described Meta CEO Mark Zuckerberg as "too creepy and manipulative", asserted that Trump won the election and "will always be" president, and touted an anti-Semitic conspiracy theory.
For now, the team has added a new disclaimer attributed to Joelle Pineau, the managing director of Fundamental AI Research at Meta, dated August 8.
“While it is painful to see some of these offensive responses, public demos like this are important for building truly robust conversational AI systems and bridging the clear gap that exists today before such systems can be productionized.”
On the bright side, it appears that across the 70K conversations with BlenderBot 3 so far, just 0.11 percent of its responses were flagged as inappropriate, 1.36 percent as nonsensical, and 1 percent as off-topic. So perhaps it is not so bad after all.
You can check out the live demonstration here (unfortunately, US-only for now).
Image credit: iStockphoto/Mikhail Konoplev