Meta releases BlenderBot 3 on the web, its most capable conversational AI to date


More than half a decade has passed since Microsoft's monumental Tay debacle, but the incident still stands as a stark reminder of how quickly an AI can be corrupted by exposure to the internet's potent toxicity, and as a warning against building bots without sufficiently strong behavioral guardrails. On Friday, Meta's AI Research division will see whether the latest iteration of its BlenderBot AI can withstand the horrors of the interwebs with the release of a public demo of the 175-billion-parameter BlenderBot 3.

This is one of the main obstacles currently facing chatbot development (as well as the natural language processing models that drive it). Traditionally, chatbots are trained in highly curated environments (because otherwise you inevitably get another Tay), but that limits the topics they can discuss to the specific ones available in the lab. Conversely, you can let a chatbot pull information from the web to gain access to a broad range of topics, but it can, and probably will, go full Nazi at some point.

“Researchers cannot predict or simulate every conversational scenario in research settings alone,” the Meta AI researchers wrote in a blog post Friday. “The field of AI is still a long way from truly intelligent AI systems that can understand, communicate and converse with us like other humans.” To build models that adapt better to real-world environments, they argue, chatbots need to learn from a wide, diverse set of people “in the wild.”

Meta has been working on this problem since it first introduced BlenderBot 1 in 2020. Initially little more than an open-source NLP experiment, by the following year BlenderBot 2 had learned both to remember information it had discussed in previous conversations and to search the web for additional details on a given topic. BlenderBot 3 takes these capabilities a step further by evaluating not only the data it pulls from the web, but also the people it talks with.

When a user flags an unsatisfactory response from the system (currently about 0.16 percent of all responses), Meta works that feedback back into the model so it avoids repeating the mistake. The system also employs the Director algorithm, which first generates a response from its training data, then runs that response through a classifier to check whether it falls within a scale of right and wrong defined by user feedback.
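In outline, that "generate, then classify" loop looks something like the sketch below. This is a minimal illustration of the idea as the article describes it, not Meta's actual code; the function names, the toy scoring scheme, and the fallback behavior are all assumptions.

```python
# Minimal sketch of a Director-style "generate, then classify" step.
# All names and the scoring scheme are illustrative assumptions,
# not Meta's implementation.

from dataclasses import dataclass

@dataclass
class Candidate:
    text: str
    lm_score: float  # likelihood of this reply under the language model

def feedback_classifier(text: str) -> float:
    """Stand-in for a classifier trained on user feedback (1.0 good, 0.0 bad)."""
    flagged_terms = {"toxic", "repetitive"}  # toy proxy for learned behavior
    return 0.0 if any(term in text for term in flagged_terms) else 1.0

def pick_response(candidates: list[Candidate], threshold: float = 0.5) -> str:
    # Keep only replies the feedback classifier judges acceptable...
    acceptable = [c for c in candidates if feedback_classifier(c.text) >= threshold]
    # ...falling back to the full candidate pool if everything was rejected.
    pool = acceptable or candidates
    # Among the survivors, defer to the language model's own ranking.
    return max(pool, key=lambda c: c.lm_score).text

replies = [
    Candidate("a toxic but fluent reply", lm_score=0.9),
    Candidate("a helpful, on-topic reply", lm_score=0.7),
]
print(pick_response(replies))  # -> "a helpful, on-topic reply"
```

The key property the paragraph describes survives even in this toy version: the most fluent reply loses to a less fluent one whenever the feedback-trained classifier vetoes it.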

“To generate a sentence, the language modeling and classification mechanisms must agree,” the team wrote. “Using data that indicates good and bad responses, we can train the classifier to penalize low-quality, toxic, contradictory or repetitive statements, and statements that are generally unhelpful.” The system also employs a separate user-weighting algorithm to detect unreliable or malicious responses from the human speaker, essentially teaching the system not to trust what that person says.
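A crude way to picture that user-weighting idea: score each rater's reliability, then let that score scale how much their feedback counts. The sketch below is a hypothetical illustration; the weighting rule and every name in it are assumptions, not the algorithm Meta describes.

```python
# Hypothetical sketch of down-weighting feedback from unreliable users.
# The weighting rule and all names are illustrative, not Meta's code.

def trust_weight(helpful_votes: int, troll_flags: int) -> float:
    """Crude per-user reliability score in [0, 1]."""
    total = helpful_votes + troll_flags
    return helpful_votes / total if total else 0.5  # neutral prior for new users

def weighted_label(feedback: list[tuple[float, int]]) -> float:
    """Aggregate binary good/bad labels, scaled by each rater's trust weight."""
    numerator = sum(weight * label for weight, label in feedback)
    denominator = sum(weight for weight, _ in feedback)
    return numerator / denominator if denominator else 0.5

raters = [
    (trust_weight(40, 2), 1),   # reliable user says the reply was good
    (trust_weight(1, 30), 0),   # likely troll says it was bad
]
print(round(weighted_label(raters), 2))  # dominated by the trusted rater
```

Under this scheme a coordinated group of bad-faith raters gains little leverage, since their low trust weights shrink their contribution to the aggregate label.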

“Our live, interactive, public demo enables BlenderBot 3 to learn from organic interactions with all kinds of people,” the team wrote. “We encourage adults in the United States to try the demo, conduct natural conversations about topics of interest, and share their responses to help advance research.”

BB3 is expected to speak more naturally and conversationally than its predecessor, thanks in part to its vastly upgraded OPT-175B language model, which is roughly 60 times larger than BB2's. “We found that, compared to BlenderBot 2, BlenderBot 3 provides a 31 percent improvement in overall rating on conversational tasks, as judged by humans,” the team said. “It is also judged to be twice as informative, and factually incorrect 47 percent less often. Compared to GPT-3, it is judged more relevant 82 percent of the time and more specific 76 percent of the time.”

