
Many people underestimate the frequency of “hallucinations” in chatbots.

When the San Francisco start-up OpenAI unveiled its ChatGPT online chatbot late last year, millions of people were impressed by its human-like way with language. But it soon became clear that the system routinely makes up information, and the same problem surfaced in Google's chatbot and Microsoft's Bing chatbot, raising concerns about the accuracy of anything these systems say.

Vectara, a new start-up founded by former Google employees, set out to measure how often chatbots invent information. Its research suggests that even in tightly controlled scenarios, chatbots fabricate information between 3 percent and 27 percent of the time, depending on the system. This "hallucination" behavior is especially worrying when the technology is applied to sensitive material such as court documents or medical records, and there is still no definitive way to measure how often it occurs. To keep its test simple, Vectara asked the chatbots to summarize news articles, a task that leaves little room for invention, yet the systems persistently introduced information that was not in the source text.

The rates also vary widely by company: OpenAI's chatbots had the lowest hallucination rate, around 3 percent, while a Google system called Palm chat had the highest, at 27 percent. Vectara hopes the findings will raise awareness of the problem and spur industry-wide efforts to reduce hallucinations. OpenAI and Google are already working on the issue, but it remains to be seen whether it can ever be eliminated entirely.
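The measurement idea described above, comparing a chatbot's summary against the source document it was asked to summarize, can be roughly approximated with off-the-shelf tools. The sketch below is a minimal illustration, not Vectara's actual benchmark: it assumes a generic natural-language-inference model from Hugging Face (microsoft/deberta-large-mnli) as the consistency judge, and the sample document/summary pairs and helper names (summary_is_supported, hallucination_rate) are hypothetical.

```python
# Hypothetical sketch of a hallucination-rate measurement: a summary is
# flagged as ungrounded unless an NLI model says the source entails it.
from transformers import pipeline

# Any model fine-tuned on MNLI can serve as a rough consistency judge.
nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

def summary_is_supported(source: str, summary: str) -> bool:
    # Treat the source as the NLI premise and the summary as the hypothesis;
    # count the summary as grounded only if the model predicts entailment.
    result = nli([{"text": source, "text_pair": summary}])[0]
    return result["label"] == "ENTAILMENT"

def hallucination_rate(pairs) -> float:
    # Fraction of (source, summary) pairs whose summary is not entailed
    # by its source document.
    flagged = sum(1 for src, summ in pairs if not summary_is_supported(src, summ))
    return flagged / len(pairs)

# Invented example data: one faithful summary, one with an added detail.
pairs = [
    ("The company reported revenue of $3 million in 2023.",
     "The company earned $3 million in revenue in 2023."),
    ("The company reported revenue of $3 million in 2023.",
     "The company earned $3 million and opened ten new offices in 2023."),
]

print(f"Estimated hallucination rate: {hallucination_rate(pairs):.0%}")
```

On the toy pairs above, the second summary's invented claim about new offices should fail the entailment check, giving an estimated rate of 50 percent; real benchmarks like Vectara's use many documents and model-generated summaries rather than hand-written examples.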
