Google’s recent integration of artificial intelligence into its search engine has raised eyebrows due to the strange and often inaccurate answers appearing in search results. While some speculated that these results were AI hallucinations, Google clarified that they stemmed largely from misinterpreted search queries.
At its recent I/O Developers Conference, Google showcased its AI Overview feature, which uses its Gemini AI model to present generated answers at the top of search results. In the weeks since, however, bizarre results emerged, such as suggesting that people with depression jump off a bridge or advising users to stare at the sun for fifteen minutes.
According to Google, these peculiar answers arose from a combination of factors. The company addressed social media posts highlighting the inaccurate responses and noted that some of the circulating screenshots were fake. For the genuine errors, Google pointed to so-called data voids: topics for which too little reliable information exists online for the AI to generate accurate responses.
In other instances, Google admitted that the AI misinterpreted the language of a query, causing incorrect information to be displayed. While the AI Overview feature aims to provide relevant answers drawn from the top search results, it can falter when those results include ambiguous or sarcastic content from internet forums.
Despite the hiccups, Google emphasized that the inaccurate answers were not the product of hallucination but rather of limitations in the AI’s source data and the nuances of internet language. Moving forward, the tech giant aims to refine the AI Overview feature to minimize such discrepancies and give users more reliable search results.