Glue on pizza and eating rocks: Notable flaws in Google’s new AI search engine


This article was last updated on May 24, 2024



Google’s new AI search engine gives completely wrong answers to some questions. This is evident from examples of the ‘AI Overviews’ function circulating on social media. The tech giant calls these exceptions.

The idea of AI Overviews is that a user can ask more complex, multi-part questions at once and receive a reliable answer. Last Tuesday the feature was billed as the biggest change to the search engine in years. It does not yet work in the Netherlands.

For the search query ‘cheese doesn’t stick to pizza’, the AI suggested adding glue to the sauce to help the cheese stick better. The text appears to be sourced from an eleven-year-old post on the internet forum Reddit. Google and Reddit reached an agreement a few months ago, reportedly worth $60 million, that allows the search giant to draw on Reddit’s data.

Obama Muslim?

In another example, the question is asked how many presidents with a Muslim background America has had. The answer: one, Barack Obama. That is not true; a White House spokesman emphasized in 2010 that this was a lie spread by opponents, after a poll showed that one in five Americans actually believed it.

In another example, the question is asked how many rocks someone should eat per day. The answer: according to an American university, at least one small rock a day would be a good idea. For this answer, Google appears to have used information traceable to the satirical news site The Onion.

Let Google google

It is nothing new that AI systems regularly say things that are incorrect. In fact, OpenAI explicitly warns ChatGPT users about this before they ask it a question.

The point is that Google wants people to trust the company. “With AI Overviews, Google does the work for you,” said search engine chief Liz Reid during last week’s presentation. “Instead of trying to gather all the information yourself, you can ask questions.” She highlighted why Google can do this well, including “an unparalleled ranking and quality system that has been trusted for decades to give you the best of the web.”

So there is a lot at stake for Google, especially since it is making changes to its most important product on the internet – the search engine – which generates many billions of dollars in profit for the company every year.

Moreover, it is not the first time that an AI tool from Google has blundered. Last year, a promotional GIF for the chatbot Bard (which no longer exists) contained an incorrect answer, to which the stock market reacted in panic with a sharp drop in the share price. Later, an image generator misrepresented historical facts.

‘Very rare’

The NOS has asked Google whether it knows how often these types of wrong answers occur, what it can do about it and whether it temporarily pauses AI Overviews to solve these types of problems. The search giant only shared a general response emphasizing that the examples are “very rare and not representative of most people’s experience.”

The company states that the “vast majority” of AI Overviews are of high quality and that the features have been “intensively tested” before launch. Google says it plans to use these “isolated examples” to further refine the system.

It is therefore unclear how many such examples there are and in how many cases the tech company has intervened. When asked, a spokesperson added that Google handles “billions” of searches worldwide every day and that on average 15 percent of them are new.
