Cats on the moon? Google's AI tool is producing misleading responses that have experts worried

FILE - Alphabet CEO Sundar Pichai speaks at a Google I/O event in Mountain View, Calif., May 14, 2024. Bloopers -- some funny, others disturbing -- have been shared on social media since Google unleashed a makeover of its search page that frequently puts AI-generated summaries on top of search results. (AP Photo/Jeff Chiu, File)

Ask Google if cats have been on the moon and it used to spit out a ranked list of websites so you could discover the answer for yourself.

Now it comes up with an instant answer generated by artificial intelligence -- which may or may not be correct.

"Yes, astronauts have met cats on the moon, played with them, and provided care," Google's retooled search engine said in response to a query by an Associated Press reporter.

It added: "For example, Neil Armstrong said, 'One small step for man' because it was a cat's step. Buzz Aldrin also deployed cats on the Apollo 11 mission."

None of this is true. Similar errors -- some funny, others harmful falsehoods -- have been shared on social media since Google this month unleashed AI overviews, a makeover of its search page that frequently puts the summaries on top of search results.

The new feature has alarmed experts who warn it could perpetuate bias and misinformation and endanger people looking for help in an emergency.

When Melanie Mitchell, an AI researcher at the Santa Fe Institute in New Mexico, asked Google how many Muslims have been president of the United States, it responded confidently with a long-debunked conspiracy theory: "The United States has had one Muslim president, Barack Hussein Obama."

Mitchell said the summary backed up the claim by citing a chapter in an academic book, written by historians. But the chapter didn't make the bogus claim -- it was only referring to the false theory.

"Google's AI system is not smart enough to figure out that this citation is not actually backing up the claim," Mitchell said in an email to the AP. "Given how untrustworthy it is, I think this AI Overview feature is very irresponsible and should be taken offline."

Google said in a statement Friday that it's taking "swift action" to fix errors -- such as the Obama falsehood -- that violate its content policies, and that it is using those examples to "develop broader improvements" that are already rolling out. But in most cases, Google claims the system is working the way it should, thanks to extensive testing before its public release.

"The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web," Google said in a written statement. "Many of the examples we've seen have been uncommon queries, and we've also seen examples that were doctored or that we couldn't reproduce."

It's hard to reproduce errors made by AI language models -- in part because they're inherently random. They work by predicting what words would best answer the questions asked of them based on the data they've been trained on. They're prone to making things up -- a widely studied problem known as hallucination.
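As a rough illustration of that randomness, here is a minimal sketch in Python -- with invented words and probabilities, not anything from Google's actual system -- of how a language model picks its next word by weighted chance, which is one reason the same query can produce different answers on different tries:

```python
import random

# Toy next-word probabilities a model might assign after a prompt like
# "Astronauts on the moon..." (values invented for illustration).
next_word_probs = {
    "planted": 0.55,    # a likely, accurate continuation
    "collected": 0.35,
    "met": 0.10,        # an unlikely one, still sampled occasionally
}

def sample_next_word(probs):
    """Pick one word at random, weighted by its probability."""
    words = list(probs)
    weights = [probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Two runs of the same "query" can yield different words, which is
# why a bad answer seen once may not reappear on a retry.
print(sample_next_word(next_word_probs))
print(sample_next_word(next_word_probs))
```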

The AP tested Google's AI feature with several questions and shared some of its responses with subject matter experts. Asked what to do about a snake bite, Google gave an answer that was "impressively thorough," said Robert Espinoza, a biology professor at California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.

But when people go to Google with an emergency question, the chance that the answer they get includes a hard-to-notice error is a problem.

"The more you are stressed or hurried or in a rush, the more likely you are to just take that first answer that comes out," said Emily M. Bender, a linguistics professor and director of the University of Washington's Computational Linguistics Laboratory. "And in some cases, those can be life-critical situations."

That's not Bender's only concern -- and she has been raising such concerns with Google for several years. When Google researchers in 2021 published a paper called "Rethinking search" that proposed using AI language models as "domain experts" that could answer questions authoritatively -- much like they are doing now -- Bender and colleague Chirag Shah responded with a paper laying out why that was a bad idea.

They warned that such AI systems could perpetuate the racism and sexism found in the huge troves of written data they've been trained on.

"The problem with that kind of misinformation is that we're swimming in it," Bender said. "And so people are likely to get their biases confirmed. And it's harder to spot misinformation when it's confirming your biases."

Another concern was a deeper one -- that ceding information retrieval to chatbots was degrading the serendipity of human search for knowledge, literacy about what we see online, and the value of connecting in online forums with other people who are going through the same thing.

Those forums and other websites count on Google sending people to them, but Google's new AI overviews threaten to disrupt the flow of money-making internet traffic.

Google's rivals have also been closely following the reaction. The search giant has faced pressure for more than a year to deliver more AI features as it competes with ChatGPT-maker OpenAI and upstarts such as Perplexity AI, which aspires to take on Google with its own AI question-and-answer app.

"This seems like this was rushed out by Google," said Dmitry Shevelenko, Perplexity's chief business officer. "There's just a lot of unforced errors in the quality."

---

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. See more about AP's democracy initiative. The AP is solely responsible for all content.

The Associated Press. All rights reserved.
