Google’s new AI Overview feature recently came under scrutiny due to bizarre and misleading answers to search queries. After the issue went viral on social media, Google’s head of search, Liz Reid, addressed the concerns and admitted that improvements were needed. Two specific examples were highlighted: one answer endorsing eating rocks and another suggesting using nontoxic glue to thicken pizza sauce.
Reid explained that the rock-eating recommendation was a result of the AI tool misinterpreting a satirical article from The Onion that had been reposted by a software company. The AI algorithm mistakenly treated the information as factual. As for the glue on pizza suggestion, Reid attributed it to sarcastic or troll-like content found in discussion forums. While forums can be a valuable source of information, they can also lead to unhelpful advice.
Reid emphasized the importance of carefully reviewing AI-generated content before acting on it, especially for something as consequential as a dinner menu. She also cautioned against judging the quality of Google’s search based solely on viral screenshots, noting that extensive testing was conducted before the feature’s launch. According to Google’s data, users value AI Overviews, as indicated by their longer engagement with pages discovered through the feature.
Reid attributed Google’s embarrassing failures in part to an internet-wide audit that wasn’t always well-intentioned: nonsensical and adversarial searches, seemingly aimed at producing misleading results, were prevalent after the feature launched. Google also disputed some widely circulated screenshots of AI Overviews gone wrong, stating that they were fake. WIRED’s own testing supported this claim, as it was unable to reproduce certain results.
The issue extended beyond social media: even reputable news outlets, such as The New York Times, were misled by fabricated screenshots. The Times issued a correction to its reporting on AI Overviews, clarifying that certain dangerous suggestions, like jumping off the Golden Gate Bridge, were never made by the feature but instead originated as dark memes on social media. Reid’s post sought to correct these misconceptions and reassure users that such dangerous outputs did not come from AI Overviews.
Reid acknowledged that improvements were necessary and mentioned that Google had made more than a dozen technical changes to the feature. These changes included better detection of nonsensical queries, reduced reliance on user-generated content from platforms like Reddit, less frequent use of AI Overviews in situations where they are not found to be helpful, and stronger safeguards disabling AI summaries on important topics, such as health. However, she did not mention any plans to significantly roll back the AI summaries, indicating that Google plans to monitor user feedback and make adjustments as needed.
In conclusion, Google’s AI Overview feature drew criticism as misleading information spread through viral screenshots. Google acknowledged the need for improvement and made several technical changes to address the issues. Users should still approach AI-generated content with caution and verify recommendations before acting on them. Google says it will continue refining the feature based on user feedback, aiming to deliver more accurate and useful search results through AI Overviews.