Google Serving AI-Generated Images of Mushrooms Could Have ‘Devastating Consequences’
by Emanuel Maiberg
Google is serving AI-generated images of mushrooms when users search for some species, a risky and potentially fatal error for foragers who are trying to figure out what mushrooms are safe to eat.
The AI images were flagged by the moderator of the r/mycology Reddit community, dedicated to “the love of fungi,” including “hunting, foraging, [and] cultivation.” The moderator, who goes by MycoMutant, noticed that when they searched for Coprinus comatus, a fungus commonly known as shaggy ink cap or shaggy mane, the first image in the Google snippet, which is featured above the search results, was an AI-generated image that looked nothing like a real Coprinus comatus.
As we’ve seen in other instances when Google search surfaced AI-generated images and presented them to users in results as if they were real, Google pulled the AI-generated image from a site where it was plainly flagged as such. The image was hosted on the stock image site Freepik, where it had an “AI-generated” label on top of the image. “This resource was generated with AI,” the description of the image says. “You can create your own using our AI image generator.” The image was also wrongly labeled as “Coprinus comatus.”
MycoMutant told me in a direct message that, as a fungi enthusiast and moderator of r/mycology, they have seen Google snippets feature incorrect images for certain species many times, and cited a Reddit thread they posted two years ago where Google featured an incorrect image for Sepula that was not AI-generated.
“At the time I posted that I attempted to alert Google to the issue since this was not the only incorrect mushroom photo I had encountered in the snippets,” MycoMutant told me.
MycoMutant said that they don’t remember the exact reason they were searching for Coprinus comatus, but that most of what they do as a moderator is help others identify mushrooms and correct any issues they see to help people learn and debunk dangerous information. While doing this, they routinely search for species names to find more reliable, human-curated information from resources like Wikipedia, First-Nature, MushroomExpert, and iNaturalist.
“Google using incorrect images in the snippets I think compounds this issue because it would be logical for a bot to trust a result that is featured so prominently by what should be a trustworthy source,” they said. “Subsequently I have to assume that incorrect images in the snippets are going to have a cascading effect on the accuracy, or rather lack of it, in AI identification algorithms. More than once I have seen people try to use bots to scrape data to compile a database for mushroom species and the results have been horrifically inaccurate and potentially filled with dangerously wrong information.”
Elan Trybuch, the secretary of the New York Mycological Society, also told me in an email that the AI-generated images of mushrooms could be dangerous.
“Not only are many folks visual learners, but with the speed in which information is disseminated folks are relying more on instant feedback, and nothing does this better than a reference photo,” Trybuch told me. “The problem is, many of these photos look ‘close enough’ to the real deal. While this can have devastating consequences, I think for the most part it’s just going to add confusion as to: What is real?”
Trybuch also said that Google needs to do more to flag these images as AI-generated and remove them.
“Any and all search engines should not only remove AI-generated photography from their results, but they should also indicate via a label of suspected AI generative art,” he said. “It would make it easier if they had a search result section, specifically for AI results.”
“We have protections in place to provide a high quality experience, and we continue to work to strengthen these quality guardrails,” a Google spokesperson told 404 Media. “When we receive user feedback about potential issues, we work to make systemic improvements.”
Google serving an AI-generated image of a mushroom as if it were a real species highlights two problems with AI-generated content we’ve covered previously: potentially wrong and dangerous information being presented as fact, and Google Search’s inability to sort through all the AI-generated content that has flooded the internet and tell users what is real and what isn’t.
Last year, Sam wrote about AI-generated mushroom foraging books appearing on Amazon, which the New York Mycological Society said could “mean life or death,” and I’ve previously written about Google featuring AI-generated images of famous artworks or historical events in snippets as if they were real. In both of those cases, the AI-generated images Google surfaced were clearly labeled as such on the sites they were pulled from.
MycoMutant said that there have been a few notable cases of “sensationalizing stories” about how dangerous mushrooms can be, and that it’s “because of this that many of us are so prompt to want to jump on incorrect information and prevent it from spreading.”
“The rise of AI obviously complicates that due to its capability of producing mass amounts of nonsense rapidly,” they said. “It is becoming more common to see comments where someone has tried to answer a poster’s question by just copy/pasting it into ChatGPT and the result is invariably garbage. AI generated images seem likely to become a bigger issue also.”