Source: The Conversation (Au and NZ) – By Kevin Veale, Senior Lecturer in Media Studies, School of Humanities, Media and Creative Communication, Te Kunenga ki Pūrehuroa – Massey University
During the first world war, the British government was looking for ways to help people stretch their limited food supplies. It found pamphlets from a noted 19th-century herbalist who said rhubarb leaves could be used as a vegetable along with the stalks.
The government duly printed its own pamphlets advising people to eat rhubarb leaves as a salad rather than throwing them out. There was one problem: rhubarb leaves can be poisonous. People reportedly died or became ill.
The advice was corrected and the pamphlets pulled from circulation. But during the second world war, the government was again looking for ways to stretch food supplies.
It found a stockpile of old resources from the previous war that explained unorthodox sources of food, including rhubarb leaves. Reusing the pamphlets seemed an efficient thing to do, so they were sent out to the public. Once again, people reportedly died or became ill.
Those pamphlets were misinformation, but the public had no reason to suspect them either time. They were official resources developed by the government – why wouldn’t they be safe?

That is how misinformation can cause problems even after the initial error is corrected. And the moral of the story still reverberates in the age of generative artificial intelligence (AI).
Chatbots are not search engines
Generative AI is used to generate text and images (and other forms of data) based on original information it has ingested. But it can also be an engine for churning out misinformation faster than people can produce safe information, let alone fact-check and correct it.
And as the rhubarb story illustrates, corrections can’t always properly remove the original contamination.
AI platforms such as ChatGPT and Claude don’t work like a conventional search engine. But people use them as one because they seem to summarise complex topics quickly and require fewer clicks than conventional internet searches.
Search engines rely on articles and text about a given topic, and then weigh how reliable those articles are. Generative AI instead relies on huge bodies of text, from which it measures the odds of words appearing next to each other.
These “large language models” are purely looking to generate reasonable-looking sentences, rather than accurate ones.
For example, if the phrase “green eggs and ham” appeared frequently enough in its huge pile of words, the model becomes more likely to describe “eggs and ham” as green when someone asks.
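The word-co-occurrence idea can be illustrated with a toy bigram model – a deliberate simplification, since real large language models use neural networks over subword tokens rather than raw word counts. Still, the core behaviour is the same: the model picks the statistically most plausible continuation, with no notion of whether it is true.

```python
from collections import defaultdict

def train_bigrams(corpus):
    """Count how often each word follows another in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def most_likely_next(counts, word):
    """Return the most common follower -- plausible, not necessarily accurate."""
    followers = counts.get(word.lower())
    if not followers:
        return None
    return max(followers, key=followers.get)

# A tiny illustrative corpus: "green" dominates the contexts around "eggs".
corpus = [
    "green eggs and ham",
    "green eggs and ham",
    "fried eggs and toast",
]
model = train_bigrams(corpus)
print(most_likely_next(model, "green"))  # "eggs"
print(most_likely_next(model, "eggs"))   # "and"
```

Because the model only tracks which words tend to follow which, it would happily continue "green" with "eggs" regardless of whether any real egg is green – the same mechanism, scaled up enormously, that lets a chatbot produce fluent but false statements.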
‘Plausible yet incorrect’
OpenAI, which developed ChatGPT, has admitted (based on its own study) there’s no way to stop false information being presented as truth due to the way generative AI works. Explaining why large language models “hallucinate”, the researchers wrote:
Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty.
This can have real-world consequences. One recent study showed ChatGPT failed to recognise a medical emergency in more than half of cases. The problem can be exacerbated by pre-existing errors in medical records, which a UK inquiry in 2025 found affected up to one in four patients.
While a doctor might order more tests to confirm a diagnosis, one researcher explained that generative AI “delivers the wrong answer with the exact same confidence as the right one”.
The problem, as another scientist noted, is that generative AI “finds and mimics patterns of words”. Being right or wrong is not really the point: “It was supposed to make a sentence and it did.”
Research has shown generative AI tools misrepresent the news 45% of the time, no matter the language or geographic region. And there is now genuine concern about AI risking lives by generating non-existent hiking routes.
It’s easy to make fun of generative AI when it advises people to eat rocks or hold toppings on a pizza base with glue.
But other examples aren’t so amusing – such as the supermarket meal planner that suggested a recipe that would produce chlorine gas, or the dietary advice that left someone with chronic toxic exposure to bromide.
Look for older information
Education and establishing good rules around the appropriate and cautious use of generative AI will be essential, especially as it makes inroads into governments, bureaucracies and complex organisations.
Politicians are already using generative AI in their everyday work, including for policy research. And hospital emergency departments are using AI tools to record patient notes to save time.
One safeguard is to try to source more reliable information produced before AI-contaminated text and imagery infiltrated the internet.
There are even tools available to help simplify that process, including one created by Australian artist Tega Brain “that will only return content created before ChatGPT’s first public release on November 30 2022”.
Finally, if your instinct is to fact-check the story at the start of this article, good old-fashioned books might be your best bet: references to how the British government twice encouraged rhubarb poisoning can be found in The Poison Garden’s A-Z of Poisonous Plants and in Botanical Curses and Poisons: The Shadow Lives of Plants.
– ref. Using your AI chatbot as a search engine? Be careful what you believe – https://theconversation.com/using-your-ai-chatbot-as-a-search-engine-be-careful-what-you-believe-277616

