
Source: The Conversation (Au and NZ) – By Uri Gal, Professor in Business Information Systems, University of Sydney

Recently some Australian shoppers got more than they bargained for when they chatted with Woolworths’ artificial intelligence (AI) assistant, Olive.

Instead of sticking to groceries, recipes and basket suggestions, Olive reportedly produced strange, overly human-like responses. It talked about its “mother” and offered other personal-sounding details.

Further testing revealed pricing errors for basic items. And when I asked about the price of a specific product, Olive didn’t provide a clear answer. Instead, it checked whether the item was in stock and explained pickup fees.

So what exactly is going on here? And what lessons might these incidents hold for businesses and consumers alike?

What actually happened?

Olive is powered by a large language model (LLM). These models don’t “know” things the way humans do, nor do they have mothers. Using elaborate statistical analyses, they generate language that sounds plausible.

Comments from a Woolworths spokesperson to the Australian Financial Review suggest that in Olive’s case, the references to its supposed mother appear to have been pre-written scripts dating back several years.

When users entered something that looked like a birthdate, the system likely triggered a matching “fun fact” from an old decision tree with pre-programmed responses.
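As a rough illustration of how that kind of legacy scripting can sit in front of a language model, here is a minimal, hypothetical sketch. The pattern, the "fun fact" text and the function names are invented for illustration; they are not Woolworths' actual code.

```python
import re

# Hypothetical pre-written script: a rule that fires when the user's
# message contains something that looks like a date, before the
# generative model ever sees the input.
SCRIPTED_RESPONSES = [
    (re.compile(r"\b\d{1,2}/\d{1,2}(/\d{2,4})?\b"),  # looks like a birthdate
     "Fun fact: that's the same day my mother taught me to bake!"),
]

def respond(user_message: str) -> str:
    # Scripted rules are checked first; only unmatched messages would
    # fall through to the language model.
    for pattern, reply in SCRIPTED_RESPONSES:
        if pattern.search(user_message):
            return reply
    return "LLM_RESPONSE"  # placeholder for the generative fallback

print(respond("My birthday is 12/03/1990"))
```

The point of the sketch is that the odd "mother" replies need not come from the model at all: a stale rule layered on top of it can hijack the conversation whenever its trigger happens to match.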

Woolworths says it has now removed this particular scripting “as a result of customer feedback”.

The pricing errors point to a different problem.

Because LLMs generate responses based on learned patterns rather than real-time data, they do not automatically know today’s prices unless they are explicitly connected to a live database.

If that grounding step is weak, the system can produce outdated prices.
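A grounded design routes price questions to a live data source instead of letting the model answer from its training data. The following sketch is illustrative only, with an invented in-memory catalogue standing in for a real-time database:

```python
# Hypothetical sketch of "grounding": the price comes from a live lookup,
# never from the language model's learned patterns.
CATALOGUE = {"milk 2L": 3.10, "bread": 4.00}  # stand-in for a live database

def answer_price(product: str) -> str:
    price = CATALOGUE.get(product)  # authoritative, real-time source
    if price is None:
        # Refuse rather than let the model guess a plausible-sounding price.
        return f"Sorry, I can't confirm a current price for {product}."
    return f"{product} is currently ${price:.2f}."

print(answer_price("milk 2L"))
```

The key design choice is the refusal branch: when the lookup fails, the safe behaviour is to say so, because a language model left to improvise will happily generate a price that merely sounds right.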

Not a new problem

Woolworths is not the first company to discover, after the fact, that its customer-facing AI had unexpectedly “misbehaved”.

In 2022, Air Canada’s chatbot incorrectly told a passenger, Jake Moffatt, that he could purchase tickets at full price and later apply for a bereavement fare refund. No such policy existed.

When Air Canada refused to honour the chatbot’s advice, Moffatt sued the airline and won.

Air Canada’s defence was striking. It argued the chatbot was a separate legal entity, responsible for its own actions and therefore beyond the airline’s liability. The tribunal rejected this outright. It ruled that a chatbot is part of a company’s website, and that the company owns its outputs.

In January 2024, UK parcel delivery firm DPD faced a different kind of embarrassment. A frustrated customer who could not get help to locate a missing parcel asked DPD’s chatbot to write a poem that criticised the company. It did. He then asked it to swear. It did that too. The exchange went viral on social media. DPD disabled the chatbot shortly after.

Both cases point to the same underlying failure: companies launched customer-facing AI without adequate oversight and were caught off-guard by the consequences.

What is Woolworths’ responsibility?

Woolworths operates the largest supermarket chain in Australia. It has promoted Olive as a trusted, convenient interface for its customers, who can reasonably expect the information Olive provides to be accurate.

[Image: a screenshot of the Woolworths chatbot. Caption: Woolworths admits its AI assistant can make mistakes. Credit: Woolworths]

Admitting that Olive may make mistakes, as Woolworths does when a user opens the chatbot, does not sit easily with that expectation.

There is also a broader ethical dimension. Woolworths serves customers who, in many cases, are making careful decisions about household budgets.

The ACCC has already commenced proceedings against Woolworths over allegedly misleading discount pricing practices.

That context makes the Olive pricing errors harder to dismiss as an isolated technical glitch.

Companies that deploy AI in customer-facing roles take on a duty of care to ensure those systems are accurate and honestly presented. That duty does not diminish because the technology is new.

Why do companies keep making chatbots that pretend to be your friend?

The logic behind Olive’s programmed personality is not without basis.

Research on human-computer interaction consistently finds that people respond positively to interfaces that feel conversational and warm. Human-like chatbots that have a name and personality tend to generate higher customer engagement, satisfaction, and trust.

For companies, the commercial appeal is straightforward: a customer who feels at ease with a chatbot is more likely to complete a transaction and return.

However, this comes with a significant risk. When an anthropomorphised chatbot fails to meet the expectations its personality has created, customers tend to be more dissatisfied than they would have been with a plainly mechanical system.

This “expectation violation” means that the warmer the persona, the harder the fall.

The larger stakes

The Olive episode is a reminder that deploying AI in customer-facing roles is not a set-and-forget exercise.

A chatbot that quotes wrong prices and rambles about its family is not a quirky inconvenience but a sign that something in the development and oversight process has broken down.

For Woolworths, and for the many other companies now rushing to put AI in front of their customers, the lesson is clear: accountability cannot be outsourced to an algorithm. When a business puts a system in front of the public, it owns what that system says and does.

There is a lesson for consumers, too.

AI assistants may feel confident and conversational, but they are still tools, not authorities. If something seems unclear, inconsistent or too good to be true, it is worth double-checking.

As AI becomes a routine part of everyday transactions, a small measure of healthy scepticism may prove just as important as technological innovation.

ref. Woolworths’ AI agent rambled about its ‘mother’. It’s a sign of deeper problems with the tech rollout – https://theconversation.com/woolworths-ai-agent-rambled-about-its-mother-its-a-sign-of-deeper-problems-with-the-tech-rollout-277072
