Source: The Conversation (Au and NZ) – By Barbara Barbosa Neves, Senior Horizon Fellow, AI and Ageing, University of Sydney

Australia’s Royal Commission into Aged Care found a broken system. Now, technology companies are promising artificial intelligence (AI) will fix everything, from staff shortages to older people’s loneliness.

This is known as agetech, an industry projected to reach a global value of A$170 billion by 2030. But its promised “fixes” obscure what is actually breaking aged care.

In our new study, we analysed how 33 agetech companies selling AI for aged care in Australia, East Asia, Europe and North America market their products, including monitoring tools and companion robots.

We found their websites, promotional materials and product descriptions depict aged care as inefficient, understaffed and overwhelmed by a growing ageing population. Older people are too frail or too many. Care workers are overstretched. Human care is flawed.

And AI is presented as the answer. As the agetech industry grows, governments are also subscribing to this vision of technological rescue.

Yet our research shows these narratives distract from structural problems and reinforce ageism, even as Australia’s new Aged Care Act commits to a stronger focus on dignity and autonomy.

Before we accept AI as the cure, we need to understand what we are being sold.

The cure on offer

The companies we studied claim AI will predict falls before they happen, detect health changes humans miss, eliminate incompetence, and deliver “unprecedented” improvements in safety and quality.

It sounds revolutionary. But it is also a carefully constructed narrative. In the marketing materials, aged care is consistently framed as a failure of efficiency and public delivery.

Promotional images show older people sitting passively, struggling with mobility aids, or being reduced to body parts attached to monitoring devices. They are represented through statistics: fall rates, malnutrition prevalence, hospitalisation risk.

According to the companies, older people are incidents waiting to happen and data sources to be mined. One company promises to transform intimate daily activities such as showering into “trackable metrics” for “optimal care”.

Care workers fare no better. Their labour is “time-consuming” and “error-prone”. With AI as the solution, care workers become the problem: well-meaning but unreliable, requiring technological oversight. Several companies market systems that track staff movements and automatically report delays to managers.

The rise of techno-solutionism

Agetech marketing paints the aged care sector as fundamentally broken, plagued by rising costs and inefficiencies.

By contrast, AI systems – featuring 24/7 monitoring, predictive analytics and automated alerts – are presented as objective and inherently superior.

This narrative reflects techno-solutionism: presenting social problems in ways that make technical fixes appear inevitable.

But AI is far from neutral. AI models are frequently trained on datasets that exclude older people or overrepresent younger and healthier groups. Both AI design and implementation rely on stereotypical ideas of older people as technophobic and passive.

AI is not the salvation

The aged care crisis stems from decades of social and political choices about how we value care and ageing. The royal commission documented this in detail: systemic neglect, regulatory failures, a funding model that incentivises cost-cutting over quality, and pervasive societal ageism.

AI solutionism frames the crisis as technical rather than social or political, burying the fact that broader reforms are needed.

AI systems are said to eliminate work. But they require substantial human labour to function and can create as much work as they remove.

Care staff must learn new systems, interpret data, and respond to constant notifications and false alarms. They suddenly have to oversee technologies that need ongoing calibration and maintenance.

Studies show this increases worker stress, as staff juggle care responsibilities with tech troubleshooting – all with limited training and time. Much of this labour remains invisible.

Alongside this, the relational aspects of care – noticing subtle changes in mood, building trust over time – get marginalised because they can’t be easily measured or automated.

Older people suffer the consequences. When care is organised around efficiency metrics and cost reduction, residents become problems to be managed rather than people with diverse histories, preferences and needs.

No single tech will fix this

Aged care faces serious challenges. It does need repair – but the fixes must take many forms, most of which have nothing to do with AI.

These include staff ratios that allow proper time for meaningful conversations, helping residents feel less lonely. Wages that reflect the value and complexity of care work. Funding models that prioritise dignity, agency and authentic participation in decisions about care.

Regulatory frameworks must hold providers accountable for quality of life, wellbeing and inclusion, not just compliance metrics. Aged care should also include community-based models that keep older people connected to neighbourhoods.

The best role AI can play is through supporting care practices that include and empower older people and staff, centring their voices and experiences.

If we let AI companies define what is broken, we also let them define what repair looks like. That may leave our systems more profitable, but far less caring and humane.


The authors acknowledge Naseem Ahmadpour, Alex Broom and Kalervo Gulson from the University of Sydney for their contributions to the research project.

ref. AI companies promise to ‘fix’ aged care, but they’re selling a false narrative – https://theconversation.com/ai-companies-promise-to-fix-aged-care-but-theyre-selling-a-false-narrative-275822