
Source: The Conversation (AU and NZ) – By Laura Davy, Senior Lecturer, Crawford School of Public Policy, Australian National University

Every welfare program negotiates a fundamental tension: between fiscal responsibility and consistency on one hand, and care for real people with complex needs and situations on the other.

Over the past decade or so, one Australian program after another has tried to absolve itself of that tension by handing off part of its decision-making to a computer.

Robodebt automated welfare debt recovery, with devastating results.

The National Disability Insurance Scheme (NDIS) is moving towards computer-guided planning tools to generate budgets after participants’ support needs are assessed. Many worry this amounts to a form of “robo-planning”.

Since November, the new Support at Home program has used a rules-based algorithm called the Integrated Assessment Tool to decide how much home-care funding older Australians receive.

Each of these systems promises to replace fallible human judgement with something more consistent, efficient and fair.

Now, the Commonwealth Ombudsman is investigating complaints about the aged care assessment tool.

This is just the latest moment to ask: what do we lose when we automate decisions that really should be difficult and made case by case?

What’s the controversial aged care algorithm?

The Integrated Assessment Tool is a structured digital assessment used during a home interview with an older person seeking government-subsidised care. Assessors enter information about mobility, cognition, daily living and the person’s broader circumstances.

The tool converts these inputs into scores. It then applies rules to sort the person into one of eight funding classifications.

Assessors are barred from overriding the tool’s classification except in a small set of pre-defined circumstances.
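The description above is of a deterministic, rules-based classifier: inputs become scores, and scores are mapped to one of eight classes by fixed rules. Since the actual specification has not been released, the Python sketch below is entirely hypothetical; every input name, weight and threshold is invented purely to illustrate why such a system behaves consistently, and why any human override must be explicitly built in.

```python
# Hypothetical sketch of a rules-based funding classifier.
# The real Integrated Assessment Tool's inputs, scoring and thresholds
# have NOT been published; all names and numbers here are invented.

def classify(scores: dict[str, int]) -> int:
    """Map domain scores to one of eight funding classes (illustrative only)."""
    total = sum(scores.values())
    # Fixed cut-points: identical inputs always yield the identical class.
    # Consistent, but blind to context a human assessor might weigh.
    thresholds = [5, 10, 15, 20, 25, 30, 35]  # invented cut-points
    for level, cut in enumerate(thresholds, start=1):
        if total < cut:
            return level
    return 8

# A hypothetical assessment: total score 13 falls below the third cut-point.
print(classify({"mobility": 4, "cognition": 6, "daily_living": 3}))  # → 3
```

The point of the sketch is not accuracy to the real tool but the structural property critics highlight: once the cut-points are set, the outcome is fully determined by the numbers entered, so an assessor who disagrees with the result has nowhere inside the system to register that disagreement.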

Despite requests for the technical specifications, and for the identity of the team that designed the classification logic, details have not been released.

Is this really the new Robodebt?

Some commentators have drawn parallels between the Integrated Assessment Tool and Robodebt, but the comparison doesn’t quite work.

Robodebt was implemented at arm’s length from both welfare recipients and compliance officers. Debt notices were sent without human involvement in individual cases.

However, the Integrated Assessment Tool still involves an assessor sitting down to interview an older person in their home.

The closer parallel is with NDIS assessment reforms. These appear to be moving in the same direction: scoring tools that reduce disability to numerical metrics, and opaque algorithms whose weightings have not been made public.

The health department insists the Integrated Assessment Tool for aged care is not artificial intelligence (AI). It says it is a rules-based classification algorithm, not a machine-learning model. But whether a system involves AI or not is beside the point.

Deeper issues sit underneath the technology.

There are deeper issues

The key ethical problems in algorithmic decision-making relate to opacity, discretion and accountability. In other words: can the people affected see how the system works, can the professionals using it exercise judgement, and can anyone be held to answer for its decisions?

A standardised tool does deliver consistency: everyone is processed the same way. But consistency is not fairness, especially when the standard is hidden and applied to people whose needs do not fit standard categories.

US public policy researcher Michael Lipsky made the case in his classic 1980 study Street-Level Bureaucracy: discretion is a defining feature of frontline public service work.

Teachers, social workers, nurses and aged-care assessors exercise judgement precisely because rules are always incomplete, resources are constrained, and every client is unique. Strip discretion out of these encounters, and the assessment can no longer respond to what it finds, only to what the tool allows.

The appeal of the algorithm rests partly on a belief that machine thinking is less biased than human thinking. One review calls this the "perceived mechanistic objectivity" of computer-generated analytics. In other words, people defer to algorithmic outputs because they appear neutral, sometimes letting those outputs override their own judgement.

This is partly what aged-care assessors mean when they describe feeling "handcuffed" by the new system. The algorithm's apparent objectivity makes their professional judgement look like bias, even when they better understand how to respond to the person in front of them.

This appearance of objectivity also does political work some have called “agency laundering” – distancing oneself from morally consequential decisions by attributing them to an algorithm. Responsibility is diffused, and it becomes hard to say exactly who decided that a particular older person should receive less support this year than last.

None of this is abstract

By late March this year, some 800 people had formally requested reviews of their Support at Home assessments.

Media reports describe older people being reassessed under the new system at lower funding levels than they received previously, even where their needs had increased.

Minister for Aged Care Sam Rae has defended the reforms by pointing out that A$4 billion was incorrectly allocated under the previous system.

This may well be true. But fixing allocation errors is not the same as building a system that older Australians can understand, question and contest where necessary.

What happens next?

The Commonwealth Ombudsman has rightly said it cannot comment on the substance of its investigation while the investigation is underway.

So the government should pause using the classification algorithm until the investigation concludes.

Failing that, the minimum owed to older Australians and the public is transparency: publish the algorithm itself and its classification logic, reveal who designed it and how, and open all of it to scrutiny by sector experts, advocates and the people it affects.

Eligibility criteria and funding rules are public documents. A scoring system that determines whether someone can safely stay in their own home should be held to at least the same standard.

ref. First Robodebt, now NDIS and aged care: how computers still decide who gets care – https://theconversation.com/first-robodebt-now-ndis-and-aged-care-how-computers-still-decide-who-gets-care-280711
