Source: The Conversation (Au and NZ) – By Stacy Carter, Professor and Director, Australian Centre for Health Engagement, Evidence and Values, University of Wollongong

Australia’s biggest radiology provider, I-MED, has provided de-identified patient data to an artificial intelligence company without explicit patient consent, Crikey reported recently. The data were images such as X-rays and CT scans, which were used to train AI.

This prompted an investigation by the national Office of the Australian Information Commissioner. It follows an I-MED data breach of patient records dating back to 2006.

Angry patients are reportedly avoiding I-MED.

I-MED’s privacy policy does mention data sharing with “research bodies as authorised by Australian law”. But only 20% of Australians read and understand privacy policies, so it’s understandable these revelations shocked some patients.

So how did I-MED share patient data with another company? And how can we ensure patients can choose how their medical data is used in future?

Who are the key players?

Many of us will have had scans with I-MED: it’s a private company with more than 200 radiology clinics in Australia. These clinics provide medical imaging, such as X-rays and CT scans, to help diagnose disease and guide treatment.

I-MED partnered with the AI startup Harrison.ai in 2019. Annalise.ai is their joint venture to develop AI for radiology. I-MED clinics were early adopters of Annalise.ai systems.

I-MED has been buying up other companies and is itself listed for sale, reportedly for A$4 billion.

Big commercial interests are at stake, and many patients are potentially affected.

Why would an AI company want your medical images?

AI companies want your X-rays and CT scans because they need to “train” their models on lots of data.

In the context of radiology, “training” an AI system means exposing it to many images, so it can “learn” to identify patterns and suggest what might be wrong.

This means data are extremely valuable to AI start-ups and big tech companies alike, because AI is, to some extent, made of data.
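
To make "training" more concrete, here is a deliberately tiny sketch in Python (using the PyTorch library) of the general idea. It is not any company's actual system: the images, labels and model below are invented stand-ins for illustration.

# A toy sketch of training an image classifier. The "scans" are random
# numbers standing in for de-identified X-rays; the labels are 0 (normal)
# or 1 (abnormal). Real radiology models use far more data and far larger networks.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

images = torch.randn(200, 1, 224, 224)          # 200 fake single-channel images
labels = torch.randint(0, 2, (200,))            # fake normal/abnormal labels
loader = DataLoader(TensorDataset(images, labels), batch_size=32, shuffle=True)

model = nn.Sequential(                          # a deliberately tiny classifier
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                          # each full pass over the data
    for batch_images, batch_labels in loader:
        predictions = model(batch_images)       # the model guesses each label
        loss = loss_fn(predictions, batch_labels)   # how wrong the guesses were
        optimiser.zero_grad()
        loss.backward()                         # work out how to adjust the model
        optimiser.step()                        # nudge it towards better guesses

The crucial ingredient is the first two lines of data: without many real, labelled scans to learn from, there is nothing for the model to find patterns in.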

You might be thinking it’s a wild west out there, but it’s not. There are multiple mechanisms controlling use of your health-related data in Australia. One layer is Australian privacy legislation.

What does the privacy legislation say?

It’s likely the I-MED images were “sensitive information” under the Australian Privacy Act. This is because they can identify an individual.

The law limits the situations in which organisations can disclose this information beyond the purpose for which it was originally collected (in this case, providing you with a health service).

One is if the person has given consent, which doesn’t seem to be the case here.

Another is if the person would “reasonably expect” the disclosure, and the purpose of disclosure is directly related to the purpose of collection. On the available facts, this also seems to be a stretch.

This leaves the possibility that I-MED was relying on disclosure that is “necessary for research, or the compilation or analysis of statistics, relevant to public health or public safety”, where getting people’s consent is impracticable.

The companies have repeatedly stated publicly that the scans were de-identified.

De-identified information is mostly outside the scope of the Privacy Act. If the chance of re-identification is very low, de-identified information can be used with little legal risk.

But de-identification is complex, and context matters. At least one expert has suggested these scans were not sufficiently de-identified to take them outside the protection of the law.
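
As a rough illustration of why this is harder than it sounds, here is a minimal Python sketch using the pydicom library. It is not I-MED's or Annalise.ai's actual process, and the file names are hypothetical; it simply blanks the most obvious identifying fields in a medical image file.

# Blank the obvious direct identifiers in a DICOM scan. This alone is NOT
# full de-identification: dates, device details, free-text fields and even
# text burned into the image pixels can still allow re-identification.
import pydicom

ds = pydicom.dcmread("scan.dcm")                # hypothetical input file

for keyword in ("PatientName", "PatientID", "PatientBirthDate", "PatientAddress"):
    if keyword in ds:
        setattr(ds, keyword, "")                # overwrite the identifying value

ds.remove_private_tags()                        # drop vendor-specific extras
ds.save_as("scan_deidentified.dcm")             # hypothetical output file

Whether the result is truly de-identified depends on what remains in the file, what other data it could be linked with, and who receives it, which is exactly why context matters.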

Changes to the Privacy Act have toughened up penalties for interfering with people’s privacy, although the Office of the Australian Information Commissioner is underfunded and enforcement remains a challenge.

How else is our data protected?

There are lots more layers governing health-related data in Australia. We’ll consider just two.

Organisations should have data governance frameworks that specify who is responsible, and how things should be done.

Some large public institutions have very mature frameworks, but this isn’t the case everywhere. In 2023, researchers argued Australia urgently needed a national system to make this more consistent.

Australia also has hundreds of human research ethics committees (HRECs). All research should be approved by such a committee before it starts. These committees apply the National Statement on Ethical Conduct in Human Research to assess applications for research quality, potential benefits and harms, fairness, and respect towards participants.

But the National Health and Medical Research Council has recognised that human research ethics committees need more support – especially to assess whether AI research is good quality with low risks and likely benefits.

How do ethics committees operate?

Human research ethics committees determine, among other things, what kind of consent is required in a study.

Published Annalise.ai research has had approval, sometimes from multiple human research ethics committees, including approval for a “waiver of consent”. What does this mean?

Traditionally, research involves “opt in” consent: individual participants give or refuse consent to participate before the study happens.

But in AI research, researchers generally want permission to use part of a massive data lake that already exists, created in the course of routine health care.

Researchers doing this kind of study usually ask for a “waiver of consent”: approval to use data without explicit consent. In Australia this can only be approved by a human research ethics committee, and under certain conditions, including that the risks are low, benefits outweigh harms, privacy and confidentiality are protected, it is “impracticable to obtain consent”, and “there is no known or likely reason for thinking that participants would not have consented”. These matters aren’t always easy to determine.

Waiving consent might sound disrespectful, but it recognises a difficult trade-off. If researchers ask 200,000 people for permission to use old medical records for research, most won’t respond. The final sample will be small and biased, and the research will be poorer quality and potentially useless.
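
A toy simulation in Python shows how much this can distort a dataset. Every number here is invented for illustration, including the assumption that people with the condition are more likely to opt in.

# Compare the full population with an opt-in sample where response rates
# differ between people with and without the condition.
import random

random.seed(0)
population = 200_000
disease_rate = 0.10                              # 10% of records show the condition

records = [random.random() < disease_rate for _ in range(population)]

# Hypothetical response rates: 25% opt in if affected, 10% otherwise.
sample = [r for r in records if random.random() < (0.25 if r else 0.10)]

print(f"Full population: {population:,} records, "
      f"{sum(records) / population:.1%} with the condition")
print(f"Opt-in sample:   {len(sample):,} records, "
      f"{sum(sample) / len(sample):.1%} with the condition")

The consented sample ends up a fraction of the size and noticeably over-represents the condition, so conclusions drawn from it may not hold for everyone.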

Because of this, people are working on alternative models. One example is “consent to governance”, where governance structures are established in partnership with communities, then individuals are asked to consent to future use of their data for any purpose approved under those structures.

Listen to consumers

We are at a crossroads in AI research ethics. Both policymakers and Australians agree we need to use high-quality Australian data to build sovereign health AI capability, and health AI systems that work for all Australians.

But the I-MED case demonstrates two things. It’s vital to engage with Australian communities about when and how health data should be used to build AI. And Australia must rapidly strengthen and support our existing infrastructure to better govern AI research in ways that Australians can trust.

Stacy Carter has received funding from the National Health and Medical Research Council, the Medical Research Future Fund, the National Breast Cancer Research Foundation, and the Australian Commission on Safety and Quality in Healthcare for research on AI in healthcare. She has received travel support to speak at conferences about consumer views on AI in health care from professional bodies, regulators, research groups and medical indemnity insurers. She has worked with consumer organisations, medical professional organisations, and state and commonwealth health departments and agencies on AI and data governance in health care.

Megan Prictor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. An imaging company gave its patients’ X-rays and CT scans to an AI company. How did this happen? – https://theconversation.com/an-imaging-company-gave-its-patients-x-rays-and-ct-scans-to-an-ai-company-how-did-this-happen-245753
