
Source: The Conversation (AU and NZ) – By Ksenia Sawczak, Head, Research and Development, Faculty of Arts and Social Sciences, University of Sydney


Research at Australian universities has been scrutinised through the Australian Research Council’s (ARC) assessment exercise, Excellence in Research for Australia, since 2010.

A companion Engagement and Impact Assessment exercise began in 2018. The time and cost to universities of running these exercises (the ARC collected this information when ERA began but never released it), and the value they generate for universities, government, industry and the public, remain unknown.

It’s difficult to see how any future versions can be justified without evidence of a healthy return on investment.




Read more:
Starting next year, universities have to prove their research has real-world impact


The question of future assessment exercises is now in the spotlight. The ARC recently completed a review of ERA and EIA to “ensure the national research assessments address Australia’s future needs”.

The review’s terms of reference included consideration of “the purpose and value of research evaluation, including how it can further contribute to the Government’s science, research and innovation agendas”. This is important, as no evidence has ever been provided of exactly how government, industry or the community use the assessments to inform their agendas.

The review received 112 submissions in response to a consultation paper. Most came from universities, peak bodies/associations and various service providers and consultants. No responses were received from the sectors that supposedly benefit from these exercises, namely government, industry and the community.




Read more:
Who cares about university research? The answer depends on its impacts


What are the issues with the system?

A review advisory committee was then appointed to consider key issues and make recommendations to the ARC CEO. The committee readily identified key concerns about how the assessments work, such as rating scales, streamlining and automation, evaluation cycles and eligibility requirements. These matters also came up in university submissions.

But what came through most clearly from universities were the mixed views about the value of assessments as a whole. By extension, there is a question mark over whether they should continue if their utility cannot be clearly demonstrated.

While EIA has been run only once, there have now been four rounds of ERA overseen by four different ministers. Each round has culminated in a detailed national report with a minister’s foreword that consistently focuses on the same two matters:

  • ERA results provide assurance of the government’s investment in the research sector
  • the results will inform and guide future strategies and investments.

In other words, there has been an overarching focus on justifying the exercise and on its purported utility. But how convincing is this?




Read more:
Explainer: how and why is research assessed?


ERA is past its use-by date

In its early days, ERA was credited with playing an important role in focusing university efforts on lifting research performance. Indeed, a number of university submissions to the review acknowledged this.

However, much has changed since then. As university responses noted, new databases and digital tools, greater in-house expertise in data analytics to assess performance, and the influence of international benchmarking through university and subject rankings mean ERA’s influence has dramatically dwindled. Universities no longer need an outdated assessment exercise to tell them how they are performing.

As for its actual application, there was a brief time when ERA informed funding allocations under the Sustainable Research Excellence for Universities scheme. It was one of a number of schemes that tied government support for university research to performance. But this was quickly abandoned.

screenshot from archived ERA web page on ARC website
ERA data were once used to inform government funding allocations, but funding is no longer mentioned on the website.
Wayback Machine archives

In 2015, with a clear focus on incentivising performance and simplifying funding, the government introduced revised research block grants. In the process, it overlooked the very exercise that identifies research excellence and so ought to inform performance-based funding.

Since then, the best the government has been able to come up with is adding national benchmarking standards for research to the Higher Education Standards Framework. But with the bar set so low and no apparent reward for institutions that perform well above the required standards, barely an eyelid has been batted over this change.

‘Informing’ without evidence of use

Returning to the review committee, its final report of June 2021 acknowledged that the vision for and objectives of ERA required rethinking, as these had lost their relevance or had failed. This included the objectives of providing a stocktake of Australian research and identifying emerging research areas and opportunities for development.

But the committee has danced around the issue of ERA’s utility. It issued a lofty vision statement:

“that rigorous and transparent research assessment informs and promotes Australian universities’ pursuit of research that is excellent, engaged with community, industry and government, and delivers social, economic, environmental and cultural impact.”

The ARC has adopted it as part of the ERA and EI Action Plan.

The notion of “informing” as a buzzword for influence and utility has been the consistent feature of ERA. It seems this will continue. The review committee’s report contains over 50 references to this idea. And “informing decisions” is to be one of the four objectives taken up by the ARC, specifically to “provide a rich and robust source of information on university excellence and activity to inform and support the needs of university, industry, government and community stakeholders”.

But no evidence has ever been provided of ERA’s usefulness to these sectors. This objective rings hollow, particularly in light of the conspicuous absence of industry or government responses to the review.




Read more:
Unis want research shared widely. So why don’t they properly back academics to do it?


Entomologist looks at netting with lights to attract insects in the dark
The ERA process has produced no clear evidence of how university research is being used.
Shutterstock

The vanishing link to funding

Of course, the really big question is whether ERA and EI will ever inform research funding. That’s something the ARC has brought up over the years, and possibly the only reason why universities are so compliant.

Curiously, though, the review’s terms of reference did not cover this issue. Perhaps, after 11 years, no one can work this out. Now that would surely represent a very poor return on investment.

The Conversation

Ksenia Sawczak does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

ref. Where is the evidence for ERA? Time’s up for Australia’s research evaluation system – https://theconversation.com/where-is-the-evidence-for-era-times-up-for-australias-research-evaluation-system-165622
