After Federal Labor’s Gonski education announcement, universities have again been thrust into the national conversation for all the wrong reasons. The funding cuts that will pay for the scheme represent $2.3 billion of pain that will not be absorbed by a “business-as-usual” approach.
The conversion of $1.2 billion worth of student scholarships into loans will hit the educational aspirations of some of Australia’s most underprivileged young people. But what staff in the sector will be most worried about is the $900 million of efficiency measures. University by university, this may mean “savings” of anywhere up to $25 million per annum, depending on the institution’s size and dependence upon public funding. It should be noted that in recent years, universities have sought to raze entire faculties for less.
Much of the higher education debate in 2013 will be about how these funding cuts are absorbed. As university staff go head-to-head with management over enterprise bargaining, the most cynical administrators will use the cuts as leverage: offering redundancies, replacing permanent staff with more precarious contracts, and stalling bargaining claims.
Some university leaders have already outlined job cuts. In an email to Monash University staff, Vice Chancellor Ed Byrne declared last week, “We will of course need to take the new financial circumstances into account in our EB negotiations. I am afraid I cannot rule out job losses, but I would hope to keep any such losses to a minimum”.
Staff cuts are a symptom of something much more problematic in how Australian universities are run. Whatever faith the Australian community may have lost in Labor’s education revolution, the questionable decisions being taken on behalf of Australian universities are tied to a more insidious set of structural changes, driven by legislative and policy agendas.
These forces are wrecking traditional conceptions of what makes Australian universities important and unique educational institutions, and key actors in the Australian innovation system. A new report published today by the NTEU, Impact of ERA Research Assessment on University Behaviour and their Staff, shows how changes in the management of university research, guided by government policy directives, are laying siege to the principle of academic freedom itself.
The NTEU’s report highlights that concerns about “gaming” the rules — “the exploitation of the rules of classification to improve apparent, though not actual, performance” — extend to the highest echelons of Australia's university management structures.
To explain what has been going on, we first need a little technical detail about Excellence in Research for Australia (ERA). The ERA is a quality review of all the research produced at every Australian university, undertaken by the Australian Research Council (ARC). For the 2010 and 2012 ERA national reports, universities were required to collect evidence from their research workforce.
Gaming of submissions was possible in the ARC process because universities did not need to identify research that fell under a “low-volume threshold”. They could potentially rebadge research outputs that did not meet the ARC’s world-class expectations, or make them disappear altogether: recoding weaker publications into another discipline, say, or leaving them in a grouping too small to be assessed at all. Furthermore, while each Vice-Chancellor was required to guarantee the integrity of the entire ERA submission, there was no stated legal requirement that researchers have any role in negotiating how their research was coded, or even be informed at all.
Tertiary education commentators have already begun to shape concerns about what universities have been up to. Following the release of the 2012 ERA National Report, ARC chief executive Aidan Byrne acknowledged that universities had actively sought to reposition outputs into other areas, which was potentially responsible for the “decrease of 100 units of assessment” overall.
Leanne Harvey said the ARC had in fact audited university submissions, including an analysis of potential “inappropriate discipline coding”, but denied that specific jumps in measured research quality were cause for concern. More recently, the LH Martin Institute’s Frank Larkins found variations between the 2010 and 2012 ERA results significant enough to call for a review of the ERA guidelines.
The NTEU report details some of the “strategic” efforts Australian universities made in shaping and maximising their submissions. What it reveals is disturbing.
Consistent with allegations made by Merlin Crossley, we found evidence that publications were disappeared to build better outcomes in other disciplines. Moreover, we found former academics (many of whom remain involved in teaching and research) being pushed into general staff positions, most likely to make the institution’s roster of research-active staff look leaner and, therefore, its research intensity appear stronger.
Last of all, through the continued ubiquity of the “ERA journal rankings”, a set of quality indicators scrapped by Science Minister Kim Carr back in May 2011, the NTEU raises questions about how deep gaming around research performance runs in university bureaucracies. Our conclusion is that even when the ERA was not being conducted, universities progressively intensified plans to improve the appearance of their research outcomes and strength, irrespective of their guiding organisational values.
The problem is in part about the impact of certain research management systems on university staff; the report highlights that workplace conditions are a worry for many academics. At a broader level, it is about whether the workplaces that support publicly funded research meet their functional purpose: what the Federal Government’s own 2012 National Research Investment Plan endorsed as sustaining the scale and diversity of excellent research into the future, a purpose also evident in the Group of Eight’s discussion paper last week on the significance of research-intensive universities.
The most dangerous unintended consequence is that we are increasingly seeing direct assaults on the principle of academic freedom by universities themselves. As universities seek to boost output in disciplines where they believe they will score well, they are implementing strategies that dictate where academics publish, what kinds of outputs they produce, and what research they are endorsed to pursue. Strategies tying ERA scores to improved university research profiles are often conditions dictated from the top; they were used, for instance, to determine academic redundancies at the University of Sydney from late 2011, and at ANU in 2012.
In the 1950s, Ivy League academics were already observing that measuring a firm’s performance would drive change whether or not it was tied to financial incentives. Since then, experts have highlighted that many of the changes performance measurement engenders cannot be predicted; these are often described as unintended or dysfunctional consequences.
The important question is, “Does the adaptation of an organisation to performance measurement undermine its fundamental goals?” If the goal of the Australian research system is to burden researchers with so many performance benchmarks and constraints on intellectual freedom that they leave the sector or move their research overseas, then a combination of factors is bringing this vision to fruition.
The worst research managers are using ERA scores to determine what counts as good research, stripping the power to define research quality from researchers, disciplines and even professional bodies themselves. This process will de-professionalise Australian academic culture, especially in fields where research quality cannot be bean-counted through commercialisation outcomes, research income or other crude indicators.
We need an independent, comprehensive review, conducted by higher education experts who will evaluate universities as unique cultural ecosystems generating both public and private goods. Concerns of this scale cannot realistically be answered by media release.
Such a review also happens to be a fundamental prerequisite for evaluating how good a performance measure the ERA actually is. Some experts have argued that gaming is difficult to predict, and that assessment bodies like the ARC will not know how good a performance measure is until after the exercise is complete. Fair enough. Now that the ERA has been run twice, a major review of what has happened is due.