Teaching excellence framework: a case of history repeating?

Roger Brown urges the current government to learn from the mistakes of past teaching assessment exercises

October 12, 2015

In his recent speech to vice-chancellors, the new higher education minister, Jo Johnson, reaffirmed the government’s commitment to the teaching excellence framework (TEF), which will appear in a Green Paper this autumn.

Until we see the paper, we can only speculate about the framework. But even without the details, it is worth pondering whether history is about to repeat itself.

We know that under the TEF, assessments will be made of the quality of institutions’ teaching and that those assessments will have financial consequences for the institutions concerned (namely, the ability to raise their full-time undergraduate fees in line with inflation). Although we do not know the basis on which the assessments will be made, or how or by whom, we do know that metrics such as the National Student Survey and the Destinations of Leavers from Higher Education survey will be used as part of the evidence base.

We also know that the new framework will be introduced at a time when overall public funding of university teaching is being cut, when the potential costs of higher education to each graduate are being increased, when both demography and the government’s immigration policies are working against recruitment, and when the resourcing differentials between institutions, which are already considerable, will be widening still further.

As it happens, these are not dissimilar to the circumstances in which Teaching Quality Assessment (TQA) was introduced in 1993. The aim was for assessments of teaching quality to be made on a subject-by-subject basis, and for these assessments to be linked to additional funded student places for the most successful institutions. In this way, good departments and institutions would be rewarded and others incentivised to improve.

The original intention was for these assessments to be made by former Her Majesty’s Inspectors (HMI). But after strong advice from the Funding Council, the inspectors were replaced by reviewers drawn from the sector, working under the supervision of a limited number of ex-HMIs. There was very limited use of metrics.

The new system ran into difficulties from the start. There was insufficient piloting. There were endless arguments about the methodology. The metrics were quietly dropped. The old universities complained about assessment by “underlings”, while the new ones contrasted the amateurism of the new process with what they had experienced under CNAA (the Council for National Academic Awards) and HMI.

There was very little consistency either between subjects or, within subjects, between assessment teams; indeed, one of the assessment managers admitted that they did not have time even to read the assessors’ reports. Very few departments were found to be “unsatisfactory”, while increasing numbers got the highest score. Questions were also raised about the relationship between TQA and the separate process of quality assurance operated by the Higher Education Quality Council on behalf of the sector.

Matters came to a head in January 2001 when economists at the University of Warwick received a maximum score of 24 and immediately condemned the whole process as a worthless bureaucratic exercise. Shortly afterwards, in April 2001, the then secretary of state for education announced a major reduction in external quality assurance, and the whole assessment process was finally laid to rest in 2004.

But the idea of linking assessments to funding had by then long been abandoned. This was partly because of the methodological issues already mentioned. But it was mainly because cutbacks in public expenditure in late 1993 meant that there were no longer any additional funded places with which to reward successful institutions (although for a while assessment judgements played a part in influencing bids under the Learning and Teaching Enhancement Fund operated by the Funding Council).

In any case, the question of whether incremental funds should be used to help already strong departments to improve further, or bring weaker centres up to a common standard, was never resolved. This will be yet another challenge for the new TEF. 

In their seminal 2013 book The Blunders of Our Governments, Anthony King and Ivor Crewe identified lack of institutional memory as one of the main reasons for policy failure. Although higher education supplies none of their main cases, it is not very difficult to find examples there.

Presumably those responsible for initiating and designing the TEF will be fully aware of previous attempts to incentivise institutions to improve their teaching, and of the need to avoid the problems that dogged Teaching Quality Assessment. It remains to be seen how this will be done. Or is this yet another case of history repeating itself in higher education policy?

Roger Brown is former professor of higher education policy at Liverpool Hope University.
