My Lords, I am speaking to the proposal, in the name of my noble friend Lord Stevenson, that Clause 25 should not stand part of the Bill.
That clause refers to the Office for Students taking over HEFCE’s current administrative responsibilities to deliver the TEF on behalf of the Secretary of State. I say in passing how disappointed I am that so many in your Lordships’ House, who I thought would come to hear this debate on TEF metrics, have now departed. Perhaps that was not the reason they were here after all. Those of us who are ploughing through the Bill until all hours of the night realise that this is an important topic. The fact that we have had so many speakers on it is a clear reflection of that.
As the Minister will be aware, there is widespread concern across the sector at the use of proxy metrics, including statistics on graduate earnings, in an exercise that was supposed to be about teaching quality. On the face of it, there is some logic to the metrics. It is difficult to imagine an excellent course whose teaching, support and assessment the students think are rubbish, which a large proportion of students fail to complete, or from which hardly anyone who does complete it manages to find employment or a place on a postgraduate course.
Where metrics are used, they have to be much more securely evidence-based than those suggested. Last week in Committee, our Amendments 196 and 198 would have obliged the Office for Students to assess the evidence that any proposed metric for assessing teaching quality is actually correlated with teaching quality, and ensured that, before making that assessment, the OfS consulted those who know first-hand what is needed to measure teaching quality: academic staff and students. The Minister did not comment on that point, so it remains one on which I should like to hear his opinion. The importance of ensuring that the statistics used are reliable and evidence-based cannot be overstated. They must earn and retain the confidence of the higher education sector, and that means academics, students and administrators alike.
In her Amendment 201, the noble Baroness, Lady Wolf, seeks to ensure the quality of the statistics used by the OfS, and this should be a basic requirement. I support my noble friend Lord Lipsey in questioning the validity and value of the National Student Survey. The survey merely asks students about their perceptions of teaching at their institution. By definition, those perceptions are subjective and, since students experience only their own institution, cannot support comparisons between institutions. I heard what the noble Lord, Lord Willetts, said when he suggested that similar institutions could be compared in terms of their ethnic make-up and students’ economic background. That kind of benchmarking sounds improbable at best because, even if suitable comparators could be found, the question remains how the outcomes would be weighted.
It sounds as though gold, silver and bronze categories would be created before the metrics had even been measured. As I said, that sounds improbable to me, and I agree with the noble Baroness, Lady Wolf, that benchmarking is surely not the answer. Linking institutions’ reputations to student satisfaction is likely to encourage academics to mark more generously and perhaps even to avoid designing more difficult, challenging courses.
With academics increasingly held accountable for students’ learning outcomes, students’ sense of responsibility for their own learning, which I thought was a core aspect of higher education, will surely diminish. We are now entering an era in which students dissatisfied with their grades can sue their universities. Improbable as that sounds, only last week the High Court ruled that Oxford University had a case to answer, in response to a former student who alleged that what he termed “boring” and “appallingly bad” teaching cost him a first-class degree and the opportunity of higher earnings.
This may be the shape of things to come. Last year, nearly 2,000 complaints were made by students to the higher education Ombudsman, often concerning contested degree results. Nearly a quarter were upheld, which led to universities being ordered to pay almost £500,000 in compensation. Does anyone seriously believe that the introduction of the TEF metrics will lead to a reduction in such complaints?
Metrics used to form university rankings are likely to reveal more about the history and prestige of those institutions than about the quality of teaching that students experience there. The Office for National Statistics report, on the basis of which the TEF is being taken forward, made it clear that its authors were told which metrics to evaluate, leading to the conclusion that these metrics were selected simply because the data were available to produce them. It is widely acknowledged that students’ experience in their first year is key in shaping what they gain from their time at university, yet the focus of the proposed metrics is, of course, mainly on students’ experiences in their final year and after graduation.
The ONS report was clear that the differences between institutions’ scores on the metrics tend to be narrow and not significant. So the majority of the judgment about who is designated gold, silver or bronze will actually be based on the additional evidence provided by institutions. In other words, an exercise that is supposedly metrics-driven will in fact be decided largely by the TEF panels, which is, by any other name, peer review.
Although the Minister spoke last week about how the TEF would develop to measure performance at departmental level, the ONS report suggested that the data underpinning the metrics would not be robust enough to support a future subject-level TEF. Perhaps the Minister can clarify why he believes that this will not be the case, given that the quality of courses within a single university tends to be as variable as the quality of courses between institutions. As I said in Committee last week, this would also mean that students’ fees were not directly related to the quality of the course they were studying. A student at a university rated gold or silver would be asked to pay an enhanced tuition fee even if their course at that university was actually below standard, a fact disguised by the institution’s overall rating.
Learning gain, or value added, has been suggested as an alternative and perhaps better measure of teaching quality, and is being explored in other countries. At a basic level, this measure compares the qualifications and skill levels a student has on starting their degree programme with those they have on finishing it: in other words, a proper, reliable means of assessing what someone has gained from their course of study.
The BIS Select Committee report on the TEF metrics published last year recommended that priority should be given to the establishment of potentially viable metrics relating to learning gain. I hope the Minister will have something positive to say on that today or, failing that, on Report. We do not believe that the metrics as currently proposed are fit for purpose; more importantly, nor do many of those within the sector who will be directly involved with the TEF. That should be a matter of some concern for the Minister, for his colleague the Minister for Universities and Science, and indeed for the Government as a whole.