Higher Education and Research Bill

I think the noble Lord, Lord Lipsey, meant a number of things, but I am not saying that raw data are the problem. I think he was also referring to aspects such as whether you have a decent sample size. Benchmarking is not the answer. I am somewhat alarmed that it seems to have become a major part of what is under discussion.

My second amendment asks for something that is not exceptional in the world of regulation. Before elaborating, I have a request for the Minister. If at the end of this debate he does not think that the Office for Students can and should report on whether its statistics meet the UK Statistics Authority’s code of practice, will he explain why? Most regulators I know of that are involved in collecting statistics for information and regulation proudly state on their websites that their statistics meet the code of practice.

Among the things we can be proud of in this country are the UK Statistics Authority and our record of knowing what makes a good-quality statistic, and of making sure that, among public bodies and for public purposes, we do our very best to meet those criteria. One of the things we know, for example, is the importance of sample size. We know about the importance of the reliability of measures. We know that many things are quite difficult to measure validly, and that where we cannot measure them properly it is just as well to say so.

Another thing we know is that the quality of statistics can change over time, so you have to keep looking at them. One thing that has clearly changed over time is the degree to which one can assume that a standard used in one time and place carries over to another. Many of us in these debates have been standing up for the quality of university education. It is pretty clear that in North America, in this country and in many other countries there has been grade inflation over time, and that the rising proportion of people getting higher-class degrees and higher marks cannot be fully explained by harder-working students, miraculous teaching or any other splendid innovation. There has been a slippage. One of the reasons there has been grade inflation, and one of the reasons why we need to be very careful about this, is that students like easy grades; that is why I raise the point quite clearly. Since student satisfaction is quite important for promotion, particularly in North America, it has also been studied a great deal. We know that student satisfaction judgments and scores rise the more leniently instructors, lecturers and examiners grade. We also know, from statistics that I use a great deal in my teaching because students like them, that lecturers and professors get higher student satisfaction scores if they are good-looking. This applies to both men and women; it is completely gender-neutral.

So there are things we know about specific statistics, and we also know more broadly that there are things we need to look at to know whether statistics are valid, reliable and fit for purpose. As the noble Lord, Lord Lipsey, has indicated, there are aspects of student satisfaction measures which require careful attention before they are used for something as important and high-stakes as a rating of teaching quality issued by the regulator.

The final thing I want to say about the importance of observing a code of good practice (I have no reason to suppose that the Office for Students will not, but it would be nice to have reassurance that it will) is that you cannot add up completely unrelated statistics to make a meaningful total grade. This is often described as “apples and oranges”. Apples and oranges are relatively easy to add up; the difficulty lies in taking a large number of different measures, with different levels of validity, different levels of reliability (whether you would get the same result if you measured again) and different types of statistics, some with clear numbers attached and some judgmental, and adding them all up into a single judgment. That is a pretty dicey affair at best. It is interesting that, on the whole, this has not been done in research assessment, which has always worked at a much more disaggregated level. It is also something we need to be very careful about because, among other things, it risks not informing students but misleading them.

I find it very strange that, at the same time as saying that we want to give maximum information to students, we are also saying that the Government in their wisdom, or the Office for Students in its wisdom, are going to pull it all together into a single rank order which cannot be unpacked. What is really useful to students is to have lots of different information on different aspects, so that they can look for the things that they most want.

Is there any reason why we should not expect the Office for Students to follow the code of good practice that we already have in this country and which many other regulators follow? I also suggest, once again, that we use only statistics which actually have substantive meaning. That in itself makes it extremely unlikely that an all-encompassing, all-singing, all-dancing gold, silver and bronze rating is going to fit the bill.

About this proceeding contribution

Reference: 778 cc459-460
Session: 2016-17
Chamber / Committee: House of Lords chamber