Ranking does nothing to improve the quality of education
As the school-leaving results are being published, the massive rush for college admissions has already commenced all over the country. Colleges range from top-class to pathetically poor in academic standards, from very expensive to almost free, and from impressively resource-rich to bare four walls with only a few chairs and tables. The inequality is striking too: quality is not evenly distributed over the large number of institutions. There are a few good campuses, and then a precipitous drop into a cesspool of thousands of campuses that have very little to do with proper education. In such a situation, parents and students face a problem of choice, often limited to two or three colleges, as they try to find out which college is a little better than the others.
Many years ago, this information came by word of mouth from people who knew something about these colleges. Usually, colleges would differ in terms of many characteristics. One could be perceived to have good discipline but lack an adequate number of teachers. Another could be known for a very committed set of teachers but thought to be poor in facilities and infrastructure. Colleges were not clones of one another. Some campuses could be well known for the liberal arts, others could be considered to have a good physics department, and so on. Then, from around the mid-1990s, the media came into the picture, along with some market-survey organizations. Ranking educational institutions became a business in its own right. Inaccurate big data, corruption, the freezing of inequalities, and pressure on college administrations to do what the media wanted are features of this business. As in many other cases, this business began in the United States of America, and was then caricatured in India and many other developing nations.
Delving a little into the background of ranking is important in understanding its wider ramifications. A news magazine in the US, in an attempt to compete with the likes of Time and Newsweek, decided in the 1980s to rank 1,800 colleges and universities in the country. This ranking was offered to students and parents as a means of arriving at a more informed decision about admissions. Entering a college is a decisive moment in a student's life. The college or university would bring new learning, lifelong friends, often spouses, and jobs that set students off on a career. The news magazine decided that, to ensure the credibility of its ranking, the results ought not to be widely different from the perceived ranking that society already had in its mind. If the likes of Harvard, MIT, Stanford and Princeton were not at the top of the heap, the ranking would be considered useless, and, above all, the magazine would not sell. Hence, the magazine decided to base the ranking on the subjective perceptions of people in the education community.
The results were as expected, at least at the top of the pile. Colleges that did not make it to the very top began to question the authenticity of the rankings since they were entirely subjective and not based on hard data. The next time, the magazine tried to identify the factors that made universities like Harvard and Princeton what they were: acceptance rates, the results of entering students, the results of graduating students, jobs obtained, the positions of alumni, the publications of the faculty, and the cost of education. These factors were measurable, and data were largely available. Now the game changed dramatically. All colleges tried to emulate the toppers. This could be considered good to the extent that it might lead to competition among colleges to reach the top. However, the concept of excellence that the ranking entailed, and the new-found obsession with data, ironed out the differences even among the top colleges. There emerged a single idea of what constituted a good university. There was simply no alternative version around.
The inevitable occurred. If a college was not on top relative to its competitors, it would lose students and revenue, funding would dry up, and the reputation of the college would suffer. If a college was good, it would continue to be good, raise fees, get more funds and have higher acceptance rates. If a university was down in the ranking, it would sink further. Thus, the inequality in perceived and actual quality increased sharply. Hence, college administrators began to try, by hook or by crook, to improve their rankings. A large number of not-so-good colleges resorted to lies, bribes, and marketing consultants. Resources, already scarce, were spent on all the wrong things.
In India, the ranking business began around the mid-1990s. Market-survey firms and big magazines started by ranking the business schools first, gradually expanding to other colleges and universities, and even schools. It started with subjective data. Then, reacting to criticism, the rankings included more objective data. However, many such data were hard to validate, like the number of books in a library. Moreover, in a multi-factor analysis, the fate of a college or institution would be determined not only by the included parameters but also by the weights (indicating relative importance) attached to them in calculating the total score. In the rankings of educational institutions in India during the past 20 years or so, the pattern has remained the same by and large. Once in a while, a surprise occurs, which is immediately suspect. No significant change has occurred in relative positions. Colleges are rumoured to falsify data regarding faculty, books, research, and physical facilities, and some magazines openly quote 'sponsorship fees' for a position in the top 10 of some narrowly defined category - like engineering colleges with more than 5,000 students.
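The point about weights can be made concrete with a small, entirely hypothetical sketch: the college names, parameters and scores below are made up, but they show how the same raw data can produce opposite rankings under two equally plausible weighting schemes.

```python
# Hypothetical illustration of weighted multi-factor ranking.
# Two imaginary colleges scored (out of 10) on three made-up parameters.
colleges = {
    "College A": {"faculty": 9, "library": 5, "placements": 6},
    "College B": {"faculty": 6, "library": 9, "placements": 7},
}

def total_score(scores, weights):
    """Weighted sum of parameter scores."""
    return sum(scores[param] * w for param, w in weights.items())

# Two weighting schemes; each sums to 1, and neither is obviously wrong.
weights_1 = {"faculty": 0.5, "library": 0.2, "placements": 0.3}
weights_2 = {"faculty": 0.2, "library": 0.5, "placements": 0.3}

for label, w in [("scheme 1", weights_1), ("scheme 2", weights_2)]:
    ranking = sorted(colleges, key=lambda c: total_score(colleges[c], w),
                     reverse=True)
    print(label, ranking)
# Under scheme 1, College A comes first; under scheme 2, College B does.
```

Nothing about the colleges changed between the two runs; only the ranker's choice of weights did, which is why the weights deserve as much scrutiny as the data.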
In an attempt to have a reliable ranking, the government of India has jumped into the fray, in what might be a desperate attempt to change things. The National Institutional Ranking Framework has been formed for all educational institutions to participate in. The parameters are roughly the same as those used by magazines and market-survey experts. Perceptions continue to play a major role. Apart from ranking, India has aped another concept of evaluating educational institutions from Western countries - accreditation. An institution is accredited if it follows a specified set of policies and processes. The self-assessment is then validated by a team of experts. If an institution fails the first time, it is mentored (for a consideration) to pass the second time. Now it is possible for an institution to have accreditation but a low rank, or to have a high rank without accreditation.
The more confusing and complicated the system becomes, the more likely it is to be gamed. Lies and fake data abound. Resources are wasted on better optics. The sharp inequalities between colleges get fossilized. The average quality of education does not improve. More and more graduates become unemployable. The business that is higher education (with capitation fees, management quotas, private tuition, notebooks and cheating) thrives while students suffer. This is true, by and large, in many nations, especially developing economies. Little wonder, then, that a few days ago, the president of Beijing University declared that it was best not to encourage students to think and criticize. Higher education in India is all about getting a degree - with the faint hope of getting a job with it. Ranking does little to improve quality, diversity, and honesty in education.
The author is a former professor of Economics, IIM Calcutta