The Telegraph
Tuesday, February 11, 2014

At a recent literary meet, Sandip Roy, who was moderating a session on Calcutta, remarked that he had never met a Bengali parent whose children did not come first in class. It has been observed that whenever Bengalis wish to demonstrate how scholarly a person was, they proclaim that he (or she) ‘had never stood second’ in his life. Mischievously, I think that the statement would still be true if the said scholar always stood third, 14th or 20th.

The concept of ranking students on the basis of their academic performance, usually synonymous with examination results, was always odious to me. The goal for a good student seemed to boil down to getting that one mark more than her rival, irrespective of the quality or standard achieved. Very often it resulted in a vicious competitiveness that manifested itself in a reluctance to share books, notes and, indeed, learning itself. Besides, there was the danger that a student who was placed first in her section of the class might believe, in the absence of a proper benchmark of excellence, that she had reached the height of attainment.

Thus, instead of serving as an incentive for students to improve the quality of their learning, ranking ends up generating negative effects. An anecdote from history illustrates the unexpected results certain incentives can produce. In the 19th century, the colonial rulers of Hanoi decided to offer an incentive to people in order to control a rat infestation. For every dead rat submitted to the authorities, the catcher would receive a reward. Many rats were killed but many were bred for profit.

It is puzzling that no one seriously questioned the whole concept of ranking. I feel that statistics and modes of assessment and evaluation have never been properly addressed in teacher training programmes. Irrespective of the marking scheme, there would be wide differences in the way different teachers marked the same paper. It is quite impossible to rule out the subjective element unless it is a purely objective paper where each question has only one possible answer. Oh yes, there was also the belief that some subjects were ‘more scoring’ than others, so there would be an artificial selection of subjects that had nothing to do with aptitude or interest. I had also often wondered how it was statistically possible for there to be just one ‘topper’ out of lakhs of examinees in the regional Board examinations.

Anyway, around a decade ago, our school decided to do away with aggregates, overall percentages and ranking. A student’s academic report would thereafter record her performance in every subject in terms of grades, along with the class highest and average in each subject, to give a clearer picture of her standing. However, we have always respected ranking on the sports field, simply because those performances can be observed and quantified. We have absolutely no quarrel with bronze, silver and gold medals, or with competitors standing on the podium in rank order. The ranking of organizations and institutions, though, is another ball game altogether. And my story today is about the dubious business of ranking institutions.

It is said that because we readers love to read rankings, newspapers and magazines publish them. Indeed, society has always been ‘fascinated with order, rank and competition’. We are inundated with lists of the Top 10, 20, 30 or 100 of almost everything — all of which are read eagerly by diehard list-lovers. So in any given year we have lists of the best-dressed and worst-dressed men and women, the richest people, the happiest countries, the most polluted cities, the best celebrity smiles and so on. I cannot say why ranking is needed in these categories but there are reasons why institutions and organizations are rated and ranked. Ranking certainly narrows down choices and brings order to a bewildering mass of data. Some say that the publication of rankings brings good revenue to newspapers and magazines.

It is widely acknowledged that, to date, there is no satisfactory methodology for assessing and ranking institutions. Nevertheless, newspapers and magazines keep hiring business consultants and market research organizations to carry out surveys and arrive at findings, which they happily publish along with relevant interpretations. I admit my perverse pleasure when, the other day, I saw a prominent advertisement in this paper with the heading, “Condemnation of the New IRS”. Eighteen leading newspapers of the country, “showing an unprecedented show of unanimity”, had rubbished the latest Indian Readership Survey, saying that it was riddled with shocking anomalies that defied logic and common sense. I am hoping that newspapers will now think twice before they pay for irresponsibly executed surveys of other entities and publish them so readily.

The flawed ranking of schools should be condemned just as strongly. Since I know that schools will not come together to do this, I am going to point out why the ratings have no credibility whatsoever. Outlook was possibly the first magazine in India to publish a survey of schools, in 2002, followed by another the next year. I still have the then managing editor’s reply to my indignant email questioning the absurd methodology. He must have received a barrage of letters and emails, because he said that they would be looking for ‘the right methodology’ and would incorporate my concerns in their next survey “whenever that happens”. It hasn’t happened yet. An educational magazine has been conducting a survey of “India’s Most Respected Schools” for some years. The exercise seems to have found favour with many and has gone from strength to strength. The editor claimed that schools were reluctant to fill up questionnaires while some exaggerated their achievements. So their survey, carried out by a well-known market research organization, is based on ‘perception’. But the fact remains that research has found little correlation between ‘perceived data’ and actual data. You can, of course, argue that your perception is your reality.

The trouble is that schools, especially those placed at the top, accept these rankings unquestioningly. According to the ‘Respected Schools 2013’ rankings, ours is “the best girls’ day school in India”. Although we were flattered, we could not accept this as the truth, simply because there is no foolproof method of selecting the best school from among hundreds of schools in India. In any case, your ‘best’ school may not be mine, because each of us wants something different from a school and our priorities may not match. Very recently, an ICT magazine informed us that we had been adjudged one of the top schools of India. It claimed that of the 10,000 schools sent questionnaires, 5,476 responded. We didn’t, yet we have been included in the list. If the rating was based on facts and figures, as they claim, where did they get their data? Stranger still, a national daily’s Calcutta edition published the most absurd and confusing lists, with ‘smart schools’ and ‘cool schools’ and so on. By the time they were through with their zonal lists and highly obfuscating data, many felt the rankings were too silly even to discuss. We would have been quite happy to be left out of the reckoning altogether, until one fine morning we were informed that we were 10th in the south zone.

The whole exercise was carried out with the help of ‘mystery parents’ who had to pose as potential applicants and find out about the facilities offered by different schools. (I seriously wondered why this kind of sting operation had to be adopted to find out basic information from schools.) For some strange reason, their ‘Perception Module’ required “tuition teachers” to rate a school on criteria such as transparency, freedom and holistic education. How on earth would these “tuition teachers” know? Lastly, is it ethical or fair for the principal of a school to rate other schools?

The three most powerful ranking systems in the world have been accused of “passing inaccurate information as factual”. Phil Baty of the Times Higher Education world university rankings confessed that the rankings his magazine had been publishing, which had attracted enormous global attention, were “not good enough”. Thus its partnership with QS (Quacquarelli Symonds) came to an end over disputes regarding methodology. Studies indicate not only that rankings are wildly variable, but that important ‘intangibles’ such as campus experience and ethos simply cannot be measured accurately. The US News & World Report abounds in statistics, but criticism is directed at ‘vague’ criteria such as ‘academic reputation’ and at an Ivy League bias.

The Office for Standards in Education in England does not rank schools. It inspects all government and government-aided schools and grades them as ‘outstanding’, ‘good’, ‘requires improvement’ and ‘inadequate’. The objective is to promote improvement. Letters are sent to the school and parents in advance, spelling out the purpose and nature of the inspection. Her Majesty’s inspection team spends two whole days with each school, gathering firsthand information and questioning people on campus. Every judgment is backed by evidence. In spite of the transparency, Ofsted is criticized for ‘number crunching’ and ‘using test data which are fundamentally unsound’.

Schools must be accountable and they need to be inspected at regular intervals to be kept on their toes and to be made aware of their strengths and weaknesses. However, the impact of rankings (all of them grossly unscientific so far, at least in India) on schools can be seriously damaging. Schools are likely to become publicity conscious and will be on the lookout for a winning formula to secure good list positions. In the process, the real work of teaching and learning, which most often goes unnoticed, may get somewhat side-lined. A quality education is one which brings out the best in an individual and instils in her a thirst for lifelong learning. An artificial measure for ‘quality education’ is a dangerous thing.