THE RATING SEASON
- There should be a single mandatory annual rating for all B-schools

Performance anxiety

The ratings season is again upon us. The three business magazines, a management journal and the two major news magazines are publishing their ratings of business schools. These ratings should be a useful source of information for prospective students and for employers. The user should be able to make an informed choice, picking a school that scores well on the characteristics relevant to him and suited to his ability and finances. An annual ratings exercise provides comparative information on different B-schools.

To be useful, it must cover all of the approximately 1,000 (or 1,200) recognized B-schools. A rating gives information about each school on many parameters and sub-parameters. The reliability of the information depends on rigorous definitions of these parameters and on verification of the school’s claims. Such a comprehensive survey costs money. The magazines finance most surveys and make a surplus from the advertising revenues paid by B-schools. Instead of today’s five ratings, students need only one. All B-schools must be required to respond and to pay their share of the survey’s costs, so that comprehensive information is available.

Business World published the first ratings of B-schools in 1999. The All India Management Association developed an objective methodology for its ratings surveys as a service to students and recruiters, and its results have been published since 2000. Each of the five publications publishing annual ratings gets responses from only a quarter to a third of the business schools recognized by the All India Council for Technical Education. For the majority, who have to choose among the lesser-known schools, the information is of no use.

Each rating involves fieldwork by a market research agency. It is not enough to accept mailed responses from some schools. If 100 per cent coverage is not physically possible in the time available, a knowledgeable team must at least visit the schools with suspect information and verify their claims. This is because some B-schools juggle computers, libraries and even faculty between different disciplines to mislead the recognizing government regulator, the AICTE, and the rating agency by pretending to more infrastructure and faculty than they have. In the users’ interest, a single mandatory annual rating for all schools, with a subscription from each towards the costs, is desirable. This will make information available on all schools and stimulate competitive improvement among them.

Some of the present ratings surveys supplement personal visits with mailed questionnaires and written responses to enlarge the list of responding institutions. In most years, some of the better schools do not respond. Among the reasons are, perhaps, pique at an earlier rating, disagreement over methodology, too many ratings questionnaires requiring responses, and so on. This vitiates the information, since the comparisons are incomplete.

Some surveys are more detailed than others, but all have many common elements. All of them look at infrastructure, the quality and size of faculty, admission procedures, placement (some also look at the quality of placement), interface with industry and, in one case, governance as well. In each case, the parameters are broken down into detailed sub-parameters, refined in every case with the lessons learnt from previous surveys. All have some system for validation, and one of the surveyors does an additional survey to establish recruiters’ satisfaction with the schools.

All seem to have some sort of weeding out to eliminate schools that are too new, part-time courses, and so on. The documentation that is cross-checked for verification includes balance sheets, annual reports and appointment letters. In one case, “mystery shoppers” visit anonymously to verify the claims of the business school.

In all cases, the results classify the institutions into broad bands. The AIMA, for example, rates all participating institutes into three broad categories, A, B and C, each further subdivided (A+ and A; B+ and B; C+ and C). Within each category, the institutes are then divided into two groups, one for the “Better Performers” and the other for those at lower positions, so as to distinguish between institutes in the same category. The B-schools in each sub-group are listed alphabetically to keep the listing impartial. The top 10-15 B-schools are given a special mention as the Super League.

Some of the responding schools show distinct improvement on many parameters over the years. Many more schools now have faculty pursuing research, publication and case collection. Some have begun intensive programmes for faculty training and development. Many more now actively pursue business interface and consultancy. A few have developed and teach innovative new courses such as “extending creative boundaries”, “retail management” and “infrastructure management”. The orientation of the education, however, remains towards knowledge and skills; training in emotional intelligence or entrepreneurship is not much developed.

While the parameters appear similar in the different surveys, the sub-parameters differ in number and detail, and the definition of each may vary from one survey to another. For example, the quality of publications accepted for rating faculty may differ between surveys. Hence, the results of a survey by one publication may not be comparable with those of another. Validation is usually not done for all responding schools. There can always be some doubt about the neutrality of the surveys, since the survey expenses are borne by advertising revenues derived from the rated schools. The student and the recruiter need guidance on how to interpret the findings.

There are also multiple agencies conducting credit rating. The customer (lender, investor, company) decides which agency to use and which to trust; he has enough knowledge to choose between them. But credit rating cannot be compared with B-school ratings. The principal user of B-school ratings is a prospective management student, typically 18 to 20 years old. We cannot expect him to judge which rating survey is superior and trustworthy for his choice of business school; he does not have the skills or the maturity to do so. Yet he is making a decision that will be crucial for the rest of his life. He needs to be certain that the information is correct and will guide him to a decision that considers his needs and is in his interest.

For this reason, we must confine the ratings to one agency covering all recognized institutions, which must compulsorily participate. Independent oversight is essential, preferably by a group of eminent, neutral experts who review the methodology, interpretation and conduct of the ratings survey. The full methodology must be published, including definitions of the terms used. There must be 100 per cent validation: every school must be visited for verification. The ratings for all schools should be published together in book form each year, so that the student can look more closely at each school. The survey costs must be borne by the business schools.

These rating exercises have already brought in some healthy competition between schools. If all were to participate, the position would improve even more. Many schools have upgraded themselves as compared with earlier years, while new ones have tried to match the best standards.
