Science and Technology in the Russian Federation

May 15 | Education

Ratingmania

"Ratingmania" in the college environment has begun not so long ago: the best-known international rating ARWU (Academic Ranking of World Universities), or the Shanghai ranking, first saw the light in 2003. Today rankings of higher education and research institutions have become a part of our life, but complaints to the system of evaluation remain. Oleg Soloviev, editor of Round University Ranking, told STRF.ru about the fundamental problems of different ratings and their solving, as well as about the future of the education effectiveness evaluation system.

Many people blame errors in the rankings on the indicators they use. For example, MSU recently rose by 25 places in the THE ranking (Times Higher Education World University Rankings) due to a change in the sample of respondents. Tell us, are there any other biased indicators, in your opinion?

Yes, there are some "anomalies" in the rankings. If you follow the THE ranking over five years, from 2010 to 2014, you will see that one and the same institution can "jump" around considerably. This is mainly because it is in effect a ranking of three indicators (although technically there are 13 of them): normalized citation at 30%, research reputation at 18% and teaching reputation at 15%, which is 63% in total. As long as these three dominant indicators remain, universities will keep "jumping".

To avoid this, you need a ranking in which all the indicators carry more or less equal weight. For example, our ranking has no indicator with a weight of more than 10%. This is the main point. Accordingly, we increase the number of indicators.
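To illustrate the arithmetic behind this point, here is a minimal sketch. Only the three quoted THE weights (30%, 18%, 15%) and the 10% ceiling come from the interview; the indicator names, the 37% "other" bucket and the base scores are made-up assumptions. It shows how much more a composite score moves when one heavily weighted indicator shifts than under a flat 10% scheme.

```python
# Illustrative sketch: how weight concentration affects score volatility.
# Indicator names, base scores and the "other" bucket are hypothetical.

def composite(scores, weights):
    """Weighted sum of indicator scores (both dicts keyed by indicator name)."""
    return sum(scores[name] * weights[name] for name in weights)

# Concentrated scheme: 63% of the weight sits in three indicators (as quoted above).
concentrated = {"citation": 0.30, "research_reputation": 0.18,
                "teaching_reputation": 0.15, "other": 0.37}

# Flat scheme: no indicator above 10%, as in the approach described above.
flat = {f"indicator_{i}": 0.10 for i in range(10)}

base = 60.0
scores_conc = {name: base for name in concentrated}
scores_flat = {name: base for name in flat}

# The same 10-point swing in a single reputation-type indicator:
shift_conc = dict(scores_conc, research_reputation=base + 10)
shift_flat = dict(scores_flat, indicator_0=base + 10)

print(round(composite(shift_conc, concentrated) - composite(scores_conc, concentrated), 2))  # 1.8
print(round(composite(shift_flat, flat) - composite(scores_flat, flat), 2))                  # 1.0
```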

Speaking of indicators, many people have a question: when so much attention is paid to universities' research indicators, what happens to teaching?

The problem here is that teaching still cannot be measured. How can one measure the talent of a lecturer, or students' satisfaction with the knowledge they have acquired? You can survey the students, but the result will be very arbitrary and subjective.

As a result, there are de facto virtually no teaching indicators in the leading rankings. Formally, one of the groups of indicators in THE, weighing 30%, is teaching, and one indicator within that group, teaching reputation, accounts for 15%. In fact, it is a fiction, and there are two reasons for this.

First of all, this estimate is built from expert reviews, and how do the rankings find those experts? The respondent data were drawn from the Web of Science community in 2010-2014 and, accordingly, will be drawn from Scopus from 2015. In other words, teaching in higher education institutions, like their academic reputation, is assessed by researchers rather than by the end users of the product, whether that product is new knowledge or trained graduates.

The second reason is the form of the question. It reads: "Name no more than 15 universities in the world to which you would send your students for a PhD programme", that is, for research programmes once again. In fact, the correlation between citation and reputation is very high, around 0.88.

Therefore, the only way teaching can be measured is through formalized indicators: say, the ratio of teaching staff to students, the number of PhDs and bachelor's degrees awarded, and so on.

The teaching group of indicators makes up 50% of the total score in our RUR ranking. It consists of five indicators, each of which "weighs" 10%.

These indicators are formalized because it is impossible to obtain more or less comparable information across the seventy-odd countries represented in our ranking, or in the THE ranking for that matter. One could take the average entrance exam score, but it is different in every country.
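As a rough sketch of what such formalized indicators could look like in practice (the institutions, numbers and the min-max rescaling below are purely illustrative assumptions, not the actual RUR methodology):

```python
# Illustrative sketch only: hypothetical data and a simple min-max
# normalization; not the actual RUR methodology.

institutions = {
    "University A": {"academic_staff": 2_000, "students": 24_000, "phds_awarded": 300},
    "University B": {"academic_staff": 1_200, "students": 30_000, "phds_awarded": 120},
    "University C": {"academic_staff": 3_500, "students": 28_000, "phds_awarded": 650},
}

def raw_indicators(data):
    """Formalized teaching indicators: staff per student, PhDs awarded per staff member."""
    return {
        "staff_per_student": data["academic_staff"] / data["students"],
        "phds_per_staff": data["phds_awarded"] / data["academic_staff"],
    }

raw = {name: raw_indicators(d) for name, d in institutions.items()}

def normalize(raw, indicator):
    """Rescale one indicator to a 0-100 range across all institutions (min-max)."""
    values = [r[indicator] for r in raw.values()]
    lo, hi = min(values), max(values)
    return {name: 100 * (r[indicator] - lo) / (hi - lo) for name, r in raw.items()}

for indicator in ("staff_per_student", "phds_per_staff"):
    scaled = normalize(raw, indicator)
    # Each such normalized indicator would then enter the composite with a 10% weight.
    print(indicator, {name: round(v, 1) for name, v in scaled.items()})
```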

Most likely, most traditional universities will simply disappear because of global competition, particularly because of the scale of online educational platforms. What will remain are certification centres.

That is, people take certain courses and then come to such a centre to pass a standardized exam that is the same all over the world for a given speciality.

Does this mean that the sector of universities that train not researchers but practitioners in applied fields will disappear?

These are unique cases. MIPT, for example, or MEPhI will certainly not disappear. As for hands-on fields such as physics, chemistry and medicine, it is impossible to train a person in them virtually, at least it has not been possible so far. Later, perhaps, it will become possible.

But as far as the social sciences and humanities are concerned, they will certainly move online. They are already moving!

By the way, there are almost no humanities in the rankings. For example, if you removed the entire humanities block from Moscow State University, the university would rise very sharply. Faculties such as the History Faculty generate no citations and have no reputation, yet their staff numbers are inflated and they also have more students.

How does the ranking system affect higher education as a whole? Isn't there a danger of an even sharper division of Russian universities into a small group of leaders and "outsiders"?

On the one hand, there is stratification. But even the strongest Russian universities, the participants in the 5-100 project, are very far from the mid-level universities in the world's top 200. And the funding provided through this programme does not make a very big difference.

What, then, is the global purpose of rankings?

In general, rankings are needed as a global navigator in the field of education, that is, to make it easier for a person to choose a university.

For university administrators and national regulators, the ministries, departments and so on, rankings are a way to evaluate the effectiveness of institutions and to distribute money, and they have an enormous impact here. The important thing is to use rankings correctly: a university should not be judged simply because it takes 185th place in THE rather than 95th; the assessment has to be made in an integrated manner.

If we talk more about the mechanism for distributing funding, it differs somewhat from the rankings. For example, the Moscow Engineering Physics Institute, which is funded very well, is not at the very top of the rankings. But it is first in the world by citation.
