Teaching: the missing response to university rankings and undergraduate satisfaction

I was listening to academic administrators bemoan the rise of international university rankings at the Worldviews conference on Higher Education and the Media in Toronto not long ago. University of Toronto President David Naylor cheekily compared rankings to the necessary indignity of having a colonoscopy. The president of the University of Alberta, Indira Samarasekera, was much more dismissive, attacking the statistical quality of the rankings while expressing appreciation that the rankers remained engaged in dialogue. But she seemed proud of the decision not to fill out surveys for rankers like Canada's Maclean's magazine. They're free to use publicly available data, she declared, but public money (translated as time spent by university staff, in wages) shouldn't be spent on gathering data explicitly for them.

One could argue that the rise in the popularity of university rankings reveals a gap between what the public expects of the university and what the university believes it offers to the public (read: wider society). From the ranker's point of view, the public needs solid, third-party information in order to make an informed decision about where to attend. That matters more than ever, now that students face ever greater financial barriers to participation. (For a sense of where that might go in Canada, see the new White Paper in the UK.)

Universities say they're interested in meeting students' needs and ensuring their academic success (with or without the frame of "value for money"). But they have a longstanding habit of putting the undergraduate experience second to their research missions.

The universities' consistent reaction, therefore, is not to argue that rankings are a terrible idea in themselves. Rather, they fall back on their own bailiwick (scholarship) and claim that the ratings represent shoddy science. Universities question the variables used to generate a ranking and whether those variables are consistently comparable over the years or across institutions. Rankers (those on the panel, anyway) say they aim to be as transparent about their methodology as possible, but, hey, they're not the scientists.

There are real limitations to rankings. Those based solely on research productivity, for example, say little about what a student's education will be like. Student satisfaction and other qualitative measures are difficult to capture and compare in any meaningful way.

The whole enterprise, at least from the perspective of the two presidents on the panel, seemed a bit of a defeatist wash. Their attitude toward coming up with something better to fill the gap, however, was very passive: this has little to do with us, and it is largely out of our hands. Like a colonoscopy, the rankings are something we're resigned to, and we hope the results aren't too disruptive of the status quo.

Do they have to take that attitude? In another post tomorrow, I'll argue that they don't.

But change, they won’t.
