A “Transparent” Institution: Monitoring Higher Education

Lately, media outlets are all abuzz over higher education. We are constantly told that our college degrees are not worth their cost, and new metrics have been devised to prove it (see here, here, and here). At the same time, recent scholarship has claimed that American universities appear to be teaching our students very little. Leaving aside whether or not these fears are well founded, it cannot be denied that they have led to an increased effort to measure (and therefore rank) the success of educational institutions. College rankings, although they have been around for quite a while, seem to have become increasingly important as of late (for a brief history, see here). Rankings are one way to secure legitimacy, and legitimacy becomes increasingly important when your institution is under criticism. The recent debut of President Obama's College Scorecard only further legitimizes the use of school rankings as a way to pass judgment on any given institution and to really know the value of a school. (Notice that the scorecard comes out of the U.S. Department of Education's College Affordability and Transparency Center; my emphasis.)

Picking a school with a good rating is supposed to ensure that your degree will be worth its salt, securing your future. We think that if we can just manage to measure enough, we can really know how a degree from a particular school will work out.

The problem is that school rankings are just statistics. And, as one professor of sociology recently reminded me, statistics are like sausage: once you know how they're made, you don't want anything to do with them. In other words, statistics and rankings can be incredibly misleading.

Common metrics such as graduation rate, SAT scores, acceptance rate, and number of faculty publications tell you more about the prestige and exclusivity of an institution than they do about the quality of its education. These measures have little bearing on the classroom experience, the willingness of professors to meet outside of class, the rigor of course material, or the quality of support you get when looking for a job (learn more here).

Sociologists Wendy Espeland and Michael Sauder have documented a number of additional problems with ranking systems (2007). One issue is that school rankings act as a type of self-fulfilling prophecy. This happens because audiences (like potential students or donors) start to perceive the inequalities created by the rankings as real. Keep in mind that the rankings are based on a particular set of values that may or may not match up with those of the audience. Perhaps you want to be an entomologist and study the luna moth for the rest of your life. "Darwin University" might have the number one ranked program in biology, but it doesn't offer any courses on insects and sits outside the luna moth's habitat. A ranking system obscures this fact.

Rankings also influence the outcomes of future rankings. This occurs because part of the ranking process involves ratings solicited from experts in the field (so, for example, lawyers are asked to rate law schools). As Espeland and Sauder detail, these individuals often know little about the schools they rank other than the previous rating. So, Lawyer X has been told that ABC Law School is number one in the USA and proceeds to rank it as number one on the survey. Over time, these rankings solidify. The toy simulation below makes this mechanism concrete.
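To see how anchoring on last year's published rank can freeze a ranking in place, here is a minimal sketch in Python. Everything in it is invented for illustration: the number of schools, the `anchor` weight, and the noise levels are my assumptions, not anything Espeland and Sauder estimated.

```python
# Toy model of the survey feedback loop: each year's expert score is mostly
# an echo of last year's published rank, plus a weak, noisy quality signal.
import random

random.seed(0)
n_schools = 5
quality = [random.gauss(0, 1) for _ in range(n_schools)]  # unobserved "true" quality

def to_ranks(scores):
    # Higher score -> better rank; rank 0 is the top school.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for rank, school in enumerate(order):
        ranks[school] = rank
    return ranks

# Year zero: a noisy reading of quality sets the first published ranking.
ranks = to_ranks([q + random.gauss(0, 1) for q in quality])
anchor = 0.8  # how heavily raters lean on the previous published rank

for year in range(10):
    scores = [
        anchor * (n_schools - 1 - ranks[i])   # echo of last year's rank
        + (1 - anchor) * quality[i]           # weak quality signal
        + random.gauss(0, 0.1)                # survey noise
        for i in range(n_schools)
    ]
    ranks = to_ranks(scores)
    print(f"year {year}: ranks = {ranks}")
```

Run it and the order set in year zero barely moves, even where it disagrees with the underlying quality: the previous rating, not the school, is what is being rated.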

In addition, schools alter their behavior based not on what produces the best education, but on what produces the best rankings. A prime example is the increased importance of LSAT scores for entrance to law school. The average LSAT score of admitted students is calculated into the ratings. As a result, students with lower LSAT scores are no longer admitted. And, as Espeland and Sauder document, this does not always mean that schools get to admit the students with the most potential or merit when their entire application is considered. The arithmetic below shows why the incentive is so strong.
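As a back-of-the-envelope illustration of that incentive, here is a small Python sketch. The scores, the 160 cutoff, and the file descriptions are all invented; the point is only that excluding strong-but-low-scoring applicants mechanically raises the reported average.

```python
# Hypothetical applicant pool: (LSAT score, rest of the application).
applicants = [
    (150, "strong essay, unusual work experience"),
    (172, "thin file otherwise"),
    (168, "average file"),
    (155, "exceptional recommendations"),
]

def mean_lsat(pool):
    return sum(score for score, _ in pool) / len(pool)

# Admit on the whole application: every file gets in.
print(f"whole-application average LSAT: {mean_lsat(applicants):.1f}")    # ~161.2

# Admit for the ranking: drop anyone below 160, whatever else the file says.
ranking_pool = [(s, f) for s, f in applicants if s >= 160]
print(f"ranking-driven average LSAT:    {mean_lsat(ranking_pool):.1f}")  # 170.0
```

An almost nine-point jump in the reported average costs the school its two most interesting applicants, which is exactly the trade Espeland and Sauder describe.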

Finally, schools also spend a lot of money, money that might otherwise have gone to actual education, on marketing aimed at influencing how those who complete the ranking survey will rate their school. In other words, a great deal of effort and funding goes into perception management rather than education. This is especially troubling when schools are facing increasingly tight budgets and yet are expected to raise their educational standards in the classroom.

If you are going to look at school rankings, make sure to look at the ranking criteria too.  Does the ranking agency care about what you want from your institution?  I think most of us would see that they probably don’t.  When we stop caring about rankings, so will universities—and then they can get back to their real job and forget about impression management.
