Graphic detail | The value of university

Our first-ever college rankings

By D.R.

AMERICAN universities claim to hate the simplistic, reductive college rankings published by magazines like US News, which wield ever-growing influence over where students attend. Many have even called for an information boycott against the authors of such ratings. Among the well-founded criticisms of these popular league tables is that they do not measure how much universities help their students, but rather what type of students choose to attend each college. A well-known economics paper by Stacy Dale and Alan Krueger found that people who attended elite colleges do not make more money than do workers who were accepted to the same institutions but chose less selective ones instead—suggesting that former attendees and graduates of Harvard tend to be rich because they were already intelligent and hard-working before they entered college, not because of the education or opportunities the university provided.

On September 12th America’s Department of Education unveiled a “college scorecard” website containing a cornucopia of data about universities. The government generated the numbers by matching individuals’ student-loan applications to their subsequent tax returns, making it possible to compare pupils’ qualifications and demographic characteristics when they entered college with their salaries ten years later. That information offers the potential to disentangle student merit from university contributions, and thus to determine which colleges deliver the greatest return and why.

The Economist’s first-ever college rankings are based on a simple, if debatable, premise: the economic value of a university is equal to the gap between how much money its students subsequently earn, and how much they might have made had they studied elsewhere. Thanks to the scorecard, the first number is easily accessible. The second, however, can only be estimated. To calculate this figure, we ran the scorecard’s earnings data through a multiple regression analysis, a common method of measuring the relationships between variables.

We wanted to know how a wide range of factors would affect the median earnings in 2011 of a college’s former students. Most of the data were available directly from the scorecard: for the entering class of 2001, we used average SAT scores, sex ratio, race breakdown, college size, whether a university was public or private, and the mix of subjects students chose to study. There were 1,275 four-year, non-vocational colleges in the scorecard database with available figures in all of these categories. We complemented these inputs with information from other sources: whether a college is affiliated with the Catholic Church or a Protestant Christian denomination; the wealth of its state (using a weighted average of Maryland, Virginia and the District of Columbia for Washington) and prevailing wages in its city (with a flat value for colleges in rural areas); whether it has a ranked undergraduate business school (and is thus likely to attract business-minded students); the percentage of its students who receive federal Pell grants given to working-class students (a measure of family income); and whether it is a liberal-arts college. Finally, to avoid penalising universities that tend to attract students who are disinclined to pursue lucrative careers, we created a “Marx and Marley index”, based on colleges’ appearances during the past 15 years on the Princeton Review’s top-20 lists for political leftism and “reefer madness”. (For technically minded readers, all of these variables were statistically significant at the 1% level, and the overall r-squared was .8538, meaning that 85% of the variation in graduate salaries between colleges was explained by these factors. We also tested the model using 2009 earnings figures rather than 2011, and for the entering class of 2003 rather than 2001, and got virtually identical results.)
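
The mechanics of this estimate can be illustrated in a few lines of code. Below is a minimal sketch of the approach using Python's statsmodels library; the data are synthetic and the column names are invented stand-ins for the scorecard's fields, so the coefficients mean nothing, but the structure mirrors the regression described above.

```python
# A minimal sketch of the expected-earnings regression, using synthetic data
# in place of the 1,275-college scorecard extract. Column names and
# coefficients are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200  # stand-in for the 1,275 colleges in the real dataset

df = pd.DataFrame({
    "avg_sat": rng.normal(1100, 130, n),          # average SAT of entering class
    "pct_pell": rng.uniform(0.05, 0.60, n),       # share receiving Pell grants
    "pct_engineering": rng.uniform(0.0, 0.5, n),  # share studying lucrative subjects
    "city_wage_index": rng.normal(1.0, 0.1, n),   # prevailing wages near campus
})

# Synthetic earnings: driven by student and location traits, plus noise that
# stands in for whatever each college itself adds (or subtracts).
df["median_earnings_2011"] = (
    20_000
    + 30 * df["avg_sat"]
    - 15_000 * df["pct_pell"]
    + 25_000 * df["pct_engineering"]
    + 10_000 * df["city_wage_index"]
    + rng.normal(0, 4_000, n)
)

# Fit expected earnings as a function of the colleges' inputs only.
model = smf.ols(
    "median_earnings_2011 ~ avg_sat + pct_pell + pct_engineering + city_wage_index",
    data=df,
).fit()

print(model.rsquared)  # analogous to the 0.85 reported for the real model
```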

After feeding this information into the regression, our statistical software produced an estimate for each college, based exclusively on these factors, of how much money its former students would make. The upper tiers of these expected-earnings figures are dominated by colleges that emphasise engineering (such as Worcester Polytechnic) and attract students with high SAT scores (like Stanford). The lower extreme is populated by religious and art-focused colleges, particularly those in the south and Midwest. This number represents the benchmark against which we subsequently compare each college’s actual earnings figure to produce the rankings. The bar is set extremely high for universities like Caltech, which are selective, close to prosperous cities and teach mainly lucrative subjects. If their students did not go on to extremely high-paying careers, such colleges would probably be doing something gravely wrong. Conversely, a southern art school with low-scoring, working-class students, such as the Memphis College of Art, might actually be giving its pupils a modest economic boost even though they earn a paltry $26,700 a year a decade after enrolment: workers who attended a typical college with its profile would make about $1,000 less.
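
Continuing the sketch above (still with made-up numbers), the ranking itself is then a single subtraction: each college’s actual earnings minus the benchmark the model predicts for it.

```python
# Continuing the hypothetical sketch above: the model's fitted values serve as
# each college's benchmark, and the gap between actual and expected earnings
# is the over- or under-performance on which the ranking is sorted.
df["expected_earnings"] = model.fittedvalues
df["over_or_under_performance"] = (
    df["median_earnings_2011"] - df["expected_earnings"]
)

ranking = df.sort_values("over_or_under_performance", ascending=False)
print(ranking[["median_earnings_2011", "expected_earnings",
               "over_or_under_performance"]].head(10))
```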

The sortable table above lists the key figures for all 1,275 institutions in our study that remain open. The first column contains the median post-enrolment salary that our model predicts for each college, the second its actual median earnings, and the third its over- or under-performance. Clicking on a university pops up a window that shows the three factors with the biggest effect on the model’s expectation. For example, Caltech’s forecast earnings increase by $27,114 as a result of its best-in-the-country incoming SAT scores, another $9,234 thanks to its students’ propensity to choose subjects like engineering, and a further $2,819 for its proximity to desirable employers in the Los Angeles area.
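
We do not spell out above exactly how those per-factor dollar figures are derived, but one natural way to produce such a breakdown from a linear model is to multiply each factor’s coefficient by the college’s deviation from the sample average and report the three largest contributions. The sketch below does exactly that with invented coefficients and values; none of the numbers are the model’s real estimates.

```python
# Hypothetical illustration of a per-factor breakdown: each contribution is the
# factor's coefficient times the college's deviation from the sample mean.
# All coefficients, means and college values below are invented.
coefficients = {           # dollars of predicted earnings per unit of each factor
    "avg_sat": 90.0,
    "pct_engineering": 30_000.0,
    "city_wage_index": 20_000.0,
    "pct_pell": -15_000.0,
}
sample_means = {"avg_sat": 1_090, "pct_engineering": 0.07,
                "city_wage_index": 1.00, "pct_pell": 0.35}
college = {"avg_sat": 1_550, "pct_engineering": 0.45,   # a Caltech-like profile
           "city_wage_index": 1.12, "pct_pell": 0.11}

contributions = {
    name: coefficients[name] * (college[name] - sample_means[name])
    for name in coefficients
}

# The three factors that move the forecast the most, in either direction.
top_three = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:3]
for name, dollars in top_three:
    print(f"{name}: {dollars:+,.0f}")
```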

In an unexpected coincidence, it has come to our attention that the Brookings Institution, a think-tank in Washington, happens to have published its own “value-added” rankings using the scorecard data on the exact same day that we did (October 29th). Although their approach was broadly similar to ours, they looked at a much larger group of universities (including two-year colleges and vocational schools), and they appear to have used a very different set of variables. Above all, the Brookings numbers regard a college’s curriculum as a significant part of its “value add”, causing the top of its rankings to be dominated by engineering schools, and the bottom by art and religious institutions. In contrast, we treated fields of study as a reflection of student preferences, and tried to identify the colleges that offer the best odds of earning a decent living for people who do want to become artists or study in a Christian environment. Similarly, the Brookings rankings do not appear to weight SAT scores nearly as heavily as ours do, if they count them at all: colleges like Caltech and Yale, whose students subsequently earn far more money than those of an average university but significantly less than their elite test results would indicate, sit at the very bottom of The Economist’s list, whereas Brookings puts them close to the top.

It is important to clarify how our rankings should be interpreted. First, the scorecard data suffer from limitations. They only include individuals who applied for federal financial aid, restricting the sample to a highly unrepresentative subset of students that leaves out the children of most well-off parents. And they only track students’ salaries for ten years after they start college, cutting off their trajectory at an age when many eventual high earners are still in graduate school and thus excluded from the sample of incomes. A college that produces hordes of future doctors will have far lower listed earnings in the database than one that generates throngs of, say, financial advisors, even though the two groups’ incomes are likely to converge in their 30s.

Second, although we hope that our numbers do in fact represent the economic value added by each institution, there is no guarantee that this is true. Colleges whose earnings results differ vastly from the model’s expectations might be benefiting or suffering from some other characteristic of their students that we neglected to include in our regression: for example, Gallaudet University, which ranks third-to-last, is a college for the deaf (which is why we excluded it from our table in print). It is also possible that highly ranked colleges simply got lucky, and that their future students are unlikely to make as much money as the entering class of 2001 did.

Finally, maximising earnings is not the only goal of a college, and probably not even the primary one. In fact, you could easily argue that “underperforming” universities like Yale and Swarthmore are actually making a far greater contribution to American society than overperformers like Washington & Lee, if they tend to channel their supremely talented graduates towards public service rather than Wall Street. For students who want to know which colleges are likely to boost their future salaries by the greatest amount, given their qualifications and preferences regarding career and location, we hope these rankings prove helpful. They should not be used for any other purpose.

CORRECTION: An eagle-eyed commenter has alerted us that all 20 listed campuses of Pennsylvania State University appeared with the same median earnings. Other keen observers have noted irregularities regarding a handful of colleges with similar names in different states. In response, we have reviewed the scorecard database, consolidated all colleges with multiple campuses but a single listed salary figure, identified and distinguished universities with overlapping names, re-run the regression, and revised the rankings and the text of this blog post. As a result, the top and bottom ten colleges published in our print issue no longer exactly match the ones in these updated rankings. However, the vast majority of universities moved by no more than a handful of places. Additionally, we have removed references to “graduates” and “alumni”, to reflect the fact that the scorecard’s income data do not distinguish between graduates and students who enrolled but did not graduate.
