College Completion Rates: Up, Down, and Sideways

I love a good controversy about an important higher education topic. What better way to enjoy a Wisconsin snowstorm than to sit cozily inside, trading emails with knowledgeable folks who are trying to sort out why it appears college completion rates have declined in the U.S. over the last 30 or 40 years? I'm hard-pressed to think of one (well, maybe, after a long day of work, having this 38-week fetus out of me would be nice). So, thanks to Sarah Turner, John Bound, and Michael Lovenheim for giving us such a nice meaty analysis to chew over this week.

A good bit has already been written and said about this report, particularly by Cliff Adelman, the man who gave the world America's longitudinal transcript data and a robust series of reports on what they tell us about colleges and students. The fact that so many people find so many different messages in the analysis actually bodes well for the paper--it's partly a story about trends in completion rates (are they really down, or just stagnant?), partly a story about potential reasons for declines in rates (is it all about inadequate student preparation?), partly about differences among 4-year institutions (e.g., public flagships vs. other nonselectives), and partly about community colleges (are they "doing harm"? Why don't their outcomes seem affected by resources? etc.).

As a sociologist, I see questions about inequality pervading all of these issues, and nothing tickles me more than to see economists writing about stratification. If completion rates really declined in the face of efforts to expand overall participation, we can anticipate political pushback against advocates for greater efforts to enhance access--regardless of the reasons for the decline. If the reasons for decline (or stagnation) have anything to do with compositional changes on either the supply or the demand side (and the answer really is "both"), then that's a story about inequality too, since those changes accompanied expansion. And any story about differences among institutions or effects of institutions is really about the functions or unintended consequences of institutional differentiation itself, a key facet of our higher education "opportunity" structure.

All that said, here's what I think we should take away from this paper:

1. It's nearly impossible to expand participation in any program without affecting the outcomes of that program. For too long, some people have talked about changes in access and completion in U.S. higher education without sufficiently acknowledging that compositional shifts in who attends college will (almost without a doubt) affect graduation rates. Let's hope this paper gets the basic discussion back on the right track.

2. That said, changes in the composition of the student population did not occur in a vacuum. As the student body changed, so did many of our policies and practices. More states came to rely heavily on community colleges to serve those deemed "unsuitable" for 4-year institutions (see Brint and Karabel, and Dougherty, for more). With increased institutional differentiation came a greater need for states to choose how to distribute scarce resources, and evidence suggests that they often decided to give less money to the sectors serving needier students (e.g., public 4-year nonselective and 2-year colleges). That didn't go unnoticed by students and families themselves, whose perceptions of resources and status affect their college choices (see Cellini for a recent paper demonstrating this). Furthermore, other policies--including federal financial aid--changed at the same time, in ways that promoted shifts to less-expensive colleges.

3. As a nation, we relied on community colleges to absorb much of the growth in enrollment. To what end? While some will read this paper and decide that community colleges have screwed up, that's a flat-out wrong and oversimplified conclusion. It's also not one intended by the authors. As Table 4 in the paper shows, we treat community college students like they are cheap to educate. Median per-student expenditures during the 1990s were just $2,610 at community colleges, having declined 14% since the 1970s. In comparison, spending at public 4-year "non-top 50" colleges was 52% higher. What's the expression I'm looking for here? Oh yes, "crap in, crap out." (Hold on--I will clarify--I am not saying community college students are crappy or that all community college outcomes are crappy!) We pushed lots of students in the door, gave the colleges little money, and then acted surprised when, faced with paltry resources, crowding, and an ever-growing list of missions, things didn't go so well. Shame on us. Take a look at CUNY's faculty, students, and classrooms a few decades after the 1970s open-admissions experiment there and you'll see the relationship I'm describing. You simply cannot institute a massive policy change without proper supports, no matter how good the intentions are.

But let's be honest--the paper doesn't demonstrate a strong relationship between resources and outcomes, in the community college sector or elsewhere. In fact, it indicates a weaker relationship in that sector than in others. But as the authors have acknowledged (in personal correspondence), endogenous state behavior would bias them against finding a larger effect, and measuring resource effects is perhaps more problematic in the 2-year sector for many reasons: how and under what conditions (e.g., governance arrangements) resources are allocated, the possibility that costs are greater, and the fact that there is less variation in resources overall. So, this paper isn't the greatest test of whether money matters for college completion (not that a good direct test exists). It is, however, pretty good at showing that fixing K-12 isn't going to be a sufficient solution to the completion problem.

I will be the first to admit that the paper doesn't provide sufficient evidence to support all of the relationships I've laid out here--and therefore many remain partially tested hypotheses. Mostly, the authors didn't test them because of methodological challenges that could be hard to overcome, since changes in student characteristics, sectoral enrollments, and resources are highly interrelated and operate bi-directionally. If that's true, teasing out what matters most using the logistic regressions employed in this paper becomes much more problematic. I've also got to note that, given the methods used here, it's not appropriate to use the findings as evidence that one sector is outperforming another.

So in the end, here's the punchline: if we want graduation rates to improve, we need to pay more attention to how we structure college opportunities. This is a multi-sided process, with states, colleges, parents, and students all making decisions, often in an information-poor, resource-deficient environment. No single approach (e.g., high school preparation, financial aid, college accountability) targeting a single group is going to work.