[The Montana Professor 22.1, Fall 2011 <http://mtprof.msun.edu>]
Academically Adrift: Limited Learning on College Campuses
Richard Arum and Josipa Roksa
Chicago: University of Chicago Press, 2010
272 pp., $70 hc, $25 pb
Susanne Monahan
Sociology
MSU-Bozeman
smonahan@montana.edu
We hear a lot about student retention in higher education, and for good reason. When they fail to finish college, students do not realize the cumulative learning of an academic program or the full economic benefits of a college degree. Students who leave higher education before completing their degree have also consumed public resources (e.g., state dollars, student loan subsidies, Pell Grants) without returning the societal benefits created by finishing college. Finally, low student retention creates uncertainties for colleges, as they try to forecast enrollments and demand for courses and programs. So we spend a lot of time figuring out how to keep students in school, because we think it is good for them and us, and because retention is part of our accountability to the public.
While retention is important, higher education's primary purpose with respect to students is education: creating contexts where intellectual growth and learning occur. We often conflate retention and learning, implicitly assuming that if students are enrolled in college, then they are learning. But what if enrolled students are not learning, or not learning much?
In Academically Adrift: Limited Learning on College Campuses, Richard Arum and Josipa Roksa examined the learning of more than 2,300 students in 24 colleges and universities. Focusing on a specific measure of student learning, the Collegiate Learning Assessment (CLA) Performance Task—more on that later—Arum and Roksa explored factors associated with learning: student demographic background and academic preparation, selectivity of the college, and student choices and behaviors while enrolled in college. Their central finding—that students exhibit little development of the skills assessed by the CLA in the first two years of college—was startling enough to have been covered extensively in the popular media (e.g., The New York Times, Washington Post, Wall Street Journal, Economist, National Public Radio).
The CLA Performance Task "allows students ninety minutes to respond to a writing prompt that is associated with a set of background documents" (21). Rather than focusing on a student's grasp of discipline-specific content, the task allowed assessment of the more general skills almost all colleges and universities profess to develop among their students: "critical thinking, analytical reasoning, problem solving and writing" (21). Arum and Roksa used longitudinal data from the CLA Performance Task collected at two time points—the beginning of the first year of college and the end of the second—to examine whether students showed gains in learning on these general skills.
Arum and Roksa concede the limitations of their data. First, they have captured what students have learned in the first two years of college, not upon completion of a degree program. Second, they assess general skills and not knowledge specific to a given discipline. And third, while they controlled statistically for other factors associated with learning (e.g., student background and preparation, selectivity of institution), the absence of a true experimental design means they cannot know for sure that there were no unmeasured factors that would explain their findings. These limitations aside, however, their findings are thought-provoking.
Results from the CLA Performance Task suggest that average gains in learning in the first two years of college were small. At the end of the second year of college, students performed on average only 0.18 standard deviations better than they did at the beginning of the first year. According to the test, little learning occurred, and less learning was occurring in the mid-2000s than in the 1980s or 1990s. That is, research on these general skills reviewed in the early 1990s found that students improved by about 0.5 standard deviations over their initial scores; in the early 1980s, the improvement was about 1.0 standard deviation. It is possible that improvements in these skills tend to manifest in the last two years of college or in the completion of an academic degree program. But what then is happening in the first two years of college? Students take many general education courses in their first two years, and the near-universal goal of such courses is exactly the kinds of thinking skills assessed by the CLA. In addition, today's college students may be substantively different from past students along dimensions that matter for the development of critical thinking skills. For example, the recent expansion of participation in higher education has resulted in a more heterogeneous student population that as a whole may be less prepared to learn. But the results remain troubling if you value the skills measured by the CLA and believe the CLA is a good indicator of this kind of learning.
Gains in learning as measured by the CLA were not evenly distributed across students. Instead they varied by student background, particularly race. Specifically, African-American students showed smaller gains in learning than white students, even when parent education and high school racial composition were statistically controlled for. This is particularly problematic because African-American students initially scored lower on the CLA Performance Task. Thus, after the first two years of college, the racial gap in CLA scores had widened. Gains per se did not vary by parents' education, but students whose parents had less education scored lower on the CLA initially than students whose parents had more education, and the gap persisted after two years of college. College did not exacerbate those gaps, but it did not ameliorate them either.
Not surprisingly, gains in learning also varied by academic preparation: students who had taken four or more AP courses in high school showed larger learning gains than those who had taken none; students in the top four quintiles for high school GPA showed more learning than those in the lowest quintile; and students in the top four quintiles of the SAT/ACT showed more learning than students in the lowest.
The academic rigor of college courses—defined as courses with at least 40 pages of reading per week and 20 pages of writing over the course of a semester—also seemed to matter. Arum and Roksa's measurement of rigor was fairly narrow. For example, math, science, and engineering courses may be rigorous but in ways not measured in pages read and written. Unfortunately, large-scale survey research often requires the over-simplification of complex concepts. Arum and Roksa found, however, that even with this blunt measurement of academic rigor, this characteristic was associated with gains on the CLA. In particular, students who reported having at least one course with both heavy reading and writing requirements had greater gains on the CLA, controlling for other background and institutional factors. In addition, students who reported that faculty communicated high academic expectations also scored better on the CLA Performance Task. But rigorous courses and demanding faculty were not the norm at every institution. Students at highly selective institutions were more likely to have taken rigorous courses and to report high faculty expectations than were students at selective or less selective colleges. Differences in rigor also existed across disciplines, but the weakness of Arum and Roksa's measure of rigor made those findings difficult to interpret.
Full-time college students have many distractions from school. Arum and Roksa lead their book with the observation that college is often understood less as an academic experience and more as a four-year social interlude between childhood and adulthood. Their observation was supported by student reports of how they spend their time. In a typical 168-hour week, the average full-time student devoted approximately 15 hours to classes or labs but only 12 hours to studying. Another 15 hours were allocated to organized activities such as work for pay, volunteering, student clubs, and Greek life. A startling 85 hours per week were spent in recreational activities: socializing, playing with computers, watching TV, exercising, hobbies, and other forms of entertainment. Now I better understand my niece's deep concern when she began her first full-time job after college: "But I don't have any time to see my friends!" she wailed, after realizing how little time was left in her week after she worked, commuted, and slept.
Gains in learning were associated with how students use their time, sometimes in surprising ways. For example, while more time spent studying alone was associated with more learning, time spent studying in groups was associated with less learning. Time spent in activities related to sororities and fraternities was also associated with less learning. These findings are correlations; we cannot conclude that studying with others or participating in Greek life causes less learning. But it is worrisome that some of our retention strategies—those that encourage students to connect socially on campus—are associated with less learning. It also suggests, per Arum and Roksa, that faculty may not be sufficiently skilled at implementing collaborative learning strategies. Working in groups requires fairly high-level interpersonal and management skills, and structuring effective group work is challenging—especially, perhaps, for those faculty with little formal training in pedagogy.
The book's title is provocative. You might think "academically adrift" refers to an absence of vocational clarity among today's college students but that is not Arum and Roksa's concern. Indeed, controlling for student background, preparation, and institution, gains in learning were highest among students in the basic sciences, humanities, and social sciences, the fields where a student is most likely to be presumed vocationally adrift, most likely to be asked, with a quizzically raised eyebrow, some variant of "What are you going to do with that?" Instead, Arum and Roksa are concerned about what college means to young people and what they are getting out of their investment of time and money. They worry that colleges are rife with distractions, students are not committed enough to academics, classes are not rigorous enough, and the years spent in college do not prepare students for the world they expect to enter.
Arum and Roksa posit a number of explanations for their findings: faculty who are more focused on research than teaching, universities more concerned with chasing research funding than educating students, increased reliance on contingent faculty and graduate students to teach undergraduates, and a focus on the student as a consumer and the university degree as a credential rather than a culmination of learning. They review research that corroborates these trends, but their data do not allow them to examine whether these factors are associated with less learning in their sample. After such careful presentation of statistical data on correlates of student learning, it was disappointing to read unsupported speculation about the causes of limited learning. I can see why they suspect that these trends are culpable, but unfortunately their data did not allow them to go beyond institutional selectivity to examine the association between student learning and factors such as average class size and percent of classes taught by tenure-track faculty.
As a faculty member, I can't read Academically Adrift and not link its findings to what I see every day. Arum and Roksa ask a basic question about higher education, the same one I have heard for years at MSU's Bozeman campus: What are we doing here? Are we educating students? Are we training workers? Are we sorting and credentialing graduates for the economy? Are we creating new knowledge? Are we supporting the career aspirations of faculty? Are we driving economic development in the state and the region? The naive answer is that we are doing all of these things.
A more nuanced response is that different participants think we are doing different things. Arum and Roksa argue that higher education in general is conflicted about what it is doing: trying to graduate large numbers of undergraduates while simultaneously rewarding faculty primarily for their research. As a result, undergraduate education is increasingly off-loaded onto contingent faculty and graduate students who lack the job security to consistently challenge students with academic rigor. And when research faculty do teach, Arum and Roksa speculate that they have incentives to soft-pedal rigor in an implicit bargain with students: I won't ask much of you if you don't ask much of me. And so many undergraduates are left academically adrift; they do not regularly encounter rigorous course work, they are out of touch with faculty who are busy with non-instructional work, they value the social and recreational aspects of college more than the academic, and they seek to be credentialed for the job market with the minimum investment of time and energy on their own part. Despite our emphasis on student recruitment, retention, success, social integration, and satisfaction, Arum and Roksa find that many students do not learn much in college, at least not in the first two years along the dimensions of critical thinking, problem solving, analytical reasoning, and writing measured by the CLA Performance Task. It is worth noting that similar rumblings come from industry leaders who struggle to find adaptable workers able to do high-skill work.
Arum and Roksa identify multiple sources of the problems we face: our institutional leadership and its priorities, faculty priorities and attitudes, and student priorities and attitudes. And there are some changes faculty might make. We could spend more time with students, put more thought into developing engaging curricula and learning experiences, and up the ante on the rigor of our courses. But we still operate within institutional and fiscal constraints. In many cases (freshman seminars and first-year composition classes excepted), our business model rests on large lower-level courses subsidizing smaller class sizes in upper-division and graduate education, class sizes that are fully appropriate given the advanced courses' pedagogical aims. But how do we effectively and consistently teach "critical thinking, problem solving, analytical reasoning and writing" in large, lower-division lecture courses? In addition, our institutions are increasingly realizing the efficiencies of having lower-division courses taught by contingent faculty and graduate students, many of whom are very good teachers but few of whom have sufficient job security to feel truly safe when they challenge students with rigorous course work. And, of course, our institutions expect faculty to be teachers and researchers, but research and creative activity take time and focus.
Arum and Roksa take on a hard and thankless task when they focus on student learning. It is easy to quibble with their assessment instrument or their focus on what is learned in the first two years of college. But their book tells us something important about how our students approach their education and what we have or have not demanded of them. Despite its flaws, Academically Adrift can be a valuable source for a conversation about the role of undergraduate education in systems of higher education, as well as what we owe students and what they should be prepared to give when they enter the collegiate classroom.