[The Montana Professor 23.1, Fall 2012 <http://mtprof.msun.edu>]

Why Not To Run and Hide

Doug Downs, PhD
Assistant Professor of Composition and Rhetoric
Montana State University-Bozeman


It seems like most faculty opinion about outcomes assessment proceeds from the premise that the Ministry of Magic is interfering at Hogwarts. Indeed, my own perspective on assessment is rooted in disgust with the political, corporate, and cultural forces that drive it. Outcomes assessment seems to be a lever in a cultural movement that I think of as, literally, the cheapening of higher education. Reducing higher education to instrumentalist jobs-training; putting educational goals and "accountability" in the hands of corporate interests with a storied history of seeking less critical thinking and education, not more; presuming that anything worth knowing can be cheaply measured via standardized testing; transforming education from public to private good; de-authorizing faculty; and an all-consuming fixation with "value-added" as a means of prioritizing cost-cutting—these are the real and unhappy roots and effects of the public, national drive for higher-ed outcomes assessment.

Yet I advocate not resistance to but an embracing of outcomes assessment—not only to fill a vacuum by doing our own better assessment first, but also because assessment can proceed from motivations and pedagogy that honor our traditional sense of higher education as entwining liberal arts and professional education, thus helping us do what we want to do better. But because we so often seem to begin from a position of resistance to bad reasons for assessment, we may miss the good ones.

My field, rhetoric and composition, includes writing scholars who direct first-year composition programs. Both because there are never enough college graduates writing well enough, and because college composition is taught in high volumes and thus usually demands its own administrative structure, we've been working on outcomes assessment for four decades. We've spent most of those years having to argue that writing can only be validly assessed with writing, not via multiple-choice grammar exams. Our latest challenge is explaining that, no, computers can't read writing; they can only count it, which is why writing that will move a computer to tears is often worse than meaningless to actual readers. But we do not choose to argue that articulating and assessing learning outcomes is of no value. Rather, we assess because we can't create the strongest possible writing programs without assessing outcomes.

The obvious question is: Why can't departments simply use course grades to assess student learning? First, course grades don't capture learning that crosses courses; all of us mean to teach our students more than the sum of a major's course content. Second, having faculty thoughtfully articulate and negotiate consensus on learning outcomes is crucial to ensuring that a program is a program, that its faculty share understandings about what the program is meant to accomplish, and that they have considered how individual courses contribute to the program.

To the first reason: Course content is only part of what majors teach, and thus course grades won't be a sufficient indicator of what gets learned across courses. Mathematics faculty, for example, want to teach not just differential equations but mathematics as a way of thinking and being. Beyond a defined body of knowledge, courses combine to teach an intangible world view or habit of mind, a particular way of approaching problems. In my field, James Paul Gee calls this stuff a "Discourse," a "saying (writing)-doing-being-valuing-believing combination" that is a "way of being in the world," a social identity ("Literacy, Discourse, and Linguistics: Introduction," 1989, 6-7). Few single courses teach it explicitly; it's learned over time and experience. An engineering program isn't meant to teach simply "how to engineer," some aspect of which each of its courses will focus on; it's meant to teach how to BE an engineer, for which there is no specific course at all. That learning emerges from the total experience of the engineering major and will never be graded.

It isn't the case, then, that course grades show everything we need to know about student learning and thus our pedagogy. Arguing as if they do actually plays directly into the hands of those who claim we teach nothing more than discrete, mechanical "skill sets" and lists of information. That, of course, is wrong. We don't just teach our students what to know and do, we teach them how to be, and if we want to know how well that's going, we'll need to look beyond course grades. What do the very best graduates have in common? How did the cumulative efforts of our teaching contribute to those graduates' knowledge and abilities?

This is one way outcomes assessment can be startlingly revealing not only about what learning isn't happening in a major (isn't it interesting how assessment is usually imagined as ferreting out lack?) but about learning that is happening—learning that it had never occurred to anyone would be present. In one assessment of English majors I conducted at another college, we found students doing a kind of cultural criticism that none of the faculty remembered teaching. Our student community was transferring ideas and methods of inquiry we taught to a pursuit we hadn't considered. That assessment didn't show us lack, but instead created a sense of possibility for our teaching and, frankly, a marketing point. It helped us see what we do differently.

That experience also demonstrates my second point here, one of the real challenges of a homegrown assessment that faculty can value: articulating learning outcomes to begin with. A department must not only learn the breadth of faculty values; it must also develop a language to describe those values and students' learning in relationship to them. My sense is that it's very rare for departments to really have a handle on these things if they've never done outcomes assessment. Every department I've learned or taught in worked for decades without explicit learning outcomes for its majors. (That would be six, two of which I've brought outcomes assessment to.) English departments universally want to teach their majors close reading, powerful writing, critical thinking, sharp textual research, and methods of literary criticism. Yet when faculty sat down to develop concrete language for those outcomes, they couldn't easily express their values, and their attempts exposed a far wider range of values than we had been aware of. We couldn't say what we wanted, and we had no idea what others wanted.

It is in this exchange of rarely discussed consensus and difference about what the experience of the major ought to be for students that we improve the understandings by which we design coursework and programs. In one sense, this follows the advice about exams that some professors give students: The point of giving an exam (or an assessment) isn't so much the exam as it is getting students to study for it. A major benefit of outcomes assessment isn't assessing outcomes; it's preparing to assess, which requires study that doesn't otherwise tend to happen.

There are many good reasons to want to assess outcomes differently, and for different purposes, than various current witch-hunts would have us do. None of those is adequate justification not to assess at all. Outcomes assessment is necessary to curriculum and program design, and we get tangible benefits from being able to clearly and directly state what we want students to learn, and show what of it they do learn. As professionals, we shouldn't be asking whether to assess; we should only be asking how.
