Response to Richard Barrett

Paul Trout
English
Montana State University-Bozeman

I'm flattered, of course, to have my little jeu académique taken so seriously as to deserve such a lengthy response. The trouble is, Professor Barrett seems to have taken my essay a bit too seriously, misapprehending both its facetious tone and its ethical import. As a result, he has distorted what I was trying to do and some of what I said, attributing to me a host of convictions and attitudes that I simply do not hold and did not express. I urge those interested in this scuffle to read what I actually wrote; but let me clarify a few points now, for the record.

If Professor Barrett thinks he has scored a point for the validity and use of Student-generated Numerical Evaluation Forms (SNEFs), he should think again. He admits that the whole issue of the validity of these forms is marked by "very diverse results" and that after fifty years of research, this issue is still "an open question." Despite thousands of studies, there is still "no consensus" among specialists who have studied SNEFs as to whether or not they measure good teaching (should that concept ever be satisfactorily defined).

This is not a very consoling admission. Given the dubious validity of these forms, every authority on this subject I've ever read declares that SNEFs should not be used in administrative decision-making unless they have undergone validation. Most forms now used have not undergone this process. Yet, almost all administrators and faculty behave as if all SNEFs were valid, as if they provided dependable evidence about classroom instruction, however that may be defined. Across the country, from Harvard University to Clatsop Community College, faculty members dutifully hand these forms out in course after course, semester after semester, year after year, and administrators continue to factor them into their decisions about hiring, retention, promotion, tenure, and merit. Most of us are complicit in a bogus enterprise.

Excuse me, but under these circumstances, I don't feel very apologetic for citing a host of experts who think that SNEFs are invalid. I was trying to shake our blithe and unthinking assumption that these forms are accurate gauges of what we do in the classroom. And for the record, note that I did not assert that all SNEFs are invalid, as Barrett claims; what I did write is that "SNEFs have severe limitations as assessment tools of good teaching if that is defined in terms of student learning, mastery, or cognitive growth" (italics added).

Why should I want to shake our blind faith in the validity of SNEFs? Because I am convinced that the widespread use of these forms for administrative purposes is now having significantly deleterious effects on the rigor and integrity of classroom instruction. Barrett dismisses this concern as implausible. His reasons for doing so, however, do not stand up to scrutiny. First, he argues that given the "declining level of student achievement," "any institution that tried to defend its teaching by pointing to the fact that it permits students to evaluate faculty would be considered, frankly, laughable." But when Barrett says this, he helps support a case against evaluations that goes further than I did.

Second, Barrett contends that "most institutions try to demonstrate the quality of their teaching by what their students achieve, in GRE scores, scholarships, employment and so forth," not by SNEFs. This is an amazing statement. "Most institutions" in the country--over 80% according to experts--use SNEFs to assess the quality of classroom instruction, as a sign of their commitment to improving the undergraduate "experience." At MSU, the various forms used to assess teaching are examined at every stage of performance review, not just at tenure time, as Barrett suggests. About 20 departments use the Knapp form, which has been around my campus since the 70s and has never been validated. My department uses it, along with a narrative form, to evaluate teaching for retention, tenure, and promotion, and also for annual performance review affecting merit-pay recommendations. In my 25 years at MSU, I have not known anyone in my department to ever mention, let alone give serious consideration to, evaluating teaching in terms of GRE scores, scholarships, employment, and so forth. But some do put great faith in the Knapp numbers, in part because ours are much higher than the campus average. To dismiss my concern that these forms may be having an adverse impact on instruction by saying that they are not factored into institutional decision making is disingenuous; Barrett himself admits that at "teaching institutions," "evaluations presumably count for much more." Professor Barrett is right: it isn't the Sixties anymore, so let's get rid of a Sixties educational "reform" that once may have served a purpose but that now may be undermining effective classroom instruction.

What Barrett has to say about the behaviors I list that might improve SNEF scores reveals the extent to which he misapprehends the rhetorical nature of my essay. So instead of gnawing the bones of contention he digs up about the studies I cite, let me address the larger issues I wanted to raise with this piece. I am aware that there are conflicting data on many of these issues. What I facetiously recommended was that, given these conflicting data, it would be prudent (my word) for professors to hedge their bets and assume that the (almost forty) studies I cited were on to something: that there could be ways to elevate evaluation scores without pain or personal cost.

I made this facetious, sarcastic recommendation to call attention to the ethical pitfalls implicit in the evaluation situation, a situation essentially imposed on faculty by administrators. As a result of this situation, students--some of whom are just out of high school--are given the power to affect the careers and salaries of the very professional educators who must evaluate and grade their academic achievement. Certainly I am not the first to notice the educational and ethical dangers of this arrangement. My essay was intended to make professional educators examine their conscience to see if they have been tempted by the evaluation system to engage in unsound educational practices for their own benefit. Since people who are good with words are consummate alibi artists, it is no easy task to force them to confront the moral ambiguities of their own behavior. But I do not for an instant think that we all have sold out, or that all students want us to.

In a meretricious attempt to score debating points, Barrett tries to make it sound as if I am opposed to treating students kindly and decently, that I am against "warm relationships" with them or knowing something about their personal lives or "providing them with refreshments." As someone who formed a mentoring group to do all of these things, I am not. Nor do I believe, as Barrett says I do, that "any" effort that faculty makes in that direction is prima facie evidence of "sucking up." But my piece does ask each of us to ask one fundamental question: why am I doing it? And one can figure that out by looking at when one does it. Are parties announced and/or given before evaluations? Do you smile at students to demonstrate your "concern" for them? Do you constantly use in the classroom the very same honorific key terms that are on evaluation forms to convince your students that they apply to you and your class?

The real issue is not who buys the pizza, but all the little things educators may do--or may not do--to raise evaluation scores, even when doing so compromises the integrity and rigor of classroom instruction. That's the point! The issue is not whether or not we have concern for students but how we express that concern. There are many instances when a professional educator will express his or her most heartfelt and profound concern for students by saying and doing things that some students will hate and resist. It is up to each of us, insofar as we are capable of ruthless honesty, to determine whether we are really doing the right thing or not.

In closing, I might add that the Knapp Form used at MSU explicitly asks students to numerically "rate" the professor's "Concern for Students" (a category many students use to ding abrasive and/or demanding teachers). Too bad it doesn't ask about other "concerns." Need I remind my colleagues that we should also have "concern" for ourselves, for our own morale, for the dignity of our profession, for the integrity of our discipline, for the reputation of our department, college and university, for the investment of taxpayers, and for the cultural health of the state and our country? These are the "concerns" that all professional educators ought to take very seriously indeed.
