How to Improve Your Teaching Evaluation Scores Without Improving Your Teaching!

Paul Trout
Montana State University-Bozeman

Always to avoid their wrath, always to court their favor, and sometimes to cajole them into a little learning--that is teaching.
--Michael Platt

Talk Softly and Grade Easy.
--D. Larry Crumbley

Although student-generated numerical evaluation forms (or SNEFs) are often used to determine whether an instructor is a bad, good, or excellent teacher, these forms, many experts contend, rarely, if ever, provide accurate assessments of instruction.

If good teaching is correlated with, or defined in terms of, student learning, mastery, or cognitive growth, then SNEFs have severe limitations as assessment tools. According to John C. Damron, a Canadian sociologist who has examined the research on SNEFs, "the ratings yielded by virtually all teaching evaluation procedures bear only a modest or nonexistent relationship to the very quality effective teaching must promote: student learning" (Damron "Three" 5). The problem, according to two experts who have studied the issue, is that "students are less than perfect judges of teaching effectiveness if the latter is measured by how much they have learned. If how much students learn is considered to be a major component of good teaching, it must be concluded that good teaching is not validly measured by student evaluations in their current forms" (Rodin and Rodin, in Damron "Three" 20).

Ironically, some experimental studies revealed an inverse relationship between evaluation ratings and student learning. Sullivan and Skanes (1974), for example, found a sizable group of instructors who facilitated high achievement in their students but who received low ratings from them, and a second group who prompted low student achievement but nonetheless received high ratings (Damron "Three" 8). Many other studies have revealed the same unsettling correlation. So, when SNEFs are assumed to be "true" and "valid" assessments of pedagogical effectiveness, as administrators are too apt to do, they constitute what Damron calls "a major threat to college teaching, a prospect that surely counts as one of the more startling ironies of modern educational technology" (Damron "Three" 12).(1)

In light of the accumulating research on the invalidity and sundry pernicious effects of SNEFs, Paul Rosenfeld and S. C. Erikson, two researchers at the Center for Research on Learning and Teaching, University of Michigan, recommend that colleges "eliminate the questionable practice of using the results of student ratings for purposes of administrative assessment" (in Damron "Politics" 19). Michael Scriven, who has conducted extensive analyses of faculty evaluation methodology, warns administrators against using such invalid forms for personnel decisions:

All [SNEFs] are face-invalid and certainly provide a worse basis for adverse personnel action than the polygraph in criminal cases. Based on examination of some hundreds of forms that are or have been used for personnel decisions (as well as professional development), the previous considerations entail that not more than one or two could stand up in a serious [legal] hearing. (in Haskell 9)
But the elimination of SNEFs is not going to happen any time soon because they serve purposes that have nothing to do with evaluating effective teaching. They serve, for example, a PR function. However fraudulent they may be, SNEFs allow administrators to tell students, taxpayers, regents and legislators that teaching is being evaluated and "good" teaching encouraged. Students want SNEFs because these forms enable them to compel professors to give the kind of course students prefer, which, to an increasing number of students, usually means a course that has few demands and high grades. SNEFs allow students to reward professors for doing the wrong thing (Hocutt 61).

In this, the interests of administrators and students coincide. Administrators want satisfied student consumers (and happy parents and taxpayers). So administrators use SNEFs to make sure that classroom instruction does not seriously displease student "customers." Through SNEFs, administrators "encourage" professors not to be too rigorous and too demanding, or to have standards that are too high. Indeed, the more invalid the device to assess the quality of instruction, the better, from the point of view of administrators, because invalid and vacuous SNEFs--SNEFs that do not cogently assess instructor-mediated student learning--are sensitive to student likes and dislikes (2). SNEFs thus induce faculty to get "good" ratings by teaching in ways that please students but may not educate them. This explains why SNEFs almost never ask students whether the course was demanding, the assignments difficult, the tests probing, the standards high (all variables linked to increased student learning), or whether they developed an ability to identify the main ideas and significant implications of a subject, a respect for unconventional interpretations of material studied, or an understanding of the most fundamental or important ideas of the subject (Renner 128).

The SNEF, as currently used in higher education, is a device for making classrooms comfortable for and marketable to students who are increasingly under-prepared for and disengaged from rigorous academic study. Consumer satisfaction trumps student achievement. As Haskell puts it, the SNEF is a "powerful tool in assuring classroom changes that lead to the retention of student tuition dollars by assenting to student consumer demands and of parents who foot the tuition bill" (Haskell 12; see also Damron "Three" 19).

Since faculty have had these fraudulent devices imposed upon them, they might as well exploit SNEFs for their own advantage. As Damron contends, "In an atmosphere in which good teaching is equated with high student ratings, it makes sense in terms of simple survival to weave into one's classroom performance virtually anything that elevates such ratings" (Damron "Instructor" 20). Haskell also seems to accept the calculated exploitation of SNEFs: "If a faculty can choose teaching styles, grading levels, and course content, s/he will naturally prefer choices that are expected to result in higher SEF scores; if faculty know the variables affecting their careers, they will meet these criteria" (13).

Everyone knows that SNEF ratings are contaminated by all kinds of influences irrelevant or hostile to effective teaching (see Renner 129 for a partial list). But if you cannily manipulate these factors, you can boost your evaluation ratings without raising your workload. Should you feel ashamed about this, keep in mind that higher scores will make you happy, your chairperson happy, your Dean happy, your Provost happy, your President happy, and the taxpayers happy. It's a win-win-win situation. Follow my advice and have a Great Evaluation Experience (or GEE!).

"First impressions," Haleta reminds us, "are crucial for students, the teacher, and the teacher-student relationship," influencing the classroom climate for the entire semester or term (16).

In this regard, nonverbal communication may be more important than what you actually have to say. Students, you see, often arrive at summative judgments about us very quickly--well before we have a chance to tell them about our grading policy or course organization, and before we can display our knowledge or fairness. Ambady and Rosenthal, for example, "have shown that students arrive at opinions about teachers within seconds of being exposed to these teachers. The students' impressions are highly correlated with end-of-semester ratings generated by other students who had these same teachers for an entire course" (Ceci 5). According to Drew Westen, who summarizes these studies in Psychology: Mind, Brain & Culture (1996), "the correlations between initial nonverbal ratings [the videotapes of instructors that students had seen were silent] and eventual student evaluations are as near perfect as one finds in psychology" (288).

Here are some tips about how you should act and look as you enter the classroom for the first time to make a lasting impression that will improve those end-of-the-semester ratings.

  1. Increase your "immediacy" effect. A sense of "immediacy" is the result of behaviors which enhance closeness to and nonverbal interaction with another. This is an important effect to establish, because students who view their teachers as immediate or 'close' "indicate that they enjoy the course more, feel more comfortable with the material, and intend to pursue the subject farther than do students with less immediate teachers" (Moore 29). You can enhance your "immediacy" effect (and get higher evaluation scores) by simply smiling, using gestures, being relaxed, moving among the students, and looking them in the eye.
  2. Dress casually. Morris found that clothing makes not only the man but the Excellent Teacher too. If you go "grunge," dressing in a T-shirt, ripped jeans, and gym shoes, students will find you more friendly, likeable, flexible, interesting, sympathetic, fair, approachable, and enthusiastic (137).
  3. Use powerful words and a confident delivery. Do not use hedges, intensifiers, deictic phrases and hesitation forms (such as "ah" and "uhm"). Language without these characteristics comes across as powerful, and a powerful speech style will convince students that you are both attractive and competent, an impression that boosts your evaluation ratings (Haleta 18). Students assign more favorable ratings "to teachers who used a concise, direct style of language than to teachers who used a language style that contained multiple hesitations" (24). What's important to note is that a smooth delivery has quite a generous payoff at evaluation time, boosting your ratings in several evaluation categories thanks to the "halo" effect (3). A majority of students tested by Haleta rated powerful teachers more favorably in such categories as organization, professionalism, and knowledge of the subject (25).
  4. Tell students that you're warm and nurturing. This impression seems to be getting more and more important in the 1990s. To get tenure, Peter Sacks recounts in Generation X Goes to College how he transformed himself, at least ostensibly, from a no-nonsense, straightforward serious-minded academic into a "teaching teddy bear." "Students could do no wrong, and I did almost anything possible to keep all of them happy, all of the time, no matter how childish or rude their behavior, no matter how poorly they performed in the course, no matter how little effort they gave. If they wanted their hands held, I would hold them. If they wanted a stapler (or a Kleenex) and I didn't have one, I'd apologize" (85). Willimon properly counsels professors who want good evaluations to "never overtly confront students about their class attendance, indolence, apathy, or impertinent behavior. The entire class may turn against the professors, leading to a precipitous drop in one's ratings as a teacher" (22). Shrewd strategy. Here's the support.

Widmeyer and Loy (1988), in their "Dr. Jim Wilson" experiment, gave one group of students a biographical sketch of an alleged special lecturer named "Dr. Jim Wilson" that emphasized that Wilson was a warm person, while another group was given a sketch that described him as "cold." Students who were told that Wilson was cold assessed him as "less pleasant, less sociable, less good-natured and less humorous than students in the warm condition. Students in the cold condition were also less likely than their counterparts to surmise that Dr. Wilson would 'go far' in his teaching career" (Damron "Instructor" 23). Widmeyer and Loy concluded that "by being perceived as a warm individual, a teacher can influence students' ratings not only of his or her personality, but also of his or her teaching abilities.... [I]f instructors want to 'get ahead,' they should present themselves as 'warm'" (Damron "Instructor" 23). Once again, the benefits of the halo effect can be yours, if you know how to manipulate students.

When teenagers were asked to describe their WORST teachers, eighty percent said "dull or boring" (poor knowledge of the subject provoked least concern--twenty-one percent). To improve your evaluation scores, you are going to have to accept the fact that college 'teaching' has less to do with knowledge and information and more to do with convincing students you are one hell of a lecturer, even when spouting nonsense. It's not what you communicate, but how.

Engulfed by a twenty-four-hours-of-amusement culture from their first days of watching "Sesame Street," students want to be entertained, which is why "fun" turns up so often on student evaluations of instructors. What students want are instructors who are "expressive" and "charismatic," regardless of whether they have anything to say (the "Dr. Fox effect"). Back in 1973, Naftulin and his associates found that an entertaining, charismatic lecturer (actually an actor introduced as "Dr. Myron L. Fox") who spoke deliberate nonsense received surprisingly high evaluations from an audience of educators and mental health professionals. Naftulin concluded that a lecturer's authority, wit, and personality can "seduce" students into the illusion of having learned, even when the content of the lecture was nil. Although there is still much discussion about the educational seduction paradigm (see Abrami "Educational Seduction"), a number of other studies show that highly expressive instructors get higher ratings on evaluation forms even when the content of the talk was "low" and did not improve student achievement (see Abrami's review of studies, 447-50).

Recently, Stephen Ceci has shown, through a carefully controlled classroom experiment in which the content of a course taught in two different semesters was identical in all salient respects, that an instructor, merely by speaking more "enthusiastically"--defined as varying voice pitch and using more hand gestures--can seduce students into rating every aspect of the instructor and course more highly than when the instructor spoke less dramatically, another powerful demonstration of the "halo effect." Students thought that the grading system was fairer (from 3.03 to 3.72), that the professor knew more, that he was more tolerant of other points of view, that he was better organized, that he was more accessible outside of class (even though the instructor kept exactly the same schedule, his score in this category went from 2.99 to 4.06), and that even the textbook had improved (2.06 to 2.98, from "poor" to "average")! Moreover, thanks solely to the more enthusiastic delivery used in the second semester, students in that course were seduced into thinking that they had actually learned more than students in the less animated course, when in fact they had not: "The end-of-semester point totals for the identical sets of exams (based on nearly identical lectures and identical texts) were virtually identical for the two semesters in both mean and standard deviation. The fact that the students stated that they had learned emphatically different amounts of information--from less than 'average' to more than 'a lot' on the rating scale--simply due to differences in the professor's style, is staggering" (Ceci 16-17).

Even if you have nothing to say, you can get higher scores by saying it charismatically! Be shrewd and let the Dr. Fox effect work for you!

Although researchers still debate whether or not instructors can out-and-out buy high ratings with high grades (Abrami and Howard say no), ample evidence does suggest that this would indeed be a prudent strategy to follow if you want to jack up those evaluation scores (and what nitwit doesn't?). Chacko believes that "the most prominent bias in student ratings of teaching effectiveness is the evaluation a student receives from the instructors in the form of a grade" (19). Vasta also believes that "the effect of grades on student evaluations of instruction must be interpreted as potentially quite powerful" (210). Max Hocutt writes:

A study of several thousand courses at my home university has recently confirmed what most professors have always suspected: students definitely give a higher rating to teachers who grade higher. The coefficient of correlation is a low but significant .38. This correlation might have been higher had the study considered not actual but expected grade. (Hocutt 60)
Snyder and Clair concluded from their carefully controlled investigation that "a teacher can get a 'good' rating simply by assigning 'good' grades" (81). Their research confirmed earlier studies that showed that higher grades can contaminate evaluation ratings (the "reward effects"). And, thanks to that old halo effect, even evaluation items such as humor, self-reliance, and attitude toward students were affected. "This implies that student perceptions of most behavior dimensions of instructors are responsive to the instructor's evaluation of student performance" (23).

Similarly, Powell found that students rate instructors on the basis of a global impression which they form ("liking"), and that the individual items on the evaluation form--no matter what they are--all reflect this impression to some extent. His findings showed that this impression, and the resultant evaluation scores, "are strongly influenced by the grade the student receives from the instructor" (201). Even the comments that students write on narrative evaluation forms, Powell concluded, were "strongly influenced by the student's grade" (203). DuCette concluded from his experiments "that students do give better evaluations to an instructor if they obtain good grades" (313). In some courses, DuCette speculated, instructors could "significantly" raise their evaluation scores simply by "using lenient grading standards" (314). Worthington came to the same conclusion: "it is obvious that, given objectively equivalent teaching skills, lenient markers will tend to receive more positive evaluation ratings than stringent markers..." (772).

Here's more good news: you can inflate your evaluation scores even more by giving really high grades to really poor students (let me name this the Santa Claus effect). Snyder and Clair found that students who received higher grades than they expected tended to give "very positive teacher evaluations" (the "happy birthday effect," 75), rating tests, lectures, and the professor's teaching style more favorably. Worthington came to the same conclusion: "Those with least mastery are, in general, more likely to give more positive evaluations when their grades are inflated" (774).

But be on guard: there may also be a "grinch who stole Christmas effect." Holmes found that when students didn't get the grades they expected or felt entitled to, "they deprecated the instructor's teaching performance" across the board (133). Don't give B's to students who expect A's!

Since "it is clear that if an instructor inflates grades, he will be much more likely to receive positive evaluations" (Worthington 774), hand out those A's and beat your colleagues to the merit-pay trough!

"Education is the only service industry where the less you give, the happier are your constituents" (Crumbley). When teenagers were asked to describe their WORST teachers, fifty percent chose "expectations too high" (poor knowledge of the subject mattered least!). Although research indicates that students learn more from professors who are demanding taskmasters, it also shows that students give these taskmasters lower evaluations (Damron "Instructor"; Sullivan & Skanes 587). William McBroom, a sociologist at the University of Montana, found that when he eased course requirements (dropping attendance and participation obligations), his course evaluations went up. He writes, "as requirements were set aside, evaluations of the content, instructor, and course became more positive." "The less the rigor," he concluded, "the higher the student ratings of content, instructor, and course."

Who can really doubt that lenient standards promote favorable ratings from all those little strugglers sitting in class? Professors know the score: more of them lower standards over their careers than raise them. "While half to two-thirds reported no change in various requirements they had for students, the faculty members who altered their requirements because of the questionnaires were much more likely to decrease than to increase the amount and difficulty of material covered. Specifically, the amount of material covered was reduced by 22 percent of the faculty and expanded by just 7 percent, and the difficulty of course material was lowered by 38 percent of the faculty and raised by only 9 percent. Particularly notable was a finding on the rigor of course examinations: fully 33 percent of the faculty admitted that, as a result of these questionnaires, they lessened the difficulty of their tests; just 6 percent indicated they increased test difficulty" (Barnett, in Damron "Instructor" 12).

So go ahead, do it. No one will catch you because administrators wouldn't dream of correlating your ratings on evaluation forms with truncated reading lists, watered-down objectives, or even inflated grades. All administrators care about are high ratings and satisfied students. Give them both and make them happy!

Stroke the political biases of your students, especially if the evaluation form solicits information about the cultural, racial, or gender biases of the instructor or course ("classroom climate" questions). Stanley Coren ("When") contends that when instructors present arguments and evidence on both sides of a controversial issue, such as cognitive differences between the sexes, they seriously endanger their evaluation scores. That's because students, like most people, are vulnerable to the "fundamental attribution error effect," which confuses "is" with "ought." Students think that when an instructor objectively reports the controversial conclusions of scientific research, he or she endorses or likes the findings. Coren discovered that a quarter of the students were apt to interpret the presentation of evidence about genetic and racial differences in intelligence as motivated by racism, rendering the professor a racist in their eyes (14). When the subject of discussion was the cognitive skills of men and women, twenty-six percent deemed the instructor "sexist" and motivated by a desire to put down women, with ninety-four percent of female students thinking this.

Even in law school it is becoming increasingly dangerous to teach controversial material in an even-handed way. Alan Dershowitz writes that a sizable group of law students were offended by his dispassionate examination of the legal issue of rape, and "used the power of their evaluations in an attempt to exact their political revenge for my politically incorrect teaching":

One student said that I do "not deserve to teach at Harvard" because of my "convoluted rape examples." Another argued that women be allowed an "option" not to take my class because I "spent two days talking about false reports of rape," another demanded that my "teaching privileges" be suspended. One woman purported to speak for others: "Every woman I know in the class including myself found his treatment of rape offensive and disturbing." Another woman felt "oppressed throughout the course." Although I always try to learn from my evaluations, I will not be bullied into abandoning a teaching style that I believe is best designed to stimulate thinking.... Are other less established teachers being coerced into changing their teaching by the fear of negative evaluations, which can be fatal at tenure? You bet they are, and it poses a real danger to academic freedom and good education. (Dershowitz 118-19)
If you don't have the teflon coating of an Alan Dershowitz, don't run any risks. Avoid problems and get those scores up by teaching only the "good news" that students want and expect to be told. Remember, they are your customers, and the customer is always right.

Let's face it, thanks to student evaluations, teaching has become, as Robert Weissberg says, a "damage-control activity" (19). So, ignore the humiliation and simply accept the fact that to raise your evaluation scores, you are going to have to suck up to your students. I'm not talking about mere impression management; we've already covered that. I'm talking about balls-out sucking up!

Use these proven methods to raise your numbers.

  1. Oprah students. Let them know that you are a victim and that you have suffered (spouse left you, mother just died, etc.). Students, many of whom are soap-opera junkies, will cut you some slack.
  2. Rosie O'Donnell students. Fawn over them and praise them lavishly! Tell them they are wonderful and God's gift to graduate school. Inflate your evaluation numbers by inflating their egos.
  3. Bribe students. Lay on the goodies. For example, bring cookies to exams, let them out early on a regular basis, and cancel a lot of Friday classes. Another way to get better ratings from them is to throw a party for them! For the biggest bang for the buck, throw it at the end of the semester, before evaluations are made out. No researcher has yet analyzed the correlation between end-of-the-semester suck-ups and evaluation scores, but the pay-off must be worth it because some instructors invest a whole lot of money in bringing pizzas to class.

Sure, some may scowl and call this "pander pollution" (the phrase is Crumbley's). But, as Peter Sacks reasoned, given the benefits and punishments professors face thanks to these evaluation forms, "pandering becomes quite rational and justifiable, however unfortunate its collective results" (86). And pandering is a whole lot more wholesome, and less risky, than actually falsifying evaluation ratings, as several desperate professors have been caught doing (see U. Magazine December 1996: 7).

So, don't be embarrassed, join the vast army of professors who engage in pander pollution semester after semester for profit and praise. Become the Alpha Suckubus of your department.


(1) "If we truly want to discover which teachers make a positive and lasting difference, we will not poll pupils; we will examine them. We will not solicit their opinions; we will observe their behavior; to see which of them have been improved and which not" (Hocutt 58).

(2) As Damron puts it, "This salutary effect would be lost if validation testing proved the instrument to be invalid" (Damron "Instructor" 7).

(3) Since the halo effect plays such a prominent role in evaluation research and in this essay, a word or two more about it is appropriate. The concept refers to the bias whereby an individual who is viewed as having some good qualities is assumed to have all good qualities. The flip side is that a person who is seen as "bad" in one area is likely to be judged "bad" across a broad spectrum of behaviors. Some qualities of an instructor, or student perceptions of an instructor, have more power than others to create the halo effect. For example, being "charismatic" or "likeable" to students raises scores in almost all other evaluation categories, even those not associated with the instructor. Similarly, if an instructor is not charismatic, he or she is likely to be found wanting in other categories. Coren has argued that "if students are making negative comments on your course organization and your classroom presentation, it is likely that the operation of the halo effect will cause students to generalize their poor ratings to any questions with social and political content" (Coren "Are" 16). Simply put, "if the students do not like an instructor's teaching style, class organization or even the course textbook, when given the opportunity to do so, they are likely to label that instructor as racist, sexist and culturally biased" ("Are" 16).

Works Cited

Abrami, Philip, Wenda Dickens, Raymond Perry, and Les Leventhal. "Do Teacher Standards for Assigning Grades Affect Student Evaluations of Instruction?" Journal of Educational Psychology 72.1 (1980): 107-18.

Abrami, Philip. "Educational Seduction." Review of Educational Research 52.3 (Fall 1982): 446-64.

Ambady, Nalini, and Robert Rosenthal. "Half a Minute: Predicting Teacher Evaluations From Thin Slices of Nonverbal Behavior and Physical Attractiveness." Journal of Personality and Social Psychology 64.3 (1993): 431-41.

Barnett, Larry D. "Are Teaching Evaluation Questionnaires Valid? Assessing the Evidence." Journal of Collective Negotiations in the Public Sector 25.4 (1996).

Bauer, Henry H. "The New Generations: Students Who Don't Study." An unpublished paper delivered at the annual meeting of AOAC International, 10 September 1996.

Chacko, Thomas I. "Student Ratings of Instruction: A Function of Grading Standards." Educational Research Quarterly 8.2 (1983): 19-25.

Christophel, Diane M., and Joan Gorham. "A Test-Retest Analysis of Student Motivation, Teacher Immediacy, and Perceived Sources of Motivation and Demotivation in College Classes." Communication Education 44.4 (October 1995): 292-306.

Coren, Stanley. "When Teaching Is Evaluated on Political Grounds." Academic Questions (Summer 1993); reprinted in The Montana Professor 5.1 (Winter 1995): 12-14.

Coren, Stanley. "Are Student Attributions of Instructor Racism and Sexism on Course Evaluation Forms Valid?" The Montana Professor 5.1 (Winter 1995): 15-16.

Crumbley, L. D., and Eugene Flidner. "Accounting Administrators' Perceptions of Student Evaluation of Teaching Information." Manuscript, Department of Accounting, Texas A&M University, 1995.

Damron, John C. "Instructor Personality and the Politics of the Classroom." October 1996. Online posting <>.

Damron, John C. "The Three Faces of Teaching Evaluation." 1995. Online posting <>.

Dershowitz, Alan. Contrary to Popular Opinion. New York: Berkley Books, 1994.

DuCette, Joseph, and Jane Kenney. "Do Grading Standards Affect Student Evaluations of Teaching? Some New Evidence on an Old Question." Journal of Educational Psychology 74.3 (1982): 308-14.

Goldman, Louis. "Student Evaluations of Their Professors Rarely Provide a Fair Measure of Teaching Ability." The Chronicle of Higher Education (August 1990): B2-3.

Haleta, Laurie L. "Student Perceptions of Teachers' Use of Language: The Effects of Powerful and Powerless Language on Impression Formation and Uncertainty." Communication Education 45.1 (January 1996): 16-28.

Haskell, Robert E. "Academic Freedom, Tenure, and Student Evaluation of Faculty: Galloping Polls in the 21st Century." Education Policy Analysis Archives 5.6 (12 Feb. 1997): 1-34. Online posting (a peer-reviewed scholarly electronic journal) <>.

Hocutt, Max O. "De-Grading Student Evaluations: What's Wrong with Student Polls of Teaching." Academic Questions (Autumn 1987-88): 55-65.

Holmes, David S. "Effects of Grades and Disconfirmed Grade Expectancies on Students' Evaluations of Their Instructor." Journal of Educational Psychology 63.2 (1972): 130-33.

Howard, George S., and Scott E. Maxwell. "Correlation Between Student Satisfaction and Grades: A Case of Mistaken Causation?" Journal of Educational Psychology 72.6 (1980): 810-20.

Howard, George S., and Scott E. Maxwell. "Do Grades Contaminate Student Evaluations of Instruction?" Research in Higher Education 16.2 (1982): 175-88.

McBroom, W. H. "Course Requirements and Student Evaluations: Evidence of an Inverse Relationship." Unpublished manuscript, September 1996.

McCroskey, James C., Virginia P. Richmond, Aino Sallinen, Joan M. Fayer, and Robert A. Barraclough. "A Cross-Cultural and Multi-Behavioral Analysis of the Relationship Between Nonverbal Immediacy and Teacher Evaluation." Communication Education 44.4 (October 1995): 281-91.

Moore, Alexis, Jon T. Masterson, Diane M. Christophel, and Kathleen A. Shea. "College Teacher Immediacy and Student Ratings of Instruction." Communication Education 45.1 (January 1996): 28-39.

Morris, Tracy L, Joan Gorham, Stanley H. Cohen, and Drew Huffman. "Fashion in the Classroom: Effects of Attire on Student Perceptions of Instructors in College Classes." Communication Education 45.2 (April 1996): 135-48.

Murray, H. A. "Teacher Ratings, Student Achievement, and Teacher Personality Traits." Paper read at the annual meeting of the Canadian Psychological Association, 1978.

Powell, Robert W. "Grades, Learning, and Student Evaluation of Instruction." Research in Higher Education 7 (1977): 193-205.

Renner, Richard R. "Comparing Professors: How Student Ratings Contribute to the Decline in Quality of Higher Education." Phi Delta Kappan 63.2 (October 1981): 128-30.

Sacks, Peter. Generation X Goes to College. Chicago: Open Court, 1996.

Snyder, C. R., and Mark Clair. "Effects of Expected and Obtained Grades on Teacher Evaluation and Attribution of Performance." Journal of Educational Psychology 68.1 (1976): 75-82.

Stone, J. E. "Inflated Grades, Inflated Enrollment, and Inflated Budgets: An Analysis and Call for Review at the State Level." Education Policy Analysis Archives 3.11 (26 June 1995). Online posting (a peer-reviewed scholarly electronic journal) <>.

Sullivan, Arthur M., and Graham R. Skanes. "Validity of Student Evaluation of Teaching and the Characteristics of Successful Instructors." Journal of Educational Psychology 66.4 (1974): 584-90.

Vasta, Ross, and Robert Sarmiento. "Liberal Grading Improves Evaluations But Not Performance." Journal of Educational Psychology 71.2 (1979): 207-11.

Viadero, Debra. "Dress Down." Teacher Magazine (September 1996): 21.

Weissberg, Robert. "Managing Good Teaching." Perspectives on Political Science 22.1 (Winter 1993); reprinted in The Montana Professor 5.2 (Spring 1995): 17-22.

Widmeyer, W. N., and J. W. Loy. "When You're Hot, You're Hot! Warm-Cold Effects in First Impressions of Persons and Teaching Effectiveness." Journal of Educational Psychology 80 (1988): 118-21.

Williams, Wendy M., and Stephen J. Ceci. "'How'm I Doing?' Concerns About the Use of Student Ratings of Instructors and Courses." Unpublished manuscript. This work is scheduled to appear in the September/October 1997 issue of Change, in which my essay, "What the Numbers Mean," is also slated to appear.

Willimon, William H., and Thomas H. Naylor. The Abandoned Generation: Rethinking Higher Education. Grand Rapids, MI: William B. Eerdmans Publishing Company, 1995.

Worthington, Alan G., and Paul T. P. Wong. "Effects of Earned and Assigned Grades on Student Evaluations of an Instructor." Journal of Educational Psychology 71.8 (1979): 764-75.
