[The Montana Professor 20.1, Fall 2009 <http://mtprof.msun.edu>]
David A. Swanson
Department of Sociology
University of California–Riverside
In spite of disagreements over their validity and use, student evaluations of faculty have become ubiquitous in higher education, and are seen as necessary components not only for internal review purposes, but also for external ones. Accepting the inevitability of these evaluations, the paper argues that they should not be used in isolation, but as an integral component of a comprehensive and integrated Quality Assurance System that clearly defines the process or product to be assessed, acknowledges its limitations, and is not onerous to administer. As a heuristic device, a hypothetical Quality Assurance System is described for a hypothetical undergraduate degree program that illustrates these points. The paper concludes by observing that if a counterpart to the K-12 "No Child Left Behind" legislation is enacted for higher education, the hypothetical Quality Assurance System can serve as a means of stimulating thought on how such "No Freshman Left Behind" legislation could be dealt with so that it is at least less onerous—if not more meaningful—than its K-12 counterpart.
In spite of long-standing questions over their need and use, student evaluations of faculty teaching have become ubiquitous in higher education (Abrami, 1989; Benson and Lewis, 1994; Cashin, 1996; Cahn, 1987; Committee to Assess Teaching Evaluation Methods, 2000; Curzon-Brown, 2000; Diamond, 1998; Marsh and Roche, 1997; Seldin, 1984, 1993; Scriven, 1994, 1995; Theall and Franklin, 1991; Trout, 1997). Virtually all accrediting organizations require that members and institutions seeking accreditation have comprehensive evaluation programs in place. A typical example is provided by the Southern Association of Colleges and Schools (2004: 15), whose core requirements for accreditation include point no. 2.5:
"The institution engages in ongoing, integrated, and institution—wide research-based planning and evaluation processes that incorporate a systematic review of programs and services that (a) results in continuing improvement and (b) demonstrates that the institution is effectively accomplishing its mission...."
There are three major aspects of comprehensive evaluation programs: (1) that faculty be evaluated for teaching effectiveness; (2) that the curriculum be monitored for learning effectiveness; and (3) that policies for student retention be consistent with the objective of producing high quality graduates (see, e.g., AACSB International, 2001).
It is these ideas—evaluation is ongoing, comprehensive, integrated, research-based, and mission-focused—that should guide the design (or re-design) of an evaluation system. Along with these ideas are practical ones as well—an evaluation system should be low not only in financial cost, but also in response and administrative burden.
There are a number of entry points into a discussion of evaluation. However, here I discuss the four elements deemed most critical to most programs: (1) student evaluations of faculty (SEF); (2) faculty evaluations of the course and its students (FECS); (3) alumni evaluations of the program (AEP); and (4) an external assessment of student achievement (EASA). These four points form what I term a comprehensive Quality Assurance System (QAS).
Suppose that we have a Bachelor of Science Program in International Business Administration (BScBA) where the "product" is composed of graduates and the overall goal is clear: To produce graduates who are successful in international business. Therefore, all assessment and evaluation activities must be in support of this overall goal.
Suppose further that our BScBA degree program is highly student-centered. The general educational climate of the BScBA program strongly encourages positive learning outcomes. The academic structure itself plays a major role in this encouragement because it supports virtually all seven of the principles identified as good teaching practices in undergraduate education (Freeman and Capper, 1999): (1) contact between students and faculty; (2) reciprocity and cooperation among students; (3) active learning; (4) prompt feedback; (5) time on task; (6) high expectations; and (7) respect for diverse talents and ways of learning.
The small, intensive, highly demanding classes comprising this hypothetical program typically require a great deal of group work by teams of diverse students in an active learning environment in which time on task is required and prompt feedback is a necessity. The faculty members who teach in this environment are literally forced by the academic structure to meet most, if not all, of these seven principles.
Given these conditions, what should a practical QAS for this hypothetical BScBA program look like? As stated earlier, the four elements most critical to this QAS are: (1) student evaluations of faculty (SEF); (2) faculty evaluations of the course and its students (FECS); (3) alumni evaluations of the program (AEP); and (4) an external assessment of student achievement (EASA). In the following section, I start an examination of these four elements with the student evaluations of faculty (SEF). A schematic overview of the structure and linkages for this hypothetical program and its QAS is provided in Figure 1.
In order to develop a valid SEF form for this hypothetical BScBA degree program, an understanding of this structure was linked with what has been learned about the strengths and weaknesses of SEF. The fundamental consideration was that the SEF form must be valid for the program. That is, in addition to providing feedback to faculty that leads to positive learning outcomes, it must (1) fit into an eventual overall assessment system for the entire program, and (2) assist in determining if our hypothetical BScBA Program is meeting its overall goal—the production of graduates who are successful in international business.
The SEF form is found in Appendix A.1, where it is labeled "SECF" (Student Evaluation of Course and Faculty) to indicate that a distinction is made between course evaluation and faculty evaluation. It is designed to be used by students in evaluating selected instructor characteristics that are associated with the principles identified as good teaching practice and that can themselves be transmitted to faculty for purposes of teaching improvement. Fortunately, as alluded to earlier, some of the principles do not need to be addressed in the SECF form because they are so deeply embedded in our hypothetical program—reciprocity and cooperation among students (e.g., group work), respect for diverse talents and ways of learning (e.g., international students are always present), prompt feedback, time on task, an active learning environment, and high expectations. This is important because it suggests that the SECF form can be kept short and simple—issues raised earlier in regard to validity.
In accordance with the preceding considerations, the SECF form is designed to collect information on important aspects of teaching performance. In addition, the form asks for comments on the course as well as a student self-assessment—these will be used to assess reliability, among other things. It also provides for optional comments (on the back side) from the student, which allows both for qualitative information and feedback beyond the scope of the closed-ended questions found in the SECF form itself.
There is little controversy about the (face) validity of student responses to the questions regarding the instructor and the course. For example, students can tell if an instructor presents course material in an organized manner—does the material covered in class match that asked in an exam? Does the instructor show up on time? Do the lectures hang together? Similarly, students know if an instructor holds posted office hours and if the hours posted are sufficient. Students also can judge if an instructor can answer questions, give convincing explanations, and in countless other ways demonstrate knowledge of the subject being taught.
The specific aspects of teaching covered in the SECF form are behavioral in nature and, as such, subject to change by faculty, where appropriate. That is, if students believe that an instructor is not at all organized in the presentation of course material, the instructor certainly has the capability to become more organized. Similar changes can be made in regard to accessibility and subject matter knowledge. These behavioral features also are important for at least two closely related reasons: (1) they avoid any suggestion of a "popularity contest"; and (2) they do not address issues that might encourage faculty to lower academic standards.
One advantage of the SECF form is that it clearly states how the information will be used and how SECF fits into an overall evaluation system. Both of these items are important for students to know. After all, who is motivated to complete any questionnaire when its purpose and use are not made clear? Another advantage is that the form avoids the common kinds of mistakes often found on SEF forms that were described earlier.
Yet another advantage is that individual responses are kept confidential and only summary results will be provided to the instructor. The summary results are used by both the instructor and the administration as one of the tools for improving performance relative to the overall goal of the BScBA degree program. Before using the form, students should be made aware of the full scope of the QAS.
A final important advantage of the SECF form is that it has elements that will allow for improved discrimination between an evaluation of the instructor and that of a given course. This is an important point because research shows that required core courses and their instructors receive lower student ratings than do others (Swanson and McKibben, 1999). This should not be surprising. Students in our hypothetical program, like those in a real general business administration degree program, would typically view required core courses like accounting, business mathematics, and statistics as obstacles to their immediate goal of completing the program and as of absolutely no relevance to their long-term career goals. As such, instructors in these courses tend to get lower ratings than do those teaching elective courses. Courses are "elective" by virtue of the fact that students have more freedom to choose the ones they want. As such, they are typically viewed by students as stepping stones, both to their immediate goal of completing the program and to their long-term career goals. Thus, instructors teaching such courses tend to get higher ratings than those teaching the ones perceived as "obstacles."
As was touched upon earlier, it is not only the content and format of a form that plays a role in the validity of an SEF system. One must also take into account (1) the method of data collection, (2) the types of data analysis, and (3) the uses to which the analyses are put. The form is designed to be administered in class by a staff member in the absence of the instructor at a point in time near the end of a course, but prior to any final exam. This method of data collection will serve several purposes, one of which is greatly to reduce any bias due to low response rates (Swanson, 1986, 1998, 2006).
In terms of use, the information obtained from the SECF generally avoids those aspects of teaching for which research suggests that students may not be the best source of information. This will make the results easy for faculty to interpret and, if necessary, act upon. This also applies to administrative use.
It is not common for a program to have an evaluation of the students (in a given course) by the faculty member teaching it. However, given that the product-consumer model has taken hold in higher education, it is natural to have the "product" evaluated on a course-by-course basis in a manner similar to how the faculty members are evaluated.
The FECS form designed to be used by faculty to evaluate students in our hypothetical program is found in Appendix A.2. Like the SEF, the FECS is designed in the context of the BScBA Program, which because of its structure supports virtually all seven of the principles identified as good teaching practices in undergraduate education, as discussed in the preceding section describing the SEF form. The FECS form for the BScBA Program was designed to (1) fit into an eventual overall assessment system for the entire BScBA Program, and (2) assist in determining if the BScBA Program is meeting its overall goal—the production of graduates who are successful in international business. In this context, the FECS form is designed to collect information on important aspects of student performance that are themselves linked with positive learning outcomes. In addition, the form asks for comments on the course, the quality of administrative and logistical support provided, and a self-assessment. Several of these items can be used to assess reliability, among other things. It also provides for optional comments (on the back side) from the faculty member, which allows both for qualitative information and feedback beyond the scope of the closed-ended questions found in the FECS form itself.
There should be little controversy about the (face) validity of faculty responses to the questions regarding the students and the course. For example, an instructor can tell if students are doing assignments and able to answer questions in class, and in other ways demonstrate a commitment to learning. As was the case with the SECF, the specific aspects of student performance covered in the FECS form are behavioral in nature and, as such, subject to change by students. That is, if students consistently get the message that it is clear that they are not doing sufficient work outside of class, they can work to improve both on their own and with the encouragement of appropriate sanctions (e.g., grades). Similarly, given the scope of the QAS, it will be possible to have criterion-related validity assessments of not only specific items on the FECS, but the entire FECS itself—again, these include predictive, concurrent, convergent, and discriminant elements of criterion-related validity.
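To make the idea of a criterion-related validity check concrete, the sketch below computes a Spearman rank correlation between a hypothetical course-level FECS item (the instructor's rating of overall class preparation, on a 1-5 scale) and the mean final grade in the same course offering, which is one form the concurrent-validity evidence described above could take. The item, data, and function names are illustrative assumptions only and are not taken from the QAS forms.

    # A minimal, hypothetical sketch of a concurrent-validity check: does a
    # FECS-style "class preparation" rating track the mean final grade of the
    # same course offering? All names and data here are illustrative assumptions.

    def ranks(values):
        """Return 1-based average ranks, handling ties."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        result = [0.0] * len(values)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # average rank for a block of tied values
            for k in range(i, j + 1):
                result[order[k]] = avg_rank
            i = j + 1
        return result

    def pearson(x, y):
        """Pearson correlation coefficient of two equal-length sequences."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    def spearman(x, y):
        """Spearman rank correlation: the Pearson correlation of the ranks."""
        return pearson(ranks(x), ranks(y))

    # Hypothetical records for ten course offerings: the instructor's FECS
    # "class preparation" rating (1-5) and the mean final grade (0-100).
    preparation_ratings = [5, 4, 4, 3, 2, 5, 3, 1, 4, 2]
    mean_final_grades = [88, 84, 86, 76, 71, 90, 79, 65, 83, 70]

    print("Spearman rho = %.2f" % spearman(preparation_ratings, mean_final_grades))

A strong positive correlation would count as evidence of concurrent validity for the item; because the ratings are ordinal, a rank-based measure is used rather than a Pearson correlation on the raw scores.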
Like the SECF form, the FECS form clearly states how the information will be used and how the FECS fits into an overall assessment system for our hypothetical BScBA Program. Both of these items are important for faculty members to know. The summary results are used by the administration as one of the tools for improving performance relative to the overall goal of the program.
As was touched upon earlier, it is not only the content and format of a form that plays a role in the validity of a FECS system. One must also take into account (1) the method of data collection, (2) the types of data analysis, and (3) the uses to which the analyses are put. The form is designed to be completed by the faculty member near the end of a course, but prior to any final exam. It could also be administered online, provided that appropriate incentives were in place to maintain response levels comparable to those found with in-class administration (Swanson, 1986, 1998, 2006).
Routine summaries derived from the data would be made available to students, faculty, and others who request them. The summaries for general distribution are descriptive in nature because there is no need to use statistical inference—the intent is that the entire population of interest (students in a given class) will be surveyed. The summaries themselves consist of the absolute and relative distribution of each of the variables. This is an appropriate approach for variables measured at the ordinal level, which is the case for each of the questions included on the FECS form.
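As a concrete illustration of the kind of descriptive summary described above, the sketch below tabulates the absolute and relative frequency distribution of responses to a single ordinal item. The item wording, response scale, and data are hypothetical; they simply stand in for whatever items a real FECS form would carry.

    # A minimal, hypothetical sketch: absolute and relative frequency
    # distributions for one ordinal evaluation item. The item and data are
    # illustrative assumptions, not actual FECS content.

    from collections import Counter

    def summarize_item(responses, categories):
        """Return absolute and relative distributions over ordered categories."""
        counts = Counter(responses)
        total = len(responses)
        absolute = {c: counts.get(c, 0) for c in categories}
        relative = {c: counts.get(c, 0) / total for c in categories}
        return absolute, relative

    # Hypothetical item: "Students came to class prepared" (1 = never ... 5 = always)
    scale = [1, 2, 3, 4, 5]
    responses = [4, 5, 3, 4, 4, 2, 5, 3, 4, 5]

    absolute, relative = summarize_item(responses, scale)
    for c in scale:
        print("%d: n = %2d  (%.0f%%)" % (c, absolute[c], 100 * relative[c]))

Reporting both counts and percentages, category by category, preserves the ordinal character of the data without imposing means or other interval-level statistics on it.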
Until recently, it was not common to request alumni to evaluate the degree programs in which they participated. These evaluations have become more common because of the relative ease of tracking alumni and maintaining their records in an electronic database.
The questionnaire for the Alumni Evaluation of the Program (AEP) is found in Appendix A.3. Like the preceding forms, it is designed in the context of our hypothetical BScBA Program and in accordance with good survey research practices (Swanson, 1998). It is important to note, however, that unlike the SECF and FECS forms, it is much more retrospective and less behavioral in nature. That is, it largely asks for information about items that took place over a longer period and further in the past than do either the SECF or FECS forms. In addition, the information being requested is more opinion-based.
Because the questionnaire is retrospective, special care has been taken to reduce recall error. For example, the form does not ask for highly specific information; rather, it asks for information at a level of generality that an alumnus or alumna should be able to answer off the top of his or her head.
The questionnaire is really designed to provide summary information about fundamental and deeply embedded aspects of the hypothetical BScBA program. As such, it has the potential to provide information that cannot be provided by current students and faculty. It is fundamentally strategic in nature whereas the SECF and FECS are tactical. It is, however, designed to link with information provided by both the SECF and the FECS, as is apparent from a careful reading of all three instruments. Moreover, the alumni survey is seen as the appropriate "internal" forum from which to obtain much of the information regarding the performance of the program administration and staff, with remaining key elements coming from faculty and the normal review procedures used by the hypothetical university in which our hypothetical BScBA program is housed.
While this component of overall evaluation can take several forms, it typically does not involve a questionnaire. A direct form of assessment for a professional degree such as our hypothetical BScBA program is provided by the "market"—are BScBA graduates being hired? Unfortunately, such direct assessments are not so clear-cut for liberal arts and other non-professional degree students. Perhaps even more unfortunate is the fact that even a direct "market" assessment of the viability of our hypothetical BScBA program is generally not sufficient for many accreditation purposes. More typically, the assessments involve some type of examination of graduating seniors that goes beyond grades earned in courses. In practical terms, this type of examination is often administered by a given department to students majoring in its disciplinary field.
An example of the "External Assessment of Student Achievement" (EASA) is found at the University of Mississippi (2005), which requires all instructional units to complete this evaluation every other year for each degree program. In this assessment, for each intended educational outcome (student learning outcome) identified, the assessment must be described in terms of the means of assessment, the data collection plan, the criteria for success, and the source of the assessment information (e.g., a major field test). The data collected must be directly linked to the criteria used to define a successful outcome, and the faculty of a given degree program must describe how the assessment data were used to improve the instructional program. Finally, any improvement must be linked to the intended educational outcome stated at the top of the assessment form.
How could a practical, yet meaningful, EASA be developed for our hypothetical BScBA program? Should we have a committee of faculty evaluate graduating seniors in case study competitions, require graduating students to achieve a given score on a major field exam (e.g., the GMAT) in order to graduate, or should we request that students take a major field exam like the GMAT for extra credit as part of a senior-level required course? Each of these possibilities combines varying degrees of validity and reliability with mixtures of different costs, including financial costs (e.g., should the Department pay for the GMAT exams if they are required?) and response and administrative burdens. (Should students have to pay for a required or optional GMAT? Should faculty be required to organize student case competitions and serve as judges for them?)
One approach that promises relatively high validity and reliability while minimizing response and administrative burdens is to require that graduating seniors achieve a minimum score on a proctored online test using questions from a test bank for a standard textbook used in an introductory business administration course in the BScBA program. The students would be notified when they were admitted to the program (i.e., declared a BScBA major) that a minimum score on this test is required. The general requirements of the test and the conditions under which it would be taken would be part of the standard materials provided to students on entry to the BScBA program. The students could be reminded of this requirement when they met with their advisors to complete applications for graduation, on which it would be noted whether the requirement had been met.
The test could be arranged by the students in conjunction with an on-campus testing center using one of the widely available online teaching systems (e.g., ANGEL or BLACKBOARD). If a student did not pass the test, a provision could be made for taking it again after a suitable period of time had passed (e.g., one must wait a week). This arrangement provides an incentive to students to do well, but also does not penalize them for not being prepared the first (or second) time. Because the test is taken from a set of questions in a textbook that covers the entire field of business administration in a general way (i.e., it is from an introductory textbook), it is one that BScBA students can reasonably be expected to pass even if they took different specializations (e.g., marketing, management, finance). The questions could be randomly selected from sets of questions representing the general points within each major subfield of business administration (e.g., marketing, management, finance), as illustrated in the sketch that follows. This means that each time the test is administered it is different—for those students whose good friends just took the test before they did, as well as for those students taking it a second time, the questions would be similar, but not the same. This feature of the test would not only provide a high level of validity and reliability, but results from it could also be used to provide feedback on the curriculum—if a high proportion of students consistently does poorly on the section dealing with finance, this suggests that changes to that component of the curriculum are needed. Given that an EASA is required, this type of evaluation would also represent a relatively low response burden for the BScBA students.
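The sketch below shows one way such a test form could be assembled under these assumptions: a fixed number of questions is drawn at random from each subfield's question bank, so any two administrations share the same subfield coverage but rarely the same items. The bank contents, sizes, and function names are hypothetical and do not refer to any actual publisher's test bank or to the BLACKBOARD or ANGEL systems themselves.

    # A minimal, hypothetical sketch of assembling a different test form at each
    # administration by stratified random sampling from subfield question banks.
    # Bank contents and sizes are illustrative assumptions.

    import random

    # Hypothetical question banks keyed by subfield; in practice these would come
    # from the publisher's test bank for the introductory text.
    QUESTION_BANKS = {
        "marketing": ["MKT-%03d" % i for i in range(1, 121)],
        "management": ["MGT-%03d" % i for i in range(1, 121)],
        "finance": ["FIN-%03d" % i for i in range(1, 121)],
        "accounting": ["ACC-%03d" % i for i in range(1, 121)],
    }

    QUESTIONS_PER_SUBFIELD = 10  # hypothetical test length: 10 items per subfield

    def assemble_test(banks, n_per_subfield, seed=None):
        """Draw n_per_subfield items at random from each subfield's bank."""
        rng = random.Random(seed)
        form = []
        for subfield in sorted(banks):
            form.extend(rng.sample(banks[subfield], n_per_subfield))
        rng.shuffle(form)  # mix the subfields within the final form
        return form

    # Each call yields a different form with the same subfield coverage.
    form_a = assemble_test(QUESTION_BANKS, QUESTIONS_PER_SUBFIELD)
    form_b = assemble_test(QUESTION_BANKS, QUESTIONS_PER_SUBFIELD)
    print("%d items per form; %d items shared between the two forms"
          % (len(form_a), len(set(form_a) & set(form_b))))

Because each form is keyed by subfield, scoring by subfield comes at no extra cost, which is exactly the kind of curriculum feedback (e.g., consistently weak finance results) described above.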
Other than setting up the test in the first place, there is very little administrative burden associated with it. Using BLACKBOARD (or one of its variants, e.g., "iLearn" at the University of California, Riverside), for example, results would be readily available to the students taking the test and to faculty advisors. Summary information across students that provides the statistical information for an EASA is easily assembled in BLACKBOARD, again keeping the administrative burden low.
It is self-evident that a weak program leads to poor faculty and poor students, while poor students lead to poor faculty and a weak program, and poor faculty lead to poor students and a weak program. Not so self-evident are the roles played by external factors and internal factors in the well-being of any program, including our hypothetical one. The external factors typically include (1) budgets, (2) competition, and (3) the pool of available students. Major internal factors typically include (1) program design, (2) course offerings, (3) administrative and staff commitment and quality, (4) physical plant, (5) faculty commitment and quality, and (6) student commitment and quality.
Clearly the three primary external factors represent constraints that are largely outside the scope of university and departmental control. As such, they set parameters to which a QAS must adjust. If, for example, an annual operating budget is reduced, then the expectations against which the program is judged should be adjusted accordingly. Thus, to a large degree, these factors set the conditions under which the program must be judged. This means that these factors should be components of the QAS. To this must also be added a secondary external factor—location. Is the program in question near a major population center?/1/
Given the external constraints, the six internal factors represent those that are much more within the control of a university and, to some extent, a given department. It is these factors that the major elements of the QAS are designed to monitor. That is, it is these factors that form the core of the QAS and affect the ability of our hypothetical BScBA program to meet its goal: producing graduates who are successful in international business. The items more under the control of the university in general are two internal factors: (1) program design and (2) course offerings. That is, changes to either must ultimately be decided at the highest levels of a university. The remaining four factors are typically more under the direct control of a department. Of these four, it is worthwhile to describe factors (5) faculty commitment and quality and (6) student commitment and quality.
Faculty commitment and quality are subject to only one external constraint—availability. Given this, the QAS is designed not only to identify and retain committed faculty of high quality, but also to nurture and develop them.
As stated earlier, the guiding principles underlying the QAS and its elements are that evaluation be ongoing, comprehensive, integrated, research-based, and mission-focused. Also guiding it are the ideas that it should be low not only in financial cost, but also in response and administrative burden. The elements represent an integrated system working toward a common overall goal. The elements also contain the overlap needed to address validity and reliability issues and provide a sound basis on which research-based (i.e., empirical) findings can be used not only on a routine basis, but also as a means to examine topical issues that are likely to come up over time.
The hypothetical BScBA program, like most, if not all, university-level programs, is governed by a complex set of entities, regulations, and traditions, more than a few of which are often specific to the university in question. Their combined effect is greatly to limit the scope and speed of any changes suggested by the QAS. For example, consider the academic calendar—how realistic would it be to radically modify the standard semester or quarter system? Second, what about the courses making up a curriculum? The curriculum is more subject to change than an academic calendar, but any change is likely to be slow in coming. A third example has to do with students. What are the provisions for removing students who perform poorly? A fourth example is the alumni. Once a student graduates he or she is part of the alumni—only death can change that—and the hypothetical university that houses our hypothetical BScBA program has little influence on decisions made by the employers of its alumni beyond its efforts to have them hire its graduates and contribute to its development efforts.
While the QAS just described is for a hypothetical business administration program, it illustrates the major points for all degree programs. The system fits the goals of the program, is comprehensive and integrated, and is practical—it is not overly onerous to administer. The same should be found in a QAS designed for a real program, whether it is in the humanities and the arts, social sciences, physical sciences, life sciences, or a professional school. It is important to note that the QAS implicitly accepts the "product-consumer" model, something that is not to be taken lightly because there are valid points about the problems inherent in such an acceptance (Cheney, McMillan, and Schwarzman, 1997). Nonetheless, there is ample reason to assume that this model is only going to tighten its hold on higher education. Witness the change that has affected American education at the K-12 level due to the "No Child Left Behind" Act of 2001, which reauthorized the Elementary and Secondary Education Act of 1965 (ESEA) and places a major emphasis upon teacher quality as a factor in improving student achievement (Mississippi Institutions of Higher Learning, 2005). Unfortunately, evidence suggests that this legislation is onerous and not likely to improve the K-12 system (McNeil et al., 2008).
Evidence suggests that important shortcomings of the "No Child Left Behind" Act are a lack of understanding of the linkages and roles within assessment systems, as well as the limited ability of schools to effect the changes implied by evaluations resulting from these systems (Dorn, 2007). This is a lesson for higher education, and I have presented the hypothetical QAS as a means of stimulating thought on how any "No Freshman Left Behind" legislation for higher education could be dealt with more effectively in order to make it less onerous—if not more meaningful—than is the case with its K-12 counterpart.
Notes
References
AACSB International (Association to Advance Collegiate Schools of Business International). (2001). Achieving Quality and Continuous Improvement Through Self-Evaluation and Peer Review. St. Louis, MO: AACSB International.
Abrami, P.C. (1989). How should we use student ratings to evaluate teaching? Research in Higher Education, 30, 221-7.
Benson, D.E., & Lewis, J.M. (1994). Students' evaluation of teaching and accountability: Implications from the Boyer and the ASA reports. Teaching Sociology, 22, 195-99.
Cahn, S. (1987, October 14). Faculty members should be evaluated by their peers, not by their students. The Chronicle of Higher Education, B2.
Cashin, W. (1996). Developing an effective faculty evaluation system. Idea paper no. 33. Manhattan: Kansas State University, Center for Faculty Evaluation and Development (reported in Haskell, 1997).
Cheney, G., McMillan, J., & Schwarzman, R. (1997). Should we buy the "student-as-consumer" metaphor? The Montana Professor 7(3), 8-11.
Committee to Assess Teaching Evaluation Methods. (2000). Report by the committee to assess teaching evaluation methods. Boca Raton, FL: Florida Atlantic University, Dorothy F. Schmidt College of Arts and Letters.
Curzon-Brown, D. (2000). Evaluation as a weapon. The Montana Professor 10(3). Retrieved June 2009 from http://mtprof.msun.edu/Fall2000/Brown.html.
Diamond, R.M. (1998). Designing & assessing courses & curricula: A practical guide. San Francisco, CA: Jossey-Bass.
Dorn, S. (2007). Accountability Frankenstein: Understanding and taming the monster. Charlotte, NC: Information Age Publishing.
Freeman, M. & Capper, J. (1999). Educational innovation: Hype, heresies and hope. ALN Magazine 3(2).
Haskell, R. (1997). Academic freedom, tenure, and student evaluation of faculty: Galloping polls in the 21st century. Education Policy Analysis Archives 5(6). Retrieved June 2009 from http://epaa.asu.edu/ojs/article/view/607.
Marsh, H., & Roche, L. (1997). Making students' evaluations of teaching effectiveness effective: The critical issues of validity, bias, and utility. American Psychologist 52(11), 1187-1197.
McNeil, L.M., Coppola, E., Radigan, J., & Vasquez Heilig, J. (2008). Avoidable losses: High-stakes accountability and the dropout crisis. Education Policy Analysis Archives, 16(3), 1-45. Retrieved June 2009 from http://epaa.asu.edu/ojs/article/view/28.
Mississippi Institutions of Higher Learning. (2005). No child left behind: Teacher quality improvement program.
Miyashiro, T. (1991). Migration of college students in the state of Washington, 1990. REU Working Paper Series. Bellingham, WA: Western Washington University, Demographic Research Laboratory.
Scriven, M. (1995). Student ratings offer useful input to teacher evaluations. ERIC/AE Digest. Retrieved June 2009 from http://www.ericdigests.org/1997-1/.
Scriven, M. (1994). Using student ratings in teacher evaluation. Evaluation Perspectives 4(1), 1-4.
Seldin, P. (1993, July 21). The use and abuse of student ratings of professors. The Chronicle of Higher Education, A40.
Seldin, P. (1984). Changing practices in faculty evaluation. San Francisco, CA: Jossey-Bass.
Southern Association of Colleges and Schools. (2004). Principles of accreditation: Foundations for quality enhancement. Retrieved May 2009 from http://www.sacscoc.org/pdf/PrinciplesOfAccreditation.PDF.
Swanson, D. (2006). A comparison of in-class and online student evaluations. Delta Education Journal 3(2), 37-47.
Swanson, D. (1998). Technical documentation and summary findings of the 1997 survey in Lincoln County, Nevada. U.S. Department of Energy, Office of Civilian Radioactive Waste Management (DE-AC01-91RW00134). U.S. Department of Energy, Yucca Mountain Project, Las Vegas, Nevada.
Swanson, D. (1986, July). Missing survey data in end-use energy models: An overlooked problem. The Energy Journal 7, 149-157.
Swanson, D. & McKibben, J. (1999). Teaching statistics to non-specialists: A course aimed at increasing both learning and retention. In L. Pereira-Mondoza, L. Kea, T. Kee, and W. Wong (Eds.) Statistical education—expanding the network: Proceedings of the fifth international conference on teaching statistics (pp. 159-166). International Association for Statistical Education. Voorburg, Netherlands: International Statistical Institute.
Theall, M. & Franklin, J. (1991). Using student ratings for teaching improvement. In M. Theall and J. Franklin (Eds.) Effective practices for improving teaching (pp. 83-96). San Francisco, CA: Jossey-Bass.
Trout, P. (1997). How to improve your teaching evaluation scores without improving your teaching. The Montana Professor 7(3), 17-22.
University of Mississippi. (2005). Institutional Assessment Instructional Program Forms. Retrieved June 2009 from http://www.olemiss.edu/depts/university_planning/instructional.html.