In a recent post,
Andrew Leigh wrote about Rate My Prof and the general issue of student evaluations. In the comments, he referred to an
article by Daniel Hamermesh and Amy Parker in
Economics of Education Review. (Hamermesh is a professor of economics at the University of Texas at Austin; Parker was one of his undergraduate students.)
In "Beauty in the Classroom: Instructors' Pulchritude and Putative Pedagogical Productivity" (love that alliteration!), Hamermesh and Parker had students rate photographs of professors on their physical attractiveness. They then correlated that measure with the responses of other students on end-of-the-term evaluations of the course and the instructor (very unsatisfactory, unsatisfactory, satisfactory, very good, or excellent).
The result of their study, once you get past the standard deviations, psychometric measures of concordance, and the like, is simple: the better looking the instructor, the higher the scores on student evaluations. The differences were significant: from the bottom to the top of the "beauty" scale, student evaluations increased one whole point out of five. And since few students give their instructors the lowest mark anyway, the scale in actual use is narrower than five points, so a one-point gap between the least and most beautiful instructors is even larger than it sounds.
The authors noted that the implications of this go beyond just numbers on student evaluations. Since colleges and universities consider such input from students when making decisions on raises and promotions, there could be a correlation between how well professors are paid and how good they look.
"It was God who made me so beautiful," supermodel Linda Evangelista once said. "If I weren't, then I'd be a teacher." Maybe Ms Evangelista shouldn't have been so hasty. According to this study, if she had gone into higher education, she would be almost assured of high numbers on her teaching evaluations, and this in turn would help her receive regular raises and promotions. On second thought, maybe not. "I don't get out of bed for less than $10,000 a day," she once remarked, and even those of us with the best student evaluation numbers have yet to crack the five-digit-per-day mark.
Student evaluations can be a touchy point in higher education. I taught for a few years at a state university in the
Midwest. Where other schools use student evaluations as just one part of the review process, this school used them exclusively. In other words, whatever number came out when they ran the student evaluations (filled in with number 2 pencils, of course) through the machine, that was our teaching score for that year.
This process gave results that suggested a wonderful precision in the review process (3.783! 2.311!), but some of the faculty complained that if they demanded a lot of work or gave the low grades that they felt students sometimes deserved, their evaluations would be lower because of their higher standards. (Actually, empirical evidence on this is mixed.) These faculty therefore proposed that the student evaluations be weighted according to grade distribution, reading and writing requirements for the course, and so forth. In other words, faculty who assigned more books in their courses and gave lower grades would not be penalized for lower student evaluations.
Several of us pointed out that we would have a riot on our hands if students discovered that our raises and promotions were based in part on how many papers we assigned and how many students we flunked.
Until recently, the school where I currently teach
required numerical student evaluations for all courses. A couple of colleagues, since retired, showed that the results could be manipulated. For example, the questionnaire we used asked students to rate their professors on the following statement, or words to that effect: "I believe the teacher cares about students." Several years ago, these colleagues made a point of telling their classes three times each semester, "I care about how you do in the course." Their scores on that question increased significantly, even though they changed nothing else about the course.
(We still use student evaluations, but the forms no longer ask for numerical ratings.)
I don't know what to think about all this, but it's one reason I haven't posted a photo of myself above. I'm afraid folks will see it and think, "Oh, I bet students just HATE him."
The above was originally published, in slightly different form, in the Cartersville Daily Tribune News.