My colleague Heather Whitney has a post up at ProfHacker about one aspect of student evaluations: their occasional lack of truthfulness. Let me add my two cents:

A year ago, as some readers of this blog may recall, I spent some time in the hospital. My classes that semester met on Tuesdays and Thursdays, and as a result of illness I missed four classes: two weeks total, in a fourteen-week semester. A lot to miss! — but perhaps not enough to warrant students writing on their evaluations, “It’s hard to evaluate this class because we hardly ever met,” or “It’s not Dr. Jacobs’s fault for being sick, but the fact that he missed most of the semester really hurt the success of this class.”

I also recall, a couple of years back, some students complaining on their evaluations that they got poor grades on their papers because I didn’t give them any guidance — even though before turning in the final draft of their papers they had to submit (a) a proposal for the paper, to which I responded in writing with detailed suggestions, and then (b) a complete draft of the paper, to which I also responded in writing. If this was not “guidance,” I wonder what would have been.

I can only explain this phenomenon — which is consistent among a minority of students — by speculating that some students think that evaluations are an opportunity not for them to speak truthfully, according to any naïvely “objective” or dispassionate model of truth, but rather for them to share their feelings — whatever those feelings happen to be at the moment. And at the end of a long and stressful semester, those feelings will sometimes be rather negative. This is one of the many reasons why student evaluations as they are typically solicited and offered are useless, or worse than useless — and I speak as someone with a long history of very positive student evaluations.

Thus for a long time I have recommended, to anyone who will listen and to many who will not, that evaluations for a given course be solicited at least one full semester after the course is completed, when students are less emotionally involved in it. A year or more after would be even better. We might get fewer responses, especially from students who have graduated, but they would be better responses.

Whenever I make this suggestion, the first response I get is always the same: “But a semester [or a year] later, they won’t remember anything from the class!”

“That would be something worth knowing,” I reply.

Text Patterns

March 10, 2011

15 Comments

  1. In my (perhaps atypical) experience, there's a serious mismatch between how students and faculty view evaluations. The students often have no real idea how their evals will be used – and in particular do not grasp that this is a career-or-death matter for junior faculty.

    (And why should they? Comment cards, solicitations to evaluate one's experience, etc. – these things are a ubiquitous feature of consumer life in America, and who really believes that anyone's future depends on them?)

    So if they vent their emotions in the way that you describe, it's probably because, at least in some cases, they can't imagine that inaccuracies have any consequences.

  2. Additionally, they should institute an "expectations" review on the first day of class. It would be interesting to know what students want and expect from the class. The problem with evaluations is that they too often are a direct response to expectations, which usually amount to "I want a good grade."

  3. Who wants these evaluations and what are they for?

    If administrators are just concerned about satisfied customers, then it makes sense to ask the customer, "Are you satisfied?" But even then, when you ask the question depends on whether you're more worried about satisfied freshmen coming back to pay tuition next year or satisfied alumni who see the benefits of their education and send the college their donations and children.

    If for some crazy reason someone is trying to measure what students have actually learned or how effective Alan Jacobs is as a teacher, asking students "What did you think of this class?" isn't going to tell you very much no matter when you ask it. But when something's hard to measure it's typical for organizations to measure something different and then pretend they've measured the hard thing.

  4. It would be interesting – and you could do this, I think – if you included in your student evaluations a set of questions designed to "test" students as to the details of the class. Perhaps asking them to identify five authors they read in the class, the titles of some of the works, etc.

  5. I think this is a great idea, or certainly better than what is currently the practice. Unfortunately it appears trumped by the current convenience of having the evaluating group of students in the same room at the same time. Perhaps online, asynchronous evaluations a few months after the course would get around this, although the logistics could be overwhelming and students may not take well to evaluating 4 courses they are no longer taking. In point of fact I believe student opinions have little merit anyway. Most of us with any experience are our own best critics. And there are all kinds of ancillary contingencies – elective versus required courses, heterogeneous student audiences, kind of institution, prerequisite or terminal course, class size, required coverage of material, etc. – which are just as important to consider. The student is but one stakeholder in the larger picture.

  6. As one of the students in those classes a year ago, let me add my two cents. I think that what people were trying to express is that we were left wanting more. Now, having had a couple other classes with you, I realize that will always be the case, in that the discussions we have in class open the door to a potential goldmine of thought. I would look at it not so much as "I feel let down that Dr. Jacobs missed some classes," but as "I loved this class, and lament having to miss out on four lectures." I know in my case, the latter is the truth. But yeah, I'm sure our evaluations make for rather laughable reading.

  7. The opposite is true, as well. How many students unreflectingly give positive reviews to the teacher?

    Administrators recognize that professors of more difficult classes (say, econometrics) will tend to receive poorer evaluations, since no student wants to admit that it was the difficulty of the subject matter that led to a grade that did not meet their expectations.

    Moreover, they also recognize that evaluations tend to bring out the extremes of the students.

  8. I find that evaluations are useful for a new course, but if I've been teaching a class for a while, they are close to worthless. And students don't understand how they are used. My department chair does not view comments like "this course is way too hard for a required class!" or "professor expected 100% attendance, which is unreasonable" as negative.

    What I want to know is whether students learn enough to do well in their later classes, and course evaluations do not tell me that.

  9. I've always thought evaluations should be sorted into bins by the grade each student received in the class whose instructor he's evaluating. That way you make more of an "apples to apples" comparison between two instructors.

    Another interesting approach would be to sort evaluations into bins by the delta between the student's score in the class and his or her grade average in *other* classes. The thought here is that an overall C student who scores a C might be less likely to evaluate "punitively" than an overall A student who scores a B.
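    For concreteness, here is a minimal sketch of both binning schemes in Python. The record layout, the 4.0 grade scale, and the bin width are all assumptions made for illustration, not anything specified above:

    ```python
    # Sketch of the two binning schemes described above. The record layout,
    # the 4.0 grade scale, and the bin width are illustrative assumptions.
    from collections import defaultdict

    GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

    # Hypothetical records: (grade in this class, GPA in other classes, eval score)
    records = [
        ("C", 2.1, 5.0),  # an overall C student who earned a C
        ("B", 3.9, 3.0),  # an overall A student who earned a B
    ]

    def bin_by_grade(records):
        """Scheme 1: group evaluation scores by the grade received."""
        bins = defaultdict(list)
        for grade, _gpa, score in records:
            bins[grade].append(score)
        return bins

    def bin_by_delta(records, width=0.5):
        """Scheme 2: group scores by (grade here minus grade average elsewhere),
        rounded to the nearest `width` so similar deltas share a bin."""
        bins = defaultdict(list)
        for grade, gpa, score in records:
            delta = GRADE_POINTS[grade] - gpa
            bins[round(delta / width) * width].append(score)
        return bins

    for key, scores in sorted(bin_by_delta(records).items()):
        print(f"delta {key:+.1f}: mean eval {sum(scores) / len(scores):.1f}")
    ```

    Comparing mean scores across instructors within the same bin would then give the "apples to apples" comparison described above.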

  10. If evaluations are based on their feelings at the moment, it should be possible to engineer their feelings to go in certain directions. Last semester I gave the students a few things to think about while they were doing their evaluations:

    1. hey, didn't that textbook suck? I put a lot of effort into clarifying stuff. How do you feel about that?

    2. some profs spend *three times* as much time on topic X as I did. Should I have spent more time on it?

    And other stuff like that: basically, make them think about the ways the course could've been worse, then ask them to write the evaluation.

    I got my best evals ever.

    Of course, it may be that I just did a good job teaching, but I'm going to keep trying this.

  11. I think the problem is that, as anyone with an interest in teaching knows, assessment is a complex activity, and shortcuts like end-of-semester surveys are not well designed to measure that complexity. If we evaluated students the way that they evaluate us (anonymously, without context, with few if any remarks to explain our grades), we'd be sued – and justly so.

    The problem is (and anyone who has been through accreditation knows this as well) that few people besides teachers ourselves are really interested in this complexity. It's too complicated, too difficult, and raises too many uncomfortable questions. It's easier just to go along with the system we have.

    If and when I want real responses to what I'm doing in class, I give out brief but very specific surveys asking them to compare class activities and tell me which of these is most useful and which is least. I also do this in the middle of the semester when students have a greater steak in the outcome. It's not perfect, but it's more useful than the alternative. As for me being "evaluated" for tenure and promotions–well, I just accept that this is a hoop to be jumped through and that it has little to do with reality. Which is probably very close to the way that students think about their evaluations.

  12. Good comment, Anonymous. You are doing it right in my opinion. One small thing, however. You said:

    "I also do this in the middle of the semester when students have a greater steak in the outcome. It's not perfect, but it's more useful than the alternative."

    Unless you are feeding your students, I think you meant "stake." 🙂

  13. Speaking as a teaching assistant, I find that the evaluations students give me throw the entire enterprise into question. In a class where my only responsibility is marking, with no lectures or tutorials, I invariably get students who answer the questions rating my lectures and tutorials. I presume that this is because a number of students assume you shouldn't leave any circles blank. But what does it mean if I score a 5 out of 7 for something I didn't do? Does this in some way devalue the evaluations I receive for other tasks, like in-person assignments or fairness of grades? I know that universities like quantifiable questions, but by far the most useful parts of evaluations are the least quantifiable: the sections where students write their views. Unfortunately, few students make substantive use of this section.

  14. Evaluations are the sine qua non at my institution, in my department. I get a teaching score on my annual evaluation which is a purely mathematical calculation of the average student score on a single question. These course scores (again, based on student responses to one single question) are averaged with my other course scores.

    Nothing else counts. To be eligible for tenure, I must receive a teaching evaluation score of X. And the only criterion for that score of X is the average score of student evaluations of my teaching based on a single question.

    Is this bogus or what? I am not joking.
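    For concreteness, here is a minimal sketch of the scoring rule as described: each course's score is the mean response to the single question, and the annual score is the mean of those course scores. Whether the averaging is weighted by enrollment isn't stated above, so an unweighted mean is assumed, and all numbers are hypothetical:

    ```python
    # Sketch of the single-question scoring rule described above.
    # Unweighted averaging is an assumption; every number is hypothetical.

    def course_score(responses):
        """Mean student response to the single question, for one course."""
        return sum(responses) / len(responses)

    def annual_score(courses):
        """Unweighted mean of the per-course scores."""
        return sum(course_score(r) for r in courses) / len(courses)

    # A large section of happy students plus a two-person section of unhappy ones:
    courses = [[5] * 20, [2, 2]]
    print(annual_score(courses))  # 3.5, although the pooled mean over
                                  # all 22 students is about 4.7
    ```

    If the averaging really is unweighted, two students in a tiny section can move the number as much as twenty in a large lecture, which is one concrete way such a single-number rule can mislead.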
