David Gooblar

Columnist at Chronicle Vitae

No, Student Evaluations Aren’t “Worthless”

Looking for more advice on teaching? Browse the Pedagogy Unbound archives.

If you've been teaching in higher education for any amount of time, you've probably encountered more than a few instances of what I like to call EESS — Extreme Evaluation Skepticism Syndrome. That ailment, which seems to infect more and more academics every year, causes ordinarily sensible instructors to utter such statements as “Student evaluations are worthless,” or
“It's the Yelpification of education.” The media is also quick to jump on any study that seems to confirm that students are not qualified to evaluate course effectiveness.

Betsy Barre, associate director of Rice University's Center for Teaching Excellence, noticed the rapid spread of EESS in 2014 and 2015 and decided to look into the phenomenon. Reading extensively in a literature stretching back decades, she tried to separate the signal from the noise: How good are student evaluations at measuring student learning?

In presentations to Rice faculty and in several blog posts, she presented her findings. In short: It's complicated. Because student evaluations are administered with a multitude of instruments, for a multitude of purposes, and at every kind of institution, it's nearly impossible to draw any simple conclusions about their reliability as measures of teaching effectiveness or student learning. But, she concluded, that doesn't mean they are worthless.

On the contrary, Barre found many more studies showing a positive correlation between student evaluations and learning than studies showing no correlation. She estimated that, once you control for known biases like student motivation, class size, and discipline, the correlation between scores and learning was around 0.5. That is a relatively strong but decidedly imperfect correlation: a correlation of 0.5 means evaluation scores account for only about a quarter of the variance in learning, so there are factors influencing the relationship that we still don't understand. However, she emphasized, while student evaluations are an imperfect tool, “we have not yet been able to find an alternative measure of teaching effectiveness that correlates as strongly with student learning.”

That is to say: Student evaluations may be flawed, but right now they're the best instrument we've got.

Of course, it's those flaws that make the use of evaluation scores in personnel decisions so controversial. Given that decades of studies show how complicated this issue is, taking a handful of numbers from a faculty member's file and expecting them to be a meaningful measure of quality is foolish. Yet Barre's work demonstrates — to me at least — that evaluations probably have some value, particularly for individual teachers. We may have legitimate cause to fear superficial interpretations of evaluation data, but that doesn't mean we should ignore what our students are saying about our courses. As James M. Lang wrote in his 2008 book On Course: A Week-by-Week Guide to Your First Semester of College Teaching, “the important thing to remember is that you will get information from students about your teaching, and you should take it seriously.”

But how should we take it seriously? How can we use our student evaluations to improve our teaching going forward? Here's some advice that I hope will help you make something constructive out of your evals.

Take your time. If you're anything like me, you check your student evaluations as soon as they are available. That makes sense: if your teaching is important to you, you're going to be interested in how students grade it. But, as this 2006 blog post from the University of Virginia's Center for Teaching Excellence notes, you may not yet be prepared to draw conclusions from what you find. It's very easy, and totally natural, to get defensive about low scores and critical comments. You just spent a whole semester working countless hours to teach these students, and this is the thanks you get? However understandable, a defensive mindset is not going to help you learn from your evaluations. Take a week off before coming back to the results. You'll be able to see them with a bit more objectivity.

Do a deep dive. Once you do sit down to make sense of your evaluations, don't make the mistake of interpreting the results simplistically, good or bad. Don't be content to just look at the overall average. Pay special attention to the individual questions that target the outcomes that matter most to you. Although it can be tempting to fixate on the questions that straightforwardly evaluate performance (“Was the instructor prepared for each class?”), I focus instead on how my students answered questions about whether their writing and critical-reading skills have improved. Our system at the University of Iowa allows me to click on any given comment and see how an individual student (still anonymous, of course) responded to the other questions. I sniff around to see whether the negative comments come from a small number of disgruntled students or are spread more widely.

Similarly, if you have low average scores overall, or on certain questions, is it because most responses clustered around that low figure, or were your generally high marks dragged down by a few low scorers? The two circumstances might lead you to very different conclusions. Look for patterns, take notes, and go slowly: It's not easy to draw conclusions from such a small sample, so exercise caution.

Go back in time. Some of the best strategies for making evaluations more useful need to be carried out well before the end of the semester. OK, so you probably can't travel back in time, but you can start thinking ahead to next semester's evaluations.

Try designing your own form and handing it out to students halfway through the semester. Informal or formal, anonymous or through class discussion, soliciting student feedback midway through the course gives you a chance to gauge how things are going and then adjust your approach if necessary. I do an exercise I've adapted from Peter Filene. I hand out notecards in class. On one side, students answer “What's going well in the course so far?” On the other, they answer “What do you think should be improved or changed?” Their answers are anonymous. I collect, shuffle, and redistribute the cards. Then I have students read the comments aloud and we discuss them. Giving students an opportunity to voice concerns early and often makes it much less likely you'll be blindsided by end-of-term complaints.

Another useful approach is student self-assessment. Having students evaluate their own performance in the course can provide metacognitive benefits. It can also remind them — when they go on to evaluate you — that they have a role to play in the course's success.

Many institutions (mine included) now give instructors the option of adding their own questions to the standard slate on the official form. Take advantage of that. Think about the kinds of information it would be helpful to hear from your students. Think about the best way to phrase important questions. Think about questions that would be particularly appropriate to your course.

Put the evaluations in context. Evaluations are anonymous, but they're not random numbers reported by unknown students. You already have a lot of information about the courses and the students you just taught. When you look at your evaluations, the challenge is to synthesize the new information from your evals with what you already know. Were there students in the class who nursed grudges about their poor grades? Did some students struggle to keep up with the course's level of difficulty? Were there students who came to class unprepared all the time? Keep all of that in mind as you read the comments.

Lang, in On Course, also advises early-career instructors to make an appointment with their department chair to discuss their evaluations. You might find out, for example, that students at your institution always complain about the workload on evaluations. It might not be you.

Above all, look for constructive lessons from your evaluations. Look for patterns — multiple students mentioning the same thing, or a particularly low cluster of scores on a certain question. Look for comments that confirm suspicions you already had about your teaching. And look for common complaints from different classes: If students in two or more different courses make a similar criticism, you may want to look into it. I think you'll find, once you recover from EESS, that there's a lot to learn.
