If we believe in the active-learning classroom — that the only way to bring about real learning is to engage students in ways that help them revise and broaden their thinking — then student participation is a nonnegotiable part of the equation. Learning does not happen without the student actively taking part.
Oddly, however, given its importance, our own definition of “student participation” is often quite limited. In the scholarship on teaching and learning, that term is almost always defined narrowly as the degree to which students take part in class discussions. And while discussion is obviously an important component of an active-learning classroom, it’s not the only one. There are many other ways in which students participate in class: writing, researching, and contributing to small-group activities are just a few. If we want to accurately assess and reward participation in our courses, we need to expand our definition to include more than just the number of times students raise their hands.
What that means is that assessing student participation, always a difficult task, is now even more so. How do we accurately and objectively measure student participation over the course of a semester? How do we keep track of hundreds, if not thousands, of opportunities for students to engage with course activities? And how do we ensure that our own biases don’t lead us to unconsciously award higher participation scores to students who do well on course assignments?
The issue has puzzled me for a long time. I’ve always graded students on class participation, but I’ve often felt that the points I awarded were rarely more than a best guess. There’s a lot to keep in your head as you teach, and staying on top of who is participating and who is not can get lost among other, more pressing matters. It can be useful to give out interim participation grades on a regular basis: doing so lets you grade students while their performance is fresh in your mind, and it encourages students to participate more by letting them know how they’re doing. But those interim grades are still a subjective measure made from a limited viewpoint.
I’ve spent a lot of time reading research papers on different ways to assess student participation in the college classroom. Most methods suffer from one or both of the following problems:
- They place an undue burden on instructors (such as those approaches that require faculty to note and assess participation as it happens).
- They encourage an impoverished version of participation (such as only awarding points for the first two comments made by a student).
Ultimately, I’m looking to accurately measure participation in a way that encourages full engagement in a wide variety of class activities while being fair to students — and relatively painless for me. Maybe that’s impossible. But recently I came across an idea that might fit the bill.
Tony Docan-Morgan, a professor of communication studies at the University of Wisconsin at La Crosse, wrote last year about the “participation logs” he asks his students to fill out. He distributes a Microsoft Word template with space for students to record their contributions to class discussions, note how they participated in group work, and reflect more generally on their activity in class. Students fill out their logs on their own time and submit them to the instructor. Docan-Morgan collects them at midsemester and at the end of the course, but I think asking to see them every two weeks would encourage students to update the logs more frequently.
Assign the logs as homework. When students submit them, go through and compare their assessments with your own. That way, you’ll have a fuller picture of what happened in class, and you can use the logs as a valuable backup to your own memory. Give students feedback on their logs; it’s a great way to encourage the quiet ones who should be participating more.
Aside from its utility in grading, the participation log is a valuable assignment in and of itself. It’s a fully metacognitive exercise, far more so than just having students record how many times they commented in class. The log asks students to reflect on how they’ve conducted themselves in class and to consider what makes a meaningful contribution. Remember, learning depends on a student’s active engagement in the classroom. By requiring students to spend time reflecting on their contributions to various class activities, the participation log has the added benefit of emphasizing the significance of those contributions.
Docan-Morgan reports that, in his experience, the logs are generally accurate. He attributes that to the fact that he periodically reminds students that he is monitoring their participation. But because students have to stay on top of their own contributions, the instructor doesn’t need to play the role of participation policeman nearly as much.
By sharing the responsibility for assessing participation between instructor and student, participation logs are a promising way to encourage full engagement in our classrooms. The participation mark shouldn’t just be your way of goading students into raising their hands. It should be a reflection of the fact that participation isn’t ancillary to the real work students do in your class; it is the real work.