Kevin Gannon

Director of the Center for Excellence in Teaching & Learning and Professor of History at Grand View University

Rethinking My Exams

Image: students taking an exam, 1940

In the weeks approaching midterms, I've been thinking a lot about testing. That's partly because I redesigned the survey course I'm teaching this semester and overhauled the assessments I'm using. Ironically, the principal question that's occupied my thoughts is the same one that regularly emerged in anguished groans at 3 a.m. during my undergraduate career:

Why do we even do exams in college, anyway?

The answers I have now are different from the ones I had then. (It's always different on the other side of the podium.) College Me believed exams were similar to hazing — professors inflicted tests on us because they could, and since they were required to give us grades, something had to be there for us to try to earn points. Professor Me now knows that — when done well and designed appropriately — exams aren't meant to haze but rather to measure student progress on specific course objectives.

Exams should challenge students and push them to demonstrate their learning. But assessment shouldn't be weaponized (though, sadly, in some corners of academia, it is). Exams are just one tool in our assessment toolbox. And like a hammer, exams can build what we want when used well, but break things when used for the wrong purposes.

I thought about this a lot last year, while doing the research that led to my course redesign. I realized that I hadn't thought very intentionally about one of the course's major components, at least in terms of how students' grades were determined. My exams had remained essentially unchanged since I began teaching — a blend of short-answer and essay questions, with the occasional addition of a multiple-choice section to assess familiarity with basic course content.

Why did my exams look the way they did? Because that's the way they looked in the courses that I was a TA for in graduate school, that's why.

Even with years of teaching experience since then, there were still areas of my pedagogy that remained as they had always been — unexamined and essentially running on autopilot. My exams were one of those areas. I don't think that's unusual. As academics, we spend a lot of time talking about things like classroom-instruction techniques, designing effective research assignments, teaching information literacy, and using digital tools — the list goes on and on. Yet exams seem to be something we take for granted. And then we end up testing as we were tested, in ways that may or may not align with the actual goals we have for the course.

So what are exams for? Our courses have outcomes — things we tell students (along with parents, accreditors, and other external audiences) that they will learn, or accomplish, or be able to do as a result of successfully completing our classes. Assessment is simply how we prove they did so, and exams are one component of that assessment.

It's not enough to say, for example, "my students have an understanding of the people, events, ideas, and processes that shaped the history of the United States from 1789 to 1898." I have to provide evidence that they, indeed, acquired and continue to possess that understanding. Just as we ask our students to deploy well-chosen and appropriate evidence to support their claims in a research paper, we have to model the same types of evidence-based practice in our courses.

Examinations can be — and for many courses are — a significant portion of the evidence that we use to demonstrate student learning. That means our exams ought to have learning outcomes attached, and those outcomes ought to align with our course's learning outcomes. Like a set of Russian nesting dolls, our individual assignments, exams, papers, and other course activities all ought to have outcomes that connect them with the larger goals of the course.

In my own case, I had those alignments clearly articulated and intentionally built into my other course assignments, but not for my exams. Without paying attention to how tests might be doing the work of assessing students’ progress toward my course goals, the best I could do if someone asked me why I gave exams would have been to stammer something along the lines of “well, I want to see if they've learned anything.” But I would not have known how to use those exams to prove whether my students had done that.

I went into the process of rethinking my exams believing that it would be a matter of format. Multiple choice? Essay questions? Take-home test? In-class? However, I quickly realized that intent and outcomes needed to come first.

Now when I design an exam, it has its own set of learning outcomes attached. From those outcomes flows the decision-making process regarding the exam's specific format. Am I trying to gauge whether students understand and remember specific concepts, people, and events? Then multiple-choice, fill-in-the-blank, or matching questions may be the most effective format. Am I trying to go beyond basic content literacy to assess higher-order skills like evaluation and synthesis? In that case, subjective questions — particularly essay questions — would be the appropriate choice.

Most of us are familiar with Bloom's Taxonomy. I've found its articulation of different levels of student activity to be a useful guide for how I conceive of my exam's particular learning outcomes. But I've also had to be careful, as it is all too easy to associate one type of question format with a particular level of the taxonomy. It is possible, for example, to write effective multiple-choice questions that get at the parts of Bloom above the understand-and-remember base.

Some exams, however, need to do work that goes beyond the specific course we're teaching. A nursing course that seeks to prepare students for their board exams probably ought to use tests that mirror the board exams. An instructor teaching a senior-level accounting course might choose to build exam questions that help students prepare for their licensure requirements. Even in such cases, though, thinking intentionally about the exam's purpose helps to clarify its structure and format.

That's the key to writing effective exams: discernment. What are my goals for this exam? What am I asking it to assess? How can I ensure that it gets me the right materials with which to do that assessment accurately and well?

Of course, as I realized in doing the reading and research for this process, discernment is the key to effective course design as well. It was humbling to realize that — despite my assiduous efforts over the years to design effective assignments and other course activities — I'd basically left my examinations untouched.

As it turned out, my exams for this particular class aligned only partially with the overall course outcomes, and any alignment that occurred was in spite of, certainly not because of, the amount of attention I had paid to their design. I ended up making significant revisions to the exams. I know now that my mistake was in thinking that exams, by their very nature, were effective tools for assessment. Not so. Only through the same type of process we use for our courses — defining outcomes and then aligning material and assignments with them — can exams perform the type of work we want them to do.

When exams are in alignment with course objectives, we can tell the story of our students' learning more effectively and meaningfully. That alone makes the process of rethinking and, if necessary, rebuilding our examinations eminently worthwhile.

 
