Kevin Gannon

Director of the Center for Excellence in Teaching & Learning and Professor of History at Grand View University

Stuck in the Assessment Swamp?

Image by finchlake2000, Creative Commons

As spring semester winds down on college and university campuses across the country, faculty thoughts often turn to what we’re doing over the summer — research, course redesign, family vacations, recharging, perhaps teaching a course or two. But then academic reality rears its head and our thoughts are forced from their Summer Happy Place to somewhere far more mundane: The Assessment Mire.

If where you teach is anything like my university, in addition to the assessment work we do for our own courses (grading piles of student essays, projects, and tests), there is often a layer of institutional assessment on top of that. We use various assignments to assess the outcomes in our institution’s core curriculum, for example, and then we aggregate the data to see how students across the university are doing with the core’s various dimensions.

I’ve had other experiences with institutional assessment, though, that are less — how shall I put it — organized. I’ve filled in sections of prefabricated curriculum maps to show where my courses fit into any of the 57 (I kid you not: 57!) learning outcomes in the college curriculum. I’ve used alternate rubrics on my final exams to report findings to someone who promptly reported them to someone else, and so on up the ladder, in a process that I can only assume terminated with the data being boxed up in some warehouse and hidden from human sight, like the final scene of Raiders of the Lost Ark.

It’s little wonder that “assessment” is one of those words that make faculty break out in hives. It’s all too often a top-down process with puzzling mandates and obscure language inflicted on — rather than decided with or by — faculty.

We tend to see it in somewhat the same light as dental surgery: Someone is telling us that it’s good for us, but we don’t understand what exactly is happening. We just know it’s taking too long and it hurts. I’m convinced that I could make T-shirts with the slogan “We Put the ‘Ass’ in Assessment” and sell hundreds at any academic conference.

This is a problem.

While it might be fun (and easy) to engage in a bit of gallows humor about assessment, the reality is that we faculty have largely ceded one of our most important tools to those who use it poorly and for the wrong purposes. Too often, assessment is seized by administrators focused exclusively upon discrete, quantifiably measurable outcomes: job-placement figures, time-to-degree stats, CPA exam results, drop-fail-withdrawal rates — and faculty then follow that lead. Part of the reason is institutional culture, to be sure. But those types of measurements are seductively easy for faculty to cut-and-paste into bullet-pointed executive summaries, too. And then the cycle repeats: We continue to use quantifiable outcomes to tell the story of processes that are not fully reflected in those outcomes, and then bemoan the fact that “bean-counters” are making decisions about our resources that fail to account for the complexity of higher education.

Because we’ve centered so much of our actual assessment practice around the fetish of outcomes, we’ve forgotten that the really important part of learning is the process that leads to those outcomes.

We know that our students’ repeated superficial reading of a textbook doesn’t promote understanding of the material so much as it does an illusion of mastery and a skill at rote short-term recall. We don’t realize, though, that the same principle applies to assessment. “Achievement” of some narrowly defined “outcome” — purportedly demonstrated by a series of disjointed snapshots — is not a demonstration of whether students have meaningfully learned over time.

We talk a great game in higher education. We tell students (and their families) that we will bring them into the scholarly conversation, that they will build a foundation for lifelong learning, that hitherto-unknown intellectual landscapes will open before them. But then we try to assess whether they have learned something by measuring things like “Can they write a 15-page research paper that employs correct APA format?” That creates a huge disconnect between what we say college-level learning is and how we actually attempt to demonstrate that it has occurred.

If we peel back the admin-speak that so often accompanies conversations about it, we discover that assessment is simply telling the story of whether, and how, our students are learning. If we don’t tell our story well, there are plenty of others who will tell it for us. That’s why we’ve seen, for example, the emergence of narratives about higher education proclaiming that college is impractical, that the humanities are a waste of time, that faculty don’t teach, or that colleges and universities are centers of indoctrination and not education.

Those of us working in higher education know these narratives are pernicious and false. But we don’t always do a good job of sharing our story with external constituencies who — lacking an effective counterpoint — have accepted those narratives. I would argue that many of the problems we face in higher education can actually be dealt with effectively if faculty members reclaim assessment and do it diligently and well.

How would that work in practice?

First and foremost, I’d suggest we focus on process — as an outcome in itself. Yes, there may be a “right” answer to a particular question, or a particular proficiency we want our students to develop, but we also want to measure learning, not rote memorization. Assessment should accurately reflect that learning itself is a process. What is the process that students have used to understand, remember, and apply knowledge? How are students evaluating, synthesizing, and connecting what they’ve learned?

That is basic Bloom’s taxonomy stuff, but we need to make sure our assessment looks at the entire pyramid, not just its first two levels.

Let me offer an example: One of the assessments I use in my history courses is an exercise in which students have to respond to an essentially unanswerable question. Their answer is not what I’m assessing — there is no “right” or “wrong” reply to a “what-if” question. What I am assessing is the process they used to create their answer. Are they looking at evidence and using it to buttress the claims their scenarios make? How are they grappling with weighty matters like causation and contingency? How well are they able to articulate a complex, evidence-based argument? Did they “think like historians” (i.e., critically examine multiple viewpoints and weigh arguments and evidence against one another) as they performed the task?

There’s no single specific outcome I’m looking for, because the process itself is the outcome. Between that assignment and a brief reflective paper they write about doing it, I’m able to thoroughly assess things that really matter in my courses: critical thinking, information fluency, and understanding of the complexities of contingency and causation.

What if we took a similar approach to assessment on the programmatic and institutional levels? It can be done, and the results can be powerful. If we want to stop counting beans and writing bullet points and actually measure student learning to accurately tell its story, we should aim for the following:

  • Empower each department to describe its own student outcomes and how it plans to measure them. Some programs might have external outcomes as part of the mix — accreditation standards, for example — but allowing for organic, grassroots definition of student outcomes leads to more accurate and authentic assessment practices. If individual departments or units are part of a broad institutional initiative that needs to be assessed (like a core curriculum), then all of them ought to have a hand in developing the assessment plan. Support those departments and units as they follow their own assessment plans and collect data, even if that data is tentative and incomplete at first.
  • Stop talking about “deliverables” and focus on metacognition, disciplinary processes, and habits of mind. We’re assessing student learning, not UPS.
  • Articulate the processes that are foundational either to a particular discipline or to college-level learning in general. Create multiple opportunities for students to work through those processes, multiple ways for them to demonstrate their progress, and multiple opportunities for assessment. Create a culture where assessment is seen as a dialogue, drawing on a continuum of experiences rather than a set of disjointed data points. With data, context is everything. Meaningful data has several points of reference by which we can tell the story of change, of growth over time.
  • Make assessment a conversation rather than a product. Don’t just collect annual assessment reports and dump them in some bureaucratic black hole. Share data between departments. Look for larger trends. Find common insights. Use them to ask better questions.
  • Finally, we should always remember that learning is a process, not a destination. If our students ever “finish” learning, then we aren’t doing college right. Holding to a narrowly defined set of quantitative measures is an inadequate way to assess what we are actually doing. If we persist in that habit, we deprive ourselves of the most effective means of conveying the value and purpose of higher education to people who may not be well-disposed toward that message.

Assessment is our story. It’s the faculty telling the world — whether that means our campus administration, our students’ families, our field’s accreditors, or the legion of critics who charge us with malpractice — that we are indeed doing what we say we’ll do. It’s a way for us to show that we’re building the habits of mind and tools for informed citizenship that our students need. Higher education at its best — collaborative, communal, and meaningfully supported and valued — absolutely depends upon us making sure we do that story justice.
