Tricia Serio

Professor and Department Head at The University of Arizona

‘Radical Candor’ and Faculty Annual Reviews


Image: children writing a letter (via German Federal Archives)

More and more companies have been eliminating the dreaded annual ritual of employee performance reviews and ratings. The private sector isn't abandoning feedback altogether — rather, it is moving to replace the vague, once-a-year variety with more frequent, project-based evaluations, focused on building employee skills.

In academe, there’s still a place for annual performance reviews. Rather than abandon them completely, can we transform the annual-review process to make it more useful to faculty, departments, and institutions?

Academia has long embraced peer review by external experts. Yet we are usually reluctant to solicit — or offer — frequent feedback within our own departments. Instead, we focus on evaluating our own or someone else’s long-term prospects for promotion and tenure. We sit in our offices, surrounded by potential sources of advice right down the hall, and yet we don’t systematically turn to each other for guidance. It’s a missed opportunity for the sort of regular career coaching that many professionals actually pay for in the private sector.

I took up this issue as chair (until recently) of a biology department. Like most departments, we conducted annual performance reviews. That usually meant listing what we’d done each year rather than analyzing how well we’d done it — i.e., noting that we taught a particular course, not how well we taught it or how we could do it better next time. Our annual-review process ensured that everything got done, but it didn’t provide much opportunity to recognize, support, or promote job performance that advanced our mission.

Rather than just counting beans, shouldn’t we be giving and receiving feedback that would promote our success? Seems simple, doesn’t it?

I thought so, too, until I initiated an effort in my department to revise the criteria for annual reviews to incorporate these ideas.

Almost immediately, we ran into a significant hurdle: the commonly held faculty belief that straightforward feedback is incompatible with collegiality. To be sure, feedback perceived as negative can be demoralizing for anyone. But if our notion of collegiality keeps us from offering or receiving valuable advice, perhaps it’s time to redefine what collegiality means.

The concept of “radical candor” — the subject of Kim Scott’s new book of the same name, published this year — provided a framework for changing that perspective in our department. The idea behind “radical candor” is to link high regard for one another with the courage to give honest feedback that helps people develop professionally.

I won't say that we've completely reached this level of enlightenment in my department. But we've begun to pivot: Our annual reviews aren't meant to punish poor performance but rather to channel our collective wisdom into a peer-coaching community.

Through an extended and iterative process of drafting, reviewing, and revising our assessment criteria, we were forced not only to acknowledge and reconsider our own individual biases, but also to clearly articulate the values that we embrace collectively as a department. We waded through the complex, diverse, and imperfect metrics for evaluating faculty work. Then we built consensus for review criteria by finding ways to accommodate diverse perspectives.

With the benefit of hindsight, I offer five key lessons that propelled us forward:

  • Make a rubric — but a flexible one. Opinions, experience, and judgment are important components of career coaching, but they are also subject to bias. Introducing structure into evaluations reduces bias but runs the risk of overlooking unique aspects of our work and of unfairly emphasizing differences among fields, such as citation frequencies. To balance those advantages and disadvantages, we developed a rubric based on ideal examples and tethered to the long-term goals of tenure and promotion. We then incorporated narratives into the review process so that faculty can explain how work that departs from the ideal is of equivalent value. 
  • Allow maximal overlap between individual and collective aspirations. Faculty often consider themselves to be independent contractors within their college or university, but our activities also affect the success of our institutions. To succeed, we must consider both our individual and collective aspirations and align our activities to support both. 
  • Be true to your values. Quality is a subjective and complex characteristic, not easily assessed by objective metrics. We overcame that obstacle in two ways: (1) We abandoned our search for a single assessment and ultimately embraced combinations of metrics to give a more holistic picture of quality. For example, student evaluations, peer reviews, and measures of learning gains provide complementary information, so we used them all to assess teaching quality; and (2) when available metrics — such as those used to assess publication quality — left us wanting, we developed our own. We each sorted the journals that had published our work, and those that had declined it, into five tiers, self-reporting our perceptions of quality in our subfields. We then combined the lists to create a ranking that will be used to evaluate the quality of future publications. 
  • Focus not only on outcomes but also on future success. Annual performance reviews have goals that are often perceived as competing: On the one hand, an annual review seeks to encourage by setting goals for your future success; on the other hand, it may be perceived as negative in assessing where you fell short in the past year or where you need to develop some skill. To circumvent this complication, we included outcomes as well as the activities that our collective wisdom identified as necessary to achieve the goals set in our assessment rubric. Have low teaching evaluations one year? Describe how you acted on the feedback from your peer-teaching review. Find yourself with a gap in funding? Show how you have modified your publication and application submission rates. Resources are still allocated through this process, but both outcomes and progress toward them — a cornerstone of successful faculty development programs — are assessed. 
  • Anchor yourself in reality. Once we identified useful assessments, we used them to analyze current faculty performance. With that data in hand, we set separate criteria for “meets expectations” in research, teaching, and service, and then adjusted expectations for ratings above and below that level. If your department is content with current performance, the average should reflect “meets expectations” or a higher rating. If not, shift the average to a lower rating, but keep in mind that this process might need to be incremental and iterative to keep the moving bar within reach of reasonable annual adjustments.

Have we come up with a perfect plan?

Absolutely not. Our rubric is meant to be a living document that is revisited and revised as needed. But the process of debating and developing it has already spurred useful conversations about what we expect of each other and of ourselves. It may not be radical yet, but I hope that candor becomes our new collegiality.

