
February 21, 2026
Read Time - 4 minutes
"In the absence of clearly-defined goals, we become strangely loyal to performing daily trivia."
~ Robert Heinlein
It's performance review season, and last week I was on the phone with a close friend who is in her second year as department chair. She was venting about the annual ritual we all know too well.
"I just finished my first week of reviews," she said. "And I kept thinking: why are we doing this?"
She wasn't being cynical—it was more like genuine confusion. "We tell people it's for merit increases, but there's no merit pool. The reviews are on the calendar year instead of the academic year, so the timing makes no sense. And what am I even evaluating? People just list all the 'stuff' they did. Activities. Tasks. I read through seven pages about committees and course preparations, and I have no idea if this person is actually growing as a professional."
She paused. "Mostly, I feel like I'm making people prove themselves using criteria nobody can actually define."
According to Gallup, only 14% of employees strongly agree their performance reviews inspire them to improve. In higher education, where we pride ourselves on developmental mentorship with students, our faculty and staff reviews consistently fall flat.
We've confused documentation with development. And we're using evaluation tools so vague that nobody knows what success actually looks like.
Leadership Takeaway
Most academic performance reviews fail for two reasons: they're backward-looking compliance exercises rather than forward-looking development conversations, and they use evaluation criteria so general that "meets expectations" could mean almost anything.
Here's what happens: We inherit broken review templates. We check boxes on activities without defining what excellence actually looks like, and the result feels more like compliance theater than genuine evaluation.
Academic culture values autonomy and expertise—which makes generic, unclear feedback feel especially insulting. Faculty and staff often have murky growth paths beyond "keep doing what you're doing." And we model terrible feedback culture while expecting our people to give students transformative, criterion-referenced feedback.
Performance reviews should do two things well: help people understand where they stand against clear standards, and chart a development path that builds their capacity for work they want to grow into. Most reviews fail at both.
Practical Guidance
Here's what effective reviews actually do:
1. Make the Goalposts Visible
Someone reads "meets expectations" on their review and wonders: does that mean I'm doing fine, or barely adequate? What would "exceeds" actually require?
If your rubric says "demonstrates scholarly productivity" or "provides excellent service" without defining what those mean at each level, you don't have standards—you have Rorschach tests.
What to do: Define each rating in observable behaviors and outcomes.
— Faculty research (Assistant Professor): Exceeds = 2+ publications in competitive journals, active grants, national conference presentations. Meets = 1-2 publications, submitting grants, clear tenure progress.
— Staff program coordination: Exceeds = proactive process improvements, minimal supervision needed, mentors colleagues. Meets = completes projects on time, coordinates across departments, responds promptly.
Some leaders worry about making expectations too prescriptive or mechanistic—reducing complex professional work to a checklist. But ratings without clear goalposts feel subjective, which undermines the entire exercise. When people don't know what "meets expectations" actually requires, every rating feels arbitrary. The goal is to make your expectations transparent so your people can direct their energy strategically instead of guessing what you value.
2. Set Goals Collaboratively
Most reviews evaluate against goals people never agreed to. You're measuring one thing; they thought success meant something else entirely.
What to do: Co-create goals. Share institutional priorities. Ask what would make this a successful year for them. Negotiate 3-5 clear goals that serve both needs.
Critical: Don't just list tasks. Identify goals that require growth.
— Weak: "Serve on curriculum committee"Strong: "Lead curriculum revision, developing skills in facilitating difficult faculty conversations"
— Weak: "Coordinate student event programming"Strong: "Redesign student leadership development series, building skills in assessment and program evaluation"
If your planning document only captures activities, your review will only measure completion. Development means they're building new capabilities, not just doing their job.
3. Make Feedback Continuous, Not Annual
If someone hears about a performance concern in their annual review that you noticed months earlier, you haven't been leading—you've been documenting. But continuous feedback isn't just about correcting problems—it's about reinforcing what's working too.
What to do: Build feedback into your regular rhythm:
When someone facilitates a meeting well: "You showed great leadership redirecting that conversation when it got heated."
After a challenging parent interaction: "Can I offer some coaching on how that went?"
When someone handles a crisis effectively: "You managed that scholarship appeal brilliantly. Let's talk about what made that work so we can apply it elsewhere."
Regular check-ins on goals and obstacles, not just annual summits
Google's Project Oxygen found regular coaching conversations—not annual reviews—were the #1 differentiator of effective managers. When feedback is continuous, annual reviews become summaries of ongoing conversations, not ambushes. And people know what behaviors to keep doing, not just what to fix.
4. Start with Self-Assessment, Then Discuss Gaps
The most revealing part of any review is the gap between how someone sees their performance and how you see it.
What to do: Before your meeting, ask them to self-assess against the goals you set together. In the meeting, start there: "Tell me how you think the year went."
What gaps reveal:
They rated themselves lower → coaching opportunity about imposter syndrome or lack of awareness of their impact
They rated themselves higher → clarity problem about expectations (back to Section 1)
You're aligned → confirmation your feedback has been clear all year
Self-assessment shifts the conversation from "here's my judgment" to "let's discuss where we see things differently."
Bottom Line
My friend's frustration revealed something important: how easily performance reviews turn into performative exercises. List your activities. Check the boxes. Satisfy HR. The whole system runs on autopilot.
The real test of your performance review process isn't whether your documentation satisfies compliance requirements. It's whether your people know what success looks like, whether they're growing in their roles, and whether they leave the conversation energized about who they're becoming as professionals.
Academic institutions run on the assumption that good people given autonomy will figure it out. But autonomy without clear expectations creates confusion, not empowerment. And annual reviews without ongoing feedback create anxiety, not development.
Your faculty and staff need two things from you: clarity about what success looks like in observable terms, and regular coaching that helps them build new capabilities.
The leaders who get this right don't just retain good people—they develop people who become exceptional. And those people remember who invested in their growth.
Try This Before Friday:
Pull out your performance review template. Find one evaluation criterion that's currently vague—something like "scholarly productivity" or "effective service."
Draft what "exceeds expectations," "meets expectations," and "needs improvement" actually look like in observable behaviors for your specific context. Make it concrete enough that someone could self-assess against it.
Then share your draft with a few faculty or staff members and ask for input: "Does this help you understand what I'm looking for? What would you change to make it clearer?"
Revise based on their feedback. Put it into effect for the coming academic year.
Thanks for reading!
See you next week.

