
Transform 2026: Performance Reimagined

The Annual Review Is Dead. Now What?

Speaker after speaker at Transform 2026 agreed on one hard truth: the problem with performance management isn't the process. It's the question the process was built to answer. We've been asking the wrong one all along.

Beat Report • Performance Management • Transform 2026 Conference, April 2026 • Based on 74 sessions, 193 speakers


Here is the finding nobody wanted to say out loud: your performance data looks backward, so every decision it drives is already too late. You run a calibration in December for work done in January. You write a review for someone who has already decided to leave. You name a promotion candidate after the budget is closed.

Speakers on the Performance Reimagined track called this "the review paradox." The system built to improve performance can't change it in real time. In 2026, AI is speeding up every other business cycle. That lag is no longer okay.

"The thing of it is, it was always about like, you've — the decision's already made and you have the data versus like, well, how can I impact that at this point? So we wanted to make the shift from how do we actually make decisions at the moment when we're making the decision, before the decision's made."

Jeff Batahan • From Insights to Action: How Tinuiti is Powering a High-Performance Culture Through Decision Quality & Transparency

This is the central tension in performance management right now. Better tools and richer data are running through the same broken system. Transform raised the real question: does speeding up a broken process count as change?

The Invisible Work Problem: AI Automates What You Measure, Ignores What Matters

The sharpest session on the track focused on work that never shows up in any dashboard.

Speakers in "Rewarding the Work That Doesn't Show Up in Dashboards" named a concept that landed hard in the room: silent work. This means mentoring a struggling peer. Serving as the team's institutional memory during a transition. Holding culture together through a reorg. None of it appears in OKRs or performance ratings. As AI takes over structured, measurable tasks (tickets, reports, deliverables), silent work grows more valuable, not less.

Most performance systems don't just fail to measure silent work. They punish it. They reward people who focus on what's visible.

"I don't want to do like what happened last year. I want to know right now who are my top-performing people. I don't want to make sure — I want to make sure that all worthy people are ready for promotions."

Tanaya Devi • From Insights to Action: How Tinuiti is Powering a High-Performance Culture Through Decision Quality & Transparency

The demand is real-time visibility. But real-time visibility built on the wrong signals speeds up bad decisions. The voices pushing hardest at Transform kept returning to a simpler question: what are we actually trying to measure?

Recognition Isn't a Soft Program. It's Your Most Honest Performance Signal.

The session "What if Your People Data Told a Story — Not Just a Score?" made a claim most HR leaders aren't ready for. Everyday recognition data is more current, more behavior-rich, and more culturally accurate than any annual engagement survey or performance rating. Most organizations sit on it without using it.

Recognition only works as a performance signal when it's genuine. Several sessions flagged the same failure: organizations are automating the very thing that makes recognition meaningful.

"We're holding pretty firm that we're not going to allow AI to write the recognition messages and really insist on those remaining personal. Because while the technology is important, if we lose the personalization, we'll undermine the impact of what we're trying to do with recognition."

Tara Stavin • Human-Centric Design in an AI World: Driving Performance Through Experience and Skill-Based Design

Another speaker described what recognition looks like when it actually works:

"The recognition was more meaningful when it came from somebody you worked with every day who knew what you did all day than from a senior leader who was so far removed that they didn't know you at all."

Human-Centric Design in an AI World: Driving Performance Through Experience and Skill-Based Design

Proximity beats hierarchy. Most recognition programs get this wrong. The biggest title on the award doesn't mean the most to the person receiving it.

There's also an operational problem hiding in plain sight. Recognition programs with manager approval steps and routing workflows don't fail because managers don't care. They fail because every approval layer adds friction. That friction compounds until people stop trying.

"Eliminate barriers to recognition. We have so much administrative burden, and our managers and our leaders are so busy, that if there are approval layers and hoops to jump through in order to deliver a simple recognition, they just won't do it."

Human-Centric Design in an AI World: Driving Performance Through Experience and Skill-Based Design

Bias Audits Are Not a Checkbox. (Yours Probably Is.)

The most technical conversation at Transform wasn't about AI features. It was about what a real performance bias audit requires. Most organizations are doing it wrong.

Tinuiti's people analytics team laid out their standard. It's much harder than what most HR leaders ask their vendors to prove. They called it out-comparability.

"What we do in Sigma Squared is a much stricter thing. What we do is called out-comparability. So what does that mean? That means the same score should predict the same exact thing across different demographic groups."

Tanaya Devi • From Insights to Action: How Tinuiti is Powering a High-Performance Culture Through Decision Quality & Transparency

Most "bias audits" check whether scores spread evenly across demographic groups. Out-comparability checks whether a score means the same thing for people in different groups: whether it predicts the same outcomes. These are completely different questions. The gap between them is where systemic performance inequity hides.
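The distinction can be made concrete. Below is a minimal sketch, in Python with hypothetical data, of the difference between the two checks: it fits a simple outcome-on-score line for each demographic group and compares what the same reference score predicts in each. The linear model, variable names, and data are illustrative assumptions, not Tinuiti's actual method.

```python
import numpy as np

def out_comparability_gap(scores, outcomes, groups, ref_score=None):
    """Fit outcome = a + b * score separately per group, then compare
    what each group's fit predicts at a common reference score.
    A large gap means the same score predicts different outcomes in
    different groups, even when the score distributions look identical."""
    scores = np.asarray(scores, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    groups = np.asarray(groups)
    if ref_score is None:
        ref_score = np.median(scores)
    preds = {}
    for g in np.unique(groups):
        mask = groups == g
        b, a = np.polyfit(scores[mask], outcomes[mask], 1)  # slope, intercept
        preds[g] = a + b * ref_score
    return preds, max(preds.values()) - min(preds.values())

# Hypothetical data: identical score distributions in groups A and B,
# but the same score corresponds to a higher later outcome in group B.
scores = np.tile(np.arange(1.0, 6.0), 2)
groups = np.array(["A"] * 5 + ["B"] * 5)
outcomes = np.where(groups == "A", 0.5 * scores, 0.5 * scores + 1.0)

preds, gap = out_comparability_gap(scores, outcomes, groups)
```

In this synthetic example a distribution-only audit reports no disparity, because both groups have exactly the same scores. The out-comparability check surfaces a gap of 1.0: a given score systematically under-predicts outcomes for one group, which is precisely where the inequity hides.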

This connects to a broader argument made in the session on Black women leaders leaving the workforce. That talk argued that bad performance management pushes talented leaders from underrepresented groups out the door. The result is a measurable creative and economic loss, not just an equity problem.

Psychological Safety Is Not Equally Distributed. Your Performance System Assumes It Is.

One of the sharpest moments at Transform came from Angela Briggs Page in "The Truth About Workplace Silence: Why Speaking Up Isn't Safe for Everyone." Most people wouldn't connect it to performance management. They should.

"Power has a cost. And the more positional power you have, the closer to power you are, the lower your costs for honesty. And that's just the truth. Identity adds a tax to that."

Angela Briggs Page • The Truth About Workplace Silence: Why Speaking Up Isn't Safe for Everyone

Every continuous feedback system and every real-time pulse check assumes speaking up is safe. It isn't, not for everyone. Systems that treat psychological safety as an on/off switch miss how unevenly it spreads across identity, power, and proximity to leadership.

"I have been Black all my life. And I have spent so much of my career managing the comfort of people regarding my Blackness before I even opened my mouth."

Angela Briggs Page • The Truth About Workplace Silence: Why Speaking Up Isn't Safe for Everyone

A performance system that rewards candor without asking who pays for it will fail. It will systematically disadvantage the people already carrying the heaviest load.

AI in Performance Management: Useful for Consistency, Not for Decisions

The clearest statement on AI's role in talent decisions came from Tinuiti's session:

"AI isn't going to make the decisions in the future, especially talent decisions. These are very personal and very important to your employees. They can help with consistency. They can help you with speed. But when it comes to decision-making, this is what humans — us — do."

Jeff Batahan • From Insights to Action: How Tinuiti is Powering a High-Performance Culture Through Decision Quality & Transparency

The agreement across sessions isn't anti-AI. It's anti-abdication. AI genuinely helps reduce manager variability in how data gets read. It catches calibration drift before it grows. It surfaces signals buried in unstructured data. What it can't do is carry accountability. Many organizations right now are putting AI on top of broken processes and calling the speed-up transformation. Multiple speakers pushed back on this directly.

"The real question is, are we developing human systems around the technology and are we designing it in the right way?"

Human-Centric Design in an AI World: Driving Performance Through Experience and Skill-Based Design

Research from "Building a Culture of Performance: Insights from 1,800+ Organizations" confirmed the pattern. The organizations with the best long-term performance weren't the ones with the most advanced AI. They were the ones that treated engagement and performance as one integrated system, not two separate tracks.


What to Do Monday

  1. Audit your performance data for directionality. Pull the last three calibration cycles. Ask: at what point could this data have actually changed a decision? If the answer is "after the decision was already made," you have a backward-looking system. That's your redesign brief.
  2. Run a real bias audit, not a distribution check. Ask your vendor or internal analytics team one question: does a given performance score predict the same outcomes across demographic groups? If they can't answer that, you're not auditing for bias. You're checking a box.
  3. Cut at least one approval layer from your recognition program. Find the step in your recognition workflow that exists for compliance reasons but adds no signal value. Remove it. Track participation rates for 60 days.
  4. Map your "silent work": the contributions your system can't see. Run a short manager workshop. Ask: what behaviors drive team performance that never appear in a goal-tracking system? Start building language around these. If you can't name invisible work, you can't reward it.
  5. Before adding AI to your performance stack, clean what you have. The most data-savvy speakers at Transform gave the same warning: putting AI on fragmented, inconsistent performance data doesn't fix the fragmentation. It speeds it up. Fix the data model first.