HR's biggest governance question right now is accountability. When AI makes a bad hiring decision, who is responsible? Most companies have no answer.
By 2028, an estimated one in four job applicants will be fraudulent. Not embellished. Fraudulent. Government-backed actors are targeting remote jobs. Professional "ghost workers" run several full-time jobs simultaneously. Deepfake video interviews pass early screenings. This warning came not from a cybersecurity panel but from a talent acquisition session at Transform 2026, which shows how far compliance pressure has moved into core HR work.
Across 54 sessions on compliance topics, with 159 speakers from 348 organizations, one theme was clear: the rules haven't caught up to reality. Companies that wait for regulators to close the gap will get hurt first.
For decades, HR compliance meant checklists. Check the box. Clear the audit. File it away. That model assumed regulations stayed stable and workers behaved predictably. Neither is true in 2026.
Karina Bernacki, in the featured session "Governing AI Before It Governs Us," named the core problem:
"The static checklist itself doesn't work, right? It's very deterministic. You check it, you clear it, you're good to go, which doesn't work when you're dealing with an agent that is built to adapt and learn and continually change."
Karina Bernacki — "Governing AI Before It Governs Us"
An AI agent doesn't stay where you put it. It learns and makes decisions its designers never anticipated, which means governing it requires continuous monitoring. Most HR governance frameworks still rely on periodic audits and static policy documents, neither of which can track a moving target.
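What continuous monitoring can look like in practice, as a minimal sketch: compare the agent's recent decision mix against a baseline window and escalate to a human when the distribution drifts. The decision categories, window sizes, and alert threshold below are illustrative assumptions, not a standard.

```python
import numpy as np

# Population stability index (PSI): a simple measure of how far a
# recent distribution has drifted from a baseline distribution.
def psi(baseline: np.ndarray, recent: np.ndarray, eps: float = 1e-6) -> float:
    b = baseline / baseline.sum() + eps   # baseline proportions
    r = recent / recent.sum() + eps       # recent proportions
    return float(np.sum((r - b) * np.log(r / b)))

# Hypothetical decision counts: advance / hold / reject.
baseline_counts = np.array([700, 250, 50])   # last quarter
recent_counts = np.array([520, 310, 170])    # last week

drift = psi(baseline_counts, recent_counts)
if drift > 0.2:  # common rule-of-thumb alert level, not a regulatory threshold
    print(f"PSI {drift:.2f}: agent behavior has shifted; escalate for human review")
```

A periodic audit would surface a shift like this months later, if at all; a check like this runs on every reporting cycle.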
Marcus Sawyer, on the same panel, pushed the point further. AI systems will soon oversee other AI systems:
"AIs will govern other AIs if you know how they work. So I just want us to move from tools thinking to systems thinking."
Marcus Sawyer — "Governing AI Before It Governs Us"
HR leaders still debating AI's place in job descriptions are about three governance generations behind the real problem.
Ask most HR teams who is accountable when an AI-assisted hiring decision goes wrong. You'll get silence. Or a vague gesture toward "the algorithm." Neither holds up in a legal proceeding.
Bernacki kept returning to this pressure point:
"If we're going to give them authority to make decisions on our behalf, we probably should understand who's accountable for what happens next."
Karina Bernacki — "Governing AI Before It Governs Us"
"Who is the person that is going to be accountable if that decision goes wrong, right?"
Karina Bernacki — "Governing AI Before It Governs Us"
Most companies lack an answer. They deployed AI faster than they built accountability structures. A speaker in the AI literacy session made the stakes plain:
"I mean, I put on there that I think ultimately humans own all decisions, period. And because we have to own the risk. AI is not going to — if the house burns down, AI will still be here, but we won't be."
Melissa Laswell — "AI Literacy Is a Leadership Issue — Not a Training Program"
Regulators don't fine algorithms. They fine organizations, and increasingly, individuals. HR needs accountability structures that name a responsible person before a decision is made, not after something goes wrong.
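One way to build that structure, sketched here with assumed field names rather than any specific governance framework: refuse to execute an AI-assisted decision unless a named human owner is already registered for that decision type.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    decision_type: str      # e.g. "resume_screen", "offer_rescind"
    accountable_owner: str  # a named human, assigned before execution
    model_version: str
    rationale: str
    timestamp: datetime

# Hypothetical registry: decision types mapped to accountable owners.
OWNERS = {"resume_screen": "j.doe@example.com"}

def execute_ai_decision(decision_type: str, model_version: str,
                        rationale: str) -> DecisionRecord:
    owner = OWNERS.get(decision_type)
    if owner is None:
        # No pre-assigned human owner means the decision does not run.
        raise RuntimeError(f"no accountable owner registered for {decision_type}")
    return DecisionRecord(decision_type, owner, model_version, rationale,
                          datetime.now(timezone.utc))
```

The design choice is the refusal path: accountability is assigned before the decision executes, which is exactly the ordering Bernacki is asking for.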
The session "Fraud, Fakes, and the New Security Perimeter in Recruiting" put hard numbers on the table. One in four applicants will be fraudulent by 2028. Speakers documented government-backed actors targeting remote roles. Deepfake video interviews pass early screenings. Professional "multi-jobbers" use AI to hold four or five remote jobs at once. They collect multiple salaries and game every applicant tracking system in their path.
A speaker in the AI-assisted screening session laid out the compliance case for screening all applicants:
"I would've rather sat on like in that chair being questioned by an attorney going, yeah, my recruiters only like screen 2 to 3% of candidates, but this person that we dispositioned that we never even looked at or smelled, versus going, actually, we screened 100% of people, so we actually have data and we can actually tell you why this person wasn't selected to move on."
Tim Sackett — "When Everyone Has AI, What Actually Signals a Good Candidate?"
The alternative leaves 97 to 98% of applicants unreviewed with zero documentation: not a neutral baseline, but a liability with no paper trail.
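The paper trail Sackett describes can be as simple as one structured record per disposition. The field names and reason codes in this sketch are illustrative, not drawn from any applicant tracking system.

```python
import json
from datetime import datetime, timezone

# Append one JSON line per applicant decision, so every screen-out
# carries a documented reason and a reviewer identity.
def log_disposition(path: str, candidate_id: str, stage: str,
                    reason_code: str, reviewed_by: str) -> None:
    record = {
        "candidate_id": candidate_id,
        "stage": stage,              # e.g. "ai_screen", "recruiter_review"
        "reason_code": reason_code,  # e.g. "MISSING_REQUIRED_CERT"
        "reviewed_by": reviewed_by,  # "ai:model-v3" or a recruiter's id
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_disposition("dispositions.jsonl", "cand-0042", "ai_screen",
                "MISSING_REQUIRED_CERT", "ai:model-v3")
```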
Brian Christman, in "What Is a Job Now? Rethinking Work, Purpose & Value in the Age of Algorithmic Tools," named a governance risk that efficiency-first deployments miss:
"AI doesn't care about our reputation with our regulators. AI doesn't care about our personal relationships with customers. So we have to find a way to tune our talent to be thinking about that always and keeping those skills as high as building these new capabilities around AI."
Brian Christman — "What Is a Job Now? Rethinking Work, Purpose & Value in the Age of Algorithmic Tools"
Companies build regulator, union, and customer relationships over years of human judgment calls. An AI optimizing for speed will sacrifice those relationships for small process gains. It will never register what it traded away.
The governance challenge extends beyond stopping wrong decisions to stopping technically correct decisions that are strategically harmful.
Cross-border compliance failures don't stay in HR. They become diplomatic incidents. A speaker in "Growing Beyond Borders: Tactical Approaches to Faster Growth Through Global Hiring" described what happens when global employment goes wrong at scale:
"You have to play the long game. You have to understand when you're saying to a country, we will come in and employ 600 people in this rural place... and then you pull that away, there are repercussions at a government level that are above your CEO and your board of directors that heads of state start calling other heads of state."
Jenny Dearborn — "Growing Beyond Borders: Tactical Approaches to Faster Growth Through Global Hiring"
This is a documented pattern. Companies that treat international expansion as a legal checkbox exercise find out that employment relationships at scale carry geopolitical weight no checkbox captures.
The same session gave a smaller example. One company structured equity differently in the US versus Finland without explaining the difference to Finnish employees. When the company went public and US employees became millionaires, Finnish workers noticed. No compliance paperwork fixes a cultural wound that size.
HR teams often dismiss ISO standards as bureaucratic busywork, but they are the closest thing the profession has to a shared governance language across countries and legal systems. A speaker in the ISO standards session described using AI for compliance research:
"If I have an HR problem... AI is really helpful for that because you can quickly research, figure out what the rules are around the new act, and also pivot to how to move employees around and keep them engaged in terms of information."
Lorelei Carobolante — "ISO International Standards: How HR Leaders Shape the Human + AI Future of Work"
The same speaker was equally direct about AI's limits:
"The AI is not supervising you. You're supervising the AI. It's got to be a meaningful collaborative partnership. You have to engage with the technical intelligence as a thinking partner because it cannot replace your thinking."
Lorelei Carobolante — "ISO International Standards: How HR Leaders Shape the Human + AI Future of Work"
Most companies haven't built this posture into how they work. AI as a research and drafting tool is very different from AI as a compliance authority. That difference matters most when the output is a policy, a worker classification, or a termination rationale.
Jeff Batahan's session on decision quality at Tinuiti kept coming back to explainability. If you can't explain why a data-driven decision was made, you can't defend it legally or sell it internally.
"Objectivity actually builds that trust and credibility. I think it's incredibly important on a few topics. One is, you need to be able to explain it, right?"
Jeff Batahan — "From Insights to Action: How Tinuiti Is Powering a High-Performance Culture Through Decision Quality & Transparency"
The same mistake came up across sessions. Companies run bias audits that check whether a decision affects protected groups at different rates. But they don't ask whether the underlying model predicts equally well across those groups. The first question satisfies legal compliance. The second determines actual fairness. Most vendors can show you the first number. Almost none can show you the second.
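The gap between those two questions is easy to demonstrate. The sketch below, on synthetic data, computes both numbers per group: selection rates (the disparate-impact check most audits run) and AUC (whether the model predicts success equally well). Every name and threshold here is an illustrative assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicant pool: two groups, a true success label, and a
# model score that is deliberately noisier (less informative) for group B.
group = rng.choice(["A", "B"], size=n)
outcome = rng.binomial(1, 0.3, size=n)
noise = np.where(group == "A", 0.8, 1.4)
score = outcome + rng.normal(0.0, noise)

# Advance the top 20% within each group, the way a system tuned to
# pass a disparate-impact check would behave.
selected = np.zeros(n, dtype=bool)
for g in ("A", "B"):
    mask = group == g
    selected[mask] = score[mask] > np.quantile(score[mask], 0.8)

for g in ("A", "B"):
    mask = group == g
    sel_rate = selected[mask].mean()                 # question 1: selection rate
    auc = roc_auc_score(outcome[mask], score[mask])  # question 2: predictive quality
    print(f"group {g}: selection rate {sel_rate:.1%}, AUC {auc:.3f}")
```

Both groups advance at the same rate, so the audit's first question is satisfied. The AUC gap shows the model identifies strong candidates far less reliably in group B, and that is the number almost no vendor reports.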