
The "Rejection-First" Case Against Auto-Reject: What Mobley v. Workday Teaches Indian HR Teams

HireQwik · April 22, 2026 · 4 min read

“The AI didn’t reject him. The recruiter did — she just looked at a number the AI printed.”

That line should be on a poster in every HR compliance team’s war room in 2026. Because Mobley v. Workday — the age discrimination class action that just had its amended complaint filed on March 28, 2026 — is about to become the case every HR tech buyer cites when asking their vendor a single question: who actually makes the reject decision?

What Happened in Mobley v. Workday

Derek Mobley applied to over 100 positions at companies using Workday’s AI-powered screening tools. He was rejected from all of them. His claim: Workday’s AI systematically discriminated against him on the basis of age (he is over 40), race, and disability — and did so through automated decisions made without meaningful human review.

The class is broad — all applicants aged 40 and over who applied to any role at any company using Workday’s AI screening tools since September 24, 2020. The case is now proceeding nationwide in federal court.

The legal theory isn’t novel. What’s new is the scale and the specific mechanism under attack: automated rejection with no human in the loop.

Why This Is the Story Indian HR Teams Need to Read

You might be thinking: that’s a US lawsuit, we’re in India, it doesn’t apply to us.

Three reasons it does:

1. Indian enterprises with US or EU parent companies are directly in scope. If your company uses a global HR platform to screen candidates for roles at US or EU entities, your screening process is subject to US employment law and EU AI Act provisions.

2. EU AI Act enforcement begins August 2, 2026. The EU formally classifies recruitment AI as “high-risk.” From August 2, employers and vendors using AI in hiring for EU-connected roles must maintain audit trails, conduct bias testing, and ensure human oversight before final decisions. Maximum fines: €15 million or 3% of global annual turnover.

3. The legal theory travels. Indian courts and regulators are watching. The precedent being set in US federal court on “automated rejection without human review” will influence how AI hiring disputes are framed globally.

The Architectural Question Every Vendor Should Answer

The core of the Mobley case isn’t whether AI was used. It’s whether a human made the final call.

This is an architectural question. Some platforms are designed so that the AI is the decision — a score below a threshold means automatic rejection, no human sees it. Other platforms — and HireQwik is explicit about this — are designed so that the AI produces a recommendation and a human makes the decision.

HireQwik’s output is a 3-tier classification:

  • Strong Go — recommended for immediate next round
  • Go / On Hold — HR reviews and decides
  • No Go — AI flags as likely mismatch, HR reviews before any communication goes out

The split from our pilot data: 67% No Go, 31% Go / On Hold, 1.5% Strong Go. HR reviews every tier. No candidate is rejected by an algorithm alone.
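To make the architectural distinction concrete, here is a minimal sketch of a human-in-the-loop gate. The names (`Tier`, `HumanDecision`, `can_send_rejection`) are illustrative, not HireQwik's actual internals; the point is the invariant: no AI tier, including No Go, can trigger a rejection email on its own — a recorded human decision is required first.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Tier(Enum):
    STRONG_GO = "Strong Go"
    ON_HOLD = "Go / On Hold"
    NO_GO = "No Go"

@dataclass
class HumanDecision:
    reviewer: str
    outcome: str          # "advance" or "reject"
    reviewed_at: datetime

def can_send_rejection(tier: Tier, decision: Optional[HumanDecision]) -> bool:
    # Deliberately ignores the AI tier: candidate communication is gated
    # on a recorded human decision, never on the score alone.
    return decision is not None and decision.outcome == "reject"

# Even a No Go tier cannot trigger a rejection by itself.
print(can_send_rejection(Tier.NO_GO, None))  # False
```

An auto-reject architecture is the same function with `decision` removed — which is exactly the design choice the Mobley complaint attacks.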

That’s not a compliance workaround. It’s the product design from day one.

“I’m Looking to Look at 400 in Two Hours”

When Siddarth at HireQwik describes the product’s value proposition, he doesn’t say “the AI will reject 600 candidates.” He says: “I’m looking to look at 400 in two hours.”

400 reviewed by humans, not zero. That’s the difference between AI as a filter that eliminates candidates and AI as a tool that surfaces the right ones for human judgment.

The Mobley case is really about that second scenario being absent: an AI printed a number, a recruiter or system acted on it without review, and now there's a class action.

The 3 Questions to Ask Your Current AI Screening Vendor

If you’re using any AI screening tool — not just Workday — run it through these three questions:

1. At what point, if any, does a human see a candidate before rejection? If the answer is “we send automated reject emails based on score thresholds,” you have the Mobley problem.

2. Can you produce a bias audit on demand? EU AI Act compliance (and increasingly, good practice) requires this. If your vendor can’t generate a demographic analysis of their screening outcomes, that’s a gap.

3. Is there a documented human override process? Even if your system has one, does it get used? Who has the authority to reverse an AI recommendation? Is that logged?
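Question 3 hinges on producible evidence, and the simplest form of it is an append-only log of every human override. The sketch below is hypothetical (the function and field names are not from any specific vendor); it just shows the kind of structured record a vendor should be able to hand you on demand.

```python
import json
from datetime import datetime, timezone

def log_override(candidate_id: str, ai_tier: str, human_outcome: str,
                 reviewer: str, reason: str) -> str:
    """Build one append-only JSON line recording a human reversal of an
    AI recommendation: who decided, what the AI said, and why."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "ai_recommendation": ai_tier,
        "human_outcome": human_outcome,
        "reviewer": reviewer,
        "reason": reason,
    }
    return json.dumps(entry)

# A recruiter advances a candidate the AI flagged as No Go — and it's logged.
print(log_override("cand-0042", "No Go", "advance",
                   "priya.hr", "strong project portfolio"))
```

If your vendor cannot produce records like this, the override process exists only on paper.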

These aren’t hypothetical questions from a compliance checklist. They’re the questions plaintiff attorneys are asking right now.

The HireQwik Design Principle

HireQwik’s position isn’t “we’re not AI.” We are. Fully. But the AI produces a structured narrative — what the candidate said, how they handled follow-up questions, where they showed depth, where they deflected — and a human HR professional reads it and decides.

That narrative is inspectable. That decision is auditable. That process is defensible.

An AI that auto-rejects is a defendant waiting for a deposition. An AI that hands a structured, documented recommendation to a human recruiter is a productivity tool.

2026 is the year that distinction stops being theoretical.


Want to see HireQwik’s 3-tier decision framework in practice? Book a 20-minute walkthrough and we’ll show you exactly what HR sees before any candidate communication goes out.

See HireQwik in action

Run a free pilot with your next batch of candidates. Screen up to 100 candidates at no cost.

Try ROI Calculator