
The EU AI Act Clock Runs Out in 107 Days. Is Your Hiring AI Auditable?

HireQwik · April 18, 2026 · 6 min read

August 2, 2026 is 107 days away. On that date, the EU AI Act’s high-risk provisions become enforceable — and if your company uses any AI for recruiting, candidate screening, or ranking, you are in scope.

Fines: up to €15M or 3% of global annual turnover, whichever is higher. Annual third-party bias audits: mandatory. Human oversight on all consequential decisions: required.

This isn’t a future compliance problem. It’s a present one if you haven’t already started your audit trail.

Section 1: What “High-Risk” Classification Actually Means for HR

The EU AI Act categorises AI systems by risk tier. Annex III of the Act lists categories that are automatically classified as high-risk — and employment, worker management, and access to self-employment are explicitly included.

In plain terms: any AI system that influences who gets shortlisted, ranked, scored, or rejected for a job is a high-risk system under the EU AI Act starting August 2, 2026.

This applies to:

  • AI-powered ATS ranking and filtering
  • Automated candidate screening tools (voice, video, text)
  • Any AI that generates a score or classification used in hiring decisions
  • Resume parsing tools that use ML to rank or filter candidates

The geographic scope is broad. If your company is based in India but screens candidates who are EU citizens, or if you serve EU-based clients through GCC operations, you are potentially in scope. Indian IT services firms and GCCs with EU client contracts should be treating this as an active compliance requirement, not a distant regulatory concern.

Section 2: The 4 Compliance Gaps in Most Screening Tools Today

When you audit most AI screening tools against the EU AI Act’s actual requirements, four gaps appear consistently:

Gap 1: Black-box scoring Most AI screening tools produce a score or recommendation without exposing their reasoning. A candidate receives a “No” — but there’s no documented explanation of which criteria drove that decision. The EU AI Act requires documented, explainable decision criteria. Black-box outputs don’t qualify.

Gap 2: No bias testing documentation The Act requires documented evidence of bias testing — across age, gender, ethnicity, disability, and other protected characteristics — before deployment and annually thereafter. Most vendors have internal testing. Few have third-party audited documentation they can share with customers.

Gap 3: No human oversight layer The Act requires that consequential decisions have a human review stage. “AI screening and auto-reject” workflows — where candidates are eliminated before any human sees their file — create direct compliance exposure. There must be a human-in-the-loop decision layer.

Gap 4: Inadequate logging Full audit trails — who was screened, when, with what tool version, scored how, and what happened next — must be maintained and available for regulatory inspection. Many tools don’t retain this data in an accessible, structured format.
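To make "accessible, structured" concrete, here is one way an audit-ready screening event could be stored: an append-only JSON-lines record capturing who was screened, when, with what tool version, scored how, and what happened next. The field names and schema are illustrative assumptions, not a prescribed standard.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class ScreeningAuditRecord:
    """One append-only entry in the screening audit trail (illustrative schema)."""
    candidate_id: str     # who was screened
    screened_at: str      # when: ISO 8601 UTC timestamp
    tool_version: str     # exact model/prompt version used
    rubric_scores: dict   # per-criterion scores, not just a single number
    classification: str   # e.g. "Go" / "No Go"
    human_reviewer: str   # who made the final decision
    final_outcome: str    # what happened next

def log_record(record: ScreeningAuditRecord) -> str:
    """Serialize to one JSON line, suitable for an append-only audit log."""
    return json.dumps(asdict(record), sort_keys=True)

line = log_record(ScreeningAuditRecord(
    candidate_id="cand-0042",
    screened_at=datetime.now(timezone.utc).isoformat(),
    tool_version="screener-v2.3.1",
    rubric_scores={"communication": 4, "role_fit": 3},
    classification="Go",
    human_reviewer="recruiter@example.com",
    final_outcome="advanced_to_interview",
))
```

A log of one such JSON line per screening event can be filtered by candidate ID or tool version on demand, which is exactly the kind of retrieval a regulatory inspection asks for.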

Section 3: Checklist — 7 Questions to Ask Your AI Hiring Vendor

Before August 2, ask every AI screening vendor you work with:

1. Is your system's decision logic documented and explainable per candidate?
   Why it matters: black-box scores fail the transparency requirement.
2. Do you have third-party bias audit reports available?
   Why it matters: self-reported testing doesn't satisfy the annual audit requirement.
3. Does your system include a human decision layer before candidates are rejected?
   Why it matters: auto-reject without human review is non-compliant.
4. What data is logged, and for how long?
   Why it matters: you need full audit trail access on demand.
5. Are your AI prompts or scoring rubrics viewable?
   Why it matters: documented prompt systems are required for risk management.
6. What is your risk management documentation?
   Why it matters: the Act requires documented risk management systems.
7. Have you conducted a Data Protection Impact Assessment (DPIA)?
   Why it matters: required for high-risk AI systems under GDPR and the EU AI Act.

If a vendor can't answer these questions with documentation rather than verbal assurances, that gap is your exposure.

Section 4: Why Transparent Scoring + Human Decision Layer Beats Black-Box

The compliance argument and the product argument converge here: transparent scoring with a human review layer isn’t just legally safer — it’s a better screening product.

Here’s the comparison:

Black-box AI screening:

  • Candidate rejected → no documented reason
  • Recruiter cannot explain the decision to the candidate
  • No audit trail for regulatory inspection
  • Bias testing undocumented or inaccessible
  • Human sees only pre-filtered shortlist

Transparent scoring with human-in-the-loop:

  • Candidate classified → structured transcript + rubric score available
  • Recruiter reviews classification with documented rationale
  • Full audit trail: candidate ID, date, tool version, scores, outcome
  • Bias testing documented and auditable
  • Human makes final shortlist decision from AI recommendation

The second model is compliant. The first is not — and it’s also less useful for recruiters, because they can’t learn from or trust a score with no explanation.

What This Means for Indian GCCs and IT Services

Only 38% of Indian firms that have deployed GenAI in HR report high relevance — meaning most implementations are producing unreliable outputs on top of undocumented processes. That combination is a regulatory liability if EU clients or EU regulatory scope applies to your operations.

Indian GCCs serving European enterprises, and IT services firms with EU delivery centres, are in the highest-exposure bracket. Your EU-based client contracts likely already contain data processing clauses that will extend to AI Act compliance requirements once enforcement begins.

The NASSCOM guidance published earlier this year explicitly flagged this for Indian IT: EU AI Act compliance is not optional for firms in the EU services chain.

US Jurisdictions Are Following

The EU isn’t alone. NYC Local Law 144 requires annual bias audits for any automated employment decision tool used to screen NYC-based candidates. Colorado SB 205 creates similar requirements at the state level. California’s proposed AI hiring regulations are advancing through the rulemaking process.

The direction is clear: AI in hiring is moving toward mandatory transparency, mandatory auditing, and mandatory human oversight globally. The EU AI Act is the most comprehensive framework in force, but it won’t be the only one for long.

The Audit-Ready Architecture

An AI screening system that is audit-ready by design has these properties:

  1. Structured scoring rubric — every candidate scored against the same documented criteria
  2. Explainable classifications — Strong Go / Go / On Hold / No Go with documented reasoning
  3. Human decision UI — recruiter reviews AI recommendation and makes final call
  4. Transcript logging — full conversation stored, retrievable, tied to candidate ID
  5. Documented prompt system — the questions and probing logic are versioned and auditable
  6. Bias testing documentation — available for third-party review

This is not a compliance checklist bolted onto a product after the fact. It’s how a screening system should be built if it’s going to be used for consequential decisions about people’s employment opportunities.

107 days. If you haven’t started your audit trail, start now. See how HireQwik is built for compliance at app.hireqwik.in.

See HireQwik in action

Run a free pilot with your next batch of candidates. Screen up to 100 candidates at no cost.
