AI hiring audit trail: the seven questions your legal team will ask
When legal is asked to sign off on an AI screening vendor, the conversation rarely stalls at price. It stalls at one question: show me the audit trail.
It’s a fair question. By May 2026, the legal landscape around AI hiring has shifted enough that “the tool said no” is no longer an acceptable answer. The Mobley v. Workday discrimination case in the United States made the point publicly. The EU AI Act made it a regulatory requirement for any vendor whose tools touch the European market. India has no equivalent statute yet, but every Indian GCC and IT services firm with a US, UK, or EU client is already feeling the export pressure through their customers’ compliance teams.
Below are the seven questions any competent legal review will ask before approving an AI screening tool, and what a defensible answer looks like.
1. What decision did the tool make, and on what evidence?
This is the floor. For every candidate, your audit trail must show three things: the verdict (advance, hold, reject), the score or rationale that produced it, and the underlying interview content the score was derived from.
A tool that ships a verdict without a rationale is not auditable. A tool that ships a rationale without the underlying transcript or recording is not verifiable. You need both. If the vendor’s product is a verdict-only API, walk away — the legal cost of explaining a discrimination complaint without source material is far higher than the implementation savings.
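To make that concrete, here is a minimal sketch of such a record. The field names are illustrative, not any particular vendor’s schema:

```python
from dataclasses import dataclass

@dataclass
class ScreeningRecord:
    candidate_id: str
    verdict: str         # "advance" | "hold" | "reject"
    score: float         # the number that produced the verdict
    rationale: str       # why the score came out the way it did
    transcript_uri: str  # the interview content the score was derived from
    recording_uri: str   # the source material a reviewer can verify against

# Example entry (all values hypothetical)
record = ScreeningRecord(
    candidate_id="cand-0042",
    verdict="reject",
    score=4.5,
    rationale="Answers on system design lacked concrete examples.",
    transcript_uri="s3://screening-archive/cand-0042/transcript.json",
    recording_uri="s3://screening-archive/cand-0042/audio.webm",
)
```

The test is simple: if any one of the three fields at the bottom of that record is missing, the decision cannot be audited, verified, or both.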
2. How long is candidate data retained, and where does it live?
Two sub-questions hide here. The retention window matters because you’ll be asked to produce records months after the rejection — and “we delete after 30 days for cost reasons” is a hard no. The data-residency question matters because Indian DPDP rules and EU GDPR/AI-Act rules disagree on where personal data may sit.
Defensible answer: candidate transcripts and recordings retained for at least a year, with a documented deletion process; primary storage region declared in writing; sub-processors listed.
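Written down as configuration, that answer might look something like the sketch below. The values and field names are examples for illustration, not legal minimums under DPDP or GDPR:

```python
# Illustrative retention and residency declaration for one screening programme.
RETENTION_POLICY = {
    "transcripts_days": 365,      # retained at least a year, then deleted per runbook
    "recordings_days": 365,
    "deletion_process_doc": "https://example.com/policies/deletion-runbook",
    "primary_storage_region": "ap-south-1",  # declared in writing to the client
    "sub_processors": ["Example Cloud Inc.", "Example Transcription Ltd."],
}
```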
3. What did the candidate consent to, and can you prove it?
The audit trail starts before the interview, not after. Every candidate must have seen and accepted (a) that they’re speaking to an AI, (b) what the recording will be used for, (c) how long it will be retained, and (d) whom they can contact to request deletion. The proof should be a timestamp plus the version of the consent document the candidate actually saw, not a checkbox you can re-skin later.
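A sketch of what provable consent looks like as data, with hypothetical field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    candidate_id: str
    consent_doc_version: str   # the exact version of the text the candidate saw
    accepted_at: datetime      # UTC timestamp of acceptance
    disclosures: tuple         # what the candidate was told before accepting

# Example entry (all values hypothetical)
consent = ConsentRecord(
    candidate_id="cand-0042",
    consent_doc_version="consent-v7-2026-03",
    accepted_at=datetime(2026, 5, 4, 9, 15, tzinfo=timezone.utc),
    disclosures=("ai_interviewer", "recording_purpose",
                 "retention_period", "deletion_contact"),
)
```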
4. How can a rejected candidate ask for a human review?
This is increasingly a regulatory question, not just a courtesy one. New York City’s Local Law 144 requires employers to tell candidates how to request an alternative selection process when an automated employment decision tool is used. California’s pending regulations are heading the same direction. India hasn’t passed an equivalent law, but if you serve EU- or US-headquartered clients, their procurement team will ask.
A defensible workflow: if a rejected candidate asks for a human re-screen, you should be able to schedule one within five working days, with the original audit record handed to the human reviewer untouched.
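One way to hold yourself to that window is to compute the deadline from the request date. The sketch below assumes a Monday-to-Friday working week and is illustrative, not a prescribed SLA implementation:

```python
from datetime import date, timedelta

def human_review_due_date(request_date: date, working_days: int = 5) -> date:
    """Latest acceptable date to schedule the human re-screen (skips weekends only)."""
    due, remaining = request_date, working_days
    while remaining > 0:
        due += timedelta(days=1)
        if due.weekday() < 5:  # Monday to Friday
            remaining -= 1
    return due

# The human reviewer receives the original audit record untouched:
# pass a reference to the frozen record, never a re-scored copy.
```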
5. How does the model version affect the decision?
This one catches vendors out. If a candidate is rejected on Tuesday under model v3.2 and the same candidate would have been advanced on Wednesday under model v3.3, your audit trail must record which version made each decision. Otherwise you cannot answer the question “was the rejection consistent with how we scored other candidates in the same drive?” Answering that question is the whole point of an audit trail.
Ask your vendor to show you the model-version field in their decision log. If it isn’t there, they aren’t ready for compliance review yet.
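As an illustration (field names assumed, not any vendor’s actual log format), the version fields are what let you answer the consistency question for a whole drive:

```python
# One decision-log entry per candidate; the model and prompt versions
# let you reconstruct which configuration produced which verdict.
decision_log_entry = {
    "candidate_id": "cand-0042",
    "drive_id": "drive-2026-05-backend",
    "verdict": "reject",
    "model_version": "v3.2",
    "prompt_version": "screening-prompt-14",
    "decided_at": "2026-05-05T11:42:00Z",
}

def consistent_scoring(entries: list[dict]) -> bool:
    """Were all candidates in the same drive scored by the same model version?"""
    return len({e["model_version"] for e in entries}) <= 1
```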
6. Who can see what, internally?
A complete audit trail is also a privacy liability if it’s accessible to the wrong people. Ask: which roles can read transcripts, which can read recordings, who has access to scoring rationales, and is access logged? The right answer is role-based access plus a read-trail — not “anyone with HR-admin can see everything”.
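Sketched as code, with a hypothetical role map, the pattern is: check the role, and log the read either way so the read-trail is complete:

```python
from datetime import datetime, timezone

# Hypothetical role map: which roles may read which artefacts.
ROLE_PERMISSIONS = {
    "recruiter":  {"transcript"},
    "hr_admin":   {"transcript", "rationale"},
    "compliance": {"transcript", "rationale", "recording"},
}

access_log = []  # every read attempt is appended here, forming the read-trail

def read_artifact(role: str, artifact: str, candidate_id: str):
    allowed = artifact in ROLE_PERMISSIONS.get(role, set())
    access_log.append({
        "role": role, "artifact": artifact, "candidate_id": candidate_id,
        "allowed": allowed, "at": datetime.now(timezone.utc).isoformat(),
    })
    if not allowed:
        raise PermissionError(f"{role} may not read {artifact}")
    # ...fetch and return the artifact from storage...
```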
7. What is the explainability standard for rejection?
This is the harder question and the one most vendors fudge. SHRM’s 2025 AI-in-hiring survey found that 88 percent of HR leaders consider AI screening a compliance risk, and the reason most cite is that the rejections feel opaque. “The model gave a 4.5 on communication” is a number, not an explanation.
A defensible explainability layer combines three things: (a) which dimensions were scored (communication, role fit, evidence-of-claims); (b) what specific moments in the conversation drove each score; (c) which dimension was the binding constraint. If a vendor cannot answer (b) — pointing to a moment, not just a number — they cannot defend a rejection in front of a complainant.
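As a sketch, with assumed dimension names and conversation timestamps, a payload that satisfies all three might look like this:

```python
# Illustrative explainability payload: each dimension carries the moments that
# drove its score, and the binding constraint is named explicitly.
explanation = {
    "dimensions": {
        "communication": {
            "score": 4.5,
            "moments": ["07:12 rambling answer on the handover process"],
        },
        "role_fit": {
            "score": 6.0,
            "moments": ["02:40 relevant payments-domain project"],
        },
        "evidence_of_claims": {
            "score": 3.5,
            "moments": ["11:05 could not expand on claimed Kafka work"],
        },
    },
    "binding_constraint": "evidence_of_claims",  # the dimension that produced the reject
}
```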
What the audit trail actually looks like in practice
The way we built this at HireQwik is deliberately verbose. Every interview produces a structured record: the verdict, the per-dimension scores, the rationale phrases the LLM emitted, the audio-derived signals (pace, fluency, hesitation), the model version, the prompt version, the candidate consent timestamp, and the full transcript and recording. This is what an HR director hands their legal team — not a number on a dashboard.
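In rough shape, with field names simplified for illustration rather than our literal schema, that record looks something like this:

```python
# One audit record per interview; every value below is a made-up example.
audit_record = {
    "verdict": "reject",
    "dimension_scores": {"communication": 4.5, "role_fit": 6.0, "evidence_of_claims": 3.5},
    "llm_rationale_phrases": ["could not substantiate claimed ownership of the migration"],
    "audio_signals": {"pace_wpm": 118, "fluency": 0.62, "hesitation_ratio": 0.21},
    "model_version": "v3.2",
    "prompt_version": "screening-prompt-14",
    "consent_timestamp": "2026-05-04T09:15:00Z",
    "transcript_uri": "s3://screening-archive/cand-0042/transcript.json",
    "recording_uri": "s3://screening-archive/cand-0042/audio.webm",
}
```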
We made communication assessment a two-evaluator system for exactly this reason. An LLM evaluates what the candidate said and an audio-analysis layer evaluates how they said it; both must agree before the tool surfaces a confident reject. The most defensible compliance posture is the one that never produces a reject from a single signal.
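In simplified pseudocode (a sketch of the agreement rule, not the production logic):

```python
def surface_confident_reject(llm_verdict: str, audio_verdict: str) -> bool:
    """Only surface a confident reject when both evaluators independently agree."""
    return llm_verdict == "reject" and audio_verdict == "reject"

def combined_verdict(llm_verdict: str, audio_verdict: str) -> str:
    """Disagreement falls back to a hold for human review, never an automated reject."""
    if surface_confident_reject(llm_verdict, audio_verdict):
        return "reject"
    return "hold" if "reject" in (llm_verdict, audio_verdict) else llm_verdict
```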
Where to start
If you’re inside the AI-vendor evaluation process right now, the practical move is to send the seven questions above to your shortlist by email and require written answers before any commercial conversation continues. The vendors who can answer all seven cleanly will surface themselves quickly. The ones who can’t will spend two weeks hedging — useful information either way.
If you’d like a worked example of how an audit-traceable screening decision actually looks, end-to-end, send us a sample candidate profile and we’ll walk you through the record we’d produce for it.
See HireQwik in action
Run a free pilot with your next batch of candidates. Screen up to 100 candidates at no cost.