
Screening quality: 4 metrics that matter more than time-to-hire

HireQwik · May 4, 2026 · 4 min read

Time-to-hire is what HR teams report up the chain. It’s also a metric that hides everything that matters once your applicant pool crosses 1,000 candidates. A 12-day time-to-hire on a 3,000-candidate campus drive tells you nothing about whether the right candidate got the offer, only that someone did.

After running pilot screening campaigns covering 1,099 interviews across multiple campus drives, the founders we worked with kept circling back to the same point: speed isn’t the problem. Sending offers to the wrong people quickly is. Below are the four screening-quality metrics that actually move the needle once you start hiring at volume, and why time-to-hire only matters after these are stable.

1. Rejection accuracy on clear mismatches

The first metric isn’t about who you hire. It’s about who you correctly say no to. On a typical 1,000-candidate campus drive in India, a sizeable share of applicants don’t meet the role’s basic requirements. If your screening process surfaces them anyway, your HR team spends two days on profiles they were always going to reject.

In our HyperVerge pilot, the HR team’s stated goal was to review only a minority of profiles, with the rest already filtered out before anyone opened them. If your AI screening tool can’t tell you its current rejection rate and whether that rate is calibrated to the role, it isn’t a screening layer. It’s a ranking widget.
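Concretely, rejection accuracy can be measured against a ground-truth audit sample: of candidates who clearly miss the role's basic requirements, what fraction did the screen correctly reject? A minimal sketch (all names and data are illustrative, not HireQwik's API):

```python
from dataclasses import dataclass

@dataclass
class ScreenedCandidate:
    candidate_id: str
    meets_basic_requirements: bool  # ground truth from an HR audit sample
    screen_decision: str            # "reject", "hold", or "recommend"

def rejection_accuracy(candidates: list[ScreenedCandidate]) -> float:
    """Of the clear mismatches, what share did the screen reject?"""
    mismatches = [c for c in candidates if not c.meets_basic_requirements]
    if not mismatches:
        return 1.0  # nothing to reject in this sample
    rejected = sum(1 for c in mismatches if c.screen_decision == "reject")
    return rejected / len(mismatches)

sample = [
    ScreenedCandidate("c1", False, "reject"),
    ScreenedCandidate("c2", False, "reject"),
    ScreenedCandidate("c3", False, "recommend"),  # mismatch that slipped through
    ScreenedCandidate("c4", True, "recommend"),
]
print(f"Rejection accuracy: {rejection_accuracy(sample):.0%}")  # 2 of 3 mismatches caught
```

A screen that surfaces the slipped-through profiles anyway is the two-days-of-wasted-review problem described above.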

2. False-negative rate on good profiles

The flip side of rejection accuracy is the metric that destroys trust if you ignore it: how often does the screen reject a candidate the HR team would have advanced?

“It should not reject good profiles. Even if it’s selecting two or three wrong profiles, that’s okay. But it should not reject good profiles.” That’s the asymmetric quality bar from our pilot lead, and it’s the rubric every screening tool should be calibrated against.

In practice this means two things. One, your screening tool needs an “On Hold” tier, a third bucket between Reject and Recommend, for candidates the model isn’t sure about, so HR sees them. Two, you need a sample-and-review loop where HR audits a small portion of rejections each week. If false negatives climb, the rubric needs loosening, not tightening. Most teams skip this step. The ones that don’t end up with screening they actually trust by month three.

3. HR review time per candidate

This is the metric that translates screening quality into business value. A standard phone screen runs 10–15 minutes per candidate. After AI voice screening filters the pool, the remaining candidates need only a brief transcript read and a recording spot-check.

In our pilot we measured an 89% reduction in HR time per candidate versus the manual phone-screen baseline. That’s useful as a benchmark, but the real unlock is that this metric is predictive. If HR review time per candidate creeps up over a hiring season, your screening is getting noisier: either the rubric needs retraining or the role specification has drifted from what the AI is actually testing for. Track it weekly.
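Tracking it weekly can be as simple as comparing the latest week against a short trailing baseline. A minimal sketch; the three-week window and 10% tolerance are illustrative assumptions:

```python
def review_time_trend(weekly_minutes_per_candidate: list[float],
                      window: int = 3, tolerance: float = 0.10) -> str:
    """Flag the latest week if it exceeds the trailing-window average
    by more than the tolerance: the screen is getting noisier."""
    if len(weekly_minutes_per_candidate) < window + 1:
        return "insufficient data"
    *history, latest = weekly_minutes_per_candidate
    baseline = sum(history[-window:]) / window
    if latest > baseline * (1 + tolerance):
        return "noisier: audit the rubric or role spec"
    return "stable"

# Minutes of HR review per screened candidate, week by week;
# week 4 jumps well past the prior three-week average
print(review_time_trend([1.5, 1.6, 1.5, 2.1]))
```

A single noisy week is worth a look; two consecutive flags are the signal that the rubric or role spec has drifted.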

4. Candidate completion rate

This metric quietly determines whether the other three mean anything. If half your campus applicants don’t finish the screening, the candidates who do are a self-selected subset, probably the most patient, not the most qualified.

Industry data on one-way video interviews shows roughly 50% drop-off in campus contexts. Voice screening typically completes at much higher rates because conversation feels like a phone call, not a recording session. The metric to track: of candidates who started a screening, what percentage finished a meaningful 15–20 minute conversation? If that number is low, your top candidates are walking away before you ever measure them.
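The metric itself is a simple ratio, but it's worth computing explicitly so the denominator is candidates who started, not candidates invited. A minimal sketch (numbers and the 60% warning threshold are illustrative):

```python
def completion_rate(started: int, finished_meaningful: int) -> float:
    """Of candidates who started a screening, the share who finished a
    meaningful-length conversation (e.g. 15-20 minutes)."""
    return finished_meaningful / started if started else 0.0

started, finished = 1000, 820
rate = completion_rate(started, finished)
print(f"Completion rate: {rate:.0%}")
if rate < 0.6:  # illustrative threshold, roughly the one-way-video drop-off zone
    print("Warning: results reflect a self-selected subset of applicants")
```

When this number is low, the other three metrics are being computed on the most patient applicants, not the most qualified ones.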

Why time-to-hire still matters, but only afterward

Time-to-hire is a downstream operating signal. It’s useful after the four above are stable, because at that point speed becomes the actual constraint. But chasing time-to-hire while rejection accuracy is poor and review time per candidate is creeping up is how you end up with a fast pipeline full of mishires.

We built HireQwik as the AI screening layer your existing ATS plugs into specifically because the four metrics above sit underneath the screening conversation, not in the ATS dashboard. Most ATS platforms are pipeline viewers. They show you who’s where in the funnel, not whether the candidates in the next stage are the right ones.

If you’re running campus drives at 1,000-plus candidates this hiring season and your team is still reporting time-to-hire as the primary quality metric, start tracking the four above for one drive. The numbers will tell you whether your screening layer is doing its job, and whether you need a different one. For more on why rejection accuracy is the actual lever, see our earlier post on the real ROI of AI screening.

Talk to us if you’d like to compare what those numbers look like with voice-AI screening in the loop.

See HireQwik in action

Run a free pilot with your next batch of candidates. Screen up to 100 candidates at no cost.

Try ROI Calculator