According to Fast Company, artificial intelligence is now deeply embedded in hiring, with the World Economic Forum reporting that about 88% of employers use AI for initial candidate screening, such as filtering applications. Currently, around 21% of U.S. companies use AI to conduct initial interviews. Research shows candidates interviewed by AI have a higher success rate, with 54% landing the job, compared with 29% of those who go through traditional resume screening. However, a new study by Brian Jabarian, a researcher at the University of Chicago Booth School of Business, examines the impact of giving candidates a choice between an AI or human interviewer. His paper, “Choice as Signal: Designing AI Adoption in Labor Market Screening,” which is not yet peer-reviewed, finds this seemingly fair choice can create a new hurdle for low-ability candidates—those whose skills are below a firm’s hiring threshold.
Choice as a Silent Screener
Here’s what’s so counterintuitive. You’d think offering a choice is purely a benefit, right? More agency for the applicant. But Jabarian’s research flips that on its head. The theory is that the choice itself becomes a signal to the employer. A candidate who opts for a human interviewer might be signaling a lack of confidence in their technical skills or a fear of being objectively assessed by an algorithm. Conversely, choosing the AI route could signal comfort with technology and confidence that their skills will be measured fairly. So it’s not just about the interview performance anymore. The very first decision you make in the process—human or bot—is already being weighed against you. That’s a whole new layer of psychological game theory added to an already stressful process.
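To make the signaling logic concrete, here is a toy Bayes'-rule sketch. This is not the model from Jabarian's paper, and every probability in it is invented for illustration: it simply shows that if low-ability candidates are assumed to pick the human interviewer somewhat more often, the act of picking it lowers the employer's estimate of a candidate before a single question is asked.

```python
# Toy illustration of "choice as signal" -- all numbers are hypothetical,
# not taken from Jabarian's paper.

def posterior_high_ability(prior_high, p_human_given_high, p_human_given_low):
    """Employer's updated belief that a candidate is high-ability,
    given only that the candidate chose the human interviewer."""
    p_human = (p_human_given_high * prior_high
               + p_human_given_low * (1 - prior_high))
    return p_human_given_high * prior_high / p_human

# Assumed inputs: half of applicants are high-ability, and low-ability
# candidates choose the human interviewer more often (70% vs. 30%).
belief = posterior_high_ability(prior_high=0.5,
                                p_human_given_high=0.3,
                                p_human_given_low=0.7)
print(f"Belief the candidate is high-ability after choosing human: {belief:.0%}")
# Prints 30% -- down from the 50% prior, purely because of the choice.
```

The point of the sketch is the direction of the update, not the specific numbers: any gap in how the two groups choose turns the choice itself into a screening signal.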
The Skills Gap Gets Wider
This finding hits hardest for the so-called “low-ability” candidates. Basically, the people who might need the job the most could be inadvertently screening themselves out before they even say a word. If they perceive the AI as a more intimidating or opaque gatekeeper and choose the human, they might be tagged with a negative signal. It creates a weird paradox where a policy meant to be inclusive and flexible could end up reinforcing existing disadvantages. And this is happening as the push for skills-based hiring grows. The tools are changing, but the potential for bias is just morphing into new, subtler forms.
Where Does This Leave Hiring?
So what’s the solution? Should companies just force everyone into the AI interview chamber? Not necessarily. But it does mean we need way more transparency. If a company is using these systems, they have a responsibility to explain how they work and what candidates can expect. Blind choice, where the employer doesn’t know which option you picked, could be one technical fix (a rough sketch follows below). The bigger issue is that we’re racing ahead with adoption—as detailed in reports from the World Economic Forum—without fully understanding the behavioral side effects. We’re learning that implementing AI isn’t just a technical challenge; it’s a massive human psychology experiment. The full paper, available on arXiv, and related work on SSRN show we’re just scratching the surface.
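As a rough sketch of what a "blind choice" design could look like inside an applicant-tracking pipeline (the field and function names here are hypothetical, not drawn from any real system), the chosen interview mode is kept for scheduling but stripped from everything the evaluator sees, so the choice can't be used as a signal:

```python
# Minimal "blind choice" sketch -- hypothetical record layout, assuming the
# interview mode is needed for logistics but must be hidden from evaluators.
from dataclasses import dataclass

@dataclass
class InterviewRecord:
    candidate_id: str
    mode: str               # "ai" or "human"; used for scheduling only
    transcript_score: float

def evaluator_view(record: InterviewRecord) -> dict:
    """Return only the fields a hiring evaluator is allowed to see.
    The chosen interview mode is deliberately withheld."""
    return {
        "candidate_id": record.candidate_id,
        "transcript_score": record.transcript_score,
    }

record = InterviewRecord(candidate_id="c-001", mode="human", transcript_score=0.82)
print(evaluator_view(record))
# {'candidate_id': 'c-001', 'transcript_score': 0.82} -- no trace of the choice.
```

A fix like this only blinds the human reviewer, of course; it doesn't address how candidates feel about making the choice in the first place.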
The Unintended Consequences of AI Adoption
Look, this is a pattern we see everywhere AI gets deployed. The goal is efficiency, consistency, and scale. But we keep stumbling over these unintended social consequences. In hiring, the promise was to remove human bias. Instead, we might be coding in new biases or, in this case, outsourcing the bias to the candidate’s own choice architecture. It’s a stark reminder that you can’t just drop a new technology into a complex human system and expect it to work cleanly. The next few years will be critical for establishing the guardrails and ethics around these tools. Because right now, for a job seeker, that “choice” is starting to look less like an option and more like the first test—and you didn’t even get to study for it.
