Here is a fact that surprises most hiring managers: the typical unstructured job interview, where you ask what feels natural, follow threads that seem interesting, and end with a gut feeling, predicts job performance substantially worse than its structured counterpart, and far worse than the interviewer's confidence would suggest.
That's not an opinion. It's a finding replicated across hundreds of studies and distilled in meta-analyses spanning 50 years of industrial-organisational psychology research. And yet most companies still conduct unstructured interviews as their primary evaluation mechanism.
What the Research Actually Says
The foundational meta-analysis is Schmidt and Hunter (1998), which assessed the predictive validity of 19 different selection procedures across decades of data. Structured interviews showed a validity coefficient of 0.51 for predicting job performance — compared to 0.38 for unstructured interviews and 0.18 for years of job experience alone.
A validity of 0.51 is substantial in selection research. It means that candidates who score well in a structured interview are meaningfully more likely to perform well on the job. The 0.38 for unstructured interviews is real, but the gap matters enormously at scale — across hundreds of hires a year, the difference compounds into measurably better or worse teams.
The number that matters
A validity of 0.51 versus 0.38 might sound like a small difference, but in a company making 50 hires per year, it translates to approximately 6-8 fewer hiring mistakes annually (the exact figure depends on your selection ratio and how you define a mistake), with all the cost, team disruption, and re-hiring overhead that implies.
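One way to see what a validity gap means in practice is a quick Monte Carlo sketch. Everything below is an illustrative assumption, not a figure from the research: the pool size, the selection ratio, and the threshold for calling a hire a "mistake" are all arbitrary choices, which is exactly why utility estimates like the one above always rest on stated assumptions.

```python
import math
import random

def simulate_hiring(validity, n_candidates=500, n_hires=50,
                    mistake_threshold=-0.5, seed=0):
    """Draw candidates whose interview score correlates with true job
    performance at `validity`, hire the top scorers, and count hires
    whose true performance lands below the threshold (a "mistake")."""
    rng = random.Random(seed)
    pool = []
    for _ in range(n_candidates):
        performance = rng.gauss(0, 1)  # true performance, unobserved at hire time
        score = (validity * performance
                 + math.sqrt(1 - validity ** 2) * rng.gauss(0, 1))
        pool.append((score, performance))
    hired = sorted(pool, reverse=True)[:n_hires]
    return sum(1 for _, perf in hired if perf < mistake_threshold)

def average_mistakes(validity, trials=200):
    """Average the mistake count over many simulated hiring years."""
    return sum(simulate_hiring(validity, seed=s) for s in range(trials)) / trials

structured = average_mistakes(0.51)    # structured-interview validity
unstructured = average_mistakes(0.38)  # unstructured-interview validity
```

Under these toy assumptions the higher-validity process consistently yields fewer below-threshold hires per year; how large the gap is moves with the selection ratio and the threshold you pick.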
What "Structured" Actually Means
The word "structured" gets used loosely. Precisely, it means four things:
- Same questions for every candidate. Not the same topics — the same questions, in the same order. This allows apples-to-apples comparison and prevents interviewers from unconsciously asking easier questions to candidates they already like.
- Behavioural or situational question types. Behavioural questions ask about past behaviour ("Tell me about a time when…"). Situational questions ask about hypothetical scenarios ("What would you do if…"). Both outperform open-ended questions because they elicit specific, evaluable responses.
- Scored against pre-defined criteria. Each question has a scoring rubric defined before the interview. "Strong" answers are described in advance — not evaluated post-hoc against the best answer you happened to hear.
- Independent scoring before debrief. Each interviewer scores independently before seeing anyone else's ratings. This prevents anchoring bias — where the first opinion expressed disproportionately influences the group.
Why Companies Don't Do This (And Why They Should)
The honest answer is that structured interviewing feels like more work upfront. Writing good questions takes time. Defining rubrics takes more time. Training interviewers to use them takes more time still.
But here's the calculation most teams miss: the upfront investment in structure pays back many times over in reduced re-hire costs, shorter decision timelines, and — most importantly — better hires who stay longer and perform better.
"The evidence is unambiguous. Structured interviews are one of the highest-validity selection tools available, and they're consistently underused relative to their demonstrated effectiveness."
There's also a secondary benefit rarely discussed: structured interviews are fairer. When everyone gets the same questions and is scored against the same criteria, the process is less susceptible to the affinity biases that corrupt unstructured conversations. Candidates from underrepresented groups often benefit disproportionately — because they're being evaluated on what they actually said, not on how comfortable the interviewer felt with them.
A Practical Starting Point
You don't need to overhaul your entire process overnight. A minimum viable structured interview looks like this:
- Pick four to six competencies that genuinely predict success in the role. Not generic ones — specific to this role and this team.
- Write one behavioural question per competency. Test it by asking: would a strong candidate answer this differently from a weak one?
- Define what a strong answer looks like for each question. Write it down before the first interview, not after.
- Give every candidate the same questions. Resist the urge to improvise based on where the conversation goes.
- Score each candidate independently on each competency before the debrief.
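The checklist above fits in a few lines of code. Everything here is hypothetical (the competency names, questions, and anchor wording are invented for illustration), but it shows the shape: anchors written down before any interview happens, and per-interviewer scores aggregated only at the debrief.

```python
from statistics import mean

# Hypothetical rubric: one behavioural question per competency, with
# score anchors defined before the first interview, not after.
RUBRIC = {
    "debugging": {
        "question": "Tell me about a time you tracked down a bug "
                    "others had given up on.",
        "anchors": {
            3: "Specific incident; systematic narrowing; verified root cause.",
            2: "Real incident, but vague on method or verification.",
            1: "Generic answer with no concrete incident.",
        },
    },
    "collaboration": {
        "question": "Tell me about a time you disagreed with a "
                    "teammate about a design decision.",
        "anchors": {
            3: "Sought the teammate's reasoning; resolved on evidence.",
            2: "Reached a resolution, but mostly by escalation.",
            1: "No concrete disagreement described.",
        },
    },
}

def debrief(independent_scores):
    """Average per-competency scores that each interviewer recorded
    independently, before seeing anyone else's ratings."""
    return {c: mean(scores[c] for scores in independent_scores.values())
            for c in RUBRIC}

panel = {
    "interviewer_a": {"debugging": 3, "collaboration": 2},
    "interviewer_b": {"debugging": 2, "collaboration": 2},
}
summary = debrief(panel)   # {"debugging": 2.5, "collaboration": 2}
```

The design point is the separation of steps: scores go into `panel` before anyone talks, so the debrief starts from independent judgments rather than anchoring on whoever speaks first.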
That's it. You don't need a sophisticated system. You need discipline — and ideally, a tool that makes following the process easier than skipping it.
Unstructured interviews aren't worthless. The 0.38 validity is real. They also serve a purpose that pure structured interviews don't — giving candidates a chance to experience the team, ask questions, and decide whether they want to work there. But using an unstructured conversation as your primary evaluation mechanism is leaving significant predictive power on the table.
Structure your questions. Define your criteria. Score before you discuss. The research is clear on what happens when you do.