Experimenting with hiring processes

Part of my role as Technical Talent Scout is to improve the process and systems we use to hire people.

We aim to hire the top 5% of software developers worldwide, which, as anyone who works in the software industry will know, is a very difficult ask.

We've invested heavily in our proprietary systems and processes to do this successfully year on year. Our focus is on automating as much of the process as possible, so we spend person-hours on the people most worth talking to.

Until recently, our process looked like this:

  1. Technical puzzle (automated)
  2. Technical interview (1.5hr with one of our Consultants)
  3. Management interview (1hr with state leadership)
  4. Offer

This had not been tweaked since we first introduced the automated KnockKnock technical challenge in 2014.

We have a high-quality problem: far more work coming in than people to do it. The volume of candidates our pipeline needs to service has increased dramatically over the past 3 years, while the process stayed static. One month we performed over 80 technical interviews at 2hrs each - a massive time investment, and the low pass rate was very demoralising for the interview team.

Part of the Technical Interview stage was a debugging challenge, which we'd usually do at the end of the interview. It tended to be a good predictor of job success: quite often people sounded great in theory, but put them in front of a code editor and suddenly it all fell apart. These cases were frequent, and each one cost us 1.5hrs of wasted interview time.

We decided to experiment by splitting this debugging test out into its own phase:

  1. Technical puzzle (automated)
  2. Live Coding Challenge (0.5hr with one of our Consultants)
  3. Technical Interview (1hr with one of our Consultants)
  4. Management interview (1hr with state leadership)
  5. Offer

We set out on this experiment by defining metrics and success criteria which we would measure once we'd collected a sufficient sample size; the success checks are sketched after the list. The metrics were:

  • Technical interview pass rate (goal: triple it)
  • Total time to hire (goal: maintain average)
  • Total consultant time invested in hiring (goal: reduce it by 2/3)
  • Feedback from candidates and interviewers
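
The fourth metric is qualitative, but the first three reduce to simple checks against our baseline numbers. Here's a minimal sketch of how those checks might look - the baseline figures and field names are illustrative, not our real data:

    # Rough sketch of the success criteria, using illustrative baseline figures
    # (not our real numbers) and hypothetical field names.
    BASELINE = {
        "tech_interview_pass_rate": 0.20,   # fraction of tech interviews passed
        "avg_days_to_hire": 30,             # average elapsed time to hire
        "consultant_hours_per_month": 160,  # consultant time spent on hiring
    }

    def meets_success_criteria(measured: dict) -> dict:
        """Compare measured results against the goals we set up front."""
        return {
            # Goal: triple the tech interview pass rate
            "pass_rate_tripled":
                measured["tech_interview_pass_rate"] >= 3 * BASELINE["tech_interview_pass_rate"],
            # Goal: maintain the average time to hire
            "time_to_hire_maintained":
                measured["avg_days_to_hire"] <= BASELINE["avg_days_to_hire"],
            # Goal: cut consultant time invested in hiring by two thirds
            "consultant_time_cut_by_two_thirds":
                measured["consultant_hours_per_month"] <= BASELINE["consultant_hours_per_month"] / 3,
        }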

We intentionally did the minimum amount of systems work possible to facilitate this. We didn't build any fancy auto-emailing logic for the result of this stage - it was back to manual email sending until we'd proven the approach's worth. We did have to do a bit of work on join.readify.net to advertise the change in process (setting expectations for candidates) and to implement separate stage markers so we could measure the differences between the old and new approaches.

Results

It's been a while (about 2 months), so how did we go?

  • Tech interview pass rate increased by about 10%
  • Total time to hire remained constant
  • Total consultant time invested in hiring reduced by 37%

The reduction in consultant time investment was a particularly valuable improvement. It's a shame we didn't see a bigger increase in the overall pass rate of tech interviews, but we can always try different things and tweak the process in other ways to get there.
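
To see why the saving falls out of the split, here's a back-of-envelope model. It assumes candidate volume and behaviour are otherwise unchanged, so treat it as a sketch rather than an analysis of our actual numbers:

    def consultant_hours_per_candidate(screen_pass_rate: float) -> float:
        """Expected consultant hours per candidate under the new process:
        everyone gets the 0.5hr live coding challenge, and only those who
        pass it go on to the 1hr technical interview."""
        return 0.5 + screen_pass_rate * 1.0

    OLD_HOURS_PER_CANDIDATE = 1.5  # one 1.5hr technical interview for everyone

    # The new process never costs more per candidate: even if everyone passes
    # the screen it's 0.5 + 1.0 = 1.5hrs, and every screened-out candidate
    # saves a consultant an hour.
    for pass_rate in (1.0, 0.75, 0.5, 0.25):
        new = consultant_hours_per_candidate(pass_rate)
        saving = 1 - new / OLD_HOURS_PER_CANDIDATE
        print(f"screen pass rate {pass_rate:.0%}: "
              f"{new:.2f}hrs/candidate ({saving:.0%} saving)")

Under that simple model, the 37% reduction we measured would line up with a live coding pass rate somewhere around 45% (0.5 + 0.45 ≈ 1.5 × 0.63). The real figure also reflects changes in candidate volume and scheduling, so don't read too much into the exact number.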

Data-driven HR decisions

I was very deliberate in setting this up as an experiment, following a fairly scientific method. Having built our own recruiting systems, we had 3 years of historical data to analyse for trends - data that has been available to the rest of the organisation for a while now. This was a great baseline to compare the results against.

We commenced the experiment with a set of success criteria (goals against the metrics), and ran it for long enough to get a reasonable sample size for drawing some conclusions.

Using separate stage markers was important: it meant we could easily segregate who had gone through the old process vs the new one.
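
Concretely, every candidate record just carries a marker for which version of the process they went through, and the comparison becomes a simple group-by over the exported data. A minimal sketch, with made-up records and field names rather than our actual schema:

    from collections import defaultdict

    # Hypothetical candidate records exported from the recruiting system;
    # "process" is the stage marker ("old" vs "new") and the other field
    # names are illustrative, not our actual schema. passed_tech_interview
    # is False both for candidates who failed the interview and for those
    # screened out at the live coding stage.
    candidates = [
        {"process": "old", "passed_tech_interview": False, "consultant_hours": 1.5},
        {"process": "new", "passed_tech_interview": True,  "consultant_hours": 1.5},
        {"process": "new", "passed_tech_interview": False, "consultant_hours": 0.5},
        # ...
    ]

    cohorts = defaultdict(list)
    for candidate in candidates:
        cohorts[candidate["process"]].append(candidate)

    for process, cohort in sorted(cohorts.items()):
        # Fraction of each cohort who made it through the tech interview,
        # plus the total consultant hours spent on that cohort.
        pass_rate = sum(c["passed_tech_interview"] for c in cohort) / len(cohort)
        hours = sum(c["consultant_hours"] for c in cohort)
        print(f"{process}: {len(cohort)} candidates, "
              f"made it past tech interview {pass_rate:.0%}, consultant hours {hours:.1f}")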

Transparency in all of these details was also important. We use Yammer heavily for collaboration and communication. Planning for, announcing, and wrapping up the experiment were all done with full visibility to all staff, with specific guides for the interviewing team to understand the changes and how they would be impacted.

What's next?

Shooting for that bigger increase in the tech interview pass rate. It looks like this might require a slightly more radical re-work of the process, and from here I don't think adding extra steps is going to be the solution - it's already a long process. Mulling over what to try next!

What would you try?

(PS: want to work for a transparent, data-driven software org like ours? https://join.readify.net/ 😊)