Attracting and hiring the right team members is the soul of every business. The right people in the right seats are what make it possible for a company to go from good to great. On this journey, one of the biggest challenges every company faces is evaluating and scoring applicants during the hiring process.
Score an applicant on a range of different attributes and cross-sections, while still being able to pull all of these scores together into one coherent bigger picture when required.
Ensure collaboration among multiple recruiters and hiring managers across different stages of the recruitment process; allow everyone on a hiring team to score an applicant independently while cumulative scores keep everyone on the same page.
Minimize the redundant and manual parts of the evaluation process. Reduce dependency on inefficient tools (paper records, Excel sheets) and significantly cut the time spent evaluating a candidate.
Make record-keeping and information access as smooth as possible. Consolidate scattered records (multiple Excel sheets, physical paper records) into one place, making the information quick and easy to access.
Make it easier to navigate through scores for hundreds of different candidates. Allow recruiters and hiring managers to analyze the scorecards efficiently, so they can reach faster, better-informed, and more confident hiring decisions.
HireCinch is an Applicant Tracking System (ATS) meticulously designed and engineered to provide the best hiring experience to recruiters, hiring managers, and applicants around the world. The product works as a highly collaborative and intuitive platform that helps teams source, track, and evaluate applicants to find the next right fit for their teams. Our broader product vision focuses on two main points:
1. Assistive hiring
2. Reduced time to hire per applicant
In light of the aforementioned problem statement and challenges, our objective for this feature directly inherits from the broader product vision.
Everything we do at CB is tied to a series of steps, cumulatively known as the Venture Design process, an idea presented by Alexander Cowan (www.alexandercowan.com). He describes it as a process that "helps you know where to focus. It offers a systematic execution of continuous design and delivery that helps you focus on the right things at the right time, leveraging the best of what’s out there in modern practices like design thinking and Lean Startup."
For better clarity in the context of this case study, we divide our process into 3 distinct phases, as follows:
Bring clarity and focus with personas and problem scenarios. Whenever we get an idea for a feature or a product, the first questions we ask ourselves are the Who? and the Why? It is important to know who the persona for the feature is and why they need it in the first place. Hence, we started out with a list of assumptions and then went out to talk to actual users. User research is all about establishing a real human connection with your end user and trying to understand who they are and what they think, see, feel, or do about the subject matter. For this particular feature, our target persona for the user research is provided below:
Alice has been working as a hiring manager for about 5 years. She likes to arrive early at the office and check her emails. She looks for any scheduled meetings and plans her day accordingly. She finds herself really busy during the hiring season. She conducts the behavioral interviews, where she needs to judge whether an applicant is a good cultural fit for the company. She prepares a list of questions and puts a measurable metric against each of them. After the interviews, she has to write a full report on the candidates and forward it to her boss. The report consists of the metrics and conclusions from all the interviews.
She feels that her workflow contains communication gaps with her teammates, and that preparing candidate lists involves a lot of redundant work, which is both tiring and frustrating for her.
In light of the assumptions about our persona drafted above, we started out with a range of problem scenarios, also known as jobs to be done, and then finalized the following four as the most important ones for the research:
The next logical step in our process was the validation of our hypotheses:
1. Is our persona correct? Do we have the correct understanding of our user?
2. Do the problem scenarios we drafted actually exist? Is the need real?
With these questions in mind, we went out and conducted one-on-one interviews with the users. By the time we were done with the sessions, our persona and problem scenarios had been refined into a very accurate representation of our actual users and their needs. Here are some key learnings we extracted from the interviews:
Generally, hiring managers score applicants on a binary (Yes/No) scale, a 5-point rating (1-5), or a mix of both.
Different attributes in a scorecard can carry different weights relative to each other (see the sketch after this list).
Users prefer to reuse existing scorecards with small tweaks for different positions instead of creating new ones.
At the end of each stage, most users feel the need to check the team's average score for a specific candidate.
Most hiring managers use different scorecards for different stages of their hiring process.
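Since weighted, mixed-scale scoring came up repeatedly in these learnings, here is a minimal sketch of how such a scorecard might be modeled. All type and function names here are our own illustration for this case study, not HireCinch's actual implementation:

```typescript
// Hypothetical scorecard model: each attribute has its own scale and weight.
type Scale = "binary" | "rating5";

interface Attribute {
  name: string;
  scale: Scale;   // binary (Yes/No) or 5-point rating (1-5)
  weight: number; // relative importance compared with other attributes
}

interface Score {
  attribute: Attribute;
  value: number; // 0 or 1 for binary, 1-5 for rating5
}

// Normalize every score to the 0..1 range, then take the weighted average,
// so binary and 5-point attributes can live on the same scorecard.
function weightedScore(scores: Score[]): number {
  const totalWeight = scores.reduce((sum, s) => sum + s.attribute.weight, 0);
  if (totalWeight === 0) return 0;
  const weighted = scores.reduce((sum, s) => {
    const normalized =
      s.attribute.scale === "binary" ? s.value : (s.value - 1) / 4;
    return sum + normalized * s.attribute.weight;
  }, 0);
  return weighted / totalWeight;
}

// The team average at the end of a stage is simply the mean of each
// member's weighted score for that candidate.
function teamAverage(memberScores: Score[][]): number {
  if (memberScores.length === 0) return 0;
  const totals = memberScores.map(weightedScore);
  return totals.reduce((sum, t) => sum + t, 0) / totals.length;
}
```

Under this model, a Yes on a binary attribute and a 5 on a rating attribute both normalize to 1.0, so the weights alone decide how much each attribute moves the overall score; stage-specific scorecards would just be separate instances of this structure.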
Finally, we paired the validated problem scenarios with our propositions. At this point, we knew exactly what we were going to build!
Sketching out a wireframe is a great way to demonstrate what you have in mind so that you can have a directed discussion about how to iterate on possible solutions. After formulating user stories, it was time to see how they would actually translate into implementable solutions. Hence, we started building quick wireframes and prototypes, either on paper or using Balsamiq. Without going into a lot of detail, we quickly sketched a few solutions and did some early testing to see if we were on the right track.
After designing low-fidelity wireframes and testing them with the users, we had clear user flows and design patterns, which helped us move towards high-fidelity designs. Once we had the high-fidelity designs, we tested their usability with the users. Here is what the initial visual design looked like:
Similar to what we did in the research phase, we took our prototypes to the users again. To test the usability of our prototype, we used a three-point system to check for issues in our user journeys: a score of 1 means the user got it easily; 2 means the user had issues but eventually got it, or there were other insights involved; and 3 means the user couldn't achieve it without help, or there was a significant struggle. We observed user behavior and prepared detailed insight notes. This approach helped us put laser-sharp focus on the problematic areas.
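As a rough illustration of how such a tally can surface problem areas, consider the sketch below; the data shape and the flagging threshold are assumptions made for the example, not part of our actual tooling:

```typescript
// Hypothetical tally: one 1-3 usability score per participant per journey.
type UsabilityScore = 1 | 2 | 3; // 1 = easy, 2 = struggled, 3 = needed help

interface JourneyResult {
  journey: string;
  scores: UsabilityScore[]; // one entry per test participant
}

// Flag any journey whose average score reaches the threshold,
// i.e. where most participants struggled or needed help.
function problemAreas(results: JourneyResult[], threshold = 2): string[] {
  return results
    .filter((r) => {
      const avg = r.scores.reduce((sum, s) => sum + s, 0) / r.scores.length;
      return avg >= threshold;
    })
    .map((r) => r.journey);
}
```

Here are the results of the tests that we conducted.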
Sections and sub-sections help the users categorize questions. Weightage and evaluation scale help them fully customize the scorecard according to their needs.
Templates help the users save a scorecard so they can reuse it whenever they want, saving them a lot of manual, redundant work.
Lets the users score a candidate on a scale of their choice: a 1-5 rating or Yes/No.
Categorical, well-presented results help the users interpret them clearly and reach more meaningful decisions in less time.
Users can easily view the score of any other team member or the team average. Hiring stages let you attach a separate scorecard to the interview, test, or screening stage.
Compare the results of all candidates in a single view, with the ability to sort them and apply custom filters so you can visualize the scoring results however you need.
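Functionally, a compare view like this reduces to filtering and sorting over per-candidate averages. The sketch below is an illustrative approximation with hypothetical field names, not the product's actual code:

```typescript
// Hypothetical row in the candidate comparison view.
interface CandidateResult {
  name: string;
  stage: string;        // e.g. "Screening", "Test", "Interview"
  averageScore: number; // team-average weighted score, 0..1
}

// Apply a custom filter (stage + minimum score), then sort the
// remaining candidates from highest to lowest average score.
function compareCandidates(
  results: CandidateResult[],
  stage: string,
  minScore = 0
): CandidateResult[] {
  return results
    .filter((r) => r.stage === stage && r.averageScore >= minScore)
    .sort((a, b) => b.averageScore - a.averageScore);
}
```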