Editor’s note: Stuart Nisbet is chief data scientist at Cadient Talent, a talent acquisition firm based in Raleigh.

RALEIGH — At Cadient Talent, it’s a question that we wrestle with on a daily basis: How do we eliminate bias from the hiring process?

The only way to address a problem or a bias is to confront it head on, under the scrutiny of scientific examination. Through the application of machine learning, we can learn where we have erred in the past, allowing us to make less biased hiring decisions moving forward. When we uncover unconscious bias, or even conscious bias, and educate ourselves to do better based on unbiased machine learning, we take the first step toward correcting an identified problem.

What is bias?

Bias is defined as a prejudgment or a prejudice in favor of or against one thing, person, or group compared with another, usually in a way that is considered to be unfair. Think of bias in terms of three sets of facts: the first is a set of objective facts that are universally accepted; the second is the set of facts that confirms what an individual believes to be true; the third, where bias enters the picture, is the intersection between the objective facts and the facts that confirm personal beliefs.

Bias enters when we selectively choose and focus on the facts that confirm particular beliefs. If we look at hiring from that perspective, and if our goal is to remove bias from the hiring process, then we need to remove the personal choice of which data points are included. All data points that contribute to a positive decision (hire the applicant) or a negative decision (decline the applicant) are included, and the data points and their weights are chosen objectively through statistics, not subjectively through human choice.
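
As a rough illustration of that idea, here is a minimal sketch in Python, assuming a hypothetical table of past applications with a hired/declined outcome column and numeric, job-relevant feature columns: the weights are fitted statistically from the outcomes rather than hand-picked by a person.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical dataset of past applications: every job-relevant, numeric
# column is kept; nobody hand-picks which facts "count" toward the decision.
applications = pd.read_csv("past_applications.csv")  # assumed file and schema
X = applications.drop(columns=["hired"])  # all candidate data points
y = applications["hired"]                 # 1 = hired, 0 = declined

# The model assigns each data point a weight statistically, from the data,
# rather than through a recruiter's subjective sense of what to emphasize.
model = LogisticRegression(max_iter=1000).fit(X, y)
weights = pd.Series(model.coef_[0], index=X.columns).sort_values()
print(weights)
```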

How can computer algorithms help us do this?

Our goal is to augment human intelligence, in particular by drawing on the experience and judgment embedded in past hiring decisions, with an emphasis on those that turned out well. "Good hiring" can be measured in a number of ways that do not introduce inappropriate bias, such as the longevity of employees. If a new hire does not remain on the job very long, then perhaps the recruiting effort was not done well and, in hindsight, you would not have chosen that applicant. But if you hire someone who is productive and stays for a long time, that person would be considered a good hire.
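
To make that concrete, here is a hedged sketch, assuming a hypothetical table of past hires with a tenure_months column: "good hire" is labeled by longevity, and a model learns from those past outcomes to score future applicants.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical history: one row per past hire, with job-relevant, numeric
# attributes plus how long the person stayed (tenure_months is assumed).
hires = pd.read_csv("past_hires.csv")

# Label "good hire" by longevity, one of the measures described above.
hires["good_hire"] = (hires["tenure_months"] >= 12).astype(int)

X = hires.drop(columns=["tenure_months", "good_hire"])
y = hires["good_hire"]

# Hold out part of the history to check how well past decisions generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```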

Why do we want to remove bias from hiring decisions?

We want to remove bias when it is unintentional or has no bearing on whether an employee will be able to perform the job satisfactorily. So, if a hiring manager's entire responsibility is to apply their knowledge and experience to determine the best fit, why do we use machine learning to eliminate bias? Because artificial intelligence removes only the bias toward non-work-related candidate attributes and augments decisions based on relevant work traits, where bias is appropriate.

Our goal, then, is to make the hiring process as transparent as possible and to consider all of the variables used in a hiring decision. That is extremely complicated, if not impossible, with a purely human-based approach, because the decision-making of a hiring manager is far more complex and less well understood than that of a machine learning algorithm. So we want to focus on the strength of simplicity in a machine learning algorithm: we look only at variables, columns, and pieces of data that are pertinent to the hiring process, and we do not include any data points that are not relevant to performance.

An assessment result, for example, whether cognitive or personality-based, may be a very valid data point to consider if the traits being assessed are pertinent to the job. Work history and demonstrated achievement in similar roles may be very important to consider. The opposite is just as clear: gender, ethnicity, and age should have no legitimate bearing on someone's job performance. This next point is critical. A hiring manager cannot meet an applicant in an interview and credibly say that they don't recognize the gender, ethnicity, or general age category of the person sitting across from them. No matter our intentions, this is incredibly hard to do. Conversely, it is the easiest task for an algorithm to perform.

If the algorithm is not provided gender, ethnicity, or age, there is no chance for those variables to be brought into the hiring decision. This involves bringing in only the data that is germane, having a computer look at which past hiring decisions resulted in high-performing, long-term employees, and then strengthening future decisions based on that record of good hiring practice. This is what will ultimately remove bias from hiring.
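
A minimal sketch of that exclusion, assuming the same kind of hypothetical applicant table: the protected columns are never handed to the model, so they cannot influence its output.

```python
import pandas as pd

# Columns the algorithm must never see (names assumed for illustration).
PROTECTED_ATTRIBUTES = ["gender", "ethnicity", "age"]

def germane_features(applications: pd.DataFrame) -> pd.DataFrame:
    """Return only job-relevant columns.

    Protected attributes are dropped before any model sees the data, so
    there is no chance for them to enter the hiring decision.
    """
    return applications.drop(columns=PROTECTED_ATTRIBUTES, errors="ignore")
```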

One thing that deserves consideration is the idea of perpetuating past practices that could be biased. If all we are doing is hiring the way we have hired in the past, and those practices were prejudicial or biased, we risk promoting institutional bias: over time, we would simply be training computers to do exactly what a biased manager would have done. If the only data used to train the hiring model is data shaped by the biases of the past, then it is difficult to train on data that is not biased. For example, if we identify gender as a source of bias in the hiring process and take the gender variable out of the algorithm, gender will no longer be considered. When we flag previous bias, we are able to minimize future bias.
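
As one possible way to flag previous bias, sketched under the assumption that the historical table records the applicant's gender and the hire decision, we can compare hire rates across groups before trusting that history as training data.

```python
import pandas as pd

# Hypothetical audit of past decisions: compare hire rates across a flagged
# attribute (gender, in this example) before using the history for training.
history = pd.read_csv("past_applications.csv")  # assumed file and columns
hire_rates = history.groupby("gender")["hired"].mean()
print(hire_rates)

# A large gap between groups signals that the training data reflects biased
# past practice and should be examined or reweighted before it is reused.
print("Largest gap in hire rate:", hire_rates.max() - hire_rates.min())
```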

We should unabashedly look at whether we are able to identify and learn from hiring practices that may have had bias in the past. This is one of the greatest strengths of applying very simple machine learning algorithms in the area of hourly hiring.

What if an explicit goal is diversity? Can we still hire the best?

One aspect of the hiring process that opens up significant opportunities for artificial intelligence and machine learning is implementing diversity goals.

Artificial intelligence can really differentiate itself here. Machine learning makes the best hiring decisions it can based on the data it is given; if you have diversity goals and want hiring practices that encourage a diverse workforce, it is very simple to choose the best candidates from whichever populations matter to corporate goals. This can be done transparently and simply. It does not prioritize one person over another; it allows the hiring of the very best candidates from each population you want represented in the company.
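
One hedged reading of that, assuming candidates carry a model score and a self-identified population label (both column names are hypothetical), is simply to take the top-scoring candidates within each population of interest.

```python
import pandas as pd

def top_candidates_per_group(candidates: pd.DataFrame, n_per_group: int) -> pd.DataFrame:
    """Select the best-scoring candidates within each population of interest.

    Assumes hypothetical columns: 'score' (the model's predicted quality of
    hire) and 'group' (the population a diversity goal refers to).
    """
    return (
        candidates.sort_values("score", ascending=False)
        .groupby("group", group_keys=False)
        .head(n_per_group)
    )
```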

Under scrutiny and scientific examination, machine learning can be a very valuable tool for augmenting the hiring decisions managers make every day and for helping us understand when bias has entered our decisions and yielded far less than our collective best.