Artificial Intelligence In Hiring: Is Bias A Major Concern?

  • Updated on: 10 Apr 2023
  • Published on: 21 Jan 2020

Artificial Intelligence has rapidly moved from a concept on paper to a legitimate technique with applications across domains. The tech world has been talking about AI for a while now, but other industries are abuzz with it too. The use of AI is slowly becoming normalised; something to be expected, even. This is especially true of recruiting in the corporate world. Employers are now turning to AI-driven tools to help with the initial stages of recruitment. Companies like Amazon, Ikea, Target and PepsiCo have already tested or used algorithms to decide who fits the bill for an interview call, and the list is only growing. 

Advocates claim several advantages to the use of AI in hiring. Chief among them: it can reduce the workload of HR managers, and it could reduce, or even potentially eliminate, human bias in the early stages of recruitment. Critics, however, warn that these tools can be no less biased than the people who train them. Let’s take a closer look:

Cases Of Bias

Backing the critics’ argument is the real example of Amazon’s gender-biased AI recruitment tool. The tool was built to scan incoming resumes and shortlist the candidates with the most promising profiles for subsequent rounds, and it was trained on resumes the company had received over a 10-year period. Most of the applicants in that period had been men, so the system learned to favour male candidates over women. As a result, the tool began downgrading resumes that included keywords such as “women’s club”, as well as those of candidates who had attended women’s colleges. In 2015, the company discovered the bias, and the system was eventually scrapped. To Amazon’s credit, the tool never made it past the testing stage.

The sad part is that this was not a one-off incident. In 2016, Microsoft faced a similar problem with an AI chatbot developed to interact with Twitter users and “get smart” by learning from them. But AI tools learn from whatever inputs they are given, and the chatbot quickly picked up profanity, race-based stereotyping and inappropriate language from the users, which it then began to use. 

Why Does This Happen?

The problem with AI at its current stage of evolution is that it largely has to be trained on data. If the data fed to the system is biased, the system itself will learn that bias. An apt example is the gender bias in the tech industry: although more women are entering the sector, it remains largely male-dominated. So, as with Amazon, if an AI hiring tool is trained on data in which the candidates are predominantly male, it is highly likely to show gender bias. The same applies to racial bias. 
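To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The data is synthetic and the features are invented; this is not Amazon’s actual system, just an illustration of how bias hidden in historical decisions gets absorbed by a model trained on them.

```python
# Hypothetical sketch: a classifier trained on skewed historical hiring
# decisions learns to penalise a proxy feature. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

skill = rng.normal(size=n)          # a genuine, job-relevant signal
proxy = rng.integers(0, 2, size=n)  # 1 if the resume contains a gender
                                    # proxy, e.g. "women's club"

# Historical labels: driven mostly by skill, but past human reviewers
# also penalised the proxy group -- the bias is baked into the labels.
hired = (skill - 0.8 * proxy + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The learned weight on the proxy feature comes out clearly negative:
# the model has reproduced the historical bias rather than merit.
print(dict(zip(["skill", "proxy"], model.coef_[0])))
```

Note that nothing in the training code is itself “biased”; the skew lives entirely in the historical labels, which is exactly what makes this failure mode so easy to miss.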

Another problem is that targeted job advertisements also tend to be biased. If a posting doesn’t reach the right candidates in the first place, biases persist downstream in recruitment. Additionally, job descriptions themselves are often gender-biased: if a job sounds too “macho”, women are unlikely to apply. 

Is AI Bias A Major Concern?

As AI adoption in recruitment continues to grow, bias is a legitimate concern. However, AI also has some inherent traits that can reduce bias, at least to some extent. The most important one is that AI-based tools are simply efficient. Consider the task of scanning through all the applications an organisation receives. An AI tool can perform this faster and more consistently than the hasty, often biased screening techniques human recruiters currently use. According to an article in Harvard Business Review, companies receive more than 250 applicants for a single open role. Manually handling every single application is simply not practical, so recruiters currently review only 10-20% of the applications, filtering by college, employee referral programmes and the like. But this screening method drastically reduces the diversity of the candidate pool. With AI, every candidate’s application would at least be scanned. 

Of course, the fact that every application is scanned wouldn’t matter if the system itself is biased. However, this can be remedied, too. One approach is to control the data the program uses: it can be set to exclude gender, ethnicity and other irrelevant information that could lead to bias. Such a step might, at the very least, have mitigated problems like the one Amazon faced. 
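As a rough illustration, a screening pipeline might strip protected fields before the model ever sees them. This is a hypothetical sketch, not any vendor’s actual implementation; the column names are invented for illustration.

```python
# Hypothetical sketch: remove protected attributes from applicant data
# before it reaches a screening model. Column names are invented.
import pandas as pd

PROTECTED = ["gender", "ethnicity", "date_of_birth", "marital_status"]

def prepare_features(applications: pd.DataFrame) -> pd.DataFrame:
    """Return applicant data with protected fields removed, so the
    screening model never sees them."""
    return applications.drop(columns=PROTECTED, errors="ignore")
```

The caveat, as the Amazon case shows, is that dropping explicit fields is not enough on its own: correlated proxies such as college names or club memberships can leak the same information, so a tool’s outputs still need to be audited for bias.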

The bottom line is that although AI in hiring can be just as biased as the humans who build and train it, bias in AI can be identified and corrected in a way that bias in humans cannot. If conscious steps are taken to ensure that AI recruitment tools are not biased, they could go a long way towards eliminating bias in the initial stages of recruiting.
