TECHNOLOGY IN TALENT ACQUISITION: CAN WE REALLY FIND A GOOD BALANCE?

Reading time: 6 min

Technological advancements have been enjoying a lot of press since the major AI breakthroughs of 2023 and 2024, and organizations the world over are seeking clever new ways to turn them in their favor. There’s no question that we have seen some absolutely astonishing achievements in technology, but a lot of people still feel wary about letting it handle big, important decisions. Should tech actually be managing things like hiring decisions? And what are the ethics behind this?

We’re going to take a look at the pros and cons of this, exploring things like reduced costs, access to diverse talent pools, and the question of fairness.

It cuts costs

Hiring is an astonishingly expensive business! In the UK, for instance, the Chartered Institute of Personnel and Development estimates that the average cost to hire somebody is £6,125. The expenses pile up quickly, from advertising jobs to the many work hours dedicated to the hiring process: reviewing incoming applications, assessing potential hires, interviewing, consulting, performing further interviews, and finally reaching decisions. It’s not an easy process!

Of course, AI looks like a smart solution here, and it is already being used by many companies. It’s estimated that 3 in 10 UK employers are implementing AI in their recruitment processes, and that it can reduce the average cost of hiring a candidate by as much as 71%. It appeals to employers because it cuts through swathes of applications and narrows the field to a select few; at that point, humans within the organization can step in, with a lot of time and money already saved.
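To make that workflow concrete, here’s a minimal sketch of the kind of screening pipeline such tools automate: score each application against a handful of criteria, shortlist the top few, and hand those to human reviewers. Everything here is hypothetical and invented for illustration; the field names, weights, and keyword criteria are not from any real product, and real systems use far more sophisticated (and more opaque) models.

```python
# A hypothetical, deliberately simple screening pipeline: score applications,
# shortlist the top few, and pass them to human reviewers. All fields, weights,
# and criteria below are invented for illustration.

REQUIRED_SKILLS = {"python", "sql", "data analysis"}  # hypothetical role criteria

def score(application: dict) -> float:
    """Score a candidate on skill overlap and (capped) years of experience."""
    skills = {s.lower() for s in application["skills"]}
    skill_match = len(skills & REQUIRED_SKILLS) / len(REQUIRED_SKILLS)
    experience = min(application["years_experience"], 10) / 10  # cap at 10 years
    return 0.7 * skill_match + 0.3 * experience

def shortlist(applications: list[dict], top_k: int = 5) -> list[dict]:
    """Return the top_k highest-scoring applications for human review."""
    return sorted(applications, key=score, reverse=True)[:top_k]

applications = [
    {"name": "A", "skills": ["Python", "SQL"], "years_experience": 4},
    {"name": "B", "skills": ["Java"], "years_experience": 12},
    {"name": "C", "skills": ["Python", "SQL", "Data analysis"], "years_experience": 2},
]
for candidate in shortlist(applications, top_k=2):
    print(candidate["name"], round(score(candidate), 2))  # C 0.76, A 0.59
```

Note the design choice that matters for the rest of this article: whatever the scoring function rewards, the shortlist amplifies, so any bias baked into the criteria is applied to every single applicant automatically.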

So certainly, cost-cutting makes AI an attractive measure, but what else might AI help with? Let’s look at diversity next.

Could it increase diversity?

A lot of people believe that using AI in the hiring process has the potential to increase diversity. Because it doesn’t carry the same unconscious biases that humans do, the thinking goes, it has the potential to be fairer. However, we need to tread very carefully here and not make assumptions about AI that aren’t necessarily true.

It may seem that an AI is less likely to be biased because it doesn’t think like a human does. It doesn’t respond to things like a shared school or a similar first name or the same skin color or religion because it doesn’t need to relate to people. That seems like a great way to reduce bias in our hiring systems, but it comes with a big flaw. AIs are programmed by people and they make decisions based on data that has been generated by people.

That means that, in actual fact, there’s a big risk of AIs amplifying existing biases. A 2024 study found that human biases feed into AI systems, and that the resulting bias in the AI’s output is then very likely to increase the bias in the humans who use it. The researchers found that people who interact with biased AIs become more likely to absorb that bias themselves, perpetuating harmful views and ideas. This could result in people underestimating the performance of certain nationalities, for example. It has the potential to be enormously harmful in a hiring world where the odds are already stacked against minority candidates.

Indeed, researchers in this space admit to being “shocked” by the “magnitude” of bias in many of today’s AI systems, even those programmed to help their users rather than assess them. In that study, the AI’s assumptions about ethnicity and ability were highly concerning, highlighting just how deep these biases may run and how harmful they could be. If such an AI is used in a screening process, the humans behind it may never even realize what’s going on.

That doesn’t mean that we can’t use AI in our hiring processes, however; it just means humans need to be heavily involved and we need to be vigilant about signs of bias. So, what might this look like?

Anonymization could be the answer

An AI can only act in a biased way if the information it is fed allows it to do so, meaning that the more we can anonymize the hiring process, the easier it becomes to eliminate bias from the system. For example, if an organization notices that its hiring AI is biased against Mexican candidates, it might decide to remove names and other potentially ethnicity-identifying details from resumes before feeding them into the system. This could significantly reduce the risk of bias in hiring decisions.

If organizations can effectively anonymize candidate information, this approach has the potential to increase diversity in hiring. By preventing the AI from accessing factors like age, race, religion, or other personal identifiers, organizations can help ensure that hiring decisions are based purely on skills and experience.
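As a simple illustration of what that pre-screening step might look like, here is a minimal redaction sketch. The field names and patterns are hypothetical and deliberately naive, and real-world PII detection is considerably harder; this is a sketch of the idea, not a production implementation.

```python
import re

# A minimal, hypothetical sketch of redacting identifying details from a
# resume before it reaches a screening model. The field names and regex
# patterns are deliberately naive and for illustration only.

FIELDS_TO_DROP = {"name", "date_of_birth", "nationality", "photo_url"}

def anonymize(resume: dict) -> dict:
    """Drop directly identifying fields and scrub emails/phones from free text."""
    cleaned = {k: v for k, v in resume.items() if k not in FIELDS_TO_DROP}
    text = cleaned.get("summary", "")
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)  # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)    # phone numbers
    cleaned["summary"] = text
    return cleaned

resume = {
    "name": "Jane Doe",
    "nationality": "Mexican",
    "summary": "Data analyst, reachable at jane@example.com or +44 20 7946 0000.",
    "skills": ["Python", "SQL"],
}
print(anonymize(resume))
# {'summary': 'Data analyst, reachable at [EMAIL] or [PHONE].', 'skills': ['Python', 'SQL']}
```

Even a simple pass like this shows why the approach is fragile: proxies for background (a postcode, a university name, a gap in employment dates) survive redaction easily, which is exactly why the auditing described below still matters.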

However, fully removing personal information from resumes can be challenging, and biases may go undetected for some time. Additionally, some hiring cultures do not prioritize anonymized practices, or rely on network-based recruitment in which personal connections and background play a significant role; this can make it even more difficult to fully eliminate bias in certain contexts. That’s why it’s critical for organizations to regularly audit their hiring processes and AI systems, continuously evaluating whether the technology they trust to remove bias is actually achieving that goal or inadvertently reinforcing discrimination.
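What might such an audit look like in practice? One common check, offered here as just one of many possible approaches, is to compare selection rates across demographic groups against the “four-fifths rule” from US employment guidance: a group whose selection rate falls below 80% of the highest group’s rate is flagged for review. The numbers and group labels below are invented for illustration.

```python
# A minimal sketch of one common bias audit: comparing selection rates across
# groups using the "four-fifths rule" (a group's rate below 80% of the highest
# group's rate suggests possible adverse impact). All figures are invented.

outcomes = {
    # group: (candidates the AI screened in, total candidates from that group)
    "group_a": (45, 100),
    "group_b": (28, 100),
    "group_c": (40, 80),
}

rates = {group: selected / total for group, (selected, total) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} [{flag}]")

# group_a: selection rate 45%, impact ratio 0.90 [ok]
# group_b: selection rate 28%, impact ratio 0.56 [REVIEW]
# group_c: selection rate 50%, impact ratio 1.00 [ok]
```

A failing ratio doesn’t prove discrimination on its own, but it tells the humans behind the system exactly where to start looking.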

Organizations that do this have the potential to increase their effectiveness and strengthen their reputations; they will be able to hire far more diverse teams and benefit from the breadth of perspectives their candidates bring. There’s no question that the cost-cutting nature of AI has major appeal, and it’s a tool with a lot of potential for reducing bias, but as it currently stands, it is likely introducing huge problems into already deeply flawed hiring processes.

As a result, organizations should be extremely wary of introducing AI at any stage of their hiring, whether in the initial screening or in later steps. While this smart technology seems to hold a lot of promise, its “smartness” is unquestionably hampered by its inability to understand moral dilemmas or to be truly fair when its training data is biased.

Time to get hands-on!

To ensure job descriptions are welcoming and free from gender-coded language, try using tools like Gender Decoder. Copy and paste a job posting into the tool and analyze the results. Does it contain subtle biases that might discourage certain groups from applying? If so, revise the wording to be more neutral and inclusive. This exercise is especially important when using AI-driven hiring tools, as biased job descriptions can influence who applies and how AI ranks candidates.
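If you’d like to peek under the hood of this exercise, here’s a minimal sketch of how a gender-coded-language check works. The word lists below are a tiny, illustrative subset of the kind of gender-coded terms identified in the research behind tools like Gender Decoder; they are not the tool’s actual lists, and real checkers also match word stems and variants.

```python
import re

# A minimal sketch of a gender-coded-language check, in the spirit of tools
# like Gender Decoder. These word lists are a tiny illustrative subset, not
# the tool's actual lists.

MASCULINE_CODED = {"ambitious", "assertive", "competitive", "decisive", "dominant"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "empathetic", "loyal"}

def analyze(job_posting: str) -> dict:
    """Count masculine- and feminine-coded words in a job posting."""
    words = re.findall(r"[a-z]+", job_posting.lower())
    return {
        "masculine": sorted(w for w in words if w in MASCULINE_CODED),
        "feminine": sorted(w for w in words if w in FEMININE_CODED),
    }

posting = "We want a dominant, competitive self-starter who is also a supportive team player."
print(analyze(posting))
# {'masculine': ['competitive', 'dominant'], 'feminine': ['supportive']}
```

If the masculine-coded column dominates, rewording those terms (say, “competitive” to “motivated”) is exactly the kind of revision the exercise above asks for.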