Can We Entrust Decisions About An Organization's Most Valuable Asset—its People—to AI?

The use of AI in HR processes is not new. Over the past decade, advances in HR technology have transformed how HR professionals operate, replacing manual tasks such as traveling for interviews, collecting CVs, and administering entry exams with automated applicant collection, video interviews, and digital candidate pools. Fast-evolving technology has automated mundane work and streamlined processes such as online assessments, video interviews, and onboarding. With the emergence of ChatGPT and similar tools, HR professionals are increasingly tempted to rely on AI, particularly for hiring decisions.

If you're using LinkedIn Recruiter, you may already be familiar with the algorithmic ranking of candidates based on their fit for job postings. However, despite these advancements, such rankings are still far from mature enough to carry hiring decisions on their own.

While AI has impressed us with its ability to speed up tasks such as essay writing, translation, and knowledge gathering, there is a significant question to consider when it comes to HR management: Can we truly entrust decisions about an organization's most valuable asset—its people—to AI?

ChatGPT can undoubtedly be a useful tool for information gathering and assisting with certain tasks. However, there are several drawbacks to using it in the assessment of candidates during the hiring process:

  • Lack of context: ChatGPT lacks access to real-time information about candidates, such as their current employment status, recent achievements, or any updates since its last training data. This absence of context can lead to inaccurate or outdated assessments.
  • Biases and limitations: Language models like ChatGPT are trained on large datasets that often contain biases. These biases can skew responses or produce unfair evaluations of candidates based on factors such as gender, race, or socioeconomic background. If the historical data used for training is itself biased, an AI system may perpetuate the underrepresentation of certain groups, undermining diversity and inclusion within an organization. ChatGPT may also lack a comprehensive understanding of specific industries, job roles, or cultural nuances, which limits the accuracy of its assessments.
  • Subjectivity of assessment: Assessing candidates for a role requires considering a combination of qualifications, skills, experience, and cultural fit. While ChatGPT can provide general information, it may not possess the capability to make subjective judgments or effectively assess soft skills.
  • Ethical considerations: Relying on AI as the sole basis for candidate assessment raises ethical concerns. A hiring process must be fair, transparent, and free from discrimination, and delegating the decision entirely to an AI model may not align with those principles.
  • Lack of transparency: AI algorithms often operate as black boxes, making the decision-making process difficult to explain or understand for humans. This lack of transparency can undermine trust and accountability, as it becomes challenging to identify how the AI system arrived at its assessment or the specific factors it considered.
  • Over-reliance on technical skills: AI assessments tend to prioritize technical skills and hard qualifications over soft skills, cultural fit, and emotional intelligence. Soft skills, such as communication, teamwork, adaptability, and problem-solving abilities, are crucial for many roles and may not be accurately assessed by AI models alone.

A notable example that illustrates the risks involved is Amazon's failed AI experiment, scrapped in 2018. The company developed an AI system to automate the screening of job applicants, but it exhibited bias against women, penalizing resumes that included terms like "women's" while favoring male candidates. This bias was a result of the AI being trained on historical resumes submitted to Amazon, which reflected the gender imbalance in the tech industry.

Therefore, while ChatGPT can be a helpful tool in the hiring process, it is best used as a supplementary resource rather than the primary method for assessing candidates. Human judgment, interviews, reference checks, and other established assessment methods should be combined with AI tools to make well-informed decisions. Otherwise, there is a risk of repeating the mistakes made by Amazon.
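The mechanism behind this failure is worth making concrete. The following toy sketch (with invented data and a deliberately naive word-weighting rule, not Amazon's actual system) shows how a model trained on skewed historical outcomes learns to penalize words associated with an underrepresented group rather than anything about job fitness:

```python
from collections import Counter

# Invented historical data: tokenized resumes with past hiring outcomes.
# The skew below mimics the kind of imbalance Amazon's system learned from.
historical = [
    (["led", "engineering", "team"], "hired"),
    (["built", "backend", "systems"], "hired"),
    (["captain", "chess", "club"], "hired"),
    (["women's", "chess", "club", "captain"], "rejected"),
    (["women's", "coding", "society", "lead"], "rejected"),
]

def word_weights(data):
    """Weight each word by how often it appears in hired vs rejected resumes."""
    hired, rejected = Counter(), Counter()
    for words, outcome in data:
        (hired if outcome == "hired" else rejected).update(set(words))
    vocab = set(hired) | set(rejected)
    return {w: hired[w] - rejected[w] for w in vocab}

def score(resume, weights):
    """Naive resume score: the sum of learned word weights."""
    return sum(weights.get(w, 0) for w in resume)

weights = word_weights(historical)

# "women's" only ever appears in rejected resumes, so it receives a
# negative weight: the model has learned the historical bias, not merit.
print(weights["women's"])  # -2

# Two otherwise identical resumes are scored differently purely because
# one contains the word "women's".
print(score(["engineering", "team", "captain"], weights))
print(score(["women's", "engineering", "team", "captain"], weights))
```

Nothing in the scoring rule mentions gender, yet the second resume loses points for a single group-associated word. Real screening models are far more complex, but the failure mode is the same: biased outcomes in, biased scores out.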

Several studies have revealed gender bias in facial analysis during video interviews and racial bias in resume screening. To mitigate these dangers, it is important to use AI tools as supplements to human judgment rather than relying on them as the sole basis for candidate assessment.

AI systems have their limitations: they are constrained by the data they were trained on, which impairs their ability to make subjective judgments or interpret complex situations accurately, and thus they are unlikely to replace HR professionals entirely. Human judgment, intuition, and the ability to assess soft skills remain critical in evaluating candidates. AI can enhance HR processes by automating repetitive tasks, analyzing data, and providing insights, but the uniquely human capabilities HR professionals offer, such as empathy, relationship building, contextual understanding, ethical decision-making, and strategic thinking, make them indispensable in organizations. Combining AI technologies with human expertise allows for a more effective and well-rounded HR function.