Seven Unseen Dangers of Using AI in Hiring

Artificial Intelligence (AI) is celebrated as a transformative technology in the hiring process. Vendors selling AI-powered applications describe the value propositions of increased productivity, greater objectivity, and data-driven decisions. Internally, employees are proposing new AI projects to improve work activities.

But using AI in hiring and other people-centric processes carries significant risks that are frequently overlooked. Here are seven of those risks, along with specific steps you can take to mitigate each one.

 

1. Over-Reliance on Data Over Human Insights

Risk:

Many business leaders believe AI’s data-driven decisions eliminate human bias, but AI-powered applications have been shown to perpetuate human bias on a large scale. This can lead to over-reliance on quantitative metrics, sidelining the qualitative insights humans bring to the hiring process. AI lacks an intuitive understanding of culture fit and the nuances of human potential.

Solution:

  • Balance AI candidate assessments with structured human judgment.
  • Use AI to handle preliminary screening, and ensure that final decisions involve humans who can best evaluate each candidate’s hard skills, soft skills, organizational fit, and potential beyond what is quantifiable.
  • Human instinct has a place in candidate assessment, but interviewers should back their judgments with data-grounded evidence.

 

2. Inherent Biases

Risk:

Despite the widespread belief that AI is objective, it is not. Algorithms can perpetuate and amplify biases contained in the historical hiring data and common practices used to train them, leading to biased outcomes and discrimination at scale.

Solution:

  • Implement periodic bias audits of your hiring processes and AI apps. Here’s a low-tech example: state government agencies embed an employment law attorney in the hiring process to identify and eliminate potential biases in job descriptions, interview questions and candidate assessments. This approach may feel cumbersome for most for-profit companies. One alternative is to perform sample post-hire audits of hiring processes to identify and mitigate inherent biases; a simple quantitative sketch of such an audit follows this list.
  • Validate that the data sets used to train and test the algorithms are diverse and inclusive.
  • Establish a “human-in-the-loop” feedback process where hiring outcomes are continuously monitored and adjusted to address any biases that emerge.
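
As a rough illustration, the sketch below (Python, with made-up group labels and counts) shows the kind of check a sample post-hire audit might run: comparing selection rates across applicant groups against the four-fifths rule. It is a simplified heuristic, not a full statistical or legal bias test.

# Hypothetical post-hire audit: compare selection rates across groups.
# Group labels and counts are illustrative, not real data.
applicants = {            # group -> (applicants, hires)
    "group_a": (200, 40),
    "group_b": (150, 15),
}

selection_rates = {g: hires / total for g, (total, hires) in applicants.items()}
highest = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest
    flag = "review" if impact_ratio < 0.8 else "ok"   # four-fifths threshold
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")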

 

3. Loss of Personal Connection

Risk:

Decades of recruitment and hiring automation have depersonalized the candidate experience, making interactions feel transactional and less human. This damages employer branding, erodes the company’s ability to attract qualified applicants and leaves candidates with a negative perception of the company, all of which make hiring more difficult.

Solution:

  • Enhance AI-driven processes with personalized human interactions at key steps throughout the recruiting, interviewing and onboarding processes.
  • Periodically have internal recruiters source and screen potential candidates without using AI, then compare the manual and AI-generated screening outputs. Comparing the two shortlists will reveal where human and AI biases diverge (see the sketch after this list).
  • Use automated systems to complement, not replace, personal interactions throughout the candidate’s recruitment journey.
  • Use systems to automate repetitive tasks, and do not automate activities best performed by a human. Embedding a human touch throughout your hiring process increases offer acceptance rates, reduces time-to-hire and saves money on recruiting costs.
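
As a minimal sketch of that manual-versus-AI comparison (in Python, with placeholder candidate IDs), the snippet below measures how much the two shortlists overlap and surfaces the candidates each side dropped; the real review is asking why each exclusion happened.

# Hypothetical shortlists; candidate IDs are placeholders.
manual_shortlist = {"c101", "c104", "c107", "c110"}      # built by recruiters
ai_shortlist = {"c101", "c102", "c107", "c111"}          # produced by the AI screen

agreed = manual_shortlist & ai_shortlist
dropped_by_ai = manual_shortlist - ai_shortlist          # kept by recruiters only
dropped_by_recruiters = ai_shortlist - manual_shortlist  # kept by the AI only

overlap = len(agreed) / len(manual_shortlist | ai_shortlist)
print(f"Shortlist agreement: {overlap:.0%}")
print("Dropped by the AI, kept by recruiters:", sorted(dropped_by_ai))
print("Dropped by recruiters, kept by the AI:", sorted(dropped_by_recruiters))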

 

4. Personal Privacy and Security Concerns

Risk:

AI systems require extensive data, which raises significant privacy concerns. The collection and storage of candidate information creates opportunities for data breaches and misuse, and hardly a day passes without another company reporting that sensitive personal data was accessed by unauthorized people, whether inside or outside the organization.

Solution:

  • Invest in robust data security technologies, awareness training and auditable data handling practices.
  • Communicate to candidates what measures are in place to protect their personal information.
  • Regularly review and update privacy policies to comply with changing legal regulations and best practices.

 

5. Resistance to Change

Risk:

AI adoption is not universally welcomed. Implementations of AI-powered apps are frequently met with significant resistance from employees and stakeholders who fear job loss or the devaluation of human roles.

Solution:

  • Involve employees early during the evaluation and implementation of AI projects that directly impact their work. Participation fosters acceptance and ownership.
  • Provide training about how AI will enhance rather than replace their work.
  • Highlight success stories where AI has enhanced human capabilities.

 

6. Reduced Diversity of Thought

Risk:

AI systems learn from historical data that favors candidates who fit a specific mold, thereby reducing diversity of thought and stifling innovation. Historical hiring data used to train an AI algorithm is likely embedded with unconscious biases and hidden discrimination against people with diverse thinking and perspectives. Diversity research has repeatedly shown that diverse teams are more productive, perform better and generate higher profits.

Solution:

  • Use AI as a complement to human decision-making, not a replacement or a crutch.
  • Encourage diverse hiring panels and use structured candidate evaluation formats that allow for a variety of perspectives.
  • Ensure your AI tools are tuned to promote diversity by including a wide range of variables in the decision-making process.

 

7. Disregard for Non-Traditional Career Paths

Risk:

AI-driven hiring systems tend to favor candidates with linear, traditional career paths. As a result, they overlook individuals with unconventional career progressions who could bring fresh perspectives, insights and skills to the organization.

Solution:

  • Train AI systems to recognize and value non-traditional career and educational paths by including diverse training data and expanding the algorithmic parameters used to screen candidate credentials.
  • Emphasize evidence-based candidate evaluation criteria that consider a variety of experiences and transferable skills (a simple scoring sketch follows this list).
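
One way to picture “expanding the algorithmic parameters” is to score candidates on the transferable skills their past roles imply rather than on exact title matches. The Python sketch below uses a hypothetical role-to-skill mapping and made-up roles; a production system would need a much richer skills taxonomy.

# Illustrative only: score candidates by transferable skills, not job titles.
ROLE_SKILLS = {
    "project manager": {"planning", "budgeting", "stakeholder communication"},
    "teacher": {"planning", "stakeholder communication", "coaching"},
    "military officer": {"planning", "budgeting", "leadership"},
}

def transferable_skill_score(candidate_roles, required_skills):
    """Fraction of required skills covered by the candidate's past roles,
    regardless of whether any title matches the open position."""
    covered = set()
    for role in candidate_roles:
        covered |= ROLE_SKILLS.get(role, set())
    return len(covered & required_skills) / len(required_skills)

required = {"planning", "budgeting", "stakeholder communication"}
print(transferable_skill_score(["teacher", "military officer"], required))  # 1.0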

 

Summary

AI in hiring promises productivity and savings but carries significant hidden risks. Over-relying on data and algorithmic decision-making minimizes human insights, introduces algorithmic biases and makes the candidate’s application experience impersonal.

AI also raises privacy concerns and employee worries about job security.

Solving for these risks requires:

  • Balancing AI with human judgment and personal communications,
  • Periodically auditing AI systems and hiring processes for biases,
  • Embedding person-to-person communications at key hiring process milestones, and
  • Working with cybersecurity teams to ensure robust data security.

Using AI effectively in hiring means integrating it at specific steps of the hiring process, in a manner that enhances human intelligence and preserves the humanity of the hiring experience.
