Candidates wary of recruiters using AI
Originally posted on The Horizons Tracker.
Since the widespread rollout of ChatGPT and other generative AI platforms, there has been interest in their applications across all areas of life, and HR and recruitment are no different. Research1 from NUS Business School explored how job seekers feel about the role AI might play in their applications.
The researchers surveyed around 1,000 people recruited via Mechanical Turk, each of whom took part in four scenario-based experiments designed to understand how they perceived the use of AI in the recruitment process.
Automatically assessed
The results suggest that most people felt AI-driven recruitment was neither particularly fair nor trustworthy, especially compared with human-led recruitment. Interestingly, this general trend held even when candidates were ultimately successful in their applications.
When the researchers delved into why this might be the case, the participants revealed that they didn’t think AI was effective at identifying the unique characteristics of each candidate, unlike human recruiters, whom they believed were better at spotting the things that set candidates apart.
The researchers believe their findings highlight that AI might not be able to deliver the fairer and less biased recruitment we hope for. Instead, they call for a balance between human and machine, with AI acting as a “co-pilot”.
“For example, algorithms can flag that the recruiter is not shortlisting enough women or people from a minority group. Algorithms can also flag the uniqueness of a candidate compared to other applicants,” they explain.
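To make the “co-pilot” idea concrete, the sketch below is one possible way such a check could work; it is not from the study, and the group labels, threshold, and data are illustrative assumptions. It flags any group whose share of a shortlist falls well below its share of the applicant pool.

```python
# Minimal sketch (not from the paper): a "co-pilot" check that flags
# demographic under-representation in a shortlist relative to the applicant pool.
# Group labels, the tolerance threshold, and the sample data are assumptions.

from collections import Counter

def representation_flags(applicants, shortlist, attribute, tolerance=0.10):
    """Return groups whose shortlist share falls short of their applicant-pool
    share by more than `tolerance` (e.g. 10 percentage points)."""
    pool_counts = Counter(a[attribute] for a in applicants)
    short_counts = Counter(a[attribute] for a in shortlist)
    flags = []
    for group, pool_n in pool_counts.items():
        pool_share = pool_n / len(applicants)
        short_share = short_counts.get(group, 0) / len(shortlist) if shortlist else 0.0
        if pool_share - short_share > tolerance:
            flags.append((group, pool_share, short_share))
    return flags

# Illustrative data: a 40/60 applicant pool, but a shortlist that skews heavily one way.
applicants = [{"gender": "female"}] * 40 + [{"gender": "male"}] * 60
shortlist = [{"gender": "female"}] * 2 + [{"gender": "male"}] * 8

for group, pool_share, short_share in representation_flags(applicants, shortlist, "gender"):
    print(f"Flag: '{group}' is {pool_share:.0%} of applicants but only {short_share:.0%} of the shortlist")
```

In this hypothetical example, the check would prompt the recruiter to review the shortlist rather than make the decision itself, in keeping with the “co-pilot” role the researchers describe.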
Article source: Candidates Wary Of Recruiters Using AI.
Header image source: Created by Bruce Boyes with Perchance AI Photo Generator.
Reference:
- Lavanchy, M., Reichert, P., Narayanan, J., & Savani, K. (2023). Applicants’ fairness perceptions of algorithm-driven hiring procedures. Journal of Business Ethics, 188(1), 125-150.