McDonough School of Business
Research and Insights

Office Hours: Is AI the Future of Talent Acquisition?

Artificial intelligence is reshaping how companies hire, as algorithms increasingly take on tasks such as sourcing, screening, and even interviewing job applicants. While AI may provide faster decisions, greater efficiency, and fewer biases, it also raises questions about transparency, fairness, and privacy. 

Jason Schloetzer, associate professor and area chair of accounting and business law at Georgetown McDonough, explains how AI is transforming hiring, what it means for job seekers and employers, and how frameworks for oversight may help ensure accountability.


Why are organizations increasingly turning to algorithmic hiring systems? 

Beyond cost savings and scalability, organizations believe that algorithmic hiring systems can enhance the consistency of outcomes by standardizing the decisions of multiple individuals across an organization who screen and interview candidates. There is concern that human recruiters miss relevant job applicant experience, fail to ask the right questions during job interviews, and perform inconsistently due to individual biases. There is a belief that AI-based hiring tools promote consistency across job applications, particularly in the screening process, by, for instance, reducing the likelihood that different HR employees use different bright lines (e.g., a degree from an elite university, a minimum number of years of work experience) to screen out otherwise suitable candidates. 

Consistency via algorithmic hiring systems can help organizations identify job applicants who may be overlooked due to persistent biases in the human-led screening and interviewing processes. The idea here is that consistent job applicant processing allows organizations to reduce human judgment in hiring, potentially enhancing transparency and fairness and increasing workplace diversity by limiting human bias.

What are some of the risks associated with using AI in the hiring process?

As my colleague, Kyoko Yoshinaga, and I write in our related research, one of the risks that caught my attention is the potential for automating the collection of information beyond job application materials. For instance, a recent survey notes that 71% of over 1,000 hiring managers report using social media sites to research job candidates, with 55% reporting they have found content that has caused them not to hire an applicant. Algorithmic hiring systems can automate the collection and integration of social media data with job applicants’ submitted materials. This situation raises privacy concerns regarding the collection of job applicant data, as it crosses into the personal domain, including the intentional or unintentional collection of posts, images, and names of friends and family from the applicant’s non-work-related social network. As was the case in Japan with Recruit Career Company, there is also the risk that a third party would sell data, such as job applicants’ web-browsing histories of jobs posted by other employers, to help organizations assemble information about the likelihood of applicants declining a job offer. Imagine not receiving a job offer because an algorithm predicts that you won’t accept it based on your LinkedIn browsing history!

How can companies ensure fairness and transparency with AI in hiring?

I’m a proponent of organizations 1) creating guidelines on the development and use of algorithms in the hiring process and providing publicly accessible documents on how the organization develops and uses algorithmic hiring systems; 2) ensuring centralized control over algorithmic hiring systems by forming an interdisciplinary team — an “Algorithmic Hiring Ethics Committee” — with authority to oversee algorithm use across the entire hiring process; and 3) establishing ongoing performance monitoring of these systems and holding employees accountable for results.

What about policymakers? How might they contribute to this conversation?

It seems natural for policymakers to develop rules and regulations that protect job applicants and current and former employees from data privacy risks, algorithmic bias, and the risks associated with employee surveillance and monitoring. It would be interesting to see some action around collecting, analyzing, and reporting failures and incidents caused by algorithmic hiring systems, or even establishing an AI Oversight and Coordination Agency that oversees algorithmic hiring systems and requires routine audits and reporting on system performance. Given the potential labor market implications at scale, some level of oversight rather than self-regulation is warranted.

What is the future of recruitment with AI?

Recruitment systems will be automated end to end, from sourcing and screening through interviewing and onboarding. Every stage could operate through AI systems that predict applicant performance, corporate culture fit, and retention probabilities based on extensive data on past and current employee performance. AI-driven hiring will accelerate the shift from degree-based employment to skills-based employment, as these tools enhance our ability to match job requirements with applicants’ talents. Recruiters will become “AI supervisors” who oversee this automation, adding the human touch to a primarily machine-led process.
