Are you using Artificial Intelligence (AI) for recruitment?
Artificial Intelligence (AI) is starting to play a key role in recruitment. According to a study by the UK Government, 48% of UK recruitment agencies have adopted some form of AI technology.
The recruitment process, historically reliant on human judgment and manual procedures, is being transformed by AI tools. The use of these AI tools in processes such as sourcing, screening, and selection can offer huge benefits and cost savings to employers. However, these AI tools also create many risks for candidates and organisations including compromises of information rights and potentially biased and discriminatory outcomes.
Many of the AI tools used for recruitment purposes, such as screening and selection tools, will qualify as high-risk: their use is likely to amount to high-risk processing under the UK and EU GDPR, and they are classed as high-risk AI systems under the EU AI Act. This is because AI systems can perpetuate existing biases, cause digital exclusion and create discriminatory job advertising and targeting.
Screening tools are used to score candidate competencies and skills based on written applications and CVs, predict a candidate’s ‘interest’ in the role and predict the likelihood of a candidate being successful. Selection tools can assess a candidate’s skills and fit for a role based on psychometric assessments, written responses to interview questions and transcripts of in-person or video interviews. Selection tools can also evaluate a candidate’s language, tone and contributions in video interviews to predict their personality type.
Regulatory Focus
Given the intrusive nature of these AI systems, regulators in the UK and beyond have begun to look closely at their use. They have, so far, provided gentle reminders and guidance to encourage and support compliance by organisations using recruitment AI tools.
In 2024, the Information Commissioner’s Office (ICO) carried out consensual audits with developers and providers of AI-powered sourcing, screening and selection tools used in recruitment. Key findings from the ICO review included the following:
- Discriminatory outcomes identified where search functionality has allowed recruiters to filter out candidates with certain protected characteristics
- Existing tools filtered, estimated or inferred people’s gender, ethnicity and other characteristics from their job application or name, rather than asking candidates directly. Such inferred information may not be accurate enough to monitor bias effectively or to satisfy the GDPR and discrimination laws
- Many providers were monitoring the accuracy and bias of their AI tools, but the testing itself lacked accuracy
- Some AI tools collected more personal information than necessary and retained it indefinitely to build large databases
- There was a lack of transparency in how information is processed and how decisions are made
- Recruiters and candidates were often unaware that information was repurposed by AI developers
- Many AI providers incorrectly defined themselves as processors rather than controllers. They passed responsibility for data protection compliance on to clients and did not comply with data protection principles
- The use of AI recruitment tools was often based on vague or unclear contracts, leaving recruiters in the dark about roles, responsibilities and obligations, both towards the AI providers and directly towards any affected data subjects.
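To illustrate the kind of bias monitoring referred to in the findings above, the sketch below computes selection rates by demographic group and applies the widely used "four-fifths rule" (a group's selection rate falling below 80% of the highest group's rate is a common indicator of possible adverse impact). This is a minimal, hypothetical example with illustrative data and function names; real monitoring must rely on accurate, directly collected information and appropriate legal analysis.

```python
# Hypothetical sketch of adverse-impact monitoring for a screening tool,
# using the "four-fifths rule" (selection-rate ratio). All data, group
# labels and function names here are illustrative only.

from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected) pairs; returns rate per group."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag groups whose selection rate is below 80% of the highest rate."""
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Illustrative outcomes: group A selected 40/100, group B selected 20/100.
outcomes = [("A", True)] * 40 + [("A", False)] * 60 \
         + [("B", True)] * 20 + [("B", False)] * 80
rates = selection_rates(outcomes)   # A: 0.40, B: 0.20
flags = four_fifths_check(rates)    # B is flagged: 0.20 / 0.40 = 0.5 < 0.8
```

A check like this is only meaningful if the group labels are accurate; as the ICO found, characteristics inferred from names or applications may not be reliable enough to support it.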
AI recruitment tools: actions for users
Any organisation using AI recruitment tools needs to ensure it is meeting regulatory expectations.
You should consider the potential risks posed by these AI systems and identify appropriate assurance mechanisms both before and during the procurement process. This will help ensure you comply with applicable privacy, AI and discrimination laws and regulations. Some actions you can take include:
- Review existing AI-powered recruitment systems and consider the potential risks posed by these systems and how such risks could be remediated
- Understand the underlying function of the AI tools and address any gaps in transparency and accountability
- Apply adequate procurement checks to AI procurement
- Seek assurance from your AI providers that their AI tools are trained on data sets that are as diverse and representative as possible
- Maintain a high degree of human oversight.
If you have any queries or would like further information, please visit our data protection services section or contact Christopher Beveridge.