Artificial Intelligence in recruitment: data protection implications

What are the data protection implications that should be factored into an AI recruitment project? Transparency and accountability are critical features of the General Data Protection Regulation (GDPR) and are likely to be highly relevant to AI. This article looks at one way in which artificial intelligence can be used by HR.

AI and HR

It is tempting to point out that the H in HR stands for human, and that this might therefore seem an area unsuitable for machines. Dig a little deeper, however, and you can see how automated decision-making has the potential not only to create efficiencies, but also to enhance the recruitment process.

Impact on bias - the positives

Automated decision-making can be objective and can remove unconscious bias in areas like recruitment and shortlisting. In recent years, as knowledge and understanding of overt discriminatory practices have improved, the focus has shifted to eradicating unconscious bias in the minds of recruiters.

Put simply, unconscious bias is where we make decisions using mental short cuts to identify with people who are like us or share our values. These unconscious thoughts can lead to negative decisions, applying stereotypical views and attitudes that shape our understanding and actions without our awareness; for example, assuming that a parent may not wish to travel for work and discarding their application on that basis.

AI models can be used at the recruitment stage to deliver decisions free from such unconscious bias. For large employers looking to make efficiencies, this type of talent acquisition software can also help by scanning, reading and evaluating a large number of applications very quickly.

Potential side effects of input bias

Like any emerging technology, AI can have unintended consequences. One of these is the scope for input bias to creep into an AI system. A recent UK Parliament Select Committee Report on Artificial Intelligence considered the possibility that the data fed into an AI system could itself be biased, as well as the scope for the algorithms to produce biased decisions. The report refers to AI used in the American criminal justice system to assess risk in sentencing, explaining that this system ‘commonly overestimated the recidivism risk of black defendants and underestimated that of white defendants.’ Employers therefore need to be alive to these issues, ask questions and carry out appropriate due diligence before introducing AI into their operations.
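The mechanism the Select Committee describes can be sketched in a few lines. A hypothetical screening model trained on historically skewed hiring records simply learns, and then reproduces, that skew; no step in the code is "biased" in itself, yet the output is. The data and the `score` function below are illustrative assumptions, not any real recruitment system.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (school attended, hired?).
# Past human bias favoured School A, so the training data is skewed.
history = [
    ("School A", True), ("School A", True), ("School A", True),
    ("School A", False),
    ("School B", True), ("School B", False),
    ("School B", False), ("School B", False),
]

# A naive "model": score new candidates by the historical hire rate
# of past applicants sharing the same feature value.
hires = defaultdict(int)
totals = defaultdict(int)
for school, hired in history:
    totals[school] += 1
    hires[school] += int(hired)

def score(school: str) -> float:
    return hires[school] / totals[school]

# The model faithfully reproduces the bias baked into its inputs:
print(score("School A"))  # 0.75
print(score("School B"))  # 0.25
```

Auditing the training data, rather than only the algorithm, is what catches this kind of problem, which is exactly the due diligence the report encourages.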

Data protection issues

Decisions made using automated decision-making have a variety of data protection implications, which have been amplified by the introduction of the GDPR. Some of the key issues employers should consider before adopting AI in their processes are discussed below:

  • At the start of any AI project, employers should consider whether a data protection impact assessment (DPIA) is required. Employers must carry out a DPIA where a type of processing is likely to result in a high risk to the rights and freedoms of individuals. A DPIA involves identifying privacy risks and considering what processing is necessary and proportionate. If algorithms are used, there should be transparency about how they are applied in order to demonstrate accountability.
  • If decisions are made about an individual at an automated level, this should be made clear in the data privacy notice issued to candidates. The transparency principle inherent in GDPR requires that individuals have the right to know how their personal data is processed.
  • The GDPR also sets out additional protections where a decision is based solely on automated processes which have a legal or similarly significant impact on a data subject. This is very likely to include a recruitment decision without any human input and the recitals to the GDPR that deal with automated decision making explicitly refer to e-recruiting practices. Individuals have the right for such decisions not to be taken solely based on automated means unless this is authorised by law, necessary for a contract, or where explicit consent was given. Even then, except where it is authorised by law, specific safeguards must be in place such as a mechanism for the individual to challenge the decision and to obtain human intervention. The lawful basis for processing personal data in this way needs to be considered and identified before proceeding with automated profiling.

None of these steps should stop employers embarking on the introduction of AI. In the new GDPR world we live in, we should all be baking privacy by design into workplace systems. Automated decision-making simply adds an additional GDPR layer to the mix.


Article originally posted on CMS.
