The ICO's updated AI and Data Protection guidance: emphasising fairness and transparency

As Artificial Intelligence (AI) technologies continue to advance and businesses increasingly adopt AI solutions, the need to address the data protection concerns associated with AI has become more pressing than ever. On 15 March 2023, the Information Commissioner's Office (ICO) responded to the UK industry's demand for clarity on AI fairness requirements by updating its AI and data protection guidance.

The updated guidance reflects the ICO's commitment to facilitating the adoption of new technologies while protecting people and vulnerable groups. It reorganises existing content into new chapters, introduces fresh material such as "Transparency in AI" and "Fairness in the AI Lifecycle", and addresses key concerns in AI implementation.

The ICO's current focus areas include fairness in AI, dark patterns, AI-as-a-service, AI and recommender systems, biometric data and technologies, and privacy and confidentiality in explainable AI. The guidance provides a clear methodology for auditing AI applications to ensure they process personal data fairly, lawfully and transparently.

Below we discuss the key changes to the guidance and important points of consideration when implementing and using AI solutions.

Transparency in AI

The ICO introduces a separate chapter on transparency in AI, stressing the importance of organisations being transparent about how they process personal data in AI systems. To adhere to the transparency principle, organisations must openly communicate their use of AI-enabled decisions, including when and why they use them, and actively inform individuals about AI-enabled decisions that affect them.

The guidance underscores that if organisations obtain data directly from individuals, they must supply privacy information at the point of collection before using the data to train a model or applying the model to those individuals. Although UK data protection law is technology-neutral and does not specifically reference AI or associated technologies like machine learning, it places significant emphasis on the large-scale automated processing of personal data.

Not all AI solutions involve solely automated processing that produces legal or similarly significant effects. Those that do attract additional requirements under Article 22 of the UK GDPR, including the right of individuals to receive meaningful information about the logic involved and about the significance and envisaged consequences of the processing.

To adhere to these requirements, organisations must develop a transparent framework for explaining AI-driven processes, services and decisions, improving accountability. The ICO has also published its “Explaining Decisions Made with AI” guidance, which offers practical advice on informing individuals about how their data is used throughout the AI lifecycle.

Lawfulness in AI

The revised ICO guidance introduces new chapters on the use of inferences, affinity groups and special category data in AI systems.

The ICO emphasises that AI-generated inferences may themselves constitute special category data if an organisation can infer relevant information about an individual, intends to do so, or intends to treat someone differently on the basis of the inference (even where the inference is not made with a reasonable degree of certainty). Organisations must therefore remain vigilant to ensure they comply with the more onerous requirements that apply to the processing of special category data.

Notably, the guidance clarifies that if an AI system generates inferences about a group (forming affinity groups) and associates these with a specific individual, data protection law can apply at various processing stages: because personal data were used to train a model (development stage), or because the model's results were applied to individuals who were not part of the training data set (deployment stage).

Fairness in AI

A notable addition to the guidance is the chapter on 'Fairness in AI', which discusses data protection's approach to fairness, how it applies to AI and the key legal provisions to consider. It explains the distinctions between fairness, algorithmic fairness, bias and discrimination. It also offers high-level considerations for assessing fairness and its inherent trade-offs, and examines the connection between solely automated decision-making and the relevant safeguards under Article 22 of the UK GDPR (where these apply).

The guidance also introduces an annex, 'Fairness in the AI Lifecycle', which examines data protection fairness considerations throughout the AI lifecycle, from problem formulation to decommissioning. It emphasises how fundamental aspects of AI influence fairness and outlines potential sources of bias along with mitigation measures. The annex also highlights the differences between fairness in data protection law and algorithmic fairness, encouraging organisations to adopt a holistic approach and consider the context in which decision-making takes place.
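
To make that distinction concrete, the sketch below computes one narrow notion of algorithmic fairness, the demographic parity gap, over a set of hypothetical decisions. The groups, outcomes and figures are invented for illustration and are not drawn from the ICO guidance; a small gap on a single metric like this does not by itself establish fairness in the data protection sense, which remains a broader, context-dependent assessment.

```python
# Illustrative sketch: one narrow notion of "algorithmic fairness"
# (demographic parity) measured over a model's decisions.
# The groups, decisions and figures below are hypothetical; fairness
# under UK data protection law is a broader, contextual assessment
# and is not reducible to any single metric like this.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-decision rate for each group.

    decisions: iterable of (group, outcome) pairs, where outcome is
    1 for a favourable decision (e.g. loan approved) and 0 otherwise.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical decisions for two affinity groups, A and B.
sample = ([("A", 1)] * 80 + [("A", 0)] * 20 +
          [("B", 1)] * 55 + [("B", 0)] * 45)

print(f"Selection rates: {selection_rates(sample)}")
print(f"Demographic parity gap: {demographic_parity_gap(sample):.2f}")
# Group A is selected at 0.80, group B at 0.55, so the gap is 0.25.
```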

Why is this significant and what does it mean for me?

The proliferation and rapid evolution of AI tools give rise to ever more complex privacy challenges. Indeed, AI remains a strategic priority for the ICO, as highlighted in the ICO25 strategic plan, which seeks to address issues such as AI-driven discrimination. AI is expected to attract even greater attention in light of the UK government's AI White Paper, 'A Pro-Innovation Approach to AI Regulation', published in March 2023. In line with the government's stated goal of the UK becoming ‘a science and technology superpower by 2030’, the White Paper establishes a framework for identifying and managing AI-related risks so as to promote responsible innovation and sustainable economic growth in AI.

In light of these developments, the revised ICO guidance will be an important starting point in the data protection compliance journey for UK organisations that rely on AI solutions. It highlights the significance of accountability measures and of a risk-based approach for organisations using AI. The guidance will be of interest to UK organisations regardless of whether their AI tools involve solely automated decision-making, because the majority of the foregoing requirements continue to apply to AI solutions that involve meaningful human intervention, albeit potentially to a lesser degree (e.g., transparency requirements).

Balancing trade-offs in AI depends on the use case and context, and organisations must remember that they remain accountable for their choices, ensuring their efforts are proportionate to the risks the AI system poses. As highlighted in the ICO's AI and data protection guidance, organisations must adopt a risk-based approach when developing and deploying AI, evaluating whether AI is necessary and whether more privacy-preserving alternatives would be equally effective. A poor understanding of the legal requirements and frameworks applicable to AI can negatively affect business operations, potentially leading to security breaches, monetary losses or damage to your organisation's reputation.

If you have any queries or would like further information, please visit our data protection services section or contact Christopher Beveridge.
