A Guide To The ICO's Approach To Regulating AI

Jun 10, 2024 | Articles

When it comes to technology generally, if you were ever in doubt about the truth of the saying "if it's free, you're the product", consider the recent announcement by Meta, the owner of Facebook and Instagram, to its European customers that from 26 June 2024, users' public posts, photos, captions, and messages will be used to train its AI. This came hot on the heels of the storm over OpenAI allegedly using a voice eerily similar to Scarlett Johansson's, despite her refusing the company permission to do so. These examples, alongside the AI arms race igniting Silicon Valley, have forced regulators to make serious moves to control this new technology. One watchdog standing between ordinary people and businesses on one side and Big Tech on the other will be the Information Commissioner's Office (ICO), which has just released Regulating AI: The ICO's Strategic Approach (the Strategy).

The Strategy details how the ICO is driving forward the principles contained in the 2023 AI regulation white paper and the Government’s 2024 guidance on implementing those principles.

What is the ICO’s current position on AI?

The ICO clearly takes a positive view of AI and its possibilities for improving life on Earth. However, it acknowledges that developing and using AI comes with risks, especially concerning children, healthcare, law enforcement, and education. AI can be biased, unfair, and lack transparency and accountability. Regarding risks within the ICO's remit, the Strategy confirms that the ICO already has the resources to manage them. It states that the risks relating to AI do not require "new, extensive, cross-cutting legislation, but appropriate resourcing of existing UK regulators and their empowerment to hold organisations to account".

The Strategy also points out that the ICO has been regulating AI for over a decade and already provides an array of guidance to assist organisations in applying data protection law to AI.

What are the key takeaways from the ICO's AI Strategy?

When it comes to AI, the ICO is committed to:

  • Taking a de facto leadership role in regulation – the UK is taking a different approach from the EU in that it has made no moves to set up an independent regulator to oversee AI. Instead, the language of the AI Regulation White Paper aligns with established data protection principles such as transparency, fairness, and accountability. Given that data protection and privacy laws apply to all stages of AI development and distribution, it makes sense that, for the moment, the ICO steps in as the de facto regulator. However, we have little doubt that as AI technology grows and tensions between innovators, the Government, and the general public increase, a separate AI regulator will be necessary.
  • Focusing on risk – the Better Regulation Framework and Smarter Regulation: Delivering a Regulatory Environment for Innovation, Investment and Growth both reflect the UK Government's commitment to ensuring that regulation is necessary and proportionate so as to support economic growth. To this end, the Strategy suggests that the ICO will let organisations develop AI provided they conduct Data Protection Impact Assessments and implement appropriate safeguards where required. However, the Strategy also makes clear that enforcement action will be taken where organisations fail to comply.
  • Working to understand AI – consultations, information requests, and invitations to sandbox projects will be a regular feature of the ICO's work over the coming months and years as it focuses on understanding how AI is being used and its impact on data protection. Collaboration with other regulators is also identified in the Strategy as a crucial element of the ICO's plans, whether via direct bilateral contact or groups such as the Regulators and AI Working Group. The Strategy points out that the ICO was a founding member of the Digital Regulation Cooperation Forum (DRCF), which also includes the Competition and Markets Authority, Ofcom, and the Financial Conduct Authority. AI will be a priority for the DRCF, which has already published a paper on its benefits and harms.

What is in store for the future of AI and data protection?

The UK Government will review the ICO's Strategy alongside the strategies of other regulators. This will help it determine whether stand-alone AI legislation is required or whether existing legislation simply needs amending. However, it seems businesses can be confident that, with the exception of high-risk and potentially harmful uses, the Conservatives will not implement far-reaching AI regulation anytime soon. The Labour Party, which is widely anticipated to win the general election on 4 July 2024, has also signalled that it intends to take a fairly liberal approach to AI regulation.

As an external risk management and legal specialist, we can assist you with any questions or concerns you have regarding AI development or distribution, as well as data protection and privacy laws. This includes running compliant Data Protection Impact Assessments and setting up a detailed risk register.

To find out more about any matters discussed in this article, please email us at [email protected] or phone 0121 249 2400.

The content of this article is for general information only. It is not, and should not be taken as, legal advice. If you require any further information in relation to this article, please contact 43Legal.
