EU AI Act Comes Into Force 1 August 2024

Jul 19, 2024 | Articles, EU AI Act

Those who expected the new Labour Government to introduce legislation strengthening controls around artificial intelligence (AI) may have been disappointed by the July 2024 King’s Speech. In his address to Parliament, His Majesty King Charles III said the Government “will seek to establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models” but crucially stopped short of announcing an AI Bill similar to the EU’s AI Act, which was recently published in the Official Journal of the European Union.

Labour’s manifesto outlined plans to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models and by banning the creation of sexually explicit deepfakes”, while also saying its “industrial strategy supports the development of the Artificial Intelligence (AI) sector.” So why is there no clear directive regarding AI legislation? And what does the EU’s AI Act cover?

When does the AI Act come into force?

The AI Act will come into force on 1 August 2024. There is a gradual phase-in for many of the new laws, and within 24 months, all AI developers working within the EU will be subject to the rules contained in the Act.

Examples of the phased-in rules include:

  • All banned or ‘unacceptable risk’ activities will be illegal within the first six months. These include developing facial recognition software by untargeted internet or CCTV scraping and China-style social credit scoring.
  • Nine months from 1 August 2024, codes of practice will apply to developers of in-scope AI apps.
  • Twelve months after the AI Act comes into force, transparency requirements for general-purpose AI (GPAI) models will apply.

What does the EU AI Act contain?

Regarding the aims of the AI Act, the European Commission states:

“The AI Act aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs).”

The AI Act takes a risk-based approach. Any AI systems deemed a threat to the safety, livelihoods, and rights of people are classified as an unacceptable risk and will be banned outright.

AI technology used in the following is classified as high-risk:

  • critical infrastructures that could put the life and health of citizens at risk;
  • educational or vocational training that may determine access to education and the professional course of someone’s life (e.g. scoring of exams);
  • safety components of products (e.g. AI application in robot-assisted surgery);
  • employment, management of workers and access to self-employment (e.g. CV-sorting software for recruitment procedures);
  • essential private and public services (e.g. credit scoring denying citizens the opportunity to obtain a loan);
  • law enforcement that may interfere with people’s fundamental rights (e.g. evaluation of the reliability of evidence);
  • migration, asylum and border control management (e.g. automated examination of visa applications);
  • administration of justice and democratic processes (e.g. AI solutions to search for court rulings).

Under the AI Act, before any such technology can hit the market it must meet strict compliance obligations, including:

  • adequate risk assessments with mitigation added where required;
  • all activity logged so test results can be traced;
  • detailed documentation so authorities can assess its compliance;
  • appropriate human oversight to minimise risks.

Limited-risk AI technology refers to “the risks associated with lack of transparency in AI usage.” For example, if a company uses a chatbot to interact with customers, those customers should be made aware of that fact so they can choose whether to continue or to communicate with a human instead. In addition, AI-generated content must be identifiable, and any such content published to inform people about public interest matters must be clearly labelled as AI-generated.

Finally, the AI Act provides that minimal-risk AI, such as spam filters, can be used freely.

What are the UK’s plans to regulate AI?

In 2023, the previous Conservative Government published a white paper entitled AI Regulation: A Pro-Innovation Approach, which set out the then Government’s plan for regulating AI development. It made clear that the UK would take a ‘pro-innovation’ stance and that regulation would be light-touch.

“Responding to risk and building public trust are important drivers for regulation. But clear and consistent regulation can also support business investment and build confidence in innovation. Throughout our extensive engagement, industry repeatedly emphasised that consumer trust is key to the success of innovation economies. We therefore need a clear, proportionate approach to regulation that enables the responsible application of AI to flourish. Instead of creating cumbersome rules applying to all AI technologies, our framework ensures that regulatory measures are proportionate to context and outcomes, by focusing on the use of AI rather than the technology itself.”

In the King’s Speech, the incoming Labour Government promised regulation of the most powerful AI technologies. However, it is an inconvenient fact that tightly regulated AI development does not sit easily with Labour’s policy of “unlocking growth” and “taking the brakes off Britain”. The UK technology sector is valued at around £1 trillion and employs three million people. It is, therefore, a big gorilla to poke a regulatory stick at, and it is not surprising Labour is taking time to consider how much regulation is required and what form it should take.

Wrapping up

Now that the EU AI Act is in force, companies that do business with the EU must keep abreast of the legislation to ensure they do not breach its requirements and put their R&D investment and commercial reputation at risk.

To find out more about any matters discussed in this article, please email us at [email protected] or phone 0121 249 2400.

The content of this article is for general information only. It is not, and should not be taken as, legal advice. If you require any further information in relation to this article, please contact 43Legal.
