The AI Act is here! But is it here for you?
A guide to the AI Act’s dates of application
If 2023 was the year of AI, 2024 was the year of the AI Act. The EU’s regulation, widely regarded as a landmark piece of legislation, is the first comprehensive law on artificial intelligence. Its publication in the EU’s Official Journal on 12 July 2024 and its subsequent entry into force on 1 August 2024 concluded a long legislative journey that began when the proposal was published in April 2021. The next step is, of course, implementation. A series of key dates determines when different parts of the regulation will apply. But what do these mean for companies, Member States, and other stakeholders? This article is here to guide you through the key deadlines.
Why are there different implementation dates?
The main reason for the gradual application of the AI Act’s provisions is to allow national authorities, industry stakeholders, and market actors sufficient time to prepare and adapt to the new regulation, while minimizing disruptions in the affected sectors.
While most parts of the regulation will become applicable on 2 August 2026, some provisions will take effect earlier. The first provisions of the AI Act become applicable on 2 February 2025, followed by further key dates in the following years.
Below are the key dates for the applicability of the regulation:
2 February 2025:
- Prohibited practices: The first prominent deadline is quickly approaching: from this date, the prohibited practices listed in the AI Act will officially be banned. These are the AI systems and related activities listed in the Regulation under the category of unacceptable risk.
- AI literacy requirements: Additionally, the AI literacy requirements outlined in Article 4 of the Regulation will take effect.
What does AI literacy mean?
According to the AI Act, it refers to the skills, knowledge, and understanding that enable providers, deployers, and affected individuals to make informed decisions about AI systems and to recognize the related opportunities, risks, and potential harm. This means that in the coming months, AI-related competencies will become not just a competitive advantage but a necessity.
Keep in mind that individuals using AI systems for purely personal, non-professional purposes are excluded from the Regulation’s scope, so this requirement does not apply to them.
The AI Office and Member States are tasked with developing codes of conduct on AI literacy in the coming months, although with the date of application just a few months away, it is uncertain whether they will be published by that date.
Codes of conduct are voluntary documents that will establish ethical guidelines and principles for AI development and use under specific conditions. They aim to encourage organizations to create AI policies and adhere voluntarily to the Act’s obligations. The Regulation lists topics on which such documents should be created, AI literacy being only one of them.
2 May 2025:
By this date, the codes of practice for general-purpose AI models must be finalized.
What are the codes of practice?
These documents – not to be confused with the codes of conduct mentioned in the previous paragraph – will provide detailed guidance for responsibly developing and deploying such systems.
Essentially, they will be tools for ensuring compliance with specific obligations under the AI Act: providers will be able to rely on them to demonstrate compliance with their obligations. For example, codes of practice will be developed to address detailed rules for providers of general-purpose AI models and those with systemic risks, as well as the detection and labeling of artificially generated or manipulated content.
2 August 2025:
1. Designating national authorities: One year after the Regulation’s entry into force, Member States must designate their national authorities. These will be the authorities responsible for overseeing compliance with the AI Act in the Member States.
Each country must establish or designate (meaning it is not obligatory to create a new institution; an already existing one can be appointed) at least one notifying authority and one market surveillance authority as national competent authorities.
Why will there be two authorities?
The two types of designated authorities will be responsible for different tasks and will have different competences.
- Notifying authorities will handle procedures for assessing, designating, and monitoring conformity assessment bodies.
- Market surveillance authorities will conduct activities and take necessary actions under the EU Regulation on market surveillance and compliance of products (Regulation (EU) 2019/1020).
2. Rules on governance become applicable: Furthermore, on this date, the provisions in Chapter VII, covering governance at Union and national level, will become applicable.
3. Rules on penalties become applicable: Lastly, the provisions of Chapter XII (titled Penalties) will also apply from this date.
2 August 2026:
Two years after its entry into force, on 2 August 2026, most AI Act provisions will apply, except for a few with specific extensions. These extended deadlines primarily address AI systems placed on the market before the Regulation becomes fully applicable. Their aim is to ensure legal certainty, give businesses ample time to adapt, and avoid disruptions in the market.
What are the extended deadlines to look out for?
Some of these deadlines only apply to very specific categories of AI systems used by public authorities or to systems that are part of large, EU-level IT systems in the area of freedom, security and justice, such as the Schengen Information System, Visa Information System, or Eurodac. We do not explain these in detail, as they are not particularly relevant to most market actors. However, there is one deadline to keep in mind:
2 August 2027:
- General-purpose AI models placed on the market before August 2025: Providers of general-purpose AI models placed on the market before 2 August 2025 have until 2 August 2027 to comply with the Act’s obligations.
- High-risk AI systems listed in Annex I of the Regulation: The same date applies to high-risk AI systems used as products or safety components of products covered by the EU harmonization legislation listed in Annex I of the Regulation. Examples include toy safety, vehicle safety, medical devices, and elevators. Note that this particular deadline does not apply to high-risk AI systems listed in Annex III, such as those used in critical infrastructure, education, or employment.
How can you prepare your business?
Familiarize yourself with the rules to determine whether the AI Act applies to you, and identify the requirements and deadlines relevant to you. Keep the dates above in mind and develop a compliance strategy that also aligns with your business goals. As a first step, considering the upcoming deadline, it is worth ensuring that you and your organization meet the AI literacy requirements stipulated in the Regulation.
If you have any questions regarding the AI Act or would like us to dive into a specific part of the Regulation relevant to your business, feel free to contact us at office@kassailaw.com and don’t forget to follow our new #aiactseries for clarifications and industry insights. If you would like to learn more about the AI Act, or the potential applications of artificial intelligence for your business, explore KassaiLaw Academy’s AI-related, fully customizable trainings here!