On 12 July 2024, the EU AI Act (“AI Act”) was published in the Official Journal of the European Union. The AI Act will enter into force 20 days from the date of its publication, on 1 August 2024, starting the clock for organisations within the scope of the AI Act to prepare for compliance.
The exact amount of time organisations have to comply with the relevant provisions of the AI Act will depend on the role they play under the AI Act, as well as the risk and capabilities of their AI systems. For example, providers[1] of general-purpose AI models will be required to comply with the requirements of the AI Act before providers of high-risk AI systems.
Key Dates for Organisations
- 2 February 2025:
- Ban on certain AI systems: AI systems that present the highest risk under the AI Act (e.g. certain AI systems that deploy subliminal techniques with the objective or effect of materially distorting the behaviour of individuals, AI systems that exploit vulnerabilities of individuals, and AI systems that conduct untargeted scraping of facial images to create databases) will be prohibited.
- AI literacy obligations take effect: Providers and deployers of AI systems will also be subject to AI literacy obligations; namely, they will be required to take measures to ensure a “sufficient and appropriate level of AI literacy” of their employees and other personnel dealing with the operation and use of AI systems on the organisation’s behalf.
- 2 May 2025:
- Codes of practice to be published: The EU AI Office is required to have published, by this date, codes of practice aimed at assisting providers of general-purpose AI models in demonstrating compliance ahead of their respective deadlines.
- 2 August 2025:
- Obligations on general-purpose AI models: Providers of general-purpose AI models will be required to comply with their relevant obligations under the AI Act from this date.
- Deadline to qualify for extended compliance period for providers of general-purpose AI models: Providers of general-purpose AI models who have already placed their models on the market by this date will have until 2 August 2027 to comply with the AI Act.
- Enforcement provisions: The AI Act’s operative provisions regarding penalties for non-compliance will also apply from this date, and EU member states will have been required to implement and notify the European Commission of their respective rules on penalties and other enforcement measures by then. EU member states will also have appointed their respective national competent authorities by this date.
- Annual review of prohibited and high-risk AI lists: The European Commission’s first annual review of the list of prohibited AI systems and high-risk AI systems is to be completed by this date, although it is not clear whether this review will lead to any immediate changes.
- Serious incident reporting guidance: The AI Act requires the European Commission to have issued guidance regarding serious incident reporting (i.e. an incident or malfunctioning of an AI system that directly or indirectly leads to, among other things, death or serious harm to persons, critical infrastructure, property or the environment) by this date.
- 2 February 2026:
- High-risk AI systems guidance: The AI Act requires the European Commission to have issued guidance on the practical implementation of the requirements pertaining to high-risk AI systems under the AI Act, with practical examples and use cases, by this date.
- 2 August 2026:
- Obligations on a subset of high-risk AI systems: Organisations (primarily providers, importers and distributors, as well as other third parties along the value chain in certain circumstances, such as when they place their name or trade mark on the relevant AI system) will be required to comply with the AI Act’s obligations relating to a subset of high-risk AI systems, namely those listed in Annex III of the AI Act (e.g. AI systems used for biometric identification or emotion recognition, in education and vocational training, or in the context of employment and management of workers). Note that operators[2] of high-risk AI systems who have already placed their AI systems on the market by this date will only be required to comply with the AI Act if they subject their high-risk AI systems to “significant changes in their designs”.
- Obligations on limited-risk AI systems: Providers and deployers[3] of certain limited-risk AI systems (e.g. AI systems that are intended to interact directly with humans or that are capable of generating or manipulating synthetic content) will be required to comply with the relevant requirements of the AI Act.
- 2 August 2027:
- Obligations on remaining high-risk AI systems: Providers of certain high-risk AI systems, namely AI systems that (i) are intended to be used as a safety component of a product, or otherwise constitute a product, under the EU legislation listed in Annex I of the AI Act and (ii) are required to undergo a third-party conformity assessment (e.g. certain radio and pressure equipment, personal protective equipment and agricultural vehicles), will be required to comply with the relevant requirements of the AI Act.
- Deadline for compliance for providers of general-purpose AI models: Providers of general-purpose AI models who placed their models on the market before 2 August 2025 will be required to comply with their relevant requirements under the AI Act by this date.
- Deadline to qualify for extended compliance period for operators of AI systems that constitute components of certain large-scale IT systems: Operators of AI systems which are components of certain large-scale IT systems (e.g. the Schengen Information System, the Visa Information System and the European Travel Information and Authorisation System) (“Large Scale IT AI Systems”) that have been placed on the market or put into service before this date will not be required to comply with the AI Act until 31 December 2030. The AI Act is silent as to when operators of such AI systems must comply if they place their AI systems on the market or put them into service after 2 August 2027; on a restrictive interpretation of the AI Act, such operators would likely be required to comply immediately upon placing their AI systems on the market or putting them into service after that date.
- 2 August 2029:
- First review of the AI Act: The European Commission will undertake a review of the AI Act by this date, and every four years thereafter. As with the annual review of the lists of prohibited and high-risk AI systems, it is not clear whether the European Commission will introduce changes to the AI Act as part of this review.
- 31 December 2030:
- Deadline for compliance for operators of Large Scale IT AI Systems: As flagged above, operators of Large Scale IT AI Systems that have been placed on the market or put into service before 2 August 2027 must comply with the requirements of the AI Act by this date.
Commentary and Takeaways for Organisations
As a preliminary step, organisations should determine whether and how they fall within the scope of the AI Act in order to assess which timelines are applicable to them. While 2 August 2026 is the date of the AI Act’s general applicability, organisations may have a shorter compliance period, particularly if they constitute providers of general-purpose AI models or use, or intend to use, AI systems that are deemed to present the highest risk under the AI Act.
Organisations should also take heed of the deadlines to qualify for extended periods of compliance or to exempt a high-risk AI system from the requirements of the AI Act. In particular:
- Providers of general-purpose AI models that place their models on the market before 2 August 2025 will benefit from an additional two years to comply with the AI Act.
- Operators of high-risk AI systems that place their AI systems on the market before 2 August 2026 will generally be exempt from the requirements of the AI Act, unless these AI systems are subjected to “significant changes in their designs”. While the AI Act clarifies this to mean a substantial modification, there is still a degree of uncertainty as to what may constitute such a change, and it is unclear whether a change in the use or purpose of the AI system may trigger the application of the AI Act.
As a result of these benefits, there may be increased activity in the lead-up to these dates as organisations push to place their AI systems on the market in time to qualify. Organisations should also note that such exempted high-risk AI systems may still be subject to regulatory scrutiny. The European Data Protection Supervisor (the EU supervisory authority in charge of, among other things, enforcing EU data protection and privacy standards and monitoring new technologies that may affect the protection of personal information) has previously noted that the concept of “significant change” lacks clarity, and has expressed concerns that high-risk AI systems may present risks to the rights of individuals if they are deemed to operate without “significant changes” or “substantial modifications” and are thus exempt from the restrictions of the AI Act.
Further developments from the European Commission should also be observed. The European Commission’s review of the AI Act and of the lists of prohibited and high-risk AI systems may lead to changes in the relevant obligations for organisations. In addition to the codes of practice mentioned above, the European Commission has also committed to issue guidance on other aspects of compliance with the AI Act, such as with regard to transparency and the definition of an AI system, and is empowered to issue delegated acts that may impose additional requirements on organisations within the scope of the AI Act. However, it has not provided a specific timeline for most of this guidance or these delegated acts.
Organisations based in the UK should also consider aligning their existing compliance efforts with the AI Act. The AI Act’s extraterritorial effect means that UK-based organisations may fall within its scope in certain circumstances, such as where they place AI systems on the EU market. The recently elected Labour government has also pledged to introduce “binding regulation” on AI, and although no formal text has been proposed to date, the AI Act is likely to be influential in shaping UK AI regulation, if and when it is introduced. We are watching this space closely for updates.
[1] Under the AI Act, a “provider” of an AI system means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
[2] Under the AI Act, “operator” is a broad designation encompassing other roles under the AI Act, such as providers, their authorised representatives, deployers, importers and/or distributors.
[3] Under the AI Act, a “deployer” means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.