On 30 November 2022, OpenAI made its ChatGPT generative artificial intelligence chatbot publicly available. In the two years since, its unprecedented growth has driven a dramatic shift in public attention to, and interest in, all forms of AI. The possibilities and risks presented by the continued development of AI are now firmly top of mind for businesses and regulators across the world.
Due in part to the unexpected acceleration of the AI boom, regulation found itself, not for the first time, one step behind. On 13 March 2024, the European Parliament voted to adopt the European Union’s Artificial Intelligence Act. The AI Act entered into force on 1 August 2024, but the application of its provisions is staggered over three years. When fully applicable, the AI Act will (i) classify AI models and other AI-enabled products and services according to risk, (ii) impose obligations on developers, importers, distributors and users of AI systems, both within and outside the EU, and (iii) enforce outright bans on AI that presents an unacceptable level of risk.
The EU AI Act represents the world’s first attempt to establish a comprehensive legislative framework for AI, and it is likely to influence (indeed, it already has) the development of laws in other jurisdictions, just as the General Data Protection Regulation did for data protection legislation. A notable outlier is the UK, which has indicated only that it will introduce “binding regulation” on the small number of companies developing the most powerful AI models. Where the EU prioritises regulation over innovation, the UK appears to be taking the opposite approach.
Like the UK, the U.S. has not introduced national legislation establishing a regulatory framework for the development, provision or use of AI. Where legislation has been passed or guidance issued, often at the state level (for example, the Colorado AI Act), it tends to be sector-specific or focused on individual issues. Indeed, in an echo of its approach to data protection, the U.S. is not expected to introduce a federal AI law. Nevertheless, the extraterritorial scope of the EU AI Act and the dominance of U.S. companies in the AI sector mean that most American businesses with European customers will be subject to a law drafted thousands of miles from home.
Outside of the EU, UK and U.S., plans for the regulation of AI are showing signs of development. In July 2024, Taiwan, whose near monopoly on advanced semiconductor manufacturing makes it integral to the future of AI, published its draft Basic AI Act, which shares similarities with the EU’s risk-based AI Act. By contrast, South Korea’s long-awaited AI legislation, under review by the National Assembly, reflects the UK’s approach: strict regulation of only high-risk AI, combined with a developer- and industry-friendly approach to lower-risk AI. Canada’s even longer-awaited Artificial Intelligence and Data Act, introduced as part of Bill C-27 and drafted by reference to the approaches promoted by the EU, UK and U.S., is expected to finally become law in 2025. In its current form, it would impose greater obligations on high-risk AI systems than the EU AI Act, owing to its wider concept of “high-impact” systems. Finally, although China has not yet disclosed plans for all-encompassing AI regulation, it has put in place several national laws over the past few years that collectively reflect an approach similar to that of the AI Act.
With much of this legislation still subject to drafting, debate and approval by legislatures, there is no certainty as to how AI regulation will evolve in the coming years. However, if 2023 was the year that AI entered mainstream consciousness, and 2024 the year that the debate around its regulation was taken seriously, 2025 looks likely to be the year that AI laws take shape, and take effect, globally.