Artificial intelligence is already present in almost every type of business, and this is only the beginning of its integration into business processes. The tremendous capabilities demonstrated by ChatGPT and similar models have kindled hopes for the automation of many processes, but they have also raised concerns, and even fears, that AI will be put to the wrong purposes.
These concerns, and the many areas of life potentially affected by AI, call for regulatory steps that prioritize the development and deployment of responsible AI systems, foster transparency, and mitigate the risks associated with the technology, ultimately safeguarding individuals, society, and fundamental rights.
Overview of the EU AI Act
On 9 December 2023, negotiators from the European Parliament and the Council of the EU reached a provisional agreement on the Artificial Intelligence Act. The AI Act will be the world’s first comprehensive legislation designed to regulate the development and use of artificial intelligence (AI) technologies. With a focus on fostering trustworthy AI, the Act aims to establish a unified framework that addresses ethical concerns, promotes transparency, and mitigates risks across various AI applications.
The regulation applies to providers, users, importers, distributors, product manufacturers, and authorized representatives of AI systems in the EU, as well as to those outside the EU whose systems’ outputs are used in the EU. High-risk AI systems falling under Union harmonization legislation are subject to specific articles. Exemptions include AI systems used for activities outside the scope of Union law, for military, defense, or national security purposes, and systems not placed on the EU market. Public authorities in third countries using AI under international agreements with the Union are also exempt. The regulation does not affect the liability provisions for intermediary service providers and does not apply to AI used for scientific research, general research activities, or personal, non-professional use, with certain exceptions.
Risk-based approach
The regulation focuses on the potential risks associated with different types of artificial intelligence (AI) applications: regulatory requirements are tailored to the level of risk posed by a given AI system. The AI Act proposes a tiered system in which different requirements and obligations apply depending on the risk classification of an AI application. Four levels of risk are defined.
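Schematically, the four tiers described in the following subsections can be sketched in a few lines of Python. This is an illustration only: the tier names follow the provisional agreement, but the mapping of tiers to headline obligations is a simplified assumption of ours, not legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the (provisional) EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations before market entry
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Simplified, assumed mapping of tiers to their headline obligations.
OBLIGATIONS: dict[RiskTier, str] = {
    RiskTier.UNACCEPTABLE: "prohibited (narrow law-enforcement exemptions)",
    RiskTier.HIGH: "risk assessment, logging, documentation, human oversight",
    RiskTier.LIMITED: "inform users they interact with AI; label generated output",
    RiskTier.MINIMAL: "no additional obligations",
}

def headline_obligation(tier: RiskTier) -> str:
    """Return the (simplified) headline obligation for a given risk tier."""
    return OBLIGATIONS[tier]
```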
Unacceptable Risk
This level covers AI systems posing risks that are considered unacceptable under any circumstances; the AI Act prohibits the practices falling into this category. Examples of such practices involve serious harm, manipulation, discrimination, or other severe ethical and legal violations. Social scoring systems and biometric identification and categorization systems fall into this category, with somewhat relaxed rules applying only to law enforcement authorities.
High Risk
AI systems classified as high risk are those that have the potential to cause significant harm or have a high impact on individuals’ rights and safety. These systems are subject to more rigorous regulatory requirements and obligations to ensure transparency, accountability, and robustness. Two categories of AI systems fall under this heading: those used in products covered by EU product safety regulation (e.g., toys, cars, medical devices), and systems explicitly listed in the Act (e.g., management and operation of critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; assistance in legal interpretation and application of the law). High-risk AI systems face stringent requirements before market entry, including thorough risk assessment, quality datasets that minimize risks and discrimination, activity logging for traceability, detailed documentation for authorities, clear user information, human oversight, and high levels of robustness, security, and accuracy.
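To give a feel for what “activity logging for traceability” might look like in code, here is a minimal sketch of a structured decision log. The record schema, file name, and function are illustrative assumptions of ours; the Act requires traceability but does not prescribe a concrete log format.

```python
import json
import hashlib
from datetime import datetime, timezone

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: dict,
                 reviewer: str | None = None) -> dict:
    """Append one traceability record for a high-risk AI decision.

    The schema below is an illustrative assumption, not a format
    mandated by the AI Act.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Hash the inputs so the log is tamper-evident without storing raw data.
        "input_sha256": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),
        "output": output,
        "human_reviewer": reviewer,  # hook for the human-oversight requirement
    }
    with open("decision_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```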
Limited and Minimal Risk
AI systems with limited risk are subject to far fewer regulatory requirements than high-risk systems, mainly transparency and documentation obligations that ensure users are informed about the AI’s capabilities and limitations. For example, users interacting with a chatbot should be made aware that they are talking to a machine. Similarly, the output of AI systems that generate or manipulate content so that it resembles existing persons or places (so-called ‘deep fakes’) should be labelled as artificially generated. AI systems posing only minimal risk are not subject to any additional obligations.
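In practice, meeting such a transparency obligation can be as simple as attaching a disclosure to generated output. The following is a minimal sketch; the notice wording and function name are our own assumptions, as the Act does not prescribe a specific format.

```python
AI_NOTICE = "Notice: this content was generated by an AI system."

def with_disclosure(generated_text: str, is_deepfake: bool = False) -> str:
    """Prepend a machine-generated-content disclosure to model output."""
    notice = AI_NOTICE
    if is_deepfake:
        notice += " It depicts a synthetic likeness of a real person or place."
    return f"{notice}\n\n{generated_text}"

print(with_disclosure("Hello! How can I help you today?"))
```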
While the pillars of the new law appear to be established, it should be kept in mind that the specific criteria for determining risk levels, and the associated regulatory measures, may still be updated or revised as the legislative process unfolds.
Is my system an AI system?
Special attention should be paid to the various definitions that shape the landscape of rights and obligations. Non-compliance can be costly: the stipulated fines run into the millions of euros. One of the fundamental questions is whether a solution will be classified as AI at all and, if so, into which category. The basic definition is that of an AI system:
‘artificial intelligence system’ (AI system) means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts
This is a very broad definition, and many simple software solutions, such as text OCR, will fall under it. Most of them, however, will land in the minimal-risk category with very limited restrictions.
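As a rough self-assessment aid, the elements of this definition can be turned into a checklist. The sketch below is our own simplification and no substitute for the final legal test in the adopted text.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    """Illustrative yes/no answers mirroring elements of the draft definition."""
    operates_with_autonomy: bool      # "designed to operate with elements of autonomy"
    infers_from_data: bool            # infers objectives from machine/human data and inputs
    uses_ml_or_knowledge_based: bool  # ML and/or logic- and knowledge-based approaches
    produces_outputs: bool            # content, predictions, recommendations, decisions

def may_be_ai_system(p: SystemProfile) -> bool:
    """Rough screen: all elements of the definition appear to be present."""
    return all([p.operates_with_autonomy, p.infers_from_data,
                p.uses_ml_or_knowledge_based, p.produces_outputs])

# Example: a text-OCR tool built on a trained model would likely pass this screen.
ocr = SystemProfile(True, True, True, True)
print(may_be_ai_system(ocr))  # True
```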
An important category of AI systems is General Purpose AI (GPAI) systems, whose best-known representative is certainly OpenAI’s ChatGPT; many more systems will join this family, including other so-called Large Language Models such as Llama2 from Meta and Gemini from Google. These systems are not specialized for a specific narrow task or application but are designed to perform a broad range of tasks, akin to the concept of artificial general intelligence (AGI). AGI implies a level of artificial intelligence that can understand, learn, and apply knowledge across diverse domains, resembling human cognitive abilities, and as such these systems can potentially be used for harmful purposes. They are therefore treated in a manner similarly restrictive to high-risk systems, except that a provider can avoid the increased restrictions by building in rigid safeguards that prevent the system from being put to high-risk uses.
There are several other definitions, such as ‘provider’, ‘importer’, ‘distributor’, ‘operator’, ‘life cycle of an AI system’, and others.
Timeline
Although a political agreement has been reached, the final text is still pending. To meet the time constraints before the European Parliament elections in June 2024, there is a push to finalize the text and publish the Act in the EU’s Official Journal. The Act will come into force 20 days after publication in the Official Journal. Full applicability is set for 36 months after entry into force, following a phased implementation:
- Prohibited use rules take effect 6 months after entry into force.
- GPAI governance obligations apply 12 months after entry into force.
- All rules, including those for high-risk systems in Annex III, become applicable 24 months after entry into force.
- Obligations for high-risk systems listed in Annex II (Union harmonization legislation) apply 36 months after entry into force.
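Since the actual deadlines hinge on the still-unknown publication date, planning can only work with a placeholder. The sketch below computes the milestones from a hypothetical entry-into-force date, which is an assumption to be replaced once the Act is published.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the same day-of-month `months` later (day 1 is always valid)."""
    y, m = divmod(d.month - 1 + months, 12)
    return date(d.year + y, m + 1, d.day)

# Hypothetical entry-into-force date -- replace with the real one once known.
entry_into_force = date(2024, 6, 1)

milestones = {
    "Prohibited practices": 6,
    "GPAI governance obligations": 12,
    "All rules incl. Annex III high-risk systems": 24,
    "Annex II high-risk obligations": 36,
}

for name, months in milestones.items():
    print(f"{name}: {add_months(entry_into_force, months)}")
```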
Summary
Without entering into a detailed discussion of the new regulations, it should be said that the EU AI Act can have significant implications for organizations that acquire AI technologies or implement them in their own systems.
However, there is no reason to shy away from adopting AI, as a large share of AI systems will be subject to only minimal restrictions. It is worth noting that research and development activities on AI systems are not covered by the new regulations. The AI Act also proposes coordinated AI ‘regulatory sandboxes’ to encourage innovation in the EU: supervised environments in which businesses can experiment with new AI products under regulatory guidance but with limited restrictions.
With all of the above in mind, it is worthwhile to familiarize oneself with the assumptions of the new regulations now, and to follow the changes introduced at subsequent stages of the legislative process.