Blog
Spotlight on Regulatory Cross Border: AI Act Advances Through the European Parliament
March 19, 2024
On March 13, 2024, three years after the proposal by the European Commission (the Commission), the lawmakers in the European Parliament approved the Artificial Intelligence (AI) Act (AI Act), the first regulation on artificial intelligence in the world, with an overwhelming majority of 523 votes in favor, 46 against and 49 abstentions. The AI Act will most likely take effect in May, after its final endorsement.
The AI Act aims to strike a balance between effective regulation and continued innovation. As such, it focuses on ensuring that artificial intelligence systems and models marketed within the European Union are used in a way that is ethical, safe, and respects EU fundamental rights, while at the same time strengthening uptake, investment, and innovation.
Main Takeaways
Compliance
All entities involved in supplying, distributing, or deploying AI systems and models, whether they are companies, foundations, associations, research laboratories, or any other legal entities, operating within or outside the EU, are required to adhere to the regulations outlined in the AI Act if they intend to market their AI products within the EU. The AI Act does not apply to military AI systems, nor to AI systems used for the sole purpose of scientific research and development. Furthermore, it does not apply to open-source AI, unless it is prohibited or classified as high-risk.
Risk-based system
The new rules of the AI Act will apply in the same way across all EU member states through a framework based on four different levels of risk: unacceptable risk; high risk; minimal risk; and specific transparency risk.
Unacceptable-risk AI systems are those considered a clear threat to the fundamental rights of people. This includes AI systems or applications that manipulate human behavior to circumvent users’ free will. Examples include emotion recognition in the workplace and schools and predictive policing. Such AI systems will be banned.
High-risk AI systems include certain critical infrastructures; medical devices; systems that determine access to educational institutions or are used for recruiting people; certain systems used in the fields of law enforcement, border control, administration of justice, and democratic processes; and biometric identification. Such AI systems will be required to comply with strict requirements—including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity.
Minimal-risk AI systems include AI-enabled recommender systems and spam filters. These AI systems will not be subject to any obligations, as they present only minimal or no risk to citizens’ rights or safety. It should be noted that the vast majority of AI systems are expected to fall into this category.
Specific transparency-risk AI systems include systems such as chatbots. AI-generated content will have to be labeled as such, and users will need to be informed when biometric categorization or emotion recognition systems are being used.
It should be noted that most of the AI systems that we know today fall into the last two categories.
Transparency requirements
General-purpose AI (GPAI) systems (i.e., AI systems that are able to perform generally applicable functions such as audio/video generation, pattern detection, question answering, and image/speech recognition), as well as the GPAI models they are based on, must meet certain transparency requirements. Here, the AI Act imposes additional obligations that include self-assessment and mitigation of systemic risks, reporting of serious incidents, and conducting model testing and evaluations, as well as cybersecurity requirements. However, it should be noted that the AI Act leaves open questions as to the level of transparency and interpretability that will be imposed on AI systems, as well as what their ‘interpretability’ to users will mean.
Governance
The European AI Office, established in February 2024 within the Commission, will oversee the AI Act’s enforcement and implementation within the member states and will ensure coordination at the EU level. Along with the national market surveillance authorities, the AI Office will be the first body globally that enforces binding rules on AI and is therefore expected to become an international reference point. It should be noted that the AI Office will also supervise the implementation and enforcement of the new rules on GPAI models.
Penalties
Companies that are noncompliant with the AI Act will be fined: (i) €35M or 7% of global annual turnover (whichever is higher) for violations of banned AI applications; (ii) €15M or 3% for violations of other obligations; or (iii) €7.5M or 1% for supplying incorrect information.
More proportional caps are foreseen for administrative fines for small and medium enterprises (SMEs) and startups in case of infringements of the AI Act.
Penalties will take effect 12 months after the AI Act becomes effective.
Next steps
The AI Act will enter into force on the 20th day after its publication in the EU Official Journal.
The AI Act will then become fully applicable two years after its entry into force; however, prohibitions will already apply after six months, while the rules on GPAI will apply after 12 months. Codes of Practice should be ready nine months after the Act takes effect. Obligations for high-risk AI systems will apply after three years.
To bridge the transitional period before the AI Act becomes generally applicable, the Commission will be launching an AI Pact. It will convene AI developers from Europe and around the world who commit on a voluntary basis to implement key obligations of the AI Act ahead of the legal deadlines.
One of the primary challenges will be to see if the AI Act will stand the test of time, considering the rapid evolution of AI technology.
What clients should think about
Companies should assess the impact of the upcoming AI Act on their business and discuss with their legal teams how the passage of the AI Act may impact them in various EU jurisdictions, in light of aggressive timelines for compliance set forth by the AI Act. Companies that do not have a physical presence in the EU should keep in mind that as long as they market AI systems or models in the EU, the AI Act applies.
Please contact the authors or your Winston & Strawn relationship attorney if you have any questions or need further information.
Related Professionals
This entry has been created for information and planning purposes. It is not intended to be, nor should it be substituted for, legal advice, which turns on specific facts.