
The World’s First Comprehensive AI Law — the EU AI Act — Is a Landmark in AI Regulation

Globally, business owners are asking how the European Union’s Artificial Intelligence Act (EU AI Act or the Act) affects their business. This article will delve into the Act and how businesses will be affected globally, with an emphasis on the U.S.

Companies based outside the EU, such as those in the United States, that use AI in products or services directed to EU residents will be subject to these new governance obligations. U.S. companies should familiarize themselves with the new requirements to ensure their AI tools and governance programs are built and maintained in accordance with the forthcoming expectations.

While this article details what is likely to be required in the final EU AI Act, each obligation and its ramifications will warrant closer analysis as the final text emerges. For present purposes, we have distilled the available material into a succinct overview of the expected legal framework, arming business owners, board members, and directors to make decisions that mitigate risk, stay within the bounds of permissible decision making, and preserve the freedom to pursue the opportunities of the AI systems we have come to depend on and will continue to build.

If You are New to the AI Regulation Party, What is the EU AI Act?

After months of intensive negotiations, capped by three days of marathon talks between the European Commission, Parliament, and Council, a provisional agreement on the Act was reached on December 8, 2023. The Act solidifies the EU as a front-runner in AI regulation. The Act may read as if it only impacts the EU; however, as with the GDPR, its legal framework reaches far beyond the borders of the European Union. This work further sets the EU apart in the realm of technology regulation on the world stage.

At its core, the Act aims to ensure AI systems are safe and respect fundamental rights and EU values. Beyond consumer safety, the EU considers it critical to give investors legal and financial certainty so that AI innovation continues, and to minimize compliance costs for providers.

Who does the Law Apply to?

The Act will apply to both providers (e.g., tech companies licensing AI models or companies creating their own AI models) and deployers (companies that use or license AI models). Known exceptions include AI systems used exclusively for military or defense purposes and AI systems used for the sole purpose of research and innovation.

If your company is using, developing, or distributing/deploying AI in the EU, even if it is based in another region or continent, it will need to retool its AI governance team, or, if it does not yet have one, build one as soon as possible.1

AI Classifications

The Act takes a ‘risk-based’ approach: the higher the risk, the stricter the rules. AI systems posing an unacceptable risk are banned entirely, barring narrow exceptions; high-risk AI systems carry specific obligations, including testing, documentation, transparency, and notification duties; and the remaining categories, limited risk and minimal/no risk, carry obligations scaled to the risk involved.2

Unacceptable-Risk Systems

The AI systems deemed to pose an unacceptable risk include:

  • Biometric categorization systems utilizing sensitive characteristics such as political, religious, philosophical beliefs, sexual orientation, and race.
  • Indiscriminate scraping of facial images from the Internet or CCTV footage for the creation of facial recognition databases.
  • Emotion recognition within workplaces and educational institutions.
  • Social scoring based on social behavior or personal characteristics.
  • AI systems manipulating human behavior to override free will.
  • Exploitative use of AI targeting vulnerabilities based on age, disability, social, or economic situations.
  • Specific applications of predictive policing.

Biometrics

While biometric identification systems are generally prohibited, there are specific exceptions for their use in publicly accessible spaces for law enforcement purposes. These exceptions are tightly regulated, requiring prior judicial authorization and limiting their application to a well-defined list of crimes. Additionally, the deployment of post-remote biometric identification systems is reserved exclusively for the targeted search of individuals convicted or suspected of serious crimes.

Real-time biometric identification systems must adhere to stringent conditions, with usage restricted to specific periods and locations. These systems are sanctioned for targeted searches involving victims of abduction, trafficking, or sexual exploitation; prevention of a specific and immediate terrorist threat; and the localization or identification of individuals suspected of committing specific crimes outlined in the regulations, such as terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, and participation in a criminal organization. Notification to national data protection authorities is mandatory when employing biometric identification systems.

High-Risk Systems

AI systems that pose “significant potential harm to health, safety, fundamental rights, environment, democracy, and the rule of law” will be designated as high-risk under the Act. Examples include:

  • Critical infrastructures in areas such as water, gas, and electricity.
  • Medical devices (products covered by the EU’s product safety legislation).
  • Systems involved in determining access to educational institutions or recruiting individuals.
  • Certain systems utilized in the realms of law enforcement, border control, administration of justice, and democratic processes.
  • Biometric identification, categorization, and emotion recognition systems.

Mandatory compliance with obligations such as risk mitigation, data governance, detailed documentation, human oversight, transparency, robustness, accuracy, and cybersecurity will be in effect for high-risk systems. To ensure compliance with the Act, these AI systems will undergo conformity assessments and mandatory fundamental rights impact assessments. EU citizens have the right to launch complaints about AI systems and receive explanations regarding decisions made by high-risk AI systems that impact their rights.3

Limited-Risk Systems

AI systems classified as limited risk, including chatbots, certain emotion recognition and biometric categorization systems, and those generating deepfakes, will be subject to less stringent transparency obligations. These requirements involve informing users that they are interacting with an AI system and labeling synthetic audio, video, text, and image content as artificially generated or manipulated, both for users and in a machine-readable format.
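
As an illustration of what a machine-readable disclosure might look like, here is a minimal Python sketch that tags generated content with an “AI-generated” label. The Act requires machine-readable labeling but does not prescribe a schema, so every field name below is an assumption for illustration only.

```python
import json
from datetime import datetime, timezone

def build_synthetic_content_label(content_id: str, generator: str) -> str:
    """Attach a machine-readable 'AI-generated' disclosure to a piece of content.

    The field names here are illustrative; the Act requires machine-readable
    labeling but does not prescribe a specific schema.
    """
    label = {
        "content_id": content_id,
        "ai_generated": True,            # flag mirrored in user-facing copy
        "generator": generator,          # which model/system produced the content
        "labeled_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was artificially generated or manipulated.",
    }
    return json.dumps(label)

# Example: tag a generated image before it is served to end users.
print(build_synthetic_content_label("img-0042", "example-image-model-v1"))
```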

Minimal / No-Risk

All other AI systems not falling into the three main risk classes, such as AI-enabled recommender systems or spam filters, are classified as minimal/no risk. The EU AI Act permits the unrestricted use of minimal-risk AI systems while also encouraging the adoption of voluntary codes of conduct.

How do Companies Rank Risk?

To risk-rank effectively, companies will need to know the specific examples implicated by the Act (as seen above). Comprehensive classification requires a deep review of the laws referenced in the Act, the use cases identified by the EU, and the legislative schemes incorporated in the Act’s drafts and revision notes.
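
To make the exercise concrete, the sketch below models the Act’s four tiers as a simple classification table. The tiers come from the Act itself; the mapping of internal use-case tags to tiers is hypothetical and is no substitute for case-by-case legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # banned outright, narrow exceptions
    HIGH = "high"                   # conformity assessments, documentation, oversight
    LIMITED = "limited"             # transparency/labeling obligations
    MINIMAL = "minimal"             # unrestricted; voluntary codes of conduct

# Hypothetical mapping from internal use-case tags to tiers, based on the
# examples in the Act; a real inventory needs case-by-case legal review.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "medical_device_component": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    # Default unknown use cases to HIGH so they get reviewed, not ignored.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(classify("recruitment_screening"))  # RiskTier.HIGH
```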

Data Analysis Requirements

Before deployment, companies must verify that any high-risk AI system is trained with “high-quality” data. The Act requires accurate and relevant data to be used in the company’s high-risk AI model. For companies licensing a commercial large language model and building their own application for a specific high-risk use case, understanding the rights (such as IP and privacy rights) to use the data is crucial.
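
As a rough illustration of what a first-pass data verification step might look like, the following sketch runs basic completeness and duplication checks over a training set using pandas. These checks are an assumed sensible floor, not the Act’s full data-governance standard.

```python
import pandas as pd

def basic_data_quality_report(df: pd.DataFrame, required_columns: list[str]) -> dict:
    """Illustrative pre-training checks for 'high-quality' data.

    Completeness, null-rate, and duplicate checks are a floor, not the
    Act's full data-governance standard.
    """
    return {
        "missing_required_columns": [c for c in required_columns if c not in df.columns],
        "null_fraction_by_column": df.isna().mean().to_dict(),
        "duplicate_rows": int(df.duplicated().sum()),
        "row_count": len(df),
    }

# Example run on a toy training set; "consent_basis" is a hypothetical
# governance column a privacy review might require.
df = pd.DataFrame({"age": [34, None, 51], "outcome": [1, 0, 1]})
print(basic_data_quality_report(df, required_columns=["age", "outcome", "consent_basis"]))
```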

Risk Assessment Methodology Requirements

Prior to deploying AI systems, the Act requires testing, auditing, and monitoring. This is important because it allows companies to identify and analyze known and foreseeable risks, estimate risks during intended use and foreseeable misuse, evaluate emerging risks through post-market monitoring, and adopt suitable risk management measures.4 It is also critical for companies to engage this requirement every time, no matter how similar a system may be to one previously deployed, because although some risks have already materialized, others will materialize in the future. Risk assessment coupled with auditing will continue to bring new risk considerations to the forefront.5
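
One way to operationalize this is a living risk register that records known and foreseeable risks across intended use, foreseeable misuse, and post-market monitoring. The sketch below is a minimal, hypothetical structure; the scoring scale and field names are assumptions, not anything the Act prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    description: str    # known or foreseeable risk
    scenario: str       # "intended_use", "foreseeable_misuse", or "post_market"
    severity: int       # 1 (low) to 5 (high), per a hypothetical internal scale
    likelihood: int     # 1 (rare) to 5 (frequent)
    mitigation: str     # risk management measure adopted
    identified_on: date = field(default_factory=date.today)

    @property
    def score(self) -> int:
        return self.severity * self.likelihood

# Example: log a risk surfaced by post-market monitoring, even though a
# similar system was assessed before deployment.
register = [
    RiskEntry("Hiring model penalizes employment gaps", "post_market",
              severity=4, likelihood=3,
              mitigation="Retrain with balanced data; add human review"),
]
print(sorted(register, key=lambda r: r.score, reverse=True)[0].description)
```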

Auditing Requirements

The Act also requires regular auditing and, between routine audits, ongoing monitoring, mitigation, and testing. The baseline requirement is risk mitigation: the Act’s critical demand is that risk mitigation occur in the areas of algorithmic impact, intellectual property, data accuracy, product safety, data privacy, cybersecurity, and antitrust.

Key discussions centered on generative AI and large language model drift, as models have shown deviation from initial inputs and expectations over time. If continuous testing and auditing do not occur, risk mitigation is eviscerated over time. Engineers have opined that companies need to program their systems to raise consistent alerts whenever models deviate beyond the scope of coded expectations. Adopting this process now, rather than building models without auditing and testing capacity, will spare companies cost-prohibitive overhauls when the Act takes effect in a mere two years.6
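
A minimal sketch of such an alert appears below: it compares recent model outputs against a baseline window and flags drift past a coded threshold. The mean-shift test and the threshold value are simplifying assumptions; production systems would typically use richer statistics such as PSI or KS tests and per-segment checks.

```python
import statistics

def check_output_drift(baseline: list[float], recent: list[float],
                       threshold: float = 0.1) -> bool:
    """Fire an alert when recent model outputs drift beyond a coded expectation.

    Compares mean scores against a baseline window; the threshold is a
    hypothetical policy value, not a figure from the Act.
    """
    shift = abs(statistics.mean(recent) - statistics.mean(baseline))
    return shift > threshold

baseline_scores = [0.62, 0.60, 0.64, 0.61]   # captured at deployment
recent_scores = [0.75, 0.78, 0.74, 0.77]     # from ongoing monitoring

if check_output_drift(baseline_scores, recent_scores):
    # In practice this would page the governance team and open an audit ticket.
    print("ALERT: model outputs have drifted beyond coded expectations")
```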

Recordkeeping & Documentation Requirements

The Act requires evidence of continuous testing, monitoring, and auditing to be documented in the logging and metadata of the AI system. Before an AI system is placed on the market or put into service, the Act requires technical documentation to be drawn up covering all tests that have been run, the mitigation steps that have been taken, and the continuous monitoring process expected to be present in the AI system. After the AI has been deployed, companies must maintain that technical documentation. Ensuring the system has reporting capabilities is not merely desirable; it borders on a necessity if compliance costs are to be kept at bay.7
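
As one hypothetical way to generate that evidence, the sketch below appends structured, timestamped records of tests and monitoring runs to an audit log. The field names and file name are assumptions; the point is that compliance events leave machine-readable traces behind.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit-log destination; real systems would use durable,
# tamper-evident storage.
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_compliance_event(event_type: str, detail: dict) -> None:
    """Append a structured, timestamped record to the system's audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g., "pre_deployment_test", "drift_check"
        "detail": detail,
    }
    logging.info(json.dumps(record))

log_compliance_event("pre_deployment_test",
                     {"suite": "bias_checks_v2", "passed": 41, "failed": 0})
```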

End-User Transparency & Access

High-risk AI system licensors and licensees are required to be transparent with end users about AI capabilities and limitations (“sufficiently transparent to enable users to interpret the system’s output and use it appropriately”).8 Additionally, the AI systems should be explainable to third-party auditors and regulators. The transparency requirement aligns with public-facing data privacy notices, so that end users are not left in the dark.

Human Oversight

The Act also calls for human oversight capable of intervening to correct deviations from expectations in real time. Companies cannot rely solely on the coding and processes built into the AI system/model. Human oversight is an expected cost; ignoring or passing on it will not only constitute clear violation and noncompliance but will also forfeit early discovery of issues, letting problems pile up by the time the routine audit comes due.9
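
A common pattern for building in this kind of oversight is a human-in-the-loop gate that routes borderline model outputs to a reviewer rather than acting automatically. The sketch below is a simplified illustration; the review band is a hypothetical policy knob, and real systems would also escalate on drift alerts, complaints, or protected-class impact flags.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    score: float       # model output
    automated: bool    # False once routed to a human

def route_decision(subject_id: str, score: float,
                   review_band: tuple[float, float] = (0.4, 0.6)) -> Decision:
    """Route borderline model outputs to a human reviewer instead of auto-acting."""
    low, high = review_band
    if low <= score <= high:
        # Queue for human review; the human's decision is what gets recorded.
        return Decision(subject_id, score, automated=False)
    return Decision(subject_id, score, automated=True)

print(route_decision("applicant-117", 0.52))  # lands in the human review queue
```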

Cybersecurity & Secondary Protection Systems

If expected code or parameters are bypassed and the system cannot be restored or reverted to compliant risk-assessment and reporting use, there must be a mechanism to shut the system down so that unauthorized use does not occur. Further, high-risk AI systems must be resilient against malicious actions or actors that may compromise the security of the AI system and result in harmful or otherwise undesirable behavior. It will be imperative for companies to ensure the cybersecurity employed can protect against data poisoning, adversarial attacks, and any exploitation of vulnerabilities in the AI system’s digital assets or the underlying ICT infrastructure.10
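
One way to implement such a shutdown mechanism is a circuit breaker that trips after repeated guardrail violations and forces the serving layer to stop. The sketch below is a simplified assumption of how that might look; real deployments would pair it with alerting and a controlled restore path.

```python
class AISystemCircuitBreaker:
    """Shut the system down when it cannot be restored to compliant behavior.

    Simplified sketch: after too many consecutive guardrail violations,
    the breaker trips and the serving layer must refuse further requests.
    """

    def __init__(self, max_violations: int = 3):
        self.max_violations = max_violations
        self.violations = 0
        self.tripped = False

    def record_check(self, within_parameters: bool) -> None:
        if within_parameters:
            self.violations = 0     # healthy output resets the counter
            return
        self.violations += 1
        if self.violations >= self.max_violations:
            self.tripped = True     # downstream code must stop serving

breaker = AISystemCircuitBreaker()
for ok in (False, False, False):    # three consecutive guardrail breaches
    breaker.record_check(ok)
print("system disabled:", breaker.tripped)  # system disabled: True
```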

Enforcement & Penalties

Enforcement of the EU AI Act is expected to be carried out primarily by market surveillance authorities in each Member State. Additionally, a new entity, the European AI Office, established within the EU Commission, will undertake various administrative, standard-setting, and enforcement tasks, particularly related to the new regulations. This setup aims to ensure effective coordination. The European AI Board, consisting of representatives from member states, will continue to serve as a platform for coordination and advisory purposes to the Commission.

Penalties for violations under the Act will depend on the type of AI system, the company’s size, and the severity of the infringement. Fines range across three tiers:

  • Up to 7.5 million euros or 1.5% of the company’s total worldwide annual turnover (whichever is higher) for supplying incorrect information.
  • Up to 15 million euros or 3% of the company’s total worldwide annual turnover (whichever is higher) for breaches of the EU AI Act’s obligations.
  • Up to 35 million euros or 7% of the company’s total worldwide annual turnover (whichever is higher) for violations involving banned AI applications.

Significantly, as a result of trilogue negotiations, the EU AI Act will now introduce more proportionate caps on administrative fines for smaller companies and startups. Furthermore, the legislation will empower individuals, whether natural or legal persons, to report instances of non-compliance to the relevant market surveillance authority.

Conclusion

While these obligations will not take effect until two years after the final text of the law is published, most likely in 2026, U.S. and other foreign companies developing, licensing, deploying, or using high-risk AI should familiarize themselves with the new requirements now to avoid cost-prohibitive rebuilds, ensure compliance, avoid fines, and incorporate the requirements attaching to the other AI system classifications.

Footnotes

  1. European Council, Press Release, “Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world,” 9 December 2023 (“European Council Press Release”).
  2. See EU AI Act: first regulation on artificial intelligence (Aug. 6, 2023) (last accessed Jan. 4, 2024).
  3. See European Commission Version of the Draft Artificial Intelligence Act, Art. 6, pg. 45, Arts. 8–9, pg. 48 (Apr. 21, 2021) (last accessed Jan. 4, 2024).
  4. See European Commission Version of the Draft Artificial Intelligence Act, Section 5.2.3, pg. 13 and Art. 9, pg. 46 (Apr. 21, 2021) (last accessed Jan. 4, 2024).
  5. Id. See also European Commission Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence and Amending Certain Union Legislative Acts (Jan. 6, 2021) (last accessed Jan. 3, 2024).
  6. See European Commission Version of the Draft Artificial Intelligence Act, Section 5.2.3, pg. 14 and Art. 10, pgs. 48–49 (Apr. 21, 2021) (last accessed Jan. 4, 2024).
  7. See European Commission Version of the Draft Artificial Intelligence Act, Arts. 11–12, pgs. 49–50 (Apr. 21, 2021) (last accessed Jan. 4, 2024).
  8. See European Commission Version of the Draft Artificial Intelligence Act, Art. 13, pg. 50 (Apr. 21, 2021) (last accessed Jan. 4, 2024).
  9. See European Commission Version of the Draft Artificial Intelligence Act, Section 3.3, pg. 10, Section 4, pgs. 10–11, Recital 48, pg. 30 and Art. 14, pg. 41 (Apr. 21, 2021) (last accessed Jan. 4, 2024).
  10. See European Commission Version of the Draft Artificial Intelligence Act, Recitals 50–51, pg. 30 (Apr. 21, 2021) (last accessed Jan. 4, 2024).

Disclaimer

This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Hall Booth Smith, P.C. and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome.

About the Author

Jade Davis

Partner | Tampa Office

T: 813.329.3890
E: jdavis@hallboothsmith.com

Jade Davis focuses her practice on data privacy, cyber security, and construction matters. Jade provides strategic privacy and cyber-preparedness compliance advice and defends, counsels, and represents companies on privacy, global data security compliance, data breaches, and investigations.
