A Closer Look: EU’s Finalized AI Act & What It Means for the U.S.
Introduction
On Wednesday, March 13, 2024, the European Parliament formally approved the landmark European Union Artificial Intelligence Act (AI Act). Touted as the world's first comprehensive legal framework of its kind, the AI Act will take effect in stages over the next three years. It will apply both to businesses operating within the EU and to any AI developers or creators whose AI systems are used in EU countries.
This raises the questions:
- How will the AI Act be applied,
- What does the AI Act mean for businesses operating in the U.S., and
- Should we expect the U.S. to follow suit with similar legislation?
How is the Act Applied?
The AI Act regulates the use of AI according to four levels of risk: (1) unacceptable risk, (2) high risk, (3) limited risk, and (4) minimal or no risk.
Unacceptable Risk
AI systems that threaten the safety, livelihood, or fundamental rights of individuals pose an unacceptable risk and will be banned outright. Such systems include those that subliminally manipulate a person’s voting behavior, exploit a person’s vulnerabilities to incite harmful behavior, or categorize individuals based on sensitive characteristics (e.g., religion, ethnicity, or sexual orientation).
High-Risk
High-risk AI systems are those with the potential to negatively affect the safety and rights of individuals. The use and development of these systems will be heavily regulated under the AI Act. AI used in sectors such as law enforcement and education is more likely to pose a high risk.1
Limited Risk
AI systems such as ChatGPT are categorized as limited risk. Because such systems still pose a risk of manipulation or deceit, developers and providers of limited-risk AI must comply with certain transparency obligations, such as disclosing that content was generated by AI so that users can make an informed decision about whether to engage with it.
Minimal or No Risk
AI systems with minimal or no risk, such as AI-enabled video games or spam filters, are fully permitted under the AI Act.
What Does the AI Act Mean for Businesses in the U.S.?
Most of the obligations outlined in the AI Act apply to “providers” and “developers” of AI systems, meaning those who place AI systems on the market or put them into service in the EU. These obligations apply to every provider or developer of an AI system operated in the EU, regardless of where the provider or developer is located, including those based in the U.S.
The AI Act also imposes obligations on entities that deploy or “use” a third party’s AI system in the EU. Like the EU’s General Data Protection Regulation (GDPR), the AI Act will apply to U.S. companies that market or provide services within the EU using AI technology.
Whether your business provides, develops, or deploys AI technology, the AI Act has the potential to impact your operations within the EU. Penalties for violations can be steep: up to 7% of a business’s global annual revenue or 35 million euros, whichever is higher. A business with 1 billion euros in global annual revenue, for example, could face a fine of up to 70 million euros for the most serious violations.
Will the U.S. Adopt Legislation Similar to the AI Act?
When the GDPR became effective in 2018, a number of other countries, and even individual U.S. states, passed data privacy legislation similar to the GDPR. The U.S., however, has yet to pass federal data privacy legislation like the GDPR. The AI Act will likely spur other countries to pass similar laws, but we are not likely to see “AI Act”-type federal legislation in the U.S. anytime soon.
Though President Biden recently issued an Executive Order on standards for AI safety and security, Congress has not seemed willing or able to pass federal legislation in the data privacy and technology space. More likely, we will see federal agencies, like the FTC, use existing laws and regulations, or possibly adopt new regulations, to address the development and use of AI.
Closing
Need assistance determining whether the AI Act applies to you and how to comply? Contact Richard Sheinis and Jade Davis, leaders of the Hall Booth Smith Data Privacy & Cybersecurity practice group.
Footnotes
- For a full list of obligations for high-risk systems, see Article 16: Obligations of Providers of High-Risk AI Systems of the EU Artificial Intelligence Act.
Disclaimer
This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Hall Booth Smith, P.C. and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome.
About the Authors
Richard Sheinis
Partner | Charlotte Office
T: 980.859.0381
E: rsheinis@hallboothsmith.com
Richard Sheinis assists businesses in the areas of data privacy and cyber security, employment, and technology. He works with a wide variety of companies from small technology businesses to publicly traded companies with a global footprint.
Lea McBryde
Attorney at Law | Charlotte Office
T: 980.949.7826
E: lmcbryde@hallboothsmith.com
Lea McBryde is an Associate in our Charlotte office, where she focuses her practice on data privacy and cybersecurity matters.