Artificial Intelligence: A Year in Review

Introduction

As artificial intelligence (AI) continues to evolve rapidly, its growing impact on data privacy and cybersecurity has prompted governments and organizations to focus on developing comprehensive policy frameworks and regulatory measures. In 2024, AI legislation proliferated as countries across the globe took significant steps to balance the opportunities AI presents with the risks it poses. From privacy concerns to ethical considerations and economic implications, the regulatory landscape for AI grows more nuanced by the day as lawmakers aim to ensure that AI is developed and used responsibly.

Global Regulatory Developments

In 2024, several key themes emerged as countries worked to regulate AI technologies.

1. The European Union AI Act

The European Union continues to lead the charge in AI regulation with the implementation of the EU AI Act. First proposed in 2021, the landmark legislation officially went into effect on August 1, 2024. Touted as the world’s first comprehensive legal framework of its kind, the Act will apply both to businesses operating within the EU and to any AI developers or creators whose AI systems are used in EU countries.

The Act establishes a risk-based approach to AI governance, categorizing AI systems based on their potential to harm society. These categories include (1) unacceptable risk, (2) high risk, (3) limited risk, and (4) minimal or no risk.

AI systems categorized as high-risk will face the most stringent regulations and include applications that involve biometric data and critical infrastructure management. AI usage in industries such as law enforcement and education is more likely to pose a high risk.

AI systems that pose limited or minimal risk will face lighter regulatory requirements, with the focus on ensuring transparency and user awareness.

Whether a business provides, develops, or deploys AI technology, the Act has the potential to impact its operations within the EU. Penalties for violations of the AI Act can be severe: up to 7% of a business’s global annual revenue or 35 million euros, whichever is higher.
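
To put that ceiling in concrete terms, the brief sketch below illustrates the “whichever is higher” calculation. The revenue figure is purely hypothetical, and the snippet is only an illustration of the arithmetic, not a statement of how regulators will assess fines in practice.

```python
# Illustrative sketch only (not legal advice): the AI Act caps its top
# penalty tier at the higher of 7% of worldwide annual revenue or EUR 35 million.
def max_ai_act_penalty(global_annual_revenue_eur: float) -> float:
    """Upper bound of a fine under the Act's highest penalty tier."""
    return max(0.07 * global_annual_revenue_eur, 35_000_000.0)

# Hypothetical example: a business with EUR 2 billion in worldwide annual revenue
print(f"Maximum exposure: EUR {max_ai_act_penalty(2_000_000_000):,.0f}")
# Output: Maximum exposure: EUR 140,000,000
```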

Enforcement of the Act will occur in stages, and we expect the European Commission to publish a series of guidance documents in the coming year to further assist companies with compliance.

2. National Developments

In the United States, the AI regulatory landscape has been characterized by a more market-driven approach, though 2024 marks a year of growing attention from policymakers and federal agencies. Certain states have taken concrete steps to balance AI innovation with the need for ethical standards and safety mechanisms, and we expect this trend to continue.

  • In 2024, Colorado passed the Colorado AI Act, set to take effect on February 1, 2026. The focus of Colorado’s law is the concept of algorithmic discrimination. Any employer doing business in Colorado with more than 50 employees will have specific obligations with regard to hiring, retention, and promotion of employees when using AI to make decisions.
  • In 2024, Illinois also passed a law to regulate the use of AI by employers. The law will take effect on January 1, 2026. Illinois’s legislation, which amends the Illinois Human Rights Act, makes it unlawful for employers to use AI that discriminates on the basis of a protected class. Employers are also prohibited from using zip codes as a proxy for protected classes and have an affirmative duty to notify employees when they use AI.
  • In September 2024, California passed a generative AI law that requires developers to publish information on their websites regarding the types of data used to train their AI systems. Generative AI is defined as AI “that can generate derived synthetic content, such as text, images, video, and audio, that emulates the structure and characteristics of the artificial intelligence’s training data.” AI developers must ensure compliance with the law by January 1, 2026.
  • The U.S. Federal Communications Commission (FCC) declared that AI-generated voices fall within the meaning of “artificial” under the Telephone Consumer Protection Act (TCPA), and communications using such voices must comply with TCPA requirements.

Nonetheless, much of the regulatory framework for AI in the U.S. remains fragmented, with different states introducing their own AI-specific laws. The challenge for policymakers will be to create a more unified national strategy that balances technological advancement with accountability.

3. China’s Approach to AI Regulation

China continues to develop its AI governance strategy, aiming to become a global leader in AI while also maintaining tight control over the technology. In 2024, the Cyberspace Administration of China (“CAC”) released a draft regulation titled Measures for Labelling Artificial Intelligence Generated Synthetic Content, aimed at regulating the labelling of AI-generated content.

4. Other Global Efforts

While the United Kingdom did not pass any major AI legislation in 2024, its new administration affirmed its intention to regulate AI in the King’s Speech. The UK has taken a pro-innovation approach to AI that focuses on the safe use of AI without stifling development.

In Australia, the Department of Industry, Science and Resources released the “Voluntary AI Safety Standard,” which builds on previous efforts to support and promote consistency among best practices when developing AI. While not mandatory, the Australian standard consists of specific guardrails, including testing, transparency, and accountability requirements.

Countries in South America and Africa also introduced legislation governing the use of AI in 2024, though no laws have been officially passed.

Ethical Considerations and Challenges

As AI technologies advance, ethical concerns have become a central topic of policy discussions. In 2024, lawmakers focused on issues such as:

  • Bias and Discrimination: AI systems have been found to replicate and amplify biases, especially in areas like employment, law enforcement, and lending. Governments are increasingly requiring that AI developers address such biases through diverse data sets, transparency, and fairness audits.
  • Privacy and Data Protection: With AI systems processing vast amounts of personal information, existing frameworks such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) continue to drive stricter data privacy requirements for AI.
  • Accountability and Liability: Determining who is responsible when AI systems cause harm or make errors remains a key challenge. In 2024, policymakers worked on frameworks to ensure accountability, particularly in autonomous vehicles, healthcare, and finance, and additional accountability measures are expected in 2025.
  • Job Displacement: The automation potential of AI has raised concerns about the job market and displacement. Governments are exploring policies to address the future of work, focusing on technology education and retraining workers displaced by AI technologies.

Conclusion

The regulatory developments of 2024 set the stage for an exciting and challenging future where AI can be guided by principles of fairness, safety, and transparency. For a deeper look at the road ahead for AI policy and regulation under a new administration, click here.

Disclaimer

This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Hall Booth Smith, P.C. and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome.

About the Author

Lea McBryde

Attorney at Law | Charlotte Office

T: 980.949.7826
E: lmcbryde@hallboothsmith.com

Lea McBryde is an Associate in our Charlotte office, where she focuses her practice on data privacy and cybersecurity matters.
