NIST’s AI Guidance & Toolkits for U.S. Companies
Introduction
Artificial Intelligence (AI) is transforming industries, enhancing consumer experiences, and advancing societal goals. However, as AI’s influence rapidly grows, so do the complexities of managing its risks — particularly around cybersecurity and privacy rights.
To guide organizations in navigating this evolving landscape, the National Institute of Standards and Technology (NIST) released critical frameworks and profiles designed to help companies manage the opportunities and risks posed by AI.
NIST’s AI Risk Management Framework (AI RMF): A Voluntary but Essential Tool
NIST has been at the forefront of developing actionable frameworks for emerging technologies, and its AI Risk Management Framework (AI RMF) is no exception. This voluntary framework is intended to help companies incorporate trustworthiness and risk management considerations into the design, development, and deployment of AI systems.
The AI RMF, along with its companion AI RMF Playbook, is a toolkit that enables organizations to better identify, assess, and mitigate the risks posed by AI technologies. It addresses various facets of AI risks, from safety concerns to transparency and accountability.
Moreover, the AI RMF Roadmap and AI RMF Crosswalk offer comprehensive tools for businesses to navigate the AI landscape responsibly. NIST also launched a Perspectives page featuring examples and feedback from organizations that have incorporated the AI RMF, so companies can see how others are putting the framework into practice. For companies grappling with how to ensure their AI systems are trustworthy and secure, or simply unsure where to start, these resources are invaluable in building robust risk management protocols.
Generative AI & New Risk Profiles
Generative AI, with its ability to create content, code, or designs autonomously, presents unique risks, including intellectual property challenges, ethical concerns, and heightened cybersecurity vulnerabilities. On July 26, 2024, NIST released a new profile, NIST-AI-600-1: Artificial Intelligence Risk Management Framework – Generative AI Profile, developed in response to the October 30, 2023 Executive Order on Safe, Secure, and Trustworthy AI, for organizations leveraging generative AI technologies.
Companies incorporating generative AI into their workflows should consider this a must-read for aligning their risk management strategies with federal guidance.
The Intersection of AI, Cybersecurity, & Privacy
As AI capabilities advance, they present both new opportunities and risks for cybersecurity and privacy. On one hand, AI can enhance cybersecurity tools, improving threat detection and response times. On the other hand, AI systems themselves can become targets of cyberattacks, with adversaries exploiting vulnerabilities to gain unauthorized access or disrupt operations.
One of the primary concerns for privacy is the potential for AI to re-identify individuals from disparate datasets, revealing sensitive information. Additionally, AI’s predictive power can lead to invasive behavioral tracking and surveillance. Organizations need to be mindful of these privacy risks and implement measures to mitigate data leakage, particularly during AI model training.
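To make the re-identification risk concrete, below is a minimal sketch of a classic "linkage attack": two datasets that each look harmless on their own are joined on shared quasi-identifiers (ZIP code, birth year, sex) to tie sensitive records back to named individuals. All names and records here are hypothetical, invented purely for illustration.

```python
# Hypothetical linkage (re-identification) attack: two separately
# "anonymized" datasets are joined on shared quasi-identifiers.

# De-identified medical records: names removed, quasi-identifiers remain
medical_records = [
    {"zip": "33601", "birth_year": 1980, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "33605", "birth_year": 1975, "sex": "M", "diagnosis": "asthma"},
]

# Public voter roll: names plus the same quasi-identifiers
voter_roll = [
    {"name": "Alice Smith", "zip": "33601", "birth_year": 1980, "sex": "F"},
    {"name": "Bob Jones", "zip": "33605", "birth_year": 1975, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")


def link(records, roll):
    """Match de-identified records to named individuals via quasi-identifiers."""
    matches = []
    for rec in records:
        key = tuple(rec[q] for q in QUASI_IDENTIFIERS)
        for person in roll:
            if tuple(person[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append({"name": person["name"],
                                "diagnosis": rec["diagnosis"]})
    return matches


# Each "anonymous" medical record is now tied to a named individual.
print(link(medical_records, voter_roll))
```

This is why removing names alone is not anonymization: the combination of a few ordinary attributes is often unique to one person, and AI systems that aggregate many datasets amplify exactly this kind of matching.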
NIST’s Program on Cybersecurity & Privacy of AI
Recognizing the critical need for standards and guidelines in this space, NIST is establishing a dedicated program for the cybersecurity and privacy of AI. This initiative aims to address the gaps in current frameworks and develop best practices for securing AI systems, protecting personal data, and defending against AI-enabled attacks.
The program will leverage existing NIST resources, such as:
- Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST SP 800-218A)
- Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2)
- Draft Guidelines for Evaluating Differential Privacy Guarantees (NIST SP 800-226)
- The NICE Workforce Framework for Cybersecurity, which includes a focus on AI security.
Additionally, NIST offers tools like Dioptra, a test platform for evaluating machine learning algorithms, and the PETs Testbed, which explores privacy-enhancing technologies for protecting AI models from privacy breaches.
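As a concrete illustration of the differential-privacy guarantees that NIST SP 800-226 evaluates, here is a minimal sketch (not any NIST tool's API) of the standard Laplace mechanism applied to a counting query. The dataset and epsilon value are hypothetical, chosen only to show the mechanics: calibrated random noise is added so that no single individual's presence can be confidently inferred from the released statistic.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count: true count plus Laplace(1/epsilon) noise.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so scale = sensitivity / epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_sample(1.0 / epsilon)

# Hypothetical example: release a noisy count of records with a
# sensitive attribute instead of the exact count.
ages = [34, 51, 29, 62, 45, 38]
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
```

Smaller epsilon means more noise and stronger privacy; the guarantee holds regardless of what other datasets an attacker already has, which is what distinguishes differential privacy from ad hoc de-identification.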
Adapting Existing Frameworks: A Collaborative Approach
In collaboration with the National Cybersecurity Center of Excellence (NCCoE), NIST is developing an AI Community Profile to adapt existing frameworks like the Cybersecurity Framework, Privacy Framework, and NICE Framework. The AI Community Profile will address three primary areas:
- Managing cybersecurity and privacy risks from the use of AI;
- Defending against AI-enabled cyberattacks; and
- Enhancing cybersecurity defenses and privacy protections through AI technologies.
This collaborative effort will help organizations better understand the dependencies on data across their operations and update their risk management strategies accordingly. For instance, organizations might need to revise their data asset inventories and anti-phishing training to account for AI-enabled threats like voice spoofing.
The Path Forward: A Holistic Approach to AI Risk Management
AI will continue to evolve and shape the future of technology and business. For U.S. companies, staying ahead of the curve requires adopting a holistic approach to AI risk management. This involves not only understanding the ethical and privacy rights implications of AI but also implementing cybersecurity and privacy practices that safeguard against AI-related risks.
By leveraging the tools and frameworks provided by NIST and aligning with guidelines such as those from the U.S. Department of State, companies can harness the power of AI responsibly while protecting their operations, customers, and stakeholders from potential harm.
Conclusion
For more information, U.S. companies are encouraged to explore NIST’s resources and stay up to date with the latest developments in AI risk management. Adopting these practices today will help ensure a safer and more secure future in the age of AI.
Hall Booth Smith, PC is committed to helping our clients anticipate and comply with emerging guidelines and frameworks. Reach out to our Data Privacy & Cybersecurity team to stay ahead of the ever-changing digital landscape.
Disclaimer
This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Hall Booth Smith, P.C. and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome.
About the Author
Jade Davis
Partner | Tampa Office
T: 813.329.3890
E: jdavis@hallboothsmith.com
Jade Davis focuses her practice on data privacy, cybersecurity, and construction matters. Jade provides strategic privacy and cyber-preparedness compliance advice and defends, counsels, and represents companies on privacy, global data security compliance, data breaches, and investigations.