The Treasury’s AI Report: Challenges, Opportunities, and a Path Forward

The United States Department of the Treasury recently released its 36-page Report on the Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector. This timely document provides a macro-level synthesis of feedback from financial industry stakeholders, offering insights into generative AI’s transformative potential and challenges. The Treasury summarizes the concerns it gathered, highlights its own regulatory efforts and those of other agencies, and underscores the importance of continued collaboration among national and international regulatory bodies.

The Expanding Role of AI in Financial Services

Perhaps unknown to the average consumer, AI has long been a staple of the financial industry. Since the 1940s, “traditional AI”—statistical models trained on defined datasets—has been used for credit underwriting, trading, investment advice, customer service, compliance, forecasting, and process automation.

Generative AI, with its ability to produce complex outputs from vast datasets, has rapidly expanded these use cases. From back-end processes to customer-facing applications, generative AI offers immense promise but also introduces heightened risks. Stakeholders emphasized that sophisticated oversight and governance are essential to mitigate potential consumer harm and reputational damage.

Financial Institutions’ Key Concerns with Generative AI

The Treasury’s report identifies five primary risks raised by stakeholders:

  1. Data Quality and Privacy
    Generative AI systems rely on massive datasets, often sourced from third parties. This raises serious questions about data integrity, security, privacy, and intellectual property. Improper data handling can result in breaches, unauthorized use, or biased outputs, threatening both consumers and financial institutions.
  2. Bias and Fairness
    Generative AI models trained on historical data can inadvertently replicate societal biases. These biases create risks of discriminatory outcomes in credit underwriting, lending, and hiring, potentially violating civil rights laws like the Equal Credit Opportunity Act (ECOA) and the Fair Housing Act (FHA). The report emphasizes that these laws are technology-neutral, ensuring firms remain accountable for biased outcomes, regardless of whether decisions are made by humans or AI.
  3. Explainability and Accountability
    Many generative AI models function as “black boxes,” making it difficult to understand or explain their decision-making processes. This lack of transparency is especially problematic in high-stakes contexts like regulatory compliance and financial decision-making, where accountability to regulators and consumers is paramount. Some respondents suggested retrieval-augmented generation (RAG) as a way to increase decision-making transparency, but the practice is not yet widespread.
  4. Accuracy and AI Hallucinations
    A key concern for institutions developing or purchasing AI products to interact with customers’ sensitive financial information is accuracy. Generative AI can produce “hallucinations”—outputs that are confidently presented but factually incorrect. In customer-facing roles, such as chatbots or financial advisory tools, these errors can erode trust, cause financial harm, and expose firms to liability.
  5. Cybersecurity and Fraud Risks
    Generative AI can be weaponized by bad actors to create deepfakes, phishing campaigns, or fraudulent schemes at scale. These vulnerabilities increase the burden on financial institutions to develop advanced countermeasures to detect and prevent fraud.

Stakeholders Call for Regulatory Clarity

Stakeholders expressed strong support for a more coordinated regulatory framework, emphasizing the need to reduce fragmentation and streamline compliance requirements. Specific recommendations included:

  • Aligning AI definitions across agencies to facilitate interagency coordination;
  • Clarifying standards for data privacy, security, and quality;
  • Expanding consumer protections to address AI-specific risks;
  • Harmonizing federal regulations to prevent conflicting state laws and regulatory arbitrage;
  • Promoting collaboration among domestic and international regulators to share best practices and monitor systemic risks.

Interestingly, many respondents favored increased regulatory oversight, countering popular narratives suggesting less regulation for AI. Some stakeholders even proposed the creation of a dedicated agency to coordinate AI resource-sharing and policy across industries and federal agencies.

Proposed Treasury Action

In response to the industry’s input, the Treasury outlines a series of next steps:

  1. Strengthen Collaboration
    The Treasury recommends enhancing coordination among governments, regulators, and the financial sector to develop consistent standards for AI use domestically and internationally.
  2. Address Regulatory Gaps
    Further analysis is needed to identify and address gaps in existing frameworks, particularly in areas like consumer protection, data privacy, and supervisory consistency between banks and nonbanks.
  3. Enhance Risk Management
    Regulators should refine risk management frameworks and clarify expectations for firms deploying AI systems, ensuring robust oversight and accountability.
  4. Facilitate Information Sharing
    The Treasury supports public-private initiatives to share best practices, develop data standards, and create tools like “AI nutritional labels” to assess risks and ensure compliance.
  5. Prioritize Compliance
    Financial firms must proactively review their AI systems for compliance with existing laws before deployment. Periodic reassessments are essential to adapt to evolving risks and regulatory expectations.

Conclusion

The report navigates complex trade-offs between innovation and consumer protection. As generative AI reshapes financial services, firms must prioritize fairness, transparency, and security in their systems, with no reprieve from regulators. Civil rights laws stand firm regardless of emergent technology, requiring financial institutions to keep their AI systems under scrutiny to avoid illegal discrimination.

For legal and compliance professionals, the report offers guidance on the evolving regulatory landscape, the agencies actively involved, and regulatory attitudes. Firms should review the full report to stay ahead of emerging requirements and align their practices with industry standards.

For a more detailed assessment of your financial institution’s artificial intelligence initiatives and compliance, reach out to the Data Privacy & Cybersecurity team at Hall Booth Smith, PC.

Disclaimer

This material is provided for informational purposes only. It is not intended to constitute legal advice nor does it create a client-lawyer relationship between Hall Booth Smith, P.C. and any recipient. Recipients should consult with counsel before taking any actions based on the information contained within this material. This material may be considered attorney advertising in some jurisdictions. Prior results do not guarantee a similar outcome.

About the Author

Savannah Liner Avera

Attorney at Law | Atlanta Office

T: 404.954.6973
E: savera@hallboothsmith.com

Savannah Liner Avera protects the rights of clients in health care and cyberspace. She handles aging services litigation and serves on the firm’s Coronavirus Strategic Team that counsels clients on complex matters related to the global pandemic. She represents providers including hospitals, skilled nursing facilities, assisted living facilities, and sub-acute facilities in a wide range of liability claims.