Feds double down on financial services cybersecurity warnings
Cyberthreat actors are increasingly using AI to perpetrate fraud, the U.S. Treasury Department reports.
The U.S. Treasury Department underscored for members of the financial services sector recently that artificial intelligence is becoming a powerful weapon for fraudsters and cyberattackers — who will, for a time, outgun defensive efforts.
“Like other critical infrastructure sectors, the financial services sector is increasingly subject to costly cybersecurity threats and cyber-enabled fraud,” the department stated in a report released on April 3, 2024. “As access to advanced AI tools becomes more widespread, it is likely that, at least initially, cyberthreat actors utilizing emerging AI tools will have the advantage by outpacing and outnumbering their targets.”
The department’s report was based on interviews with representatives from 42 financial services and technology companies on the current state of AI fraud and cybersecurity risks and safeguards.
The department stated it will work with the federal Financial and Banking Information Infrastructure Committee (FBIIC) and the industry’s Financial Services Sector Coordinating Council (FSSCC) “to map major existing and anticipated regulatory regimes relevant to financial sector firms and their vendors in the cybersecurity and fraud space.”
Treasury stated it will pay heed to banks’ concerns regarding the potential for “regulatory fragmentation,” which would compel financial institutions to comply with disparate federal, state and even international regulations regarding AI cybersecurity and fraud defense.
Treasury, the FBIIC and FSSCC “will explore potentially enhancing coordination across regulators with the goal of fostering responsible AI advancements, while addressing risk, and understanding applicable regulatory regimes,” the report stated. “The coordination actions could include the recommendation to establish AI-specific coordinating groups, as allowable, to assess enhancing shared standards and regulatory coordination options.”
David Adams, a securities regulatory attorney at Mintz, said the Treasury report takes a “holistic view” of the risks the financial services sector faces from AI.
Regulatory fragmentation is “a real issue for many of our clients,” Adams said. “What you hope for is that you have alignment across jurisdictions, across states, across federal governmental agencies, but you very rarely get that.”
In a statement accompanying its report, the department said it also will “work with the private sector, other federal agencies, federal and state financial sector regulators, and international partners on key initiatives to address the challenges” that AI presents.
“While this report focuses on operational risk, cybersecurity, and fraud issues, Treasury will continue to examine a range of AI-related matters, including the impact of AI on consumers and marginalized communities,” the department added.
In its report, Treasury cited interviewees who suggested the National Institute of Standards and Technology’s AI Risk Management Framework could be expanded to include more substantive information on AI governance specific to the financial sector.
The report stated that many financial institutions are using artificial intelligence but that smaller banks lack the resources to develop their own in-house AI systems.
Those interviewed for the report also noted a lack of consistency across the financial sector on the definition of “artificial intelligence,” which Adams called concerning.
“Many people mean different things when they say AI,” he said.
“The more we can move toward a world where people have an established set of definitions, the better,” Adams added. “Clear definitions will also help avoid some regulatory fragmentation … because, if everyone’s working off the same definitions, it becomes much easier to regulate things in a common way.”
The department released its report amid the Biden administration’s announcement this week of new mandatory safeguards for all federal agencies to help ensure their responsible use of AI.