Scammers around the world use artificial intelligence to trick their way inside the systems of banks and credit unions. iStock illustration
New Hampshire banks are facing cybersecurity threats more formidable than anything seen before: hackers armed with the latest AI tools.
Cybercriminals are still trying to rob banks and their customers via phishing attacks or by embedding malware in bank computer systems to access highly sensitive financial data.
But with AI tools, some of which can be purchased on the dark web for as little as a few hundred dollars, the quality, volume and effectiveness of attacks have been exponentially enhanced to a degree unimagined only a few years ago.
“AI has made bad guys better,” said Matthew Sweet, vice president and senior technology and cybersecurity officer at First Seacoast Bank in Dover. “It’s really upped the quality of attacks – and the volume of attacks.”
“It’s armed threat actors with more and better tools and capabilities, no doubt,” said Jason Golden, chief executive of Mainstay Technologies, a Manchester managed service provider (MSP) that oversees IT infrastructures for a wide variety of businesses, including financial institutions. “They’re still using roughly the same tactics, like phishing, but with much better tools.”
The response of banks and other businesses to the new AI threat? In effect, they’re fighting fire with fire.
Cybersecurity experts are increasingly using AI to help monitor, identify, react to and contain cyberattacks when they occur – and even to clean up damaged IT systems afterward.
“As it is, AI and automation also fit cleanly with threat protection,” said Golden. “We’re using AI to fight AI.”
AI Successfully Impersonates Coworkers
Banks and other businesses certainly need all the help they can get when battling AI security threats.
Deloitte’s Center for Financial Services recently estimated that AI could enable additional fraud of up to $40 billion in the United States by 2027, up from $12.3 billion in 2023.
Yet, according to some sources, more than 50 percent of U.S. banks remain vulnerable to the most basic phishing attacks because of weak DMARC (Domain-based Message Authentication, Reporting, and Conformance) implementation.
Meanwhile, none other than Sam Altman, chief executive of OpenAI, has recently warned that the financial industry is facing a “significant impending fraud crisis” tied to AI “voice clones” that can impersonate a person’s voice to defeat verbal-authentication security measures.
Eventually, AI “video clones” are expected to be widely deployed by hackers to fool bankers’ colleagues into providing critical corporate information or to bypass advanced video-authentication measures.
Another major cybersecurity concern: AI is allowing more would-be hackers to engage in sophisticated cyberattacks – even would-be hackers with little or no coding experience.
New Tech Democratizes Cybercrime
All non-coders have to do today is buy a cybercriminal-designed AI model, such as XanthoroxAI, on the dark web – and they’re off and running.
In effect, a well-designed AI tool knows what to do once a target is selected and the launch button pressed.
AI malware models are getting so sophisticated that, once embedded within a targeted IT system, they can now seek coding help from the cloud whenever they’re stymied by security measures, experts say.
“AI can make [attack] changes on the fly,” said Jason Sgro, a senior partner at The Atom Group, a Portsmouth cyber incident response company that has worked privately with banks in the past. “It knows when something is not going the right way and it needs to make corrections. You can let AI make decisions. It’s alarmingly good.”
In the end, AI’s ability to act on its own is removing much of the human aspect of cybercriminal activity, Sgro said.
“It’s lowering the barrier of entry for those who want to be cyber-criminals,” Sgro said.
Both Quality, Quantity of Attacks Increasing
AI is also increasing the sheer volume of cyberattacks.
What were once, for instance, phishing campaigns aimed at hundreds of targets are now campaigns aimed at thousands of targets, thanks to advanced AI tools, Sgro said.
AI’s enhanced “force multiplication” is dramatically changing the sheer scope of cyberattacks around the world, Sgro said.
Meanwhile, the quality of attacks, not just the volume of attacks, is also improving.
Setting aside AI malware that can literally take corrective action in the middle of attacks, AI tools have improved the quality of one of the oldest and most basic types of cybercriminal activity: email phishing attacks.
Gone are the days of detecting malicious phishing emails by their poor grammar, misspellings and odd syntax and tone.
“AI has done away with the poorly worded email scams,” said First Seacoast’s Sweet. “If you want, AI can have a very precise, business-like language. A [hacker] can tell the AI tool to use business-executive like language in emails – and it then produces that language.”
Moving forward, Sweet said it’s important for banks and other financial institutions to work together to come up with effective strategies and tactics to counter new AI-empowered security threats.
“There’s a lot of discussion and activity around AI issues,” said Sweet, noting he talks with other local bank officials via the New Hampshire Bankers Association. “We do share information.”
