Deepfake fraud has surged tenfold globally, and generative artificial intelligence-enabled social engineering attacks are dominating the headlines. Fraud losses attributable to gen AI are projected to grow at a compound annual rate of 32% in the U.S. alone.
The effectiveness of traditional fraud prevention signals
Here's the reality: Banks face an expectation gap around generative AI's role in fraud prevention, especially when it comes to preventing gen AI-powered attacks.
For fraudsters, gen AI is a highly effective tool for carrying out financial crime. It shortens the time it takes bad actors to write phishing emails, generate fake images and sequence information, among other capabilities. It also lets fraudsters automate repetitive workflows like mining stolen data, creating fictitious identities, inputting user credentials en masse and mimicking human behavior to bypass detection.
But for banks, using gen AI is more complicated. Some people seem to think that simply implementing gen AI will leave banks better equipped to detect and prevent all forms of fraud, including sophisticated gen AI-driven scams. It's true that gen AI has applications in fraud prevention, like recognizing and explaining patterns in a dataset. It can also help banks create synthetic documents for training machine learning models, aiding data augmentation.
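To make the synthetic-data point concrete, here is a minimal Python sketch, not drawn from any bank's actual pipeline: the transaction fields, distributions and counts are invented for illustration, and scikit-learn stands in for whatever modeling stack a bank really uses. It shows the narrow idea of padding scarce labeled fraud data with synthetic records before training a simple classifier.

```python
# Minimal, hypothetical sketch: padding scarce fraud labels with synthetic
# records before training a simple classifier. Field names, distributions
# and counts are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def make_transactions(n, fraud=False):
    """Generate toy transaction features: amount, hour of day, new-device flag."""
    amount = rng.lognormal(mean=6.0 if fraud else 4.0, sigma=1.0, size=n)
    hour = rng.integers(0, 24, size=n)
    new_device = rng.binomial(1, 0.7 if fraud else 0.1, size=n)
    return np.column_stack([amount, hour, new_device]), np.full(n, int(fraud))

# "Real" data: plenty of legitimate transactions, very few labeled frauds.
X_legit, y_legit = make_transactions(5000, fraud=False)
X_fraud, y_fraud = make_transactions(50, fraud=True)
X_real = np.vstack([X_legit, X_fraud])
y_real = np.concatenate([y_legit, y_fraud])

# Hold out a test set of real records only, so evaluation stays honest.
X_train, X_test, y_train, y_test = train_test_split(
    X_real, y_real, test_size=0.2, stratify=y_real, random_state=0)

# Augmentation step: add synthetic fraud-like records to the training set
# so the classifier sees a less skewed class balance.
X_synth, y_synth = make_transactions(1000, fraud=True)
X_train = np.vstack([X_train, X_synth])
y_train = np.concatenate([y_train, y_synth])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Keeping the synthetic records out of the test split is deliberate: the model is still judged only on genuine data, which is the role synthetic documents can reasonably play.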
But banks need to see gen AI for what it is: an automation tool rather than a precision tool. They also need to remember that when we collectively talk about how advanced gen AI's capabilities are, we mostly do so from a beginner's perspective. Closing the gap between what banks expect from gen AI and what it can actually do in fraud prevention today is critical to protecting banks' assets and customers. Gen AI, as it stands, is not a silver bullet: It should be one small part of a layered approach to fraud prevention.
Before banks jump to incorporate gen AI into their fraud prevention processes, they need to make sure they are using basic fraud prevention techniques, like step-up authentication, effectively. This is especially true given that many financial institutions still rely on outdated modes of fraud prevention, such as knowledge-based authentication, or KBA. In fact, financial services organizations have reported a
While some banks are stuck in the past with KBA, many are already using step-up authentication methods like two-factor authentication, or 2FA, and multifactor authentication, or MFA. Simpler, easier-to-implement, precision-focused moves, like adjusting risk tolerance, are in many cases more impactful than using gen AI as a fraud prevention tool. Once a bank has ironed out the finer points of how and when to trigger MFA (see the sketch below), it can layer on additional, harder-to-spoof verification methods.
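As a concrete, hypothetical example of "how and when to trigger MFA," the Python sketch below combines a few invented risk signals into a score and steps up authentication when that score crosses a threshold. The signals, weights and threshold are placeholders, not any real bank's rules.

```python
# Minimal sketch of risk-based step-up authentication. The signals,
# weights and threshold below are illustrative placeholders; a real
# system would calibrate them against historical fraud outcomes.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    new_device: bool         # device fingerprint not seen before
    unusual_location: bool   # geolocation far from the customer's norm
    high_value_action: bool  # e.g., adding a payee or wiring funds
    failed_attempts: int     # recent failed logins on this account

def risk_score(attempt: LoginAttempt) -> float:
    """Combine a few signals into a 0-1 risk score (illustrative weights)."""
    score = 0.0
    score += 0.35 if attempt.new_device else 0.0
    score += 0.25 if attempt.unusual_location else 0.0
    score += 0.25 if attempt.high_value_action else 0.0
    score += min(attempt.failed_attempts, 3) * 0.05
    return min(score, 1.0)

def requires_step_up(attempt: LoginAttempt, threshold: float = 0.4) -> bool:
    """Trigger MFA when the score crosses the bank's risk tolerance.

    Lowering the threshold tightens risk tolerance; raising it loosens it.
    """
    return risk_score(attempt) >= threshold

# Example: a login from a new device that tries to wire funds.
attempt = LoginAttempt(new_device=True, unusual_location=False,
                       high_value_action=True, failed_attempts=0)
print(requires_step_up(attempt))  # True -> prompt for MFA before proceeding
```

Adjusting that single threshold is exactly the kind of precision-focused, risk-tolerance tuning described above, and it requires no gen AI at all.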
When you boil it down, the effectiveness of a bank's fraud prevention strategy depends on a solid foundation of authentication, data quality and predictive modeling. Gen AI can then act as a complementary technology that enhances detection capabilities, but it isn't a viable replacement for layered fraud prevention. In other words, it's time to stop fighting gen AI-driven fraud with more gen AI.
In addition to step-up authentication, customer education can be a fundamental layer of fraud prevention. At the end of the day, customers are both a bank's biggest revenue driver and its biggest vulnerability. Banks can create an additional layer of defense by educating customers about the risks of AI-driven fraud, like
Groups like Coinbase, Tinder and Meta have already joined the fray by
To summarize: Gen AI technology is exciting and has potential. But assuming that banks need to fight gen AI-driven fraud with more gen AI blows this technology's power out of proportion.
Instead of rushing to implement gen AI, banks should focus on strengthening their existing security measures, such as MFA or a zero-tolerance risk policy.
The bottom line is that there's no substitute for multi-touch fraud prevention. Practical in-house fixes and zero-tolerance risk policies can empower banks to protect themselves against gen AI fraud better than, well, gen AI itself.