The rising tide of AI is lifting all fraud – the current situation is bad and getting worse.
Proof’s Transaction & Identity Fraud Bulletin (PDF) highlights and explains a surge in cyber-driven fraud. The primary causes are the growth of personal data on the internet (from sources such as social media), the ability of AI to harvest and collate that data, and the emergence of fraud-as-a-service turbo-charged by AI.
“Fraud today doesn’t look like it did five years ago. It’s synthetic, it’s autonomous, and it’s scaling,” comments Pat Kinsel, CEO of Proof. “We’re seeing high-risk interactions involving billions in assets… trust must now be engineered in a world where identity can be convincingly faked and monetized at scale.”
The potential threat is daunting. Deloitte’s Center for Financial Services has predicted that generative AI could enable fraud losses in the United States to reach $40 billion by 2027, up from $12.3 billion in 2023, a compound annual growth rate of 32%.
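For context, the arithmetic roughly checks out: compounding the 2023 figure at 32% for four years lands just short of the rounded $40 billion headline number. A minimal sketch, using only the figures from the Deloitte prediction above:

```python
# Rough check of the Deloitte projection cited above:
# $12.3B in 2023 compounding at ~32% per year through 2027.
base_losses = 12.3   # USD billions, 2023 (per Deloitte)
cagr = 0.32          # compound annual growth rate
years = 2027 - 2023  # four years of compounding

projected = base_losses * (1 + cagr) ** years
print(f"Projected 2027 losses: ${projected:.1f}B")
# ~= $37.3B, i.e. in the ballpark of Deloitte's rounded $40B figure
```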
There are two primary factors feeding this growth. The first is the sheer amount of personal data available to the bad guys on the internet. Some of it is stolen by malware such as infostealers; more is scraped from social media sites. Combined and collated, this information is more than enough to start the fraud process.
The second factor, explains John Heasman, CISO at Proof, is “the emergence of generative AI and the ability to spoof aspects of how individuals present themselves during a transaction process – deepfake voice, fake driver’s license, false documentation, all generated by generative AI. When you combine these two things, you’ve got well-prepared threat actors coming into business processes with full knowledge of their target victims.”
Much of future fraud will be driven by consolidation within the emerging criminal fraud-as-a-service offerings. For now, the service is somewhat disjointed, but that won’t continue. “There are three aspects to this service,” suggests Heasman. The first is source target data. “They’re selling fullz [complete packages of stolen personal data], logs from infostealers and so on. Next, they sell ‘knowledge’ on how to create deepfakes and how to bypass KYC on specific sites such as a particular cryptocurrency exchange. The third aspect is the service element – how to generate a driver’s license from the fullz you already have. That’s the current ‘service’ element.”
For now, fraud-as-a-service is a patchwork of vendors selling different parts of the process. It’s the same democratization of cybercrime already visible throughout the criminal underground: you no longer need to be technically capable to be a cyber fraudster; you merely need to be a criminal.
That alone will increase the incidence of fraud, but two other developments will make it worse. The first is the more effective use of AI. AI agents could eliminate the manual task of isolating targets from purchased fullz logs, finding the 100 most promising targets in a list of 100,000 records – the first element of the three-part attack process. In the second element, knowledge of how to create deepfake voice will expand to include the creation of compelling deepfake video. Deepfake voice has already crossed the uncanny valley, and deepfake video will follow suit.
In the third element (service) of the fraud process, Heasman expects to see the arrival of AI-assisted ‘aging’. Fraudsters seek to transfer money into accounts they own, but creating a new account just to receive stolen money is a weakness. “Anything new attracts greater scrutiny from fraud prevention services,” he explains. “Bank accounts that have existed for several months and have a history of ‘normal’ operation are less likely to attract attention. ‘Aging’ is the process of creating and maintaining these fake accounts that are solely designed to accept stolen funds without attracting attention.”
Currently, this is a manual and time-consuming process. “I can see agentic AI taking on this process,” he continues, “just continuously creating and aging email and bank accounts so the threat actor has a constant supply of ready-made accounts that fly under the radar of scrutiny because they look normal.”
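Defenders can hunt for the statistical fingerprints such automation would leave behind. As a purely illustrative sketch – the feature and example values are assumptions, not anything Proof describes – scripted aging activity tends to be unnaturally regular, which a simple coefficient-of-variation check over an account’s transaction timestamps can surface:

```python
from statistics import mean, stdev

def regularity_score(timestamps: list[float]) -> float:
    """Coefficient of variation of inter-event gaps (seconds).
    Human activity is bursty (high CV); scripted 'aging'
    activity tends to be suspiciously regular (low CV).
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")  # too little history to judge
    return stdev(gaps) / mean(gaps)

# Hypothetical example: near-perfect 24h scripted cadence vs. human bursts
scripted = [0, 86400, 172800, 259200, 345600]
human = [0, 3600, 90000, 95000, 400000]
print(regularity_score(scripted))  # ~0 -> candidate for review
print(regularity_score(human))     # >1 -> looks organic
```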
Sooner or later, some enterprising criminal group is likely to consolidate these disparate elements of the fraud process into a single, unified end-to-end service – perhaps using AI to do so.
The simple reality is that the cyberworld is ripe for fraud. The speed, scale, and sophistication now added by AI mean that manual legacy approaches to fraud detection can no longer cope. But just as the bad guys call on AI to improve the generation of fraud, so can the good guys use AI to increase the speed, scale, and sophistication of detection. AI-based fraud detection may never be 100% successful, but fraud detection without AI would be catastrophic.
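What that defensive use of AI might look like at its simplest: the sketch below feeds hypothetical transaction features to a stock anomaly detector (scikit-learn’s IsolationForest). The features and values are invented for illustration and are not drawn from the bulletin:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative only: features and values are assumed, not from the report.
# Each row: [amount_usd, account_age_days, txns_last_24h]
history = np.array([
    [120.0, 800, 2],
    [45.5,  430, 1],
    [210.0, 950, 3],
    [89.9,  610, 2],
    [150.0, 720, 1],
] * 40)  # repeated to simulate a modest history of normal activity

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# A new transaction: large amount, young account, burst of activity.
suspect = np.array([[9500.0, 12, 18]])
print(model.predict(suspect))            # -1 -> flagged as anomalous
print(model.decision_function(suspect))  # lower score -> more anomalous
```

Real deployments use far richer behavioral signals, but the principle is the same: let a model learn what “normal” looks like at machine speed, then route the outliers to human review.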
Related: Fraud Losses Reached $12.5 Billion in 2024: FTC
Related: Washington Man Admits to Role in Multiple Cybercrime, Fraud Schemes
Related: Bureau Raises $30M to Tackle Deepfakes, Payment Fraud
Related: New Google Project Aims to Become Global Clearinghouse for Scam, Fraud Data

