It was nearing the end of the year when Mickey Carol, a science and tech reporter for Britain’s Sky News, received an alarming email from a work colleague regarding an AI scam.
The opening line read: “We’ve been deepfaked!”
A link at the bottom of the email led to a pixelated Facebook video advertisement in which “Mickey”–with staccato diction and stiffened movements–claimed to have won £500,000 (US$676,000) in a week.
Alongside her in the deepfake were an AI-generated version of her fellow presenter, Matt Barber, and Apple CEO Tim Cook, promoting an ‘online game with a 95 percent winning percentage’ available on the App Store.
A series of illegal apps had bypassed Apple’s security checks by presenting themselves as simple, child-friendly iPhone games–likely also AI-generated–during the app verification process, only to redirect users to illicit casino platforms once downloaded.
The apps listed legitimate companies, such as MyUnit, a utility-sharing app, as their owners on the App Store.
The scam took place at the end of 2024.
Following Sky’s subsequent investigative report, Apple quickly took down the apps, and the UK’s Gambling Commission swiftly followed suit. But the incident, occurring in one of the world’s largest and most advanced iGaming markets, set a worrying precedent.
Surging Fraud
Online AI-fuelled fraud is surging.
And iGaming–an entirely digital industry worth billions–is a prime target.
In 2024 alone, legal iGaming platforms lost an estimated £730 million (US$1bn) to fraud. And who knows how much was lost in the murky shadow industry that exists alongside regulated iGaming operators – one that increasingly relies on deepfakes, fake apps and false promises to part punters from their money.
As generative AI tools become more widespread and sophisticated, the distinction between legitimate iGaming platforms and fraudulent imitations is becoming ever harder for users to draw.
Undetectable Generative AI
2025 saw the release of multiple generative AI video platforms.
Tools such as OpenAI’s Sora, MovieGen by Meta and Veo by Alphabet are readily accessible to the general public. All it takes is a few minutes and a simple written prompt for these models to produce increasingly hyper-realistic video content.

Seemingly overnight, the realism of these videos has increased so dramatically that they are becoming nearly undetectable.
With the release of Sora 2 in September 2025, the pixelated ‘uncanny valley’ look of the Sky deepfake videos became a thing of the past.
As the Sky News case shows, the use of familiar faces, branding and established names in illicit social media marketing has become an increasingly popular entry point for scammers.
And this year is expected to see a boom in illicit AI-generated activity targeting both iGaming’s operators and its players.
An investigation by Reuters revealed that Meta delivered up to 15 billion scam ads a day to its users – particularly on Facebook, where individuals were exposed to as many as 11 scam advertisements a day. Not only did Meta fail to remove these fraudulent ads; its skewed algorithms actively steered ill-fated users towards even more of them.
Turbocharged
“The problem now is that AI has turbocharged fraud,” says Warren Russell, founder of identity verification specialist eyeDP, speaking to Tribuna.com, an online sports media platform.
Until recently, one of the best lines of defence against generative AI scams was AI literacy.
But in this rapidly evolving scam landscape, our eyes, brains and instincts quite simply cannot keep pace with the illicit technological advances.

And AI-fuelled scams are not just a problem for the unregulated market; licensed operators also face real danger.
For operators who once knew what to look for–and what to block–iGaming fraud has transformed from unsophisticated techniques, consisting largely of cloned cards and chargebacks, into something more insidious and harder to detect.
“What the gambling industry has been good at is using AI for customer personalisation, marketing and game optimisation,” continues Russell.
“But they’ve been quite poor in implementing it across back-office systems. And that’s not just gaming, that’s across most sectors. They go for the snazzy revenue-generating stuff before plugging the hole in the bottom of the bucket.”
Verification
Today regulated iGaming is experiencing a deluge of new challenges: from AI-generated synthetic identities conducting mass bonus abuse, to fraudsters creating AI-generated deepfake documents that outwit many onboarding checks.
Verification platform Sumsub reported that 78 percent of the gambling operators using its service encountered a rise in AI-generated fake documents this year.
Warren Russell thinks the answer is clear: operators must use AI in the battle against AI fraud.
They must also examine behavioural analytics rather than relying solely on identity checks, which, in this new environment of deepfakes, are no longer sufficient on their own.
While licensed operators continue to invest heavily in compliance and security, many of the most damaging scams now occur beyond their direct control – in app stores, social feeds and messaging platforms where regulation remains fragmented.
Digital Safety
For licensed operators, this creates a dual exposure. Generative AI is being used to defeat internal controls, while simultaneously fuelling scams that redirect users to illicit platforms.
Operators can continue to harden their own systems, but as AI lowers the cost of large-scale impersonation, a growing share of iGaming fraud is being generated outside the remit of licensed platforms.
So in 2026, the question is twofold:
What security measures and tactical changes can operators adopt?
And how will the responsibility for AI-enabled deception be shared across the wider digital ecosystem?
iGamingFuture will continue to explore these ideas in our brand new 2026 series on digital safety.
Watch this space.