Synthetic Media Scams in the UK: Who Bears the Burden of AI-Enabled Deception?


Continuing their exploration of the quickening impact of AI on the iGaming space, Trilby Browne, of iGF's Special Investigations Unit, lifts the lid on so-called "Synthetic" Media Scams in the UK

Seemingly overnight, generative AI has become deeply entrenched in every aspect of our lives. Non-elective AI tools are embedded in the platforms we use daily, including search engines and social media. Workplaces encourage their use. And the iGaming industry has embraced them.

From fraud prevention to harmful-play identification and even marketing, platforms have increasingly integrated generative AI at every level of their operations.

But in tandem, a serious issue is emerging. 

“Synthetic” media scams, or deepfakes, have risen exponentially in the last year.

According to the Home Office’s Accelerated Capability Environment, around eight million deepfakes were shared in the UK last year. That’s almost four times the number shared in 2023. 

Perhaps even more damning, a 2026 report from the AI Incident Database stated this type of fraud has gone “industrial”. 

And Fred Heiding, a leading researcher from Harvard University studying AI-fuelled scams, has warned that “the worst is yet to come”. 

Deepfake Videos

Scammers can impersonate anyone they wish with ease: generative AI is already capable of highly convincing voice cloning, and deepfake video is improving all the time.

Industry intelligence firm Gambling IQ found that fraud across the sector surged 73 percent between 2022 and 2024, and deepfakes, used to defeat KYC checks or to conduct mass bonus abuse, contributed significantly to that rise.


Within this vast, converging environment of actors, the question of responsibility is a particularly sticky one, because AI fraud is growing faster than regulators can respond.

AI-enabled systems, deceptive or otherwise, exist within a nexus of actors, decisions and governance structures.

As one major study noted, this makes the resulting network of responsibility relations ‘complicated’.

At present, most governments and legislative bodies are simply unable to tackle fraud on this scale. 

UK law enforcement, for example, was declared “inadequately equipped to deal with AI-fuelled fraud” in a 2025 report by the Alan Turing Institute.

Serious Harms

Joe Burton, Professor of Security and Protection Science in the School of Global Affairs at Lancaster University and author of the report, identifies the issue at hand plainly:

“AI-enabled crime is already causing serious personal and social harm and big financial losses. We need to get serious about our response and give law enforcement the necessary tools to actively disrupt criminal groups,” he warns.

“If we don’t, we’re set to see the rapid expansion of criminal use of AI technologies.” 

At present, the UK Gambling Commission (UKGC) places primary responsibility on operators to prevent crime in the regulated industry. 

Operators must implement their own policies, procedures and controls to mitigate risks, ranging from fraud and identity abuse to money laundering.

But with AI capabilities developing so rapidly, platforms alone cannot bear the weight of tackling AI-driven fraud and deception. Nor should they, given that AI-related scams in the iGaming world extend well beyond the regulated platforms.

Disinformation’s Gordian Knot 

The iGaming industry has grown in tandem with social media: online groups build communities and encourage user-to-user interaction, while social media platforms have become a central pillar of marketing and brand visibility.

Roshtein, the self-styled “Gambling Philosopher”

It’s even led to the rise of iGaming influencers, with figures such as Roshtein, the ‘slot-streaming OG’, drawing in swathes of adoring and highly engaged followers.

But this expansion has been matched by growing concern over the lack of regulatory oversight in these ‘third places’, particularly in relation to advertising.

Disinformation in digital environments is a Gordian Knot. 

The very structure of social media platforms intrinsically contributes to this. 

Because they are privately owned systems, their underlying algorithms are wholly inaccessible to regulators and governments.

Amplifying Algorithms

Yet we spend huge chunks of our lives on these platforms, fuelling their amplifying algorithms.

Put simply, the more we interact with something, the more of it the algorithm shows us.

This has led to harmful content being prioritised and widely distributed: misleading or unverified material is not merely present but algorithmically amplified, effectively turbocharged by design, as the Financial Times reported last year.
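To make that mechanic concrete, the sketch below shows a bare-bones engagement-weighted ranking. Every weight, field name and example post here is an assumption chosen for illustration, not a description of any real platform's system: content that attracts more clicks, shares and watch time simply scores higher and is surfaced to more users, whether or not it is accurate.

```python
# Minimal sketch of engagement-weighted ranking.
# All weights, fields and example posts are illustrative assumptions,
# not any real platform's system.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int
    shares: int
    watch_seconds: int
    verified: bool  # whether the content passed any accuracy or ad check

def engagement_score(post: Post) -> float:
    # Ranking rewards interaction alone; accuracy plays no part in the score.
    return post.clicks * 1.0 + post.shares * 3.0 + post.watch_seconds * 0.1

posts = [
    Post("Regulated casino promotion", clicks=120, shares=4, watch_seconds=600, verified=True),
    Post("Deepfake 'celebrity win' scam ad", clicks=900, shares=75, watch_seconds=4_000, verified=False),
]

# The scam ad, being more engaging, ranks first and is shown to more users.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):8.1f}  {post.title}")
```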

In November 2025, Will Horowitz reported for Reuters that Meta’s own internal data projected that 10 percent of its 2024 revenue, approximately US$16 billion (£11.94bn), came from adverts for scams and banned goods.

Last week, the same team found that, despite Meta’s promises to slash the scams shown on its platform in the UK, the social media giant had failed to do so on more than 1,000 occasions in a single week.

Among the scams were a clutch of illicit online casinos, many of them harnessing deepfakes to lure users into clicking. 

This raises a broader question for regulation: if platform algorithms play a decisive role in the spread and visibility of harmful AI-generated scam content, then surely their opacity is a central part of the problem?

Calls for greater transparency, including making aspects of these systems accessible to regulators, aim to create both accountability and an environment in which meaningful oversight and regulation are possible.

Online Safety Hack

As of early 2026, Ofcom, the communications regulator, has begun drafting new rules to regulate deepfakes in the UK under the 2023 Online Safety Act and the recently adopted Data (Use and Access) Act 2025.

But their own guidance highlights the limits of the current framework. 

Certain AI chatbots fall outside regulatory scope altogether: where they operate as closed systems, do not function as search services and do not enable user-to-user interaction, they do not technically fall under the Online Safety Act as administered by Ofcom.

Even where they are in scope, this does not mean that all the content they generate is regulated. 

While Britain’s Online Safety Act began coming into force in March 2025, allowing regulators to fine platforms for illegal user-generated content, powers to act on paid scam ads remain delayed until at least 2027.

Burden

This leaves enforcement reliant on voluntary measures by companies like Meta, while both the Financial Conduct Authority and Ofcom lack the direct authority to intervene.

For example, materials produced without accessing external sources, including synthetic images and videos, will often fall outside regulatory oversight unless they meet specific thresholds, namely being pornographic or shareable between users.

In other words, this framework is only partial. It can govern certain cases, under certain conditions, within certain services. But it cannot truly contend with how these systems are used in practice.

In the meantime, the burden of deepfake-fuelled scams continues to fall on platforms and users, despite the systems producing these risks sitting beyond their control.

What is clear is that no single regulatory body, no single platform, can meaningfully address the threat and negative impact of AI-powered scamming in isolation. 

Today, AI-enabled risks operate across disparate systems that are governed separately, if at all.

Only a multi-pronged approach can meet the challenge of renegade tech, and it demands shared responsibility, unity of action and compliance across the entire iGaming space.
