UK Government Pledges Law Against Sexually Explicit Deepfakes

The UK government has promised to make the creation and sharing of sexually explicit deepfake images a criminal offence.

It said the growth of artificially created but realistic images was alarming and caused devastating harm to victims, particularly the women and girls who are often the target.

The new offence, to be included in the Crime and Policing Bill when parliamentary time allows, means perpetrators could be charged for both creating and sharing such images.

The bill will also create new offences covering the taking of intimate images without consent and the installation of equipment for the purpose of taking such images.

In a statement, victims minister Alex Davies-Jones said: "It is unacceptable that one in three women have been victims of online abuse. This demeaning and disgusting form of chauvinism must not become normalized.

"These new offences will help prevent people being victimized online. We are putting offenders on notice – they will face the full force of the law," she said.

A jail term of up to two years could apply both to those who take an intimate image without consent and to those who install equipment for that purpose.

In a statement Baroness Jones, technology minister, said: "With these new measures, we're sending an unequivocal message: creating or sharing these vile images is not only unacceptable but criminal. Tech companies need to step up too - platforms hosting this content will face tougher scrutiny and significant penalties."

The Ministry of Justice said the sexually explicit deepfake offences are set to apply to images of adults, as the law already covers such images of children.

It is already an offence to share or threaten to share intimate images, including deepfakes, under the Sexual Offences Act 2003, following amendments that were made by the Online Safety Act 2023.

In September last year, some of the largest American AI firms promised to prevent their AI products from being used to generate non-consensual deepfake pornography and child sexual abuse material.

Adobe, Anthropic, Cohere, Microsoft, OpenAI, and open source web data repository Common Crawl were among those making the non-binding commitments to the Biden administration.

Google's YouTube has also introduced privacy guidelines that allow people to request the removal of AI-generated videos that mimic them, the company said in July last year. ®
