FTC Asks Normal Folks If They'd Like AI Impersonation Scam Protection, Too

The FTC is moving to make the fraudulent AI impersonation of government agencies and businesses illegal, and is now asking the American public whether they'd like some protection too.

The US consumer watchdog announced as much on Thursday, alongside the introduction of a final rule that will give the Commission the ability to directly file federal lawsuits against AI impersonation scammers who target businesses and government agencies. The changes will also make it possible for the agency to target the makers of the code used in such scams more quickly.

The initial proposal doesn't cover the impersonation of private individuals, however. So the FTC is releasing this [PDF] supplemental notice asking for public comment on whether they should be covered by the new rules as well.

"Fraudsters are using AI tools to impersonate individuals with eerie precision and at a much wider scale," said FTC chair Lina Khan. "With voice cloning and other AI-driven scams on the rise, protecting Americans from impersonator fraud is more critical than ever." 

Beyond simply making it illegal to impersonate another individual to commit fraud, the proposal also includes a provision to hold businesses accountable for misuse of technology they create.  

The so-called "means and instrumentalities" provision in the proposal would give the FTC the ability to hold companies who create AI tech that could be used to impersonate people accountable if they "had reason to know that the goods and services they provided will be used for the purpose of impersonations," the FTC said.

Despite the provision holding developers accountable for misuse of their tech, it's not clear which organizations could be prosecuted, or to what extent.

According to the proposal, it's illegal for a scammer to call or message a person while posing as another individual, send physical mail misrepresenting an affiliation, create a website, social media profile, or email address impersonating a person, or place ads that pose as a person or their affiliates.

Whether the orgs transmitting fraudulent messages, along with companies that facilitate the creation of AI voices and video, could be held liable isn't clear. We've asked the FTC for clarification, but haven't heard back.

The FCC made its own moves to combat AI impersonation earlier this month, deciding that it was illegal to use AI-generated voices in robocalls. Unlike this newly-proposed FTC rule, the FCC simply clarified that existing telephone consumer protection laws covered the use of AI-generated voices. ®
