Tuesday, July 30, 2024

Microsoft wants Congress to outlaw AI-generated deepfake fraud

Illustration: Alex Castro / The Verge

Microsoft is calling on members of Congress to regulate the use of AI-generated deepfakes to protect against fraud, abuse, and manipulation. Microsoft vice chair and president Brad Smith is urging policymakers to act quickly to protect elections, guard seniors against fraud, and shield children from abuse.

“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” says Smith in a blog post. “One of the most important things the US can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”

Microsoft wants a “deepfake fraud statute” that will give law enforcement officials a legal framework to prosecute AI-generated scams and fraud. Smith is also calling on lawmakers to “ensure that our federal and state laws on child sexual exploitation and abuse and non-consensual intimate imagery are updated to include AI-generated content.”

The Senate recently passed a bill cracking down on sexually explicit deepfakes, allowing victims of nonconsensual sexually explicit AI deepfakes to sue their creators for damages. The bill was passed months after middle and high school students were found to be fabricating explicit images of female classmates, and trolls flooded X with graphic Taylor Swift AI-generated fakes.

Microsoft has had to implement more safety controls for its own AI products, after a loophole in the company’s Designer AI image creator allowed people to create explicit images of celebrities like Taylor Swift. “The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI,” says Smith.

While the FCC has already banned robocalls that use AI-generated voices, generative AI makes it easy to create fake audio, images, and video — something we’re already seeing in the run-up to the 2024 presidential election. Elon Musk shared a deepfake video spoofing Vice President Kamala Harris on X earlier this week, in a post that appears to violate X’s own policies against synthetic and manipulated media.

Microsoft wants posts like Musk’s to be clearly labeled as a deepfake. “Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content,” says Smith. “This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.”

