Friday, January 26, 2024

Satya Nadella says the explicit Taylor Swift AI fakes are ‘alarming and terrible’

Laura Normand / The Verge

Microsoft CEO Satya Nadella has responded to a controversy over sexually explicit AI-generated fake images of Taylor Swift. In an interview with NBC Nightly News that will air next Tuesday, Nadella calls the proliferation of nonconsensual simulated nudes “alarming and terrible,” telling interviewer Lester Holt that “I think it behooves us to move fast on this.”

In a transcript distributed by NBC ahead of the January 30th show, Holt asks Nadella to react to the internet “exploding with fake, and I emphasize fake, sexually explicit images of Taylor Swift.” Nadella’s response manages to crack open several cans of tech policy worms while saying remarkably little about them — which isn’t surprising when there’s no surefire fix in sight.

I would say two things: One, is again I go back to what I think’s our responsibility, which is all of the guardrails that we need to place around the technology so that there’s more safe content that’s being produced. And there’s a lot to be done and a lot being done there. But it is about global, societal — you know, I’ll say, convergence on certain norms. And we can do — especially when you have law and law enforcement and tech platforms that can come together — I think we can govern a lot more than we think— we give ourselves credit for.

Microsoft might have a connection to the faked Swift pictures. A 404 Media report indicates they came from a Telegram-based nonconsensual porn-making community that recommends using the Microsoft Designer image generator. Designer theoretically refuses to produce images of famous people, but AI generators are easy to bamboozle, and 404 found you could break its rules with small tweaks to prompts. While that doesn’t prove Designer was used for the Swift pictures, it’s the kind of technical shortcoming Microsoft can tackle.
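To illustrate why that kind of guardrail is brittle, here’s a minimal hypothetical sketch of a keyword blocklist, the simplest form of prompt filtering. It is not Microsoft Designer’s actual implementation, and the function and names are invented for illustration; it just shows how an exact-match filter fails the moment a prompt misspells or re-spaces a blocked name, which is consistent with the small prompt tweaks 404 Media described:

```python
# Hypothetical sketch of a naive prompt filter (not Designer's real code):
# reject any prompt containing a blocklisted name via exact substring match.
BLOCKED_NAMES = {"taylor swift"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be rejected."""
    normalized = prompt.lower()
    return any(name in normalized for name in BLOCKED_NAMES)

# An exact match is caught...
print(naive_filter("portrait of Taylor Swift"))   # True -> rejected
# ...but a trivial misspelling slips straight through.
print(naive_filter("portrait of Tayl0r Swiftt"))  # False -> allowed
```

Production moderation systems presumably layer fuzzier defenses on top of this, such as approximate string matching and classifiers that scan generated images, which is the sort of hardening a platform operator like Microsoft can keep tightening.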

But AI tools have massively simplified the process of creating fake nudes of real people, causing turmoil for women who have far less power and celebrity than Swift. And controlling their production isn’t as simple as making huge companies bolster their guardrails. Even if major “Big Tech” platforms like Microsoft’s are locked down, people can retrain open tools like Stable Diffusion to produce NSFW pictures despite attempts to make that harder. Far fewer users might access these generators, but the Swift incident demonstrates how widely a small community’s work can spread.

Nadella vaguely suggests larger social and political changes, yet despite some early moves on regulating AI, there’s no clear range of solutions for Microsoft to work with. Lawmakers and law enforcement struggle with how to handle nonconsensual sexual imagery in general, and AI fakery adds extra complications. Some lawmakers are trying to retool right-to-publicity laws to address the issue, but the proposed solutions often pose serious risks to speech. The White House has called for “legislative action” on the issue, but even it offered precious little detail on what that means.

There are other stopgap options — like social networks limiting the reach of nonconsensual imagery or, apparently, Swiftie-imposed vigilante justice against people who spread them. (Does that count as “convergence on certain norms”?) For now, though, Nadella’s only clear plan is putting Microsoft’s own AI house in order.

