In a series of Threads posts this afternoon, Instagram head Adam Mosseri said users shouldn’t trust images they see online because AI is “clearly producing” content that’s easily mistaken for reality. Given that, he says users should consider the source, and social platforms should help them do so.
“Our role as internet platforms is to label content generated as AI as best we can,” Mosseri writes, but he admits “some content” will be missed by those labels. Because of that, platforms “must also provide context about who is sharing” so users can decide how much to trust their content.
Just as it’s worth remembering that chatbots will confidently lie to you before you trust an AI-powered search engine, checking whether a posted claim or image comes from a reputable account can help you weigh its veracity. At the moment, Meta’s platforms don’t offer much of the context Mosseri posted about today, although the company recently hinted at big coming changes to its content rules.
What Mosseri describes sounds closer to user-led moderation, like Community Notes on X and YouTube or Bluesky’s custom moderation filters. Whether Meta plans to introduce anything similar isn’t known, but then again, it has been known to take pages from Bluesky’s book.