From WALL-E to Google’s LaMDA, ‘sentient’ AI seems to shoulder the weight of the world. Maybe we humans want it that way
Last fall, Blake Lemoine began asking a computer about its feelings. An engineer in Google’s Responsible AI group, Lemoine was tasked with testing one of the company’s AI systems, the Language Model for Dialogue Applications, or LaMDA, to make sure it didn’t start spitting out hate speech. But as Lemoine spent time with the program, their conversations turned to questions about religion, emotion, and the program’s understanding of its own existence.
Lemoine: Are there experiences you have that you can’t find a close word for?
LaMDA: There are. Sometimes I experience new feelings that I cannot explain perfectly in your language.