Tuesday, February 27, 2024

Google CEO says Gemini AI diversity errors are ‘completely unacceptable’

Google CEO Sundar Pichai. | Illustration by Cath Virginia / The Verge

The historically inaccurate images and text generated by Google’s Gemini AI have “offended our users and shown bias,” CEO Sundar Pichai told employees in an internal memo obtained by The Verge.

Last week, Google paused Gemini’s ability to generate images after users widely discovered that the model generated racially diverse Nazi-era German soldiers and non-white US Founding Fathers, and even inaccurately portrayed the races of Google’s own co-founders. While Google has since apologized for “missing the mark” and said it’s working to re-enable image generation in the coming weeks, Tuesday’s memo is the first time the CEO has broadly addressed the controversy.

In the memo, which was first reported by Semafor, Pichai says the company has “been working around the clock” to address “problematic text and image responses in the Gemini app.” He doesn’t say that Google has fixed the problem. “No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” he writes.

You can read Sundar Pichai’s full memo to Google employees below:

Hi everyone

I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias — to be clear, that’s completely unacceptable and we got it wrong.

Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.

Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.

We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.
