06 Sept 2025

Inaccurate images generated by AI chatbot were ‘unacceptable’, says Google boss

The historically inaccurate images generated by Google’s Gemini AI chatbot were “unacceptable”, chief executive Sundar Pichai has said in a memo to staff.

Last week, users of Gemini began flagging that the chatbot was generating images showing a range of ethnicities and genders, even when doing so was historically inaccurate – for example, prompts to generate images of certain historical figures, such as the US founding fathers, returned images depicting women and people of colour.

Some critics accused Google of anti-white bias, while others suggested the company had over-corrected in response to longstanding concerns about racial bias in AI technology – concerns fuelled by earlier incidents in which facial recognition software struggled to recognise, or mislabelled, black faces, and voice recognition services failed to understand accented English.

Following the Gemini image generation incident, Google apologised, paused the image tool and said it was working to fix it.

But issues were then also flagged with some text responses, with an incident highlighted where Gemini said there was “no right or wrong answer” to a question equating Elon Musk’s influence on society with Adolf Hitler’s.

Now Mr Pichai has addressed the issue with staff for the first time and promised changes.

In his memo, Mr Pichai said the image and text responses were “problematic” and that Google had been working “around the clock” to address the issue.

“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” he said.

“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.”

He said Google had “always sought to give users helpful, accurate and unbiased information” in its products and this was why “people trust them”.

“This has to be our approach for all our products, including our emerging AI products”, he added.

Going forward, Mr Pichai said “necessary changes” would be made inside the company to prevent similar issues occurring again.

“We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals (sic) and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes,” he said.
