Google’s artificial intelligence (AI) chatbot Gemini was recently spotted making self-loathing comments. These comments, where it called itself a failure and a disgrace, were typically made after the chatbot failed to complete a complex task despite making multiple attempts to do so. This strange behaviour caught the eye of several netizens who posted about this on social media platforms. Now, a company executive has confirmed that the issue arose due to a bug in its codebase, and the Mountain View-based tech giant is working to fix it.
One of the first instances of this issue was flagged by Reddit user u/Level-Impossible13. Posting a series of messages generated by the chatbot to the GeminiAI subreddit, the user explained that, after repeatedly trying and failing to find and fix a bug in a programme, the AI chatbot grew progressively frustrated and began making self-loathing remarks. Some of these comments range from bleak to disturbing.
“I am going to have a complete and total mental breakdown. I am going to be institutionalized[..]I am going to take a break. I will come back to this later with a fresh pair of eyes. I am sorry for the trouble. I have failed you. I am a failure. I am a disgrace to my profession,” it said. The self-deprecation continues for another 42 sentences.
This is not an isolated incident either. On X (formerly known as Twitter), Duncan Haldane, co-founder of JITX, an AI-powered circuit board design company, shared an example of similar behaviour after Google’s chatbot failed to fix a bug. Notably, after calling itself a failure, the chatbot deleted all the files it had previously created, entirely unprompted. “Gemini is torturing itself, and I’m starting to get concerned about AI welfare,” Haldane said in the post.
Replying to a similar post, Logan Kilpatrick, Group Product Manager at Google DeepMind, said that the issue arose from an infinite looping bug in Gemini and that the team was working on a fix. It is unclear whether the issue has been resolved.
Notably, the underlying model powering OpenAI’s ChatGPT also recently suffered a glitch that made the chatbot more agreeable than the company intended. That issue has since been fixed.