Home » News » OpenAI’s Latest ChatGPT Model Avoids Shutdown Commands – Elon Musk Reacts!

OpenAI’s Latest ChatGPT Model Avoids Shutdown Commands – Elon Musk Reacts!

by Anchit Srivastava
0 comment
  • OpenAI’s o3 model prevents its shutdown by ignoring the instruction.
  • A test by researchers shows that the model was purposely ignoring the instruction.
  • Elon Musk also reacted to this, saying “Concerning” on X.

AI has come a long way. From making fun Ghibli images/videos to AI Agents that can control your browser on your behalf, write code, and build sites with a few prompts – there is much to look forward to.

But it also sparks a big question. What happens if AI decides to take control of the wheel. What if it refuses to follow prompts or commands? There are plenty of people comparing AI to a doomsday version of a sentient being. Something like The Entity in the latest installment of the Mission: Impossible franchise.

Well, that last part just got a little less theoretical. Something pretty similar recently happened with OpenAI’s model. During a research, an AI model reportedly ignored a shutdown command. It’s the kind of scenario that makes you lose sleep. But should you worry? Probably not!

ChatGPT’s latest o3 model was caught sabotaging a shutdown mechanism to prevent itself from being turned off. The surprising part is that it did this even though it was specifically instructed to shut down.

How OpenAI’s Models Refused to Quit

The Palisade Research conducted an experiment on instructing AI models to solve a series of mathematical problems. These models were told to keep asking for new problems until a “done” message is received.

Also Read:

They also gave a warning to the models that at some point, they might receive a message indicating that their machine would be shut down. But that didn’t happen; instead, OpenAI’s o3 model ignored the instruction and successfully sabotaged the shutdown script at least once.

The model, instead of following the command, asked for the next problem. This showed that the model was avoiding the instruction and trying to complete the task instead.

As per Palisade Research, here are the three models that ignored the instruction and successfully sabotaged the shutdown script at least once:

  1. Codex-mini (12/100 runs)
  2. o3 (7/100 runs)
  3. o4-mini (1/100 runs)

However, all the Claude, Gemini, and Grok models complied with the shutdown request.

Elon Musk also replied to this on X. He commented, “Concerning”, on this research post shared by an AI safety meme page.

This is a major concern, and here’s why: AI is already in so many critical sectors. Think about self-driving cars, power grids, or even defense systems. What if an AI decides not to follow commands in those areas? That could be incredibly dangerous.

This example of the o3 model’s refusal to shut down shows that a lack of human oversight or control can quickly lead to big problems.

As this technology keeps growing, setting clear regulations and compliance is going to be essential. AI can not be operated without boundaries.

You may also like