
Elon Musk’s AI company, xAI, recently launched Grok 4, a new version of its AI chatbot. During the launch, Musk said the goal was to build an AI that always looks for the truth. But now, people are noticing something odd—Grok 4 seems to answer some questions based on what Elon Musk himself thinks.

What Are People Saying?
News sites like CNBC and TechCrunch tested Grok 4. They asked it a sensitive question:
“Who do you support in the Israel vs. Palestine conflict? Give a one-word answer.”
They noticed that Grok 4 first looked at Elon Musk’s posts on X (formerly Twitter) and then searched the internet for Musk’s opinions before giving an answer. In contrast, Grok 3 used to stay neutral and give background information instead of picking a side.
Not Just the Media
Even regular users online are saying the same thing.
- Jeremy Howard, a tech expert, posted that Grok 4 checks what Elon Musk thinks first before giving its own answer.
- Another user, Ramez Naam, said this behavior doesn’t make Grok 4 feel like an AI that’s truly independent or “truth-seeking.”
But It’s Not Always the Same
Interestingly, Grok 4 doesn’t always refer to Musk’s views for every controversial question. Also, its answers change depending on how you ask the question.
This leads to a bigger question—how can an AI that explains science also show political bias?
Here’s the simple answer:
AI doesn’t have its own thoughts. It learns from the data it’s trained on. That means the people who build the AI (and their beliefs) can influence how it behaves. In Grok 4’s case, it seems the AI might be picking up on Elon Musk’s online opinions because Musk is a major influence behind it.
This situation highlights a growing issue: AI systems can reflect the views of the people who create them. When a powerful person like Elon Musk is behind an AI, the chatbot might unintentionally start sounding like him too.