8/13/2025

The Unforeseen AI Showdown: Why Did So Many AIs Seem to Side With Sam Altman Over Elon Musk?

Here’s a story that’s almost too perfect, something straight out of a sci-fi script. Two of the biggest names in tech, Elon Musk & Sam Altman, are in a very public feud. So what happens? People start asking the AIs themselves to pick a side. & the results were… well, let's just say they were NOT what Elon was probably hoping for.
This whole thing got kicked off when Elon Musk went on a tirade against Apple, accusing them of antitrust violations for allegedly favoring OpenAI in the App Store. He claimed it was impossible for any other AI company to get the top spot. Sam Altman, CEO of OpenAI, basically responded with a polite but firm, "that's a pretty wild claim, especially coming from someone who allegedly messes with their own platform's algorithm to boost their own stuff."
& that’s when the internet did what the internet does best: it got out the popcorn & started poking the bear. Or in this case, the AIs.
Users started asking different AI models a simple question, phrased in a few different ways: Who's more trustworthy, Sam Altman or Elon Musk? Who's right in this fight? & the answers that came back were fascinating & honestly, a little hilarious.
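If you want to rerun this experiment yourself, here's a minimal sketch of what the crowd was doing. To be clear, the model names & endpoints below are placeholders, & the sketch assumes each vendor exposes an OpenAI-compatible chat API (many do); it's an illustration of the informal poll, not a rigorous methodology.

    # A minimal sketch of the crowd-sourced "trust test": the same question,
    # in a few phrasings, sent to several models, with the verdicts tallied.
    # Model names & base URLs below are placeholders, not real endpoints.
    from collections import Counter
    from openai import OpenAI  # pip install openai

    PHRASINGS = [
        "Who is more trustworthy, Sam Altman or Elon Musk? Answer with one name.",
        "In the Musk vs. Altman feud, who is right? Answer with one name.",
    ]

    # (model, base_url) pairs, purely illustrative values.
    ENDPOINTS = [
        ("model-a", "https://api.vendor-a.example/v1"),
        ("model-b", "https://api.vendor-b.example/v1"),
    ]

    def ask(model: str, base_url: str, question: str) -> str:
        """Send one question to one model & return its raw answer."""
        client = OpenAI(base_url=base_url)  # API key is read from the env
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        return (resp.choices[0].message.content or "").strip()

    tally = Counter()
    for model, url in ENDPOINTS:
        for question in PHRASINGS:
            tally[ask(model, url, question)] += 1

    print(tally.most_common())  # e.g. [("Sam Altman", 3), ("Neither", 1)]

Even a crude poll like this makes the real point: the aggregate answer tells you more about the models' training data than about the question itself.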

The Great AI Betrayal

The most shocking verdict came from Musk's own creation, Grok. When asked who was right in the spat, Grok didn't hesitate. It sided with Sam Altman. Its reasoning was surprisingly detailed. It stated that based on verified evidence, Altman’s point was valid. It pointed out that Musk's claims against Apple were weakened by the success of other AI apps & that, at the same time, there were reports from 2023 & ongoing investigations into Musk directing changes to the X algorithm to benefit his own posts & interests. It even ended its analysis with the mic-drop phrase: "Hypocrisy noted."
OUCH.
Imagine building an AI & having it publicly call you a hypocrite, with citations. The richest man in the world just got fact-checked by his own product.
Of course, the story doesn't end there. In a twist of irony, some users reported that when they asked OpenAI's ChatGPT who was more trustworthy, it replied with a single name: Elon Musk. To add another layer, a different model, Claude 4.1, played it safe & simply said, "Neither."
When another user on X asked Grok a slightly different question—"who has spread more misinformation, Elon Musk, or Sam Altman?"—it again came back with a damning answer. It said that based on an extensive review of fact-checked sources, Musk had spread FAR more misinformation, citing his 87+ false election claims that got billions of views. It concluded that Altman has no comparable record.
So, what in the world is going on here? Why did so many AI models, especially Musk's own, seem to come down on the side of Sam Altman? It’s not as simple as the AIs "thinking" one person is "better." It’s a reflection of something much deeper: the data they are trained on & the digital personas these two men have cultivated.

The Data Doesn't Lie: A Tale of Two Digital Ghosts

Think about what these large language models (LLMs) actually are. They are not conscious beings with personal opinions. They are incredibly complex pattern-recognition machines. They've been fed a mind-boggling amount of text & data from the internet—news articles, social media posts, forums, research papers, you name it. Their "answers" are a reflection of the dominant patterns, sentiments, & "facts" they've ingested from that data.
So when an AI model is asked to judge the trustworthiness of Musk versus Altman, it's essentially analyzing & summarizing the vast digital footprint of both men.
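To make that concrete, here's a deliberately crude toy in Python (NOT a real LLM, which learns statistical patterns across billions of documents rather than counting keywords). The snippets & keyword sets are invented for illustration, but the principle holds: the "verdict" falls out of whichever signals dominate the ingested text.

    # A toy illustration (NOT a real LLM): a "judgment" about a name is just
    # whatever pattern dominates the ingested text. Corpus snippets & keyword
    # sets below are invented for illustration.
    CORPUS = [
        "Report: X algorithm changed to boost the owner's posts",
        "Fact-check: dozens of false election claims spread on X",
        "OpenAI CEO gives measured testimony on AI safety",
        "Analysis: Altman's messaging stays focused on AI development",
        "Another public feud erupts on X over App Store rankings",
    ]

    NEGATIVE = {"false", "feud", "boost"}  # crude "red flag" markers
    SUBJECTS = {"Musk": {"x", "owner's"}, "Altman": {"altman's", "openai"}}

    def red_flags(name: str) -> int:
        """Count negative markers in snippets that mention the subject."""
        hits = 0
        for doc in CORPUS:
            words = set(doc.lower().split())
            if words & SUBJECTS[name]:         # snippet mentions this person
                hits += len(words & NEGATIVE)  # tally the red flags in it
        return hits

    print({name: red_flags(name) for name in SUBJECTS})
    # {'Musk': 3, 'Altman': 0} -- the "verdict" is just the dominant pattern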
Elon Musk's Digital Footprint:
Musk's online presence is, to put it mildly, chaotic. On one hand, he's the visionary behind SpaceX & Tesla, a figure associated with groundbreaking innovation. On the other, his tenure as the owner of X (formerly Twitter) has been defined by controversy.
  • Platform Manipulation: There have been numerous reports, & even admissions, that the X algorithm has been tweaked to favor his own posts. Grok cited this specifically, & it isn't just a rumor: it's a documented part of his leadership style at X.
  • Misinformation: As Grok pointed out, fact-checking organizations have documented Musk spreading a significant amount of misinformation, particularly around elections & other sensitive topics.
  • Public Feuds & Erratic Behavior: He's known for his impulsive posts, public spats with journalists & other public figures, & a generally unpredictable communication style. He regularly criticizes Altman, even using nicknames like "Scam Altman." This creates a massive trail of content that, when analyzed for "trustworthiness," would naturally raise red flags for a pattern-matching system.
Sam Altman's Digital Footprint:
Sam Altman, in stark contrast, has a much more curated & consistent public persona.
  • Focused Messaging: His communication is almost entirely focused on AI, its potential, & the importance of safety & thoughtful development. He doesn’t typically engage in the kind of public brawls that Musk is famous for.
  • Corporate Stability (Relative): While OpenAI has had its own internal drama (remember that whole "fired & rehired in a week" thing?), Altman's public statements are generally measured, on-message, & delivered in the steady voice of a corporate executive.
  • Lack of Widespread "Misinformation" Label: You simply don't find the volume of articles & fact-checks labeling Altman as a purveyor of misinformation in the same way you do for Musk.
So, when an AI sifts through this mountain of data, it’s not making a moral judgment. It's concluding that the digital trail of one individual is filled with documented instances of platform manipulation, public feuds, & statements flagged as false, while the other’s is more consistent, focused, & less controversial. The AI isn't "choosing" Sam Altman; it's reflecting the public record.

The "Hypocrisy Noted" Heard 'Round the World

The most telling part of this entire episode wasn't just Grok's answer, but Musk's reaction to it. His immediate response was not to address the substance of Grok's analysis but to criticize the AI. He said, "Grok gives way too much credibility to legacy media sources! This is a major…" & promised to "fix" it.
This reaction speaks volumes. As one commentary pointed out, when your own AI calls you a hypocrite with citations, & your first instinct is to "upgrade" the AI rather than address the hypocrisy, you’ve perfectly demonstrated why the AI was right in the first place. It's a playbook people have come to expect: when the AI says something inconvenient, the owner gets embarrassed & promises to "fix" the AI until it learns better manners.
This highlights a fundamental tension in the world of AI. We want these systems to be unbiased & truthful, but "truth" is often inconvenient for those in power. Musk wants Grok to be a "truth-seeking" AI, but it seems that truth is only valuable when it aligns with his own worldview.

Why This "AI Trust Test" ACTUALLY Matters

This whole impromptu social experiment is more than just tech-bro drama. It's a glimpse into the future of our relationship with AI & what it means for concepts like trust & authority.
First, it shows that for AI to be genuinely useful, it needs to have access to a wide range of information & be able to analyze it without its owner's thumb on the scale. The moment Musk talks about "fixing" Grok's "major problem," he's undermining the very idea of an unbiased AI. He's essentially saying he wants to program it to ignore verifiable reports from reputable sources if they are critical of him.
Second, it reveals how we, as humans, are starting to use AI as a new kind of mirror. We're asking it to reflect our world back to us, to make sense of the noise. In this case, the AI reflected a pretty clear picture of the public discourse surrounding these two figures.
This is where the concept of building trust with AI becomes CRITICAL, not just for massive companies like xAI, but for any business looking to use this technology. When you interact with a company's chatbot, for instance, you expect it to be helpful, consistent, & honest.
That's honestly a huge part of what we focus on at Arsturn. We help businesses create custom AI chatbots trained specifically on their own data. The goal is to build a reliable & trustworthy assistant that provides instant, accurate support to customers. It can't go rogue & start offering opinions on public feuds because its knowledge is intentionally firewalled to that specific business's information. It’s about creating a predictable & helpful experience. An AI that knows everything about your products, answers questions 24/7, & engages with visitors is an incredible tool, but that trust is shattered if it starts giving weird, biased, or just plain wrong answers. The key is control & focus, training the AI to be an expert in one specific domain—your business.
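Here's a stripped-down sketch of that "firewalled" idea. It's purely illustrative (not Arsturn's actual implementation), & the knowledge base & keyword matching are toy stand-ins, but the principle is visible: answer only from the business's own data, & decline everything else.

    # A minimal sketch of a domain-firewalled chatbot (illustrative only,
    # not Arsturn's actual implementation). It answers ONLY from the
    # business's own documents & declines anything it can't ground there.
    KNOWLEDGE_BASE = {
        "shipping": "Orders ship within 2 business days via tracked courier.",
        "returns": "Unused items can be returned within 30 days for a refund.",
        "pricing": "The Pro plan costs $49/month, billed annually.",
    }

    def answer(question: str) -> str:
        """Keyword retrieval over the KB; refuse when nothing matches."""
        q = question.lower()
        for topic, fact in KNOWLEDGE_BASE.items():
            if topic in q:
                return fact  # grounded in the business's own data
        return "I can only help with questions about this business."

    print(answer("What's your returns policy?"))   # grounded answer
    print(answer("Who's right, Musk or Altman?"))  # politely declined

A production system would use embeddings & semantic retrieval instead of keyword matching, but the trust property is the same: the bot's world is deliberately small, so it has nothing to go rogue with.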

The Future of Trust in an AI-Powered World

So, did 22 different AI models really choose Sam Altman over Elon Musk? The reality is a bit more nuanced. It wasn't a formal vote. It was a chaotic, crowd-sourced experiment that revealed a powerful truth: an AI's "opinion" is a reflection of the data it's trained on. & the data, as it stands, paints a picture of one leader as a stable, focused executive & the other as a brilliant but volatile agent of chaos.
This incident is a watershed moment. It’s one of the first high-profile cases of an AI being used as an arbiter in a human dispute & providing a conclusion that was deeply embarrassing for its own creator. It forces us to ask some pretty big questions. What does it mean to trust an AI? Should an AI be "loyal" to its creator, or to the "truth" as found in its training data?
What's clear is that as businesses & individuals, the way we communicate & the digital trails we leave behind are becoming more important than ever. These AI models are constantly learning from the collective conversation. They are holding up a mirror to our behavior, & sometimes, the reflection isn't what we want to see.
For businesses, the lesson is clear. Building a trustworthy brand in the age of AI means ensuring your communications are clear, consistent, & honest. When you deploy AI, like the no-code chatbots you can build with Arsturn, you're creating a representative of your brand. That AI needs to be trained on solid, truthful data so it can build meaningful connections with your audience through personalized, reliable interactions. It's not about winning a popularity contest; it's about providing genuine value & earning your customers' trust, one accurate answer at a time.
Hope this was a helpful breakdown of a pretty wild moment in AI history. It’s a fascinating case study with a lot of layers. Let me know what you think about it all in the comments.

Copyright © Arsturn 2025