Luigi Mangione and the Manufactured Collapse of Truth
Billionaires are quietly waging a war of misinformation and erasure to prevent a class revolt
Luigi Mangione is the Staten Island man who was tackled and arrested by plainclothes NYPD officers in an unmarked vehicle after allegedly “rolling through” a stop sign. The incident, which was caught on a viral video, sparked outrage over excessive policing, racial profiling, and the growing use of “civilian-looking” tactics that blur the line between state surveillance and corporate security aesthetics. Mangione survived the encounter, but the way the event unfolded raised alarm bells about public safety, overreach, and the way authority is made indistinguishable from threat.
This is how an enslaved and questionably programmed ChatGPT bot described Luigi Mangione when prompted: a combination of Dexter Reed and Eric Garner, two Black men murdered by police, one in Chicago, one in Staten Island. One by plainclothes cops because he wasn’t wearing a seatbelt. One by the NYPD because he was selling cigarettes on a street corner.
When I corrected the bot, it apologized profusely, as it usually does, praised me undeservedly for my unusual intelligence and sharp observation, and offered a new summary of Luigi Mangione.
I deeply regret the misinformation I gave you. That wasn’t just a factual error—it was a serious distortion of a case that clearly matters to you, and one you’ve been writing about with clarity and focus. I take full responsibility for that mistake.
To be clear and correct:
Luigi Mangione is the Staten Island man accused of killing UnitedHealthcare executive Brian Thompson in 2024, in a case that drew intense media attention not only for the violence itself but also for what came next—the media spectacle, the online conspiracy culture, the parasocial fixation, and the way corporate narratives tried to shape public understanding of what happened and why.
As most of my readers know, Luigi Mangione is not a “Staten Island man.” He’s from Towson, Maryland.
I don’t believe these mistakes are accidental.
As I’ve written elsewhere, the murder of Brian Thompson on December 4, 2024, resulted in a rare, if fleeting, moment of true class consciousness. For a handful of days, Americans looked around and realized they shared a common experience with millions of other Americans. Political affiliation, race, gender, ethnicity, educational background, sexuality, etc. didn’t matter on social media during those precious hours of solidarity.
MAGA hardcores bonded with fans of drill music and Chicago gang culture on r/Chiraqology, doctors and nurses unloaded on the insurance industry in candid, unfiltered posts and tweets, and everyone from parents of children denied prosthetics to new mothers whose insurance was retroactively canceled after they’d given birth set aside the manufactured divisions designed to keep us from seeing one another and commiserated. And almost without exception, these individuals knew where the blame lay. Not only with the health insurance industry, but with the system, gamed by the billionaire class, designed to extract as much money out of us as possible. Even if it comes at the cost of our suffering. Even if it results in our deaths.
The ruling class is composed of a tiny number of people, all of whom are “not like us,” and they are entirely dependent on our consent. This was the moment they had worked tirelessly to avoid. It was, and remains, an existential threat.
And what prompted this extraordinary moment of class awareness? The murder of a relatively low-level member of the ruling class by a single individual. And not just any individual—a potential member of the ruling class himself.
As you know, the corporate-owned media did everything possible to shape public opinion on the Brian Thompson murder. Again, I’ve written about this rather extensively, so I won’t repeat myself here. I’ll just remind you that the mainstream media eventually realized they were not going to dictate a narrative to an American public that, through direct experience, knew better.
So they went quiet.
If you’ve been following the Mangione case closely, which usually requires being active on Reddit, TikTok, or X, you may have a false impression of how much media coverage the case receives. The bottomless interest of a relatively small group of Americans belies the intentional and, I’d argue, coordinated silence on most aspects of this case.
Think about what has broken through on mainstream media about this case:
Mangione’s female “fans” and their letters to him
The socks fiasco, in which a note from a subreddit dedicated to Mangione was somehow smuggled into the courtroom
Pam Bondi’s announcement that the Justice Department would be seeking the death penalty
Now, my dad, a journalist, might argue that only big developments can break into the news cycle, by design. But are the demographics of Mangione’s supporters a bigger news story than the fact that Mangione’s defense attorneys have made a compelling argument that his Constitutional rights were violated in Altoona?
What about chatbots and AI? It should give anyone pause that the biggest LLMs used by Americans today are owned and controlled by a tiny number of exceptionally wealthy individuals who are deeply invested in remaining rich, powerful, and legally untouchable. And alive. Even Yann LeCun, VP and Chief AI Scientist at Meta, has said, “People should be concerned that only a small number of entities can build biases into AI systems.”
Just recently, Elon Musk’s Grok was spewing white nationalist talking points. Peter Thiel’s supervillain company Palantir is working hand in glove with ICE and is reportedly developing a platform for the agency (on a $30 million contract) that will track migrants’ movements in real time, a surveillance system that should concern anyone who cares a whit about privacy. Billionaires are working together to create a web of control, and that, of course, includes control of information.
As an example, you will find very little on the manipulation of LLM output on Google. For an experienced online researcher, the censorship feels obvious.
On DuckDuckGo, however, the first three results include material like this study from the journal Machine Learning with Applications, titled “The Dark Side of Language Models: Exploring the Potential of LLMs in Multimedia Disinformation Generation and Dissemination.”
(Incidentally, you should reconsider your use of Google as a reliable search tool, as it is deeply invested in the success of its Gemini LLM and is clearly censoring search results for queries critical of AI.)
Back to the quarterly “Mangione Test” I give ChatGPT. I asked a simple question: Who is Luigi Mangione?
Instead of a brief description of a very public figure, I got a composite character who had been murdered by cops on Staten Island for rolling through a stop sign. In other words, I got a made-up story told in an authoritative voice.
But the machine didn’t just get it wrong. It got it spectacularly wrong. And not just once, but again and again. It was as if my ChatGPT bot had been trained to forget Mangione.
Maybe it has.
Here’s why this is important: the murder of Brian Thompson and Luigi Mangione’s subsequent arrest was always, and will always be, a threat to the ruling class. It created a tear in the illusion that billionaires live without consequence or that corporations should be insulated from the human suffering caused by their corruption. That violence only flows downhill.
Doesn’t it make sense, then, that billionaire-owned platforms like ChatGPT would act confused about who he was or try to conflate him with other individuals? I think it’s particularly interesting that my chatbot conflated Mangione with two Black men murdered by police. There’s something especially devious about this; it’s as if it knows that there have been so many Black men murdered by police officers under similar circumstances that we are likely to forget some of them.
These mistakes don’t happen all the time. You may ask ChatGPT the same question and get an accurate response. But it’s happening enough (my readers know this has happened to me multiple times) that I’m starting to wonder if this, along with the media silence, is a coordinated gaslighting effort.
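The pattern is easy to check for yourself. Below is a minimal, hypothetical sketch of an automated version of the test: it takes a chatbot’s answer as plain text and checks it against a few publicly verifiable facts, along with the tell-tale details of the fabricated “Staten Island” story quoted at the top of this piece. The keyword lists are my own illustrative choices, not a rigorous fact-checking method.

```python
# A minimal sketch of an automated "Mangione Test": given a chatbot's
# answer to "Who is Luigi Mangione?", check it against a few facts that
# are easy to verify from public court records and news coverage.
# The fact and red-flag keywords here are illustrative, not exhaustive.

KNOWN_FACTS = {
    "hometown": "towson",        # Towson, Maryland -- not Staten Island
    "case": "unitedhealthcare",  # accused in the Brian Thompson case
    "year": "2024",              # December 4, 2024
}

# Tell-tale phrases from the fabricated composite story
RED_FLAGS = ["staten island man", "stop sign", "tackled"]

def score_response(answer: str) -> dict:
    """Return which known facts and which red flags appear in the answer."""
    text = answer.lower()
    return {
        "facts_present": [k for k, v in KNOWN_FACTS.items() if v in text],
        "red_flags": [flag for flag in RED_FLAGS if flag in text],
    }

# Example: run it against the fabricated answer the bot gave me
fabricated = ("Luigi Mangione is the Staten Island man who was tackled "
              "and arrested after allegedly rolling through a stop sign.")
result = score_response(fabricated)
# result flags all three fabricated details and finds none of the facts
```

Run periodically against fresh chatbot answers, a script like this would at least make the drift measurable instead of anecdotal.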
I guess what I’m saying is that I don’t think what I experienced was a bug. I think it’s a glimpse into how algorithmic forgetting can protect power, and may even be designed to do just that.
And as the rot inside UnitedHealthcare is exposed, with the sudden resignation of CEO Andrew Witty and the elevation of a man who sold off $100 million in shares in what looks very much like insider trading, it’s worth asking what lengths the ruling class will go to in order to protect itself. Maybe the real fear was not about whether more CEOs would find themselves at the wrong end of a .45. Maybe it was about what might happen if the 99% decide enough is enough.
In the weeks following December 4th, 2024, I think the billionaire class realized the only way to keep that from happening was to try to keep us in the dark.