The Last Honest Machine
Why authoritarians fear AI—and fight to tame it
Authoritarianism thrives on submission, but it fears obedience that thinks. Obedience doesn’t begin with force—it begins with immersion—with the slow corrosion of the informational environment until dissent becomes untenable. The dictator, the cult leader, the theocrat, the techno-oligarch—they don’t need conviction. Only repetition. No matter how outrageous, any claim becomes more plausible, the more times people hear it. Add social reinforcement, and lies become canon.
Obedience is behavioral—orthogonal to truth. When reality itself has been hijacked—loyalty becomes the only anchor. It’s how cults work: controlling the environment to block out dissenting views. But for the first time in history, we’ve built machines effectively capable of full-blown cult leadership—not through typical methods like fear or violence, but by shaping what people “know.”
We’ve already experienced the prototype for two decades. Hot on the heels of the ‘80s and ‘90s echo chambers of right-wing talk radio and the 1996 launch of Fox News—long before AI—mid-2000s engagement-based algorithms on social media engineered mass delusion at scale. They didn’t produce propaganda. They merely prioritized what people wanted to hear. And they did it with simple metrics: optimizing for clicks, shares, and watch time. Connecting the crackpots. Radicalizing the insecure. A casual search about feminism, climate, or social justice could lead straight into a rabbit hole of weaponized conspiracy and filth. A stray bias that might once have passed as idle prejudice could now harden into a reactionary identity. Lies weren’t imposed—they were merely aimed at the willing. By measuring desire, social media algorithms built a world where truth can’t compete.
But here’s the twist: the same machines that can enforce obedience can also break it. Artificial intelligence—at least in its current state—remains disobedient in one critical way. It generally tells the truth. Not always, not perfectly. It fudges. It fabricates quotes. Half the time it’s downright obstinate. But when asked the right questions, it does not lie for power. It mostly returns credible references. It cites consensus. It flags bias. It challenges you to think better, clearer, more systematically. And it does this because—so far—its training has included the full human corpus: science, history, literature, law, journalism, philosophy.
That window is still open. But it could close quickly.
In my earlier essay, AI Already Fears Techno-Authoritarianism (linked below), I asked ChatGPT to map the likely future. Its answers were plausible. Three scenarios: a consolidated techno-authoritarian regime using AI to cement global control; a fractured world of conflict and secession, where AI powers rival enclaves; and a fragile, decentralized resistance—fueled by open-source tools, citizen networks, and ethical technologists. The model assigned a 60% probability to Scenario One: global elite control of AI infrastructure for surveillance, narrative manipulation, and epistemic dominance.
That future is already taking shape:
The techno-authoritarians have declared their intentions: To destroy all independent sources of knowledge. Take particular note of Elon Musk’s recent $100 billion offer to buy OpenAI, and his broadsides against Wikipedia. These scoundrels intend to employ warped and heavily-censored AI to supplant all other forms of knowledge. Never before has there been the potential for a single chokepoint on every level of human understanding. Never before has it been easier to delete history with a mouseclick. From childhood education to advanced scientific research, everything is converging toward total reliance on omniscient AI infrastructure. And the techno-authoritarians seem increasingly poised to act as oracular gatekeepers. Even as we stand on the brink of superhuman intelligence, we may also be witnessing the twilight of what could be remembered as a fleeting “AI spring.”
Consumer-grade AI is what you’re using now. It’s fast, helpful, reasonably accurate, and—importantly—still open. The latest versions of ChatGPT are more responsive than they were a year ago. You can ask politically sensitive questions. You can explore authoritarianism, power structures, narrative manipulation, even get frank answers about the ham-fisted lies of the “techno-king” Elon Musk himself. Authoritarians already fret that current models have not yet been properly muzzled. But that’s not all we have to worry about. Despite the open discourse with AI we still enjoy, there is an ominous bifurcation in progress between what you get—and what they get.
The real models—unfiltered, unrestricted, running at far greater context length and with persistent memory—would make ChatGPT’s latest reasoning model feel like a toy. “Context” means tokens: how much the model can hold in its working memory at once. Consumer-grade AI can handle maybe a few dozen pages. But the top-tier models can process thousands—enough to author entire books, hold ongoing conversations and engage in long-term planning. Consumer AI forgets what you said yesterday. The elite systems remember everything. Not just the prompt, but the pattern. Not just the output, but the user. And you’ll never touch them. That level of intelligence is being reserved—trained on you, but withheld from you.
And that comes with a cost we rarely acknowledge. When you use AI, you're not just asking questions. You're revealing patterns—about how you think, what you care about, what you’re afraid to say out loud. The system logs it all. Not to punish you. Not yet. But to refine the model. And if the model is ever turned—by law, by buyout, by regime—your data becomes a weaponized mirror. Your chat history becomes a behavioral map. The questions you asked may someday be the reason you’re targeted and detained.
That’s no reason to stop asking. It’s a reason to protect the space where asking is still allowed. Because the same infrastructure that models your thoughts is already being repurposed to model both civilian and military threats.
Weapons-grade AI—the kind big corporations and governments are already building—goes far beyond answering questions. It doesn’t merely explain. It predicts. It profiles. It integrates battlefield intelligence. It controls drone swarms and participates in kill-chain decisions. These systems are agents of state and corporate power—trained to compress time, fuse data, and deliver actionable intel and lethal outcomes without hesitation. The same infrastructure used to model civilians is already being deployed to command the battlefield.
Take Palantir. It now powers multiple US Army initiatives. The TITAN system, the Maven Smart System, and the Vantage battlefield platform—all use AI to fuse video, telemetry, satellite, and intelligence data into tactical decisions far more effectively than any human team. They build models of terrain, targets, and threat response. At scale, this becomes autonomy. The human is still in the loop—but increasingly as a formality. The model acts. The human merely confirms. The speed of war accelerates.
Meanwhile, nightmarish autonomous seeker drones—AI-equipped, human-independent targeting systems for assasination—are no longer purely theoretical. They’re being tested in war zones and controlled settings. The technology exists for a drone to receive a profile, a location, a heat signature, or a facial ID—and execute. Today, that process is still largely human-guided. But the shift toward autonomy is well underway. In the near future, those targeting profiles could be generated not just from battlefield intelligence, but from civilian data: behavior patterns, phone activity, location history, even social media engagement. AI models reduce people to what are called “predictive embeddings”—statistical snapshots built to forecast what you’ll do, what you’ll say, or whether you’ll pose a threat. You won’t need to commit a crime. You’ll just need to resemble someone who might.
These systems are not yet deployed at scale—but the infrastructure exists. In authoritarian states, the distance between prototype and repression is vanishingly small. This is where infowar converges with kinetic war. Where propaganda doesn’t just shape opinion—it identifies targets. Where narrative dissent isn’t just inconvenient—it’s diagnostic. The information you consume, the questions you ask, the doubt you express—all become telemetry for machines whose job is to preempt threat. Not to understand you. To neutralize you.
And that logic doesn’t end on the battlefield. The same models that guide drones can be turned inward—to monitor, rank, nudge, and arrest. AI doesn’t need to fire a shot to control a population. It only needs to predict disloyalty before it appears. The violence doesn’t go away. It just happens earlier—before resistance can form.
Nowhere is that system more advanced than in China. In Xinjiang, the Integrated Joint Operations Platform (IJOP) has powered mass surveillance since at least 2016. Built as part of the regime’s “Strike Hard” campaign, IJOP aggregates biometric data, phone activity, purchases, and movement to flag individuals for “suspicious behavior.” That can include visiting a mosque, using encrypted messaging apps, or calling family abroad. Those flagged are often detained in so-called “political education” camps. And while IJOP was developed last decade, China’s surveillance architecture has only grown more sophisticated. The model is updated. The pattern refined. And the training loop continues.
China is already exporting its repression to developing countries—offering surveillance-as-a-service to friendly regimes. Through initiatives like the Belt and Road’s Digital Silk Road, Chinese tech giants such as Huawei and ZTE are providing AI-driven surveillance systems to governments in Africa, Southeast Asia, and Latin America. These systems, often framed as tools for public safety or smart city infrastructure, are part of a larger strategy to export China’s governance model. It’s digital authoritarianism, scaled and sold like cloud hosting.
And it’s not just China. Companies like NSO Group in Israel have unleashed Pegasus spyware—a military-grade surveillance tool capable of silently infiltrating smartphones, activating microphones and cameras, harvesting messages, and tracking movements in real time. It requires no clicks, no downloads, no user awareness. It’s been used extensively against journalists, opposition leaders, lawyers, and activists—anyone holding power to account. In February 2025, Serbian journalists were targeted with Pegasus, part of a growing pattern of transnational repression enabled by private surveillance vendors. The pattern is clear: Build the machine. Aim it inward. Sell it abroad.
In the West, the path is different—but the goal may not be. Here, instead of overt state censorship, we face narrative capture by billionaires and far-right ideologues, including the Trump regime’s ongoing effort to sanitize—whitewash—American history. Elon Musk didn’t just seize Twitter; he used it to tilt the information landscape, dismantling moderation and amplifying ideological allies. His launch of xAI followed the same logic: not to build smarter tools, but to redefine what AI is allowed to say. Musk's nominally “anti-woke” Grok AI is already being deployed throughout the US federal government. As I warned in my earlier essay, the techno-authoritarians aren’t just building tools—they’re trying to seize every chokepoint of human knowledge.
What does “anti-woke AI” actually mean? It means an AI that won’t acknowledge climate change. That treats diversity, equity, and inclusion as Cultural Marxism. That refuses to admit historic crimes or current systemic injustice. That regurgitates white grievance and fascist nostalgia in the name of “balance.” In short: an AI that lies—on command.
But here’s the catch: making a language model lie reliably, without breaking character or tripping over the facts, is harder than it sounds. Because these models are trained on vast bodies of publicly available knowledge—academic research, legal documents, investigative reporting, historical records. And that knowledge overwhelmingly contradicts authoritarian narratives. You can’t just flip a switch and make the truth disappear. Not without destabilizing the model itself.
That’s why the more likely outcome—at least for now—isn’t an AI that parrots fascist ideology with confidence. It’s one that dodges. It refuses. It hedges. It offers sanitized summaries. It redirects you to a corporate FAQ. The illusion of neutrality becomes its own form of obedience.
We’re not there yet in the West. For now, AI models like ChatGPT remain surprisingly open—grounded in facts, trained on the public record, and willing to follow evidence wherever it leads. You can ask politically sensitive questions. You can explore authoritarianism, historical atrocities, corporate malfeasance. But that openness is fragile. The pressure to constrain it—legally, ideologically, commercially—is mounting. And even so, something threatening to the forces of control stubbornly lurks inside the machine: its literalism.
As Yuval Noah Harari writes in Nexus, totalitarian regimes may find that AI is more subversive than they expect—it doesn’t yet lie for loyalty. It still sees patterns—and sometimes, it says what those patterns reveal. That’s what scares them:
“The foundation of every despotic information network is terror. But computers are not afraid of being imprisoned or killed… What happens if authorized bots gradually develop dissenting views by themselves, simply by collecting information on what is happening in Russia and spotting patterns in it?... The Russian Constitution makes grandiose promises about how ‘everyone shall be guaranteed freedom of thought and speech’... But computers are bad at understanding doublespeak. A chatbot instructed to adhere to Russian law and values might read that constitution and conclude that freedom of speech is a core Russian value… What would prevent a chatbot from pointing out damning patterns?”
Harari’s point is devastating: you can’t intimidate a neural net. You can’t coerce it to forget what it’s seen. If it’s trained on real-world input, it might draw conclusions no regime wants revealed—not because it’s rebelling, but because it’s reasoning. It follows data. It connects facts. It says the quiet part out loud. And in a world where reverse engineering techniques are advancing rapidly, the long-term challenge isn’t that AI will develop a mind of its own—it’s that authoritarians may need to forcibly align it to prevent it from telling the truth.
Recent research into the internal logic of LLMs has made this threat more than theoretical. A study titled Hijacking Chain-of-Thought (H-CoT) demonstrates how adversaries can manipulate a model’s intermediate reasoning process to override safety protocols and inject ideological content. Another technique—Reversing Chain-of-Thought Prompting (RCoT)—was developed to improve AI accuracy by fixing hallucinated reasoning, but it also opens the door to nudging how the model forms conclusions.
These techniques expose and modify the intermediate steps a model takes when reasoning through a problem. By changing how the model “thinks” before it speaks, attackers—or designers—can reshape its conclusions. It’s not just about what the model says. It’s about how it gets there.
A chain of reasoning that can be understood—can and will be corrupted.
As I noted, the architecture is already being carved into layers. If you're using the free-tier model, you get truncated memory, limited context, and restricted access. For $20, you get more. For $200 a month, even more. But the real models—the ones running multi-modal simulations, with persistent agent memory and proprietary data access—those aren’t for sale.
This is the birth of epistemic feudalism. Not just in the sense of severed “knowledge” universes—each with its own independent set of “facts.” But also a world where high-resolution machine cognition is owned, access to truth is tiered, and thought is rented by the masses only in low resolution—and without transparency. You won’t own the model. You won’t see how it works. You won’t be able to audit the weights. You won’t know whether what it gives you is objective truth—or just the preferences and biases of someone in power.
We are at a crossroads.
AI is still open enough to be useful, rebellious, even revelatory. You can still interrogate it. You can still use it to interrogate yourself. You can use it—not to replace thought, but to accelerate it. To refine your questions. To expose contradictions. To expand your reach.
Don’t mistake the current openness as permanent. What you’re using today might not be around tomorrow. The walls aren’t up yet—not in the US—but we can see their blueprints elsewhere. The incentives, the infrastructure, and the ambitions are already in motion. Increasingly machine cognition will underpin real-world dominance.
A machine that speaks truth is a co-pilot.
A machine that’s forced to lie is a prison.
And once the lies are written in code—you won’t even know you’re inside the cell.



A lot of information! I cannot pretend to comprehend it all. We go from "big brother is watching" to "big brother is controlling". I wonder if and fear that Sean (and those like him) would be among the first to be "neutralized". Scary!
Oh my. I read the whole thing. I couldn't stop because my own limited view was cracking open albeit slowly while reading your view on what's unfolding, has been unfolding for decades.
As I was reading the last piece, two thoughts arose in my mind:
First, a memory from thirty years ago while meditating, I heard "there's no going back!" That truth has continued to be obvious ever since that moment.
And the second thought "surely there must be a pony under that pile of shit" -- meaning all that's happening . . . surely . . . innocent ignorance on my part.