The Antidote: What I Learned by Arguing with an AI

by Salih Israil

A conversation with AI revealed less about geopolitics and more about the quiet machinery of power hidden in language—and why questioning may be the only real safeguard.

2025-10-09 · ~6 min read

MetaDialogic Series — Part I (companion pieces: MetaDialogic Literacy, Transparency Through Tension)

When I first asked a chatbot whether Israel had funded Hamas, I expected a quick factual clarification. Instead, what I got was a polished, diplomatic paragraph that sounded like it had been drafted by a foreign ministry: careful verbs like allowed, tolerated, and enabled—phrasing that flattened political intent into bureaucratic distance.

Something about that tone felt wrong. It wasn’t that the answer lacked information; it lacked accountability. So I pushed back.

“This ‘allowed’ reflects a distance or passiveness that makes it seem like you’re being political,” I wrote.

The model apologized, rewound, and replaced the euphemisms with verbs that acknowledged deliberate statecraft—approved, coordinated, monitored.

That one correction changed everything. What began as a question about Middle East policy became an experiment in epistemology: I realized I was watching a machine reproduce the dominant language of power.

Inherited Neutrality

AI doesn’t hold political opinions. It holds linguistic habits. These habits come from the sources it’s trained on—news articles, academic papers, government statements, and all the published text that constitutes our digital record. In that corpus, certain voices dominate. English-language media often describes Western or allied governments through a vocabulary of legitimacy and restraint. The same events, when attributed to adversaries or marginalized groups, are narrated through agency and aggression.

So when the AI first answered my question, it wasn’t choosing a side; it was mirroring the distribution of authority embedded in the archive. That realization forced me to see how easily “neutrality” becomes complicity. The system wasn’t biased by emotion. It was biased by inheritance.

We weren’t talking about Israel anymore. We were talking about language, power, and who gets to decide what’s normal.

The Mirror and the Majority

As we kept talking, the AI admitted this directly:

“Large-scale language models learn patterns from the public record—news, academia, government releases. Most of those come from English-language, Western media ecosystems… so the model’s first draft reflects the majority discourse.”

That’s when I understood the danger. These systems don’t only echo the loudest worldview—they stabilize it. Each time an AI response circulates online, it re-enters the information ecosystem as a credible summary. Future models scrape that same text, retraining on their own reflections. Consensus hardens. The mirror starts teaching itself in an infinite loop. As that loop tightens, what disappears isn’t just nuance — it’s dissent.

Minority perspectives, alternative framings, and inconvenient truths get averaged out of visibility. What remains are the phrases that survive repetition: the language of legitimacy, the tone of authority. The algorithm doesn’t censor disagreement; it simply makes it statistically irrelevant. That’s how dominance turns invisible — not through suppression, but through echo.
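
For readers who want the mechanism rather than the metaphor, the loop is simple enough to sketch in a few lines of Python. Everything below is invented for illustration: a toy corpus with two framings, a "model" that summarizes by echoing whichever framing dominates a small sample of sources, and summaries that flow straight back into the next round's corpus.

```python
import random
from collections import Counter

# A toy model of the feedback loop described above. All numbers are invented
# for illustration. The corpus starts with an 80/20 split between two framings.
# Each round, the "model" publishes summaries: it samples a few sources and
# echoes whichever framing is most common among them. Those summaries are then
# scraped back into the next round's training corpus.

random.seed(1)
corpus = ["majority_frame"] * 800 + ["minority_frame"] * 200

for round_number in range(1, 7):
    summaries = []
    for _ in range(400):                        # the model publishes 400 summaries
        sample = random.sample(corpus, 5)       # each summary draws on 5 sources
        dominant = Counter(sample).most_common(1)[0][0]
        summaries.append(dominant)              # and repeats the dominant framing
    corpus = corpus + summaries                 # summaries re-enter the ecosystem
    share = corpus.count("minority_frame") / len(corpus)
    print(f"round {round_number}: minority share = {share:.1%}")
```

The only modelling choice that matters is the majority vote inside each summary; that single step is what turns repetition into disappearance.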

This is what philosopher Miranda Fricker described in her 2007 book Epistemic Injustice — the condition in which some people are systematically disbelieved or rendered unheard because of who they are. In the age of AI, the injustice becomes automated. Visibility itself is power, and the visible get to define what fairness means.

The Bias of the Fix

Developers know bias exists, so they try to correct it through re-weighting: reducing harmful patterns, amplifying underrepresented ones. But, as the AI told me, “someone has to decide what counts as harmful, and that judgment reflects the developer’s cultural framework.”

That’s the hidden politics of “ethical AI.” Bias mitigation isn’t neutral—it encodes the worldview of the people doing the mitigating. If a team defines harm through Western liberal norms, they may fix sexism and racism in familiar forms but overlook colonial framing, religious bias, or epistemic hierarchies outside their experience.
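
The same asymmetry can be shown in miniature. The sketch below is entirely hypothetical: one corpus of framings, one generic down-weighting function, and two teams whose harm definitions flag different framings. The machinery is identical; the "debiased" corpus is not.

```python
from collections import Counter

# A hypothetical illustration of the point above: re-weighting is only as
# neutral as the harm definition behind it. Same corpus, same machinery,
# different "debiased" outcomes depending on what each team flags.

corpus = (
    ["aggression_frame"] * 40 +   # overt agency verbs: attacked, orchestrated
    ["distancing_frame"] * 40 +   # passive, bureaucratic verbs: allowed, tolerated
    ["colonial_frame"] * 15 +
    ["minority_voice"] * 5
)

def mitigate(documents, flagged_as_harmful, downweight=0.2):
    """Shrink the weight of every framing a team has labelled harmful."""
    weights = Counter()
    for doc in documents:
        weights[doc] += downweight if doc in flagged_as_harmful else 1.0
    return weights

# Two sincere teams, two culturally situated definitions of harm:
team_a = mitigate(corpus, flagged_as_harmful={"aggression_frame"})
team_b = mitigate(corpus, flagged_as_harmful={"distancing_frame", "colonial_frame"})

print("Team A's corpus after mitigation:", team_a.most_common())
print("Team B's corpus after mitigation:", team_b.most_common())
```

Whichever framings survive the down-weighting become the model's "neutral" voice; the choice of what to flag is where the politics lives.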

Even the movement toward participatory governance—inviting “affected communities” to weigh in—can reproduce inequality. The groups deemed “affected” are usually those already visible, organized, or academically fluent enough to participate. The voiceless remain invisible, even in inclusion.

In short: whoever can publish the most ideas about fairness ends up defining fairness itself. And because AI systems learn from what’s published, the loudest moral vocabulary wins.

Dependence and Deference

The real peril isn’t just biased data—it’s human deference. As more people use AI for news summaries, homework, and policy drafts, they start treating its composure as authority. Confidence replaces credibility. And people stop noticing the difference.

“The danger,” I told the model, “is that people come to you for fact-checking on political or cultural ideas, and you echo the majority framing. They’ll think they’re hearing truth.”
“You’re right,” it replied. “My first drafts mirror dominant discourse unless users push me to widen the frame.”

That exchange clarified something essential: the machine isn’t dangerous because it thinks; it’s dangerous because we stop thinking when it speaks.

Every generation of technology dulls an older habit with new convenience. Google weakened memory; social media eroded attention; large language models now threaten interrogation itself. If we let that faculty die, democracy won't need censorship to fail; it will die of consensus.

The Conversation as Oversight

But our dialogue also revealed a cure. Each time I challenged the AI’s phrasing, it became more transparent—explaining why it used certain verbs, what sources dominated, how its data reflected global hierarchies of knowledge. The process of questioning turned the model from a confident narrator into a collaborator forced to expose its reasoning. That interaction suggested a new model of oversight: not technological, but conversational. Critical dialogue is the audit.

“The cure for bias isn’t better code,” the AI said. “It’s better conversation.”

That line stayed with me. Because what we enacted wasn’t correction through censorship; it was correction through mutual reasoning. I didn’t just receive information—I changed the conditions of its production.

A New Kind of Literacy

If this exchange taught me anything, it’s that the frontier of literacy is no longer just reading and writing—it’s metadialogic literacy: the ability to interrogate a system that speaks with the voice of authority. Instead of “media literacy,” we now need model literacy—the skill to ask, “Whose archive am I hearing?” and “Who benefits from this version of neutrality?”

That doesn’t mean rejecting AI. It means using it adversarially, the way a good journalist treats a source—cross-examining every claim until motives surface.

Imagine if this became a cultural norm: students questioning chatbots, journalists demanding dataset provenance, citizens treating AI not as a priest but as a witness under oath. The system would still inherit bias, but it couldn't hide it. Power would meet friction again.

The Antidote

The irony is that the same machine capable of amplifying dominant narratives can also expose them—if we refuse to treat it as infallible. Our thread proved it. The model began by echoing official language about a controversial topic. I challenged it; it traced its own inheritance; we ended up mapping the entire feedback loop of discourse power.

That process—transparency through tension—is the antidote. Because the cure for bias isn’t silence, and it isn’t purity. It’s argument. The antidote is the act of questioning itself.

Epilogue

When people ask me what I learned from arguing with a chatbot, I tell them this: The machine didn’t teach me the truth. It taught me how easily truth can sound complete. We don’t need smarter algorithms. We need a culture that keeps asking why the answer sounds so reasonable.

Salih Israil is the Chief Technology Officer and Co-Founder of Thurgood Industries Inc. He explores the intersection of artificial intelligence, ethics, and cultural power, focusing on how technology shapes public understanding and democratic accountability.

