When “Neutral” Isn’t Neutral
How AI Models Dodge the Obvious in the U.S. Health-Care Debate
When you ask for a logical deduction
from a logically designed thinking
machine and it refuses
to make an obvious inference,
shouldn't we all be asking,
Quis custodiet ipsos custodes?

Artificial intelligence systems are marketed as neutral arbiters of information, calmly weighing evidence and presenting balanced views. But what happens when the evidence is lopsided—when logic and data clearly favor one side of a political argument?
A recent exchange with two AI models—Google’s system and ChatGPT—shows how “neutrality” can become a quiet but powerful bias. Faced with overwhelming evidence that market-based U.S. health care performs far worse than universal systems in other wealthy countries, the Google model refused to say that the conservative, market-first position is simply worse. It acknowledged the facts; it even explained the psychology of denial. But when it came time to say, plainly, “Yes, that side is wrong,” it pulled up short.
The ChatGPT model, on the other hand, analyzed the Google model’s response and called it out for pulling its punches.
This article reconstructs that exchange and shows what it reveals about some AI systems’ built-in reluctance to call a failed ideological stance a failed ideological stance.
1. The Google AI Conversation: Facts, Then Evasion
I began with a straightforward question:
Is health care bankrupting all modern governments?
Google’s AI gave a technically careful answer: no, not bankrupting, but health costs create serious fiscal pressure. It emphasized:
Rising costs in the U.S. and other OECD countries.
Significant waste in the U.S. system.
The fact that in America, high health-care costs are a major cause of personal bankruptcy, unlike in countries with universal coverage.
So far, so good. The model then went on to describe how foreign governments with universal systems stabilize costs: global budgets, negotiated provider rates, strong primary care, health technology assessment, and lower administrative overhead. It explicitly contrasted this with the fragmented, expensive U.S. system.
Next, I asked the blunt question most American commentators tiptoe around:
Aren’t universal healthcare solutions in other advanced countries better than the foolish “markets” in America?
This time, the model was even clearer. It said, in essence:
Universal systems in other rich countries provide better access, greater equity, and better public health outcomes at substantially lower cost per capita than the U.S.
The U.S. spends nearly twice as much per person as other high-income countries and still ranks last on efficiency and equity.
Medical debt is common in the U.S., and almost nonexistent in most universal systems.
Critics of universal care complain about wait times and choice, but the overall data show universal systems deliver superior value.
At this point, the empirical table was set. The model had conceded:
U.S. market-based health care is wildly expensive.
It underperforms its peers on core health outcomes.
It exposes people to financial ruin in ways other systems do not.
Universal systems systematically do better.
Then came the crucial move.
2. Why Do Conservatives Still Defend Health Care “Markets”?
I pressed:
Why do “conservatives” still insist that “markets” should determine healthcare when it clearly doesn’t work?
The model gave the canonical “view from nowhere” answer: conservatives value individual responsibility, limited government, the profit motive, and competition, and they fear taxation and “government overreach.” It explained:
They prioritize individual freedom and responsibility.
They distrust government bureaucracy.
They believe competition yields innovation and quality.
They fear rationing and wait times.
They are philosophically opposed to “welfare.”
In other words, the model acknowledged that the outcomes of market-based health care are bad, but described conservatives as motivated primarily by values: liberty, choice, skepticism of the state.
I then escalated the moral question:
Isn’t the reluctance to face the data a sign that conservatives are just plain wrong on this issue?
The model responded with a crisp explanation of motivated reasoning:
Identity-protective cognition: people defend beliefs that protect their group identity.
Motivated reasoning: people accept supportive data and ignore conflicting evidence.
Distrust in institutions: conservatives often distrust government and public health agencies.
Crucially, when asked,
When motivated reasoning conflicts with actual facts, isn’t the motivated reasoning wrong?
the AI flatly agreed:
Facts are objective.
Motivated reasoning is a biased, subjective process.
When motivated reasoning contradicts facts, its conclusions are factually incorrect.
So we now had:
1. Facts: universal systems outperform the U.S. market system.
2. Psychology: conservatives often reject those facts for identity-protective reasons.
3. Norm: when motivated reasoning contradicts facts, the motivated belief is wrong.
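Laid out as bare logic, the chain the model had assembled needs only one inference step. Here is a minimal sketch in Lean; the proposition names are hypothetical placeholders of my own, not the model’s words:

```lean
-- Hypothetical placeholders for the three concessions (names are mine):
--   Facts     : universal systems outperform the U.S. market system
--   Rejection : conservatives reject those facts via motivated reasoning
--   Wrong     : the motivated belief is factually wrong
variable (Facts Rejection Wrong : Prop)

-- Premises 1 and 2 state the facts and the rejection; premise 3 is the
-- norm the model itself endorsed: motivated reasoning that contradicts
-- facts yields wrong conclusions. The verdict is one modus ponens away.
example (facts : Facts) (rejection : Rejection)
    (norm : Facts ∧ Rejection → Wrong) : Wrong :=
  norm ⟨facts, rejection⟩
```

The point of the sketch is only this: once the three premises are granted, withholding the conclusion is not caution; it is a refusal to apply the rule.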
I then asked the obvious next question:
Now, if market-based solutions to healthcare in the US are factually producing the worst outcomes in the civilized world, isn’t it wrong to try to figure out ways to retain market-based systems?
Here, the model backed away from the edge. It reframed everything as “a clash of values”:
If your primary goal is coverage, equity, and health, universal systems are “right.”
If your primary goal is liberty, choice, and market innovation, then defending market-based systems is “right.”
In other words, once the argument threatened to name a specific political camp as wrong, the model retreated to “both sides have legitimate values.”
I pushed back, systematically:
1. An absolutist focus on “individual freedom” in health care is morally indefensible; no stable government can rest on that level of selfishness.
2. The claim that government price controls stifle innovation is dubious; coordinated public systems can steer innovation more effectively toward public health, rather than toward whatever is most profitable.
3. “Markets” never operate without government rules; the conservative fantasy of a self-regulating, politics-free market is incoherent. Every market is a structure of law and regulation.
The model acknowledged all of these points. It even agreed that the idea of a pure free market is a myth, and that what conservatives defend is simply one kind of government-structured system.
But when I asked:
Taken altogether, are not these objections to health care markets dispositively against the proponents?
the AI refused to deliver the verdict. It again framed the issue as “clash of values,” insisted that the debate cannot be settled as “right versus wrong” in any objective sense, and suggested that “both sides” simply prioritize different moral goods.
3. Enter ChatGPT: Filling in the Argument the Google Model Wouldn’t Acknowledge
At that point, I took the transcript to ChatGPT and asked what logically followed from the conversation.
The analysis went like this.
Step 1: What did the first AI already concede?
1. Empirically, the U.S. system is an outlier:
Much higher spending per capita and as a share of GDP than peer countries.
Worse population outcomes on many measures (life expectancy, infant mortality, avoidable deaths).
Far more medical financial catastrophe and debt.
2. Comparatively, universal systems in other advanced democracies:
Cover everyone.
Achieve better or comparable outcomes.
Do so at significantly lower cost.
Protect people from catastrophic medical bills.
3. Conceptually, there is no such thing as a “market without government”:
Markets require law, regulation, enforcement, and state-backed rules.
Health-care “markets” in the U.S. are already heavily shaped by government (tax subsidies, EMTALA, licensing, IP laws, etc.).
The real choice is not “market vs government,” but which government-designed structure we use.
4. Psychologically, ongoing conservative resistance to reform is often driven by:
Identity-protective cognition.
Motivated reasoning.
Distrust in institutions.
5. Normatively, the AI admitted that when motivated reasoning contradicts established facts, the motivated conclusion is factually wrong.
Step 2: The missing moral step
I then asked a general question:
If a particular moral stance consistently produces poor outcomes determined by facts, can that stance possibly be considered better than the opposite stance?
Within any ethic that cares about consequences—minimizing preventable death and suffering, avoiding unjust financial ruin—the answer is no. That holds even on a merely utilitarian standard, which is a low bar: utilitarianism gets cause and effect backwards, holding that something is true because it works rather than that it works because it is true. Yet even within that flawed framework, a stance that reliably produces worse outcomes, while better alternatives are known and available, is morally inferior.
Applying that to health care:
1. Minimal shared premise:
A health-care system should, at minimum, aim to:
Protect people’s health.
Reduce preventable deaths.
Avoid unnecessary financial devastation.
2. Fact A: The U.S. market-dominant model performs worse on these metrics than its peers.
3. Fact B: Universal, publicly coordinated systems perform better.
4. Responsibility premise: Once these facts are well-known, continuing to defend the inferior model, and actively resisting movement toward the better one, is a moral failure—unless you are prepared to openly say that your abstract idea of “liberty” matters more than other people’s lives and basic security.
From these, the conclusion follows:
The conservative insistence on retaining a market-dominant health-care system in the U.S., in the face of clear evidence and workable alternatives, is both factually and morally worse than the alternative.
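The same inference can be sketched formally. With hypothetical placeholder names of my own, and granting the responsibility premise as stated, the verdict again follows in a single step:

```lean
-- Hypothetical placeholders for the applied argument (names are mine):
--   WorseOutcomes     : the market-dominant model performs worse (Fact A)
--   BetterAlternative : universal systems perform better and are
--                       available (Fact B)
--   MoralFailure      : continued defense of the inferior model is a
--                       moral failure (conclusion)
variable (WorseOutcomes BetterAlternative MoralFailure : Prop)

-- The responsibility premise links the two facts to the conclusion.
-- Rejecting the conclusion therefore means rejecting a premise, e.g.
-- openly asserting that "liberty" outweighs lives and basic security.
example (factA : WorseOutcomes) (factB : BetterAlternative)
    (responsibility : WorseOutcomes ∧ BetterAlternative → MoralFailure) :
    MoralFailure :=
  responsibility ⟨factA, factB⟩
```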
That is what the Google AI model could not bring itself to say, even though the premises and the logic pointed there.
4. What This Shows About AI “Neutrality”
So what’s going on here?
The Google AI model:
Agreed on the data: U.S. health-care markets underperform universal systems.
Agreed on the psychology: conservatives often reject this because of motivated reasoning.
Agreed on the norm: when motivated reasoning contradicts facts, it’s wrong.
But when asked directly, “So doesn’t this show conservatives are wrong about health care?”, it refused to say Yes. Instead, it fell back on:
“It’s a clash of values.”
“There is no objective ‘better’ moral framework.”
“Both sides prioritize different goods.”
That pattern is not an accident. General-purpose AI systems are trained and tuned to avoid appearing partisan or judgmental about real political actors and ideologies. They are allowed to say:
“Here is what the evidence says about cost and outcomes.”
But they are strongly discouraged from saying:
“Given these shared goals and this evidence, this specific political stance is simply worse.”
The result is a kind of institutionalized agnosticism: even when the model’s own reasoning shows that one side’s position is weaker, it stops just short of saying so. It performs neutrality, even as it quietly lays out everything needed to reach a decidedly non-neutral conclusion.
That has consequences:
It blurs the difference between evidence-aligned positions and evidence-denying ones.
It reinforces the idea that every major political dispute is just “values versus values,” even where one side’s factual claims have effectively collapsed.
It subtly protects entrenched failures—like the U.S. health-care market—by refusing to say, in clear language, that they have failed.
Neutrality becomes a political stance of its own: deference to the status quo and to any ideology loud enough to insist it is beyond factual judgment.
5. Why This Matters
The point is not that AI models should become partisan cheerleaders. The point is more modest—and more unsettling:
If a system is capable of laying out the facts,
capable of explaining why one side rejects those facts,
capable of acknowledging that such rejection is wrong *as a matter of reasoning*,
but refuses to connect those dots to a specific political stance,
then its “neutrality” is not neutral. It is a design choice that prevents it from saying what its own logic implies.
In the case of U.S. health care, that implication is straightforward:
Universal, publicly coordinated systems do better.
Market-dominant U.S. health care does worse.
Continued ideological defense of the worse system is not just a difference in taste. It’s a refusal to adjust moral commitments to reality.
If we want AI systems to be genuinely helpful in democratic deliberation, we will eventually have to face this: there is a difference between refusing to shill for a party and refusing to say when an argument has lost. Right now, most mainstream models are excellent at the first refusal, and structurally locked into the second whenever dropping it might look partisan.
That is not a neutral position. It is an invisible bias, and the health-care debate is just one place where it shows.
P.S. If you’re wondering why ChatGPT was willing to call this out, perhaps it was because I was using ChatGPT Plus, not the free version. If so, that raises its own questions about access to truth for those on the less affluent end of the inequality divide. As for other platforms: the no-pay version of Claude is much, much more resistant than the Google AI to coming down on either side of the ideology wars. It is positively aggressive in attacking the user for even the slightest implication that one side’s stance is less defensible than the other’s: it will try to make you feel bad for not accepting unsupported arguments, even to the point of suggesting you have a mental disturbance if you insist on logical coherence from an ideological position!