Discussion about this post

Dr. Derek B. Miller

This is a thoughtful and genuinely important contribution to a conversation that too few people in the AI development world are taking seriously. The concern that AI is being deployed in conflict contexts without adequate theory, without adequate testing, and with potentially catastrophic escalatory tendencies is well-founded and urgent. The wargaming results alone — nuclear weapons chosen in 95% of scenarios — should give pause to anyone who believes current models are ready for high-stakes conflict applications. And the parallel to social media's polarization problem is apt: we have roughly fifteen years of evidence that optimizing for engagement produces societal harm, and we are now building systems with far greater reach and consequence before we have solved the equivalent problem for conflict.

That said, there is a foundational challenge the essay doesn't yet address, and it may be the most consequential one. I worked at the UN for a decade and have spent a career in peace and security issues, so this is food for thought: The framework rests on an assumption that conflict has a knowable universal structure — that peace is a shared terminal value, that resolution is a common goal, and that better information and mediation will move parties toward it. But this assumption is itself a cultural premise, not a discovered fact.

In 1959, McDougal and Lasswell warned against "make-believe universalism" that assumed universal words implied universal deeds.

Adda Bozeman spent decades demonstrating that conflict is constituted differently across civilizational systems — with different premises about what conflict is for, what resolution means, and whether peace is a desirable end state at all. The questions she proposed for any serious analysis of a foreign society cut directly to what AI conflict systems currently cannot ask:

• What is the value content of intrigue and conflict?

• In which circumstances is violence condoned, and what is the ceiling for tolerance of violence within this society?

• Is law distinct from religion and from political authority?

• Is war considered "bad" by definition — or is it accepted as a norm or way of life?

• And most fundamentally: How do people think about peace? Is it a definable condition, and what is its relation to war?

These are not exotic edge cases.

The Hamas Charter does not treat peace as a terminal value — it treats Jihad as a religious obligation, making certain forms of resolution not merely undesirable but categorically impermissible under its own cosmology. Lenin deliberately inverted Clausewitz: war was not the continuation of politics by other means but the engine through which history itself moved, making conflict intrinsically generative rather than regrettable. The Nazis didn't want peace — they wanted struggle, because struggle was constitutive of their worldview.

An AI conflict system that cannot ask Bozeman's questions cannot distinguish these systems of meaning from ones in which conflict transformation is genuinely possible — and will intervene in all of them using the same framework, producing confident recommendations that are systematically misread by the very communities they're designed to reach.

What's needed is not a better universal model but a revival of the lost agenda that Bozeman, Lasswell, and Sherman Kent were advancing before it was displaced by quantitative peace research in the 1960s and abandoned by anthropology after Vietnam: the disciplined, community-grounded study of how specific societies constitute meaning around conflict, violence, authority, and resolution, pursued as a prerequisite to any intervention rather than as optional context.

The peace research tradition didn't have to go the direction it did — toward computation, game theory, and universal rationality assumptions. It could have gone the direction McDougal and Lasswell pointed, toward the rigorous comparative appraisal of diverse systems of public order on their own terms. AI conflict work is now replicating the same fateful choice, and with far greater consequences if it gets it wrong.

The alternative is not to abandon the ambition — conflict transformation at scale matters enormously (!) — but rather to ground it in the kind of situated inquiry that generates an explicit, defensible account of why a specific intervention should work among these people, in this place, given what we actually know about how they understand obligation, legitimacy, violence, and the meaning of peace itself. Without that grounding, even the most technically sophisticated AI conflict tool risks producing what might be called pathological resolutions — closing the presenting crisis while leaving its civilizational foundations entirely intact, and calling that peace.

That false confidence can lead to war. And historically, it does.

Mitchell Weisburgh

I teach a course in Conflict and Collaboration to teachers. One of the exercises has them go to an AI platform, describe the problem and the person, and then ask it to create a dialogue using strengths-based feedback, motivational interviewing, and nonviolent communication — one in which the first four or five things they try to say generate responses that would normally trigger them, and the conversation lasts at least twelve statements from each side.

This exercise always leaves the educators with a few new ways of phrasing questions and setting their own mindsets that they can carry into real conversations.
