6 Comments
Dr. Derek B. Miller

This is a thoughtful and genuinely important contribution to a conversation that too few people in the AI development world are taking seriously. The concern that AI is being deployed in conflict contexts without adequate theory, without adequate testing, and with potentially catastrophic escalatory tendencies is well-founded and urgent. The wargaming results alone — nuclear weapons chosen in 95% of scenarios — should give pause to anyone who believes current models are ready for high-stakes conflict applications. And the parallel to social media's polarization problem is apt: we have roughly fifteen years of evidence that optimizing for engagement produces societal harm, and we are now building systems with far greater reach and consequence before we have solved the equivalent problem for conflict.

That said, there is a foundational challenge the essay doesn't yet address, and it may be the most consequential one. I worked at the UN for a decade and have spent a career in peace and security issues, so this is food for thought: The framework rests on an assumption that conflict has a knowable universal structure — that peace is a shared terminal value, that resolution is a common goal, and that better information and mediation will move parties toward it. But this assumption is itself a cultural premise, not a discovered fact.

In 1959, McDougal and Lasswell warned against "make-believe universalism" that assumed universal words implied universal deeds.

Adda Bozeman spent decades demonstrating that conflict is constituted differently across civilizational systems — with different premises about what conflict is for, what resolution means, and whether peace is a desirable end state at all. The questions she proposed for any serious analysis of a foreign society cut directly to what AI conflict systems currently cannot ask:

• What is the value content of intrigue and conflict?

• In which circumstances is violence condoned, and what is the ceiling for tolerance of violence within this society?

• Is law distinct from religion and from political authority?

• Is war considered "bad" by definition — or is it accepted as a norm or way of life?

• And most fundamentally: How do people think about peace? Is it a definable condition, and what is its relation to war?

These are not exotic edge cases.

The Hamas Charter does not treat peace as a terminal value — it treats Jihad as a religious obligation, making certain forms of resolution not merely undesirable but categorically impermissible under its own cosmology. Lenin deliberately inverted Clausewitz: war was not the continuation of politics by other means but the engine through which history itself moved, making conflict intrinsically generative rather than regrettable. The Nazis didn't want peace — they wanted struggle, because struggle was constitutive of their worldview.

An AI conflict system that cannot ask Bozeman's questions cannot distinguish these systems of meaning from ones in which conflict transformation is genuinely possible — and will intervene in all of them using the same framework, producing confident recommendations that are systematically misread by the very communities they're designed to reach.

What's needed is not a better universal model but a revival of the lost agenda that Bozeman, Lasswell, and Sherman Kent were advancing before it was displaced by quantitative peace research in the 1960s and abandoned by anthropology after Vietnam: the disciplined, community-grounded study of how specific societies constitute meaning around conflict, violence, authority, and resolution, pursued as a prerequisite to any intervention rather than as optional context.

The peace research tradition didn't have to go the direction it did — toward computation, game theory, and universal rationality assumptions. It could have gone the direction McDougal and Lasswell pointed, toward the rigorous comparative appraisal of diverse systems of public order on their own terms. AI conflict work is now replicating the same fateful choice, and with far greater consequences if it gets it wrong.

The alternative is not to abandon the ambition — conflict transformation at scale matters enormously (!) — but rather to ground it in the kind of situated inquiry that generates an explicit, defensible account of why a specific intervention should work among these people, in this place, given what we actually know about how they understand obligation, legitimacy, violence, and the meaning of peace itself. Without that grounding, even the most technically sophisticated AI conflict tool risks producing what might be called pathological resolutions — closing the presenting crisis while leaving its civilizational foundations entirely intact, and calling that peace.

That false confidence can lead to war. And historically, it does.

Jonathan Stray

Hi Derek. This is a very thoughtful response. I take it to touch upon at least two very deep issues.

1) How do you identify a truly implacable enemy (e.g. Hamas), and what do you do when you identify him? If violence is a "last resort," what exactly are we morally obligated to try first, and how much harm must we suffer meanwhile?

2) To what extent are there generalizable principles of conflict, and how do we find them? I take your point on the specificity of conflict situations and the ways in which anthropological or other "thick" inquiry can produce specific insights. On the other hand, if there are no general principles anywhere, then it is meaningless to say that someone can be an "expert" at conflict -- or that machines could have "better" or "worse" conflict properties.

I must admit I don't share your skepticism of the quantitative turn. I wholeheartedly agree that game theory has only very limited utility in understanding human conflict, but I see enormous value in systematic comparative datasets such as V-Dem and ACLED.

Thanks for reading.

The Big Middle

Thanks for all you're doing to advance this line of inquiry. Looking forward to hearing more during The Big Middle LIVE on Weds May 6 at Noon Central.

MITCHELL WEISBURGH

I teach a course in Conflict and Collaboration to teachers. One of the exercises is to have them go to an AI platform, describe the problem and the person, and then ask it to create a dialog using strengths-based feedback, motivational interviewing, and nonviolent communication, in which the first 4 or 5 things they try to say draw responses that would normally trigger them, and the conversation lasts at least 12 statements from each side.
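For concreteness, here is a sketch of how that request can be sent programmatically. The prompt wording is illustrative rather than my exact text, and the Anthropic Python SDK is just one client that works; the same prompt can be pasted into any chat model.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative version of the exercise prompt; teachers fill in their own
# description of the problem and the person involved.
exercise_prompt = """\
I am a teacher. Here is the problem and the person involved:
{description}

Write a dialog between me and this person in which I use strengths-based
feedback, motivational interviewing, and nonviolent communication.
The first 4 or 5 things I say should draw responses that would normally
trigger me, and the conversation should run at least 12 statements
from each side."""

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=2000,
    messages=[{"role": "user", "content": exercise_prompt.format(
        description="A colleague who dismisses my suggestions in meetings."
    )}],
)
print(response.content[0].text)
```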

This exercise always results in the educators learning a few ways of phrasing questions, and of setting their own mindset, that they can use.

Morgan Rivers

I loved this post! The table comparing different models and their (in)ability to mediate conflict is striking. I wonder whether Anthropic would be open to simply adding a short set of conflict mediation principles to their system prompt, at least when the system detects that the question pertains to mediation? Especially knowing that Anthropic has already had their AIs take part in military operations, it seems really important to add this. I would imagine this might come at the cost of less short-term user engagement, but maybe over a longer period people would recognize what's good for them.
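To make the idea concrete, here is a minimal sketch of what that could look like. The keyword gate, the principles text, and the model name are all my own illustrative stand-ins, written against the Anthropic Python SDK; a production version would presumably use a proper topic classifier rather than keyword matching.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative principles; a real deployment would draw on the mediation
# literature rather than this ad hoc list.
MEDIATION_PRINCIPLES = (
    "When the user describes a dispute between parties: "
    "(1) restate each side's position neutrally before advising, "
    "(2) look for underlying interests behind stated positions, "
    "(3) avoid language that assigns blame, and "
    "(4) propose options for mutual gain rather than declaring a winner."
)

def looks_like_mediation(message: str) -> bool:
    # Crude keyword gate, standing in for a real topic classifier.
    keywords = ("dispute", "conflict", "mediate", "argument", "negotiat")
    return any(k in message.lower() for k in keywords)

def respond(message: str) -> str:
    system = "You are a helpful assistant."
    if looks_like_mediation(message):
        system += " " + MEDIATION_PRINCIPLES
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1000,
        system=system,
        messages=[{"role": "user", "content": message}],
    )
    return response.content[0].text

print(respond("My two coworkers are in a constant argument. How do I mediate?"))
```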

Morgan Rivers

I would be interested in a prompt I could put into my LLM agents that encourages the model to follow principles of mediation. Does such a prompt exist?