Tariq Saeedi
International relations have always been about information—gathering it, interpreting it, and acting on it before your counterpart does.
For centuries, the advantage went to those with better intelligence networks, sharper analysts, and more experienced diplomats who could read the room and sense what wasn’t being said. That fundamental calculus hasn’t changed, but the tools available to pursue it have transformed dramatically.
Artificial intelligence is now working quietly in the background of diplomatic channels, both written and oral, and its role is expanding at a pace that would have seemed unimaginable just a few years ago.
The appeal is obvious. Diplomacy demands the synthesis of enormous amounts of information under time pressure. A foreign ministry might need to understand how a proposed trade agreement affects seventeen different sectors, aligns with commitments under eight existing treaties, and plays domestically in three key coalition partner countries—all before a minister steps into a negotiating room the next morning.
What might take a team of analysts days to compile, AI can process in seconds. It can scan thousands of documents, identify relevant precedents, flag potential conflicts, and even suggest language that threads the needle between competing interests. For someone preparing talking points at midnight before a crucial meeting, this isn’t a luxury—it’s a lifeline.
But the introduction of AI into international relations creates dynamics that go well beyond simple efficiency gains. We’re seeing the emergence of a new kind of information asymmetry, one based not on who has access to secrets but on who has better tools to make sense of what everyone already knows. Two countries entering negotiations may both have access to similar economic data, similar historical records, similar public statements from each other’s officials. But if one side has superior AI capabilities—better at detecting patterns, better at simulating scenarios, better at identifying leverage points—that side enters the room with an inherent advantage.
The negotiations still happen human to human, as they must. The final decisions still rest with people who bear the weight of consequences. But one diplomat arrives with a supercharged research assistant while the other works from conventional briefing books.
What makes this particularly interesting is that even when both sides deploy similar AI models, the outcomes won’t be symmetric. These systems are fed each country’s specific interests, historical grievances, risk tolerances, and strategic priorities. They’re learning from their own nation’s past negotiating positions and internalizing the particular way that country frames issues.
An AI system trained on American foreign policy documents will develop different pattern recognition than one trained on Chinese diplomatic archives, even if the underlying technology is identical. In effect, AI becomes an extension of national strategic culture, amplifying existing perspectives rather than providing some neutral, objective view from nowhere.
The human element remains crucial, and for good reason.
Diplomacy isn’t just about optimal outcomes calculated on a spreadsheet. It’s about relationships, trust built over years, the ability to read a counterpart’s body language and sense when to push and when to give ground.
It’s about understanding that a foreign leader facing domestic political pressures might need to say certain things publicly while signaling something different in private. These are precisely the kinds of nuances where AI, for all its computational power, still struggles.
A machine can tell you that a particular phrase has been used in eighty-three previous contexts with specific outcomes, but it takes human judgment to know whether this particular moment calls for precision or deliberate ambiguity.
This creates a paradox at the heart of AI-assisted diplomacy. The technology is most useful for tasks that require speed and comprehensiveness—scanning vast amounts of information, identifying connections, preparing options. But international relations often hinge on things that resist quantification: the unstated implications of a carefully chosen word, the significance of who was placed where at a state dinner, the meaning of a studied silence.
Can an AI distinguish between a calculated insult meant to signal displeasure and an awkward translation error that means nothing? Can it account for the fact that two countries might use identical language in a joint statement while understanding it to mean entirely different things?
The seasoned diplomat, equipped with both deep experience and AI tools, probably represents the optimal combination—at least for now. The human provides context, judgment, and the ability to navigate the unspoken dimensions of negotiation. The AI provides speed, comprehensiveness, and the ability to spot patterns that might escape even expert attention.
But this hybrid model assumes that the human remains firmly in control, using AI as an assistant rather than a replacement. The question is whether that division of labor will hold, or whether the pressure to move faster and process more information will gradually shift more decision-making authority toward the algorithms.
There’s also the trust problem. When you sit across from another country’s negotiators, you’ve traditionally been able to assess their mandate, their constraints, their room to maneuver. But if you know your counterpart is receiving real-time AI suggestions, where does the human judgment end and the algorithmic recommendation begin? Does it even matter? Perhaps not, if the human is genuinely making the final call. But it adds a layer of uncertainty to interactions that already involve plenty of strategic ambiguity.
A more subtle risk involves the potential for groupthink. If foreign ministries around the world increasingly rely on similar AI systems, trained on largely overlapping datasets, do we lose something valuable in terms of diverse diplomatic thinking?
International relations have often benefited from different countries approaching problems from genuinely different analytical frameworks. If everyone’s AI is identifying the same patterns and suggesting similar strategies, we might end up with a more homogeneous diplomatic culture—one that’s perhaps more efficient but potentially less creative and adaptable.
None of this is hypothetical. AI is already embedded in the back-end systems of diplomatic communications. It’s already helping draft cables, analyze foreign media sentiment, and prepare briefing materials. Its presence will grow because the advantages it offers are too significant to ignore.
But unlike AI applications in commerce or entertainment, where mistakes are measured in money or inconvenience, errors in international relations can escalate into genuine crises. A misinterpreted signal, a poorly calibrated response, a failure to detect early warning signs—these aren’t just inefficiencies. They’re potential disasters.
The countries that master this technology, that learn how to combine AI’s capabilities with human diplomatic expertise, will have meaningful advantages in the years ahead. They’ll be able to process information faster, identify opportunities sooner, and enter negotiations better prepared.
But the final decisions—whether to compromise or hold firm, whether to trust or verify, whether to escalate or de-escalate—those will remain human choices, at least for now. The challenge is ensuring that AI enhances that human judgment rather than obscuring it, that it provides better tools for understanding without replacing the need for wisdom.
In diplomacy, as in few other fields, getting that balance right isn’t just desirable. It’s essential.

/// nCa, 27 January 2026
