The 'peace tech' company making bets on war | Hacker Times
Sorry, fortune tellers, the work of predicting the future is now being automated — but thanks to one startup, it could be for the greater good.
Anadyr Horizon, a “peace tech” company, is using predictive AI to mitigate the risk of war, per Business Insider.
Founded in 2024, its clients already include government agencies as well as corporate risk managers with investments in other countries.
Why it matters
Co-founder Arvid Bell, a former Harvard lecturer who taught classes on conflict de-escalation, says that, unlike defense tech, Anadyr’s tech is intended to prevent wars, not fight them.
It comes at a time of increasing global conflict and as a growing number of AI companies, like OpenAI and Meta, compete for high-profile defense contracts.
Plus, violent conflict was estimated to cost the global economy $19T in 2023, and the company’s VC backers think “peace tech” could make billions.
How it works
The software, dubbed North Star, uses AI to simulate how global leaders might react in real-world scenarios (e.g., in the event of economic sanctions or a military blockade) by observing their digital twins, which the company says can even account for factors like sleep deprivation.
It runs thousands of simulations, each with slight variations, to determine the likelihood of a given outcome. And while Anadyr acknowledges it can’t exactly predict the future, it can produce probabilities.
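North Star’s internals aren’t public, but the approach described above resembles Monte Carlo estimation: run the same scenario many times with small perturbations and read the outcome frequencies as probabilities. The sketch below is a toy illustration of that general idea only, not Anadyr’s system; every name and parameter in it (Actor, aggression, fatigue, the pressure knob) is a made-up assumption.

```python
# Illustrative Monte Carlo sketch -- NOT Anadyr Horizon's North Star.
# It shows the general idea the article describes: simulate a scenario
# thousands of times with slight variations and report outcome frequencies
# as probabilities. All names and numbers here are hypothetical.
import random
from collections import Counter
from dataclasses import dataclass


@dataclass
class Actor:
    name: str
    aggression: float  # 0..1, baseline willingness to escalate
    fatigue: float     # 0..1, e.g. sleep deprivation nudging decisions


def decide(actor: Actor, pressure: float, rng: random.Random) -> str:
    """Crude stand-in for a 'digital twin': a noisy threshold decision."""
    score = actor.aggression + 0.3 * actor.fatigue + pressure + rng.gauss(0, 0.1)
    if score > 1.0:
        return "escalate"
    if score > 0.6:
        return "hold"
    return "de-escalate"


def run_scenario(pressure: float, seed: int) -> str:
    """One simulated run with randomly perturbed actors."""
    rng = random.Random(seed)
    a = Actor("Leader A", rng.uniform(0.3, 0.6), rng.uniform(0.0, 0.5))
    b = Actor("Leader B", rng.uniform(0.3, 0.6), rng.uniform(0.0, 0.5))
    moves = {decide(a, pressure, rng), decide(b, pressure, rng)}
    if "escalate" in moves:
        return "conflict"
    if moves == {"de-escalate"}:
        return "resolution"
    return "stalemate"


def estimate_outcomes(pressure: float, n_runs: int = 10_000) -> dict[str, float]:
    """Outcome frequencies across many perturbed runs, read as probabilities."""
    counts = Counter(run_scenario(pressure, seed) for seed in range(n_runs))
    return {outcome: c / n_runs for outcome, c in counts.items()}


if __name__ == "__main__":
    # e.g. compare a baseline against a higher-pressure (sanctions-like) scenario
    print("baseline: ", estimate_outcomes(pressure=0.1))
    print("sanctions:", estimate_outcomes(pressure=0.3))
```

In a real system the decide() stand-in would presumably be something far richer (the article mentions AI-driven digital twins of individual leaders), but the output at the end is the same kind the article describes: probabilities derived from thousands of slightly varied runs, not a single prediction.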
The hope, Bell told BI, is that diplomats and politicians will use those results to make better strategic decisions that promote conflict resolution.
But not everyone is as optimistic about the tech’s potential.
What could go wrong?
Experts fear the tech doesn’t have the aptitude for such high-stakes applications.
While the company has shared little about what its AI is trained on, AI researcher Timnit Gebru warns that an AI trained on open-source information will represent the biases of the loudest voices online, which tend to be Western or European.
A 2024 study on the use of LLMs in diplomatic decision-making found that AI models tend to favor warmongering.
Beyond military uses: If Anadyr’s AI incorrectly predicts that a country will face unfavorable conditions, a corporation pulling its investments out of that country could trigger consequences like mass unemployment and currency depreciation.
Bell recognizes these challenges, but says the company is moving deliberately in developing its model and selecting whom it works with.
For his part, he told BI: “I want to simulate what breaks the world. I don't want to break the world.”
Here’s to hoping all those decision-makers feel the same.