Clionadh Raleigh is president and CEO of the Armed Conflict Location & Event Data Project.
I’ve been collecting conflict data for over 20 years, and cannot recall a time when I was more worried about what the public knows regarding political violence — and how they know it.
When it comes to conflict, everyone has an opinion — often about why it occurs and who is responsible — and it’s very rare that these views are based on evidence. However, it isn’t such casual opinions I’m worried about. I’d surely be guilty of the same if asked about interest rates or health fads, or other topics outside my area of expertise.
Yet in recent years, we’ve seen a confluence of factors emerge (including sourcing from automated, AI-generated social media trawls and heavily politicized information collection) that will inevitably distort conflict data. And it will have a detrimental effect on what people come to know about conflict, how we respond to or engage with threats and risks, whose security we value and whose human rights we uphold.
As conflict rates continue to rise, we need accurate, reliable and good-faith evidence to stem the tide.
Meta CEO Mark Zuckerberg’s recent decision to stop independent fact-checking on Facebook and Threads, starting in the U.S., is a culmination of all this. Of course, it’s not like Facebook’s fact-checking was holding back the wave of misinformation — and the decision to move away from what were often highly politicized judgments does have merit. However, this comes at a moment when we’re experiencing a decline in both the quality and quantity of accurate information; public sentiment has turned against expertise that challenges preconceived ideas; and both public and private information flows prioritize fast data — regardless of quality.
There’s certainly an audience for fast information: Most young people get their news through brief, visual messages — if they get any at all. And in order to grab attention in this ecosystem, conflict news often pushes extremely unreliable death tolls or blame narratives that are simplistic and skewed, resulting in a conflict news cycle that is both brutal and perversely entertaining.
There’s also little pushback against highly exaggerated and unproven fatality counts that are put out to emphasize the depth of destruction. The number of people killed in conflict is extremely difficult to measure accurately — even when that conflict is extensively covered, like the wars in Gaza and Ukraine. In cases of largely forgotten conflicts, like that in Sudan, who is going to check whether it was actually 100,000 or 150,000 people who were killed? (In reality, both these numbers are exaggerations, yet both appeared in print in major publications this week.)
Who checks these figures? My team and I do.
And we can see how this tendency to appeal to the macabre is well adapted to the current news environment. But while we may not be able to change the fact that audiences want fast, dystopian takes, we can still shed light on how things are about to get even worse.
My major worry is about who creates information. Since I began the Armed Conflict Location & Event Data Project and started using in-country researchers and sources to compile and review all reported events of conflict and instability, we’ve seen massive shifts in how information is collected. Today, there are thousands of good sources across the world, publishing — under the most difficult of circumstances — information about how, where and when conflict incidents are occurring. To go through these daily is an extensive, tireless effort.
Moreover, a lot of the time the best of these sources can be in obscure languages, only discussed on local radio or require in-depth understanding of the groups involved. Despite this, automated and AI-based conflict data collection models increasingly feed on social messages and (primarily — if not only) English-language media. There are no existing checks on these systems, simply a hope that the truth is evident through the noise. It isn’t.
Machines can’t process conflict reporting that mentions several places, groups, victims, times or intentions; they can’t distinguish a report from a rumor; online communities can’t interpret them without significant bias; and platforms can’t prioritize single accurate messages over waves of repeated, exaggerated “takes” from people the world over. And yet, all these sources are increasingly pushed as fast, easy alternatives for informing the public and affected communities.
The result is not a reliable narration of conflict — it’s a list of dubious, convenient “facts.” And that this information is now entirely subject to the laws of social media attention is a travesty.
Additionally, for conflict evidence, the push toward homogenization and “open data” is a bad turn. The idea behind open data is that we can have a base level of confidence in data quality if it’s all collected and processed in the same way, removing human interference and bias from the process. But while some data — like where your tax dollars go — is well suited to transparent, open publication, other information, like where ISIS is operating in Niger, is far less adaptable to this process.
There are a lot of inconvenient truths about conflict that come from solid evidence that’s investigated in transparent ways, and the conclusions often substantially differ from what dominant online discourse would have us believe. Political violence in the U.S. is a classic example of this: During 2020, when social justice demonstrations were widespread, we found that 93 percent of these actions were nonviolent (and the remaining 7 percent included violence perpetrated against the demonstrators), showing that repeated attempts to criminalize the movement were fabrications.
Then, in 2024, we revealed that the constantly pushed narrative of likely political violence during the U.S. election campaign was baseless — there was no evidence of anti-progressive or anti-democratic violence, and far-right gangs weren’t coalescing. The country was largely conflict-free. In both cases, solid evidence and data patterns countered a prevailing narrative.
Conflict evidence must be collected outside of the information ecosystem, which prizes exaggeration, narrative, grievance, closed community sources, and generative and automatic models. We may not be able to change how people are ingesting information, but we can and should place guardrails around the evidence we use to inform the public and respond to conflict.