How Organizations Fight Disinformation and Cyber Threats at Scale with AI
The open web has always been a source of intelligence. After all, it’s where news breaks first and where threat actors leave their digital traces. For instance, coordinated campaigns – whether intended to spread false narratives or lay the groundwork for a network intrusion – typically have a public footprint.
Unfortunately, the same properties that make public data valuable to defenders also make it exploitable by the adversaries they defend against. Understanding that tension is now a prerequisite for any serious risk and security leader.
One battlefield, two fronts
Disinformation and cyber threats have traditionally been handled as separate problems, dealt with by separate teams: communications experts managed public messaging about the organization in question, while security teams safeguarded its infrastructure against potential breaches.
Today’s adversaries, however, don’t observe those boundaries. A coordinated influence operation often starts weeks before anyone tries to breach a network – muddying the information environment early enough that genuine warning signs get dismissed as noise. Canada’s National Cyber Threat Assessment 2025–2026 details how state-sponsored actors routinely pair data theft with information ops, drawing on stolen records and publicly available data to generate propaganda targeted at specific audiences.
The same report documents the use of generative AI to run synthetic disinformation campaigns against democratic processes at scale, with Russia and China as the primary attributable actors – though non-state groups now have access to similar tools.
At the organizational level, the convergence is visible in the attack chain itself. In February 2024, an employee at the engineering firm Arup transferred $25 million to fraudsters after joining what looked like a routine video call with colleagues – every person on the call was AI-generated. The episode shows that deepfakes are every bit as damaging financially as they are reputationally, and the incidence of deepfakes shot up by as much as 257% in 2024 alone.
There’s a related problem worth flagging: a loud, sprawling disinformation incident pulls attention in multiple directions at once, which is exactly the point. While the implicated organization is cleaning up the resulting public mess, a more targeted attack is already underway.
Manual defense can’t withstand large-scale attacks
The tools most organizations built their monitoring programs around were not designed for this volume. An analyst reviewing threats manually, one item at a time, cannot keep pace with an automated adversarial system that operates nonstop across dozens of regions simultaneously.
Hiring more analysts doesn’t fix this. Threat actors running AI-assisted phishing, reconnaissance, and content generation at scale move faster than any manual process can track; detection has to be automated to keep pace, and defenders relying on manual processes are falling behind. According to Verizon’s 2024 Data Breach Investigations Report, the human element is involved in around 60% of all breaches – a statistic that reflects how effectively attackers have learned to exploit cognitive limits under volume and velocity.
Automation is the necessary response, albeit not sufficient on its own. Flood an analyst with hundreds of low-confidence alerts, and they start missing the ones that matter. The real value of AI-driven detection is in cutting that noise down to something a human can actually act on, so that when an analyst does pick up a signal, it’s worth their attention.
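To make that concrete, here is a minimal sketch of confidence-based triage, the simplest form of that noise reduction. The Alert fields and the threshold values are illustrative assumptions, not any particular vendor’s API:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    description: str
    confidence: float  # model-assigned score in [0.0, 1.0]

def triage(alerts: list[Alert],
           escalate_at: float = 0.85,
           review_at: float = 0.5) -> dict[str, list[Alert]]:
    """Split a raw alert stream into three queues so that only
    high-confidence signals interrupt an analyst immediately."""
    queues: dict[str, list[Alert]] = {"escalate": [], "batch_review": [], "suppress": []}
    for alert in alerts:
        if alert.confidence >= escalate_at:
            queues["escalate"].append(alert)      # page an analyst now
        elif alert.confidence >= review_at:
            queues["batch_review"].append(alert)  # review on a schedule
        else:
            queues["suppress"].append(alert)      # log only; feed back into model tuning
    return queues
```

The three queues enforce exactly the division of attention described above: only the top queue interrupts a human right away.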
Effective threat monitoring isn’t just a processing problem – it’s a visibility problem. A campaign targeting users in Southeast Asia may not be visible at all from a monitoring system anchored in Western Europe. This is where the practical architecture of threat intelligence matters: collecting signals globally requires infrastructure that can access public web content from many geographic vantage points.
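By way of illustration, a collection layer along these lines might fetch the same public page through vantage points in several regions and compare what each one sees. The proxy endpoints below are hypothetical placeholders, not real services:

```python
import requests

# Hypothetical regional vantage points, placeholders rather than real services.
VANTAGE_POINTS = {
    "sg": "http://proxy-sg.example.com:8080",  # Southeast Asia
    "de": "http://proxy-de.example.com:8080",  # Western Europe
    "br": "http://proxy-br.example.com:8080",  # South America
}

def collect(url: str) -> dict[str, str | None]:
    """Fetch the same public URL from each vantage point; return the
    page body per region, or None where the fetch failed."""
    results: dict[str, str | None] = {}
    for region, proxy in VANTAGE_POINTS.items():
        try:
            resp = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)
            results[region] = resp.text
        except requests.RequestException:
            results[region] = None  # unreachability from one region is itself a signal
    return results
```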
AI as force multiplier – on both sides
The so-called “AI arms race” in cybersecurity has matured beyond speculation. Both sides – attackers and defenders – now rely on AI. But the dynamic is by no means symmetric: adversaries face no governance requirements, no safety reviews, and no auditability obligations, which means they can adopt new capabilities immediately.
Darktrace’s analysis of 2025 cybersecurity trends notes that threat actors are moving toward multi-agent AI systems specialized in autonomous tasks: initial access brokering, surveillance, privilege escalation, and smart data exfiltration. And since they operate without the guardrails that legitimate organizations must follow, their efforts are likely to outpace defenses in the near future.
“Likely”, however, doesn’t mean “certain”. For instance, behavioral anomaly detection, which flags deviations from established baselines, is more resilient against novel attack variants than simply tracking known patterns. Meanwhile, AI-assisted triage can surface high-confidence signals and route lower-priority alerts for batch review, allowing analysts to focus on what matters.
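As a minimal sketch of the baseline approach, consider a detector that keeps a rolling window of some behavioral metric – say, hourly login volume – and flags values that deviate sharply from it. The window size and z-score threshold are illustrative assumptions:

```python
from collections import deque
from statistics import mean, stdev

class BaselineDetector:
    """Flags samples that deviate sharply from a rolling baseline."""

    def __init__(self, window: int = 168, threshold: float = 3.0):
        self.history: deque[float] = deque(maxlen=window)  # e.g. one week of hourly samples
        self.threshold = threshold  # z-score beyond which a sample counts as anomalous

    def observe(self, value: float) -> bool:
        """Record a new sample; return True if it breaks the baseline."""
        anomalous = False
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous
```

Because the baseline is learned from the organization’s own behavior, a never-before-seen attack variant still stands out the moment it pushes a metric outside the norm.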
As for the struggle against disinformation, automated monitoring across languages and platforms can identify patterns – unusual amplification, network formation, timing correlations – that no human team could track manually.
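One of those patterns, timing correlation, is straightforward to sketch: if many distinct accounts push the same link or phrase within a narrow window, the burst is a candidate signal of coordinated amplification. The 60-second window and 20-account minimum below are illustrative assumptions:

```python
from collections import defaultdict

def find_bursts(posts, window_s: int = 60, min_accounts: int = 20):
    """posts: iterable of (timestamp_s, account_id, content_key) tuples,
    where content_key is a normalized link or phrase. Returns bursts in
    which at least min_accounts distinct accounts pushed the same content
    inside one time window."""
    buckets: dict[tuple, set] = defaultdict(set)
    for ts, account, key in posts:
        buckets[(key, int(ts) // window_s)].add(account)
    return [(key, bucket * window_s, len(accounts))  # (content, window start, account count)
            for (key, bucket), accounts in buckets.items()
            if len(accounts) >= min_accounts]
```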
Given the above, it’s no surprise that agentic AI was the dominant theme at the RSA Conference 2025 and will be crucial again this year. Organizations that operationalize these semi-autonomous systems, capable of alert triage, investigation, and initial response while maintaining human oversight at critical decision points, will hold a meaningful defensive advantage.
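The human-oversight half of that design can be as simple as a risk gate: low-risk actions execute automatically, while anything above a threshold waits for analyst approval. A minimal sketch, assuming a five-point risk scale and illustrative action names:

```python
AUTO_APPROVE_MAX_RISK = 2  # assumed scale: 1 (benign) to 5 (disruptive)

def execute_action(action: str, risk: int, approved_by: str | None = None) -> str:
    """Let the agent act on low-risk steps; hold riskier ones for a human."""
    if risk <= AUTO_APPROVE_MAX_RISK:
        return f"auto-executed: {action}"              # e.g. enrich an indicator, open a ticket
    if approved_by is None:
        return f"queued for human approval: {action}"  # e.g. isolate a host, block a domain
    return f"executed with approval from {approved_by}: {action}"
```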
Governance as the differentiator
When speed and scale ramp up without proper governance, they introduce new risks of their own: regulatory exposure, potential misuse of collected data, and fragile operational processes that can’t withstand scrutiny.
Applied to public-data intelligence programs, governance means answering a specific set of questions before scaling: What data is being collected, from which sources, and with what retention period? How is the collected information validated before it informs a decision? Who has access to raw versus processed intelligence, and how is that access logged? What triggers data deletion? How are false positives tracked and fed back into detection models?
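One way to keep those answers honest is to encode them as machine-enforceable policy rather than leaving them in a wiki page. Here is a minimal sketch; the field names, retention periods, and roles are illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

POLICY = {
    "raw_intelligence": {
        "retention": timedelta(days=30),
        "allowed_roles": {"threat-analyst"},                  # raw data: narrow access
    },
    "processed_intelligence": {
        "retention": timedelta(days=365),
        "allowed_roles": {"threat-analyst", "security-lead"}, # processed: wider access
    },
}

def check_access(role: str, tier: str, collected_at: datetime, audit_log: list) -> bool:
    """Grant access only if the role is allowed for the data tier and the
    record is still inside its retention window; log every decision."""
    rule = POLICY[tier]
    expired = datetime.now(timezone.utc) - collected_at > rule["retention"]
    granted = role in rule["allowed_roles"] and not expired
    audit_log.append((datetime.now(timezone.utc), role, tier, granted))
    return granted
```

Expressed this way, retention, access tiers, and audit logging stop being policy statements and become checks the pipeline runs on every request.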
These now constitute genuine operational requirements for running a durable intelligence program. Organizations that build them in from the start can move faster and with more confidence than those that bolt governance on afterward – because their teams trust the data, their outputs hold up to scrutiny, and their programs can scale without accumulating risk.
Conclusion
The organizations managing this landscape well are treating public web visibility as a core strategic capability – not a nice-to-have monitoring feature. They’ve built infrastructure for global signal collection, deployed AI to manage volume and prioritize responses, and invested in governance structures that enable them to act on intelligence quickly without creating new exposure. That combination – access, automation, and accountability – is what effective defense at scale actually requires.