This article explains how foreign governments use trolls, bots, fake media, and social engineering to distort opinion, harvest intelligence, and weaken rivals.
Social Media Is Now a Foreign Operations Platform
Foreign governments no longer need to smuggle leaflets across borders or rely only on state TV. They can now use social platforms, fake news sites, covert influencers, hacked material, and professional networking scams to shape debate inside a rival country at low cost and with plausible deniability. Public reporting from U.S., EU, platform, and security sources points most often to Russia, China, and Iran, but the methods matter as much as the actors.
This is not just “online propaganda.” It is a blended playbook: manipulation, interference, reconnaissance, intimidation, and sometimes espionage wrapped into the same campaign. The goal is usually not to mind-control an entire population. It is to flood the information space, deepen mistrust, exploit existing grievances, and make the target society easier to divide and harder to govern.
Influence, Interference, and Espionage Are Not the Same Thing
That distinction matters. Brookings notes that foreign influence and foreign interference are often conflated, but they are not identical: influence can be overt, while interference is covert, deceptive, or coercive. GAO similarly frames foreign disinformation as part of broader foreign malign influence, including fake social accounts, hidden websites, and other deceptive techniques used to alter perceptions and behavior.
The cleanest way to read this article, then, is in three layers. Influence is the message. Interference is the hidden manipulation behind it. Espionage is the theft, access-building, and targeting that make the message more precise. Modern state campaigns often combine all three.
The Playbook Is Blunt
Most foreign social-media operations follow the same basic sequence:
- Map the target: find fractures, grievances, communities, and vulnerable individuals.
- Build false credibility: fake personas, cloned news sites, covert outlets, or proxy influencers.
- Inject content: memes, videos, false claims, selective leaks, emotional images, or “news” articles.
- Amplify it: bots, coordinated accounts, paid ads, or cross-platform reposting create false momentum (a rough detection sketch follows after this list).
- Harvest access: fake recruiters, fake journalists, or fake contacts build rapport for phishing, malware, or intelligence collection.
- Adapt fast: when platforms remove networks, operators shift to new domains, new apps, and new fronts.
That is why these campaigns are hard to kill. They are not one account or one lie. They are ecosystems.
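To make the amplification step concrete, here is a minimal sketch of one crude signal defenders look for: different accounts pushing near-identical wording within minutes of each other. The post data, time window, and similarity threshold are illustrative assumptions, not any platform's actual detection pipeline.

```python
# Minimal sketch: flag bursts of near-identical posts from different accounts,
# one crude sign of coordinated amplification. All data and thresholds here are
# toy assumptions for illustration.
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    # (account, minutes_since_start, text)
    ("user_a", 0,  "BREAKING: officials hiding the truth about the blackout"),
    ("user_b", 2,  "BREAKING officials are hiding the truth about the blackout!!"),
    ("user_c", 3,  "Breaking: officials hiding the truth about the blackout"),
    ("user_d", 55, "Local bakery wins regional bread award"),
]

WINDOW_MINUTES = 10          # posts this close in time are candidates
SIMILARITY_THRESHOLD = 0.85  # near-duplicate wording

def similar(a: str, b: str) -> float:
    """Cheap lexical similarity between two post texts, from 0 to 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

suspicious_pairs = []
for (acct_a, t_a, text_a), (acct_b, t_b, text_b) in combinations(posts, 2):
    if acct_a != acct_b and abs(t_a - t_b) <= WINDOW_MINUTES:
        score = similar(text_a, text_b)
        if score >= SIMILARITY_THRESHOLD:
            suspicious_pairs.append((acct_a, acct_b, round(score, 2)))

# Distinct accounts pushing near-identical wording within minutes of each other.
print(suspicious_pairs)
```

Real detection stacks layer far more signals on top, such as account creation patterns, shared infrastructure, and ad spending, but the core idea of spotting manufactured momentum starts with patterns like this.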
Russia Built the Modern Foreign Troll-and-Leak Model
Russia’s playbook is the most documented. The 2018 U.S. indictment of the Internet Research Agency described Russian operators posing as Americans, using false U.S. personas, exploiting divisive political and social issues, stealing identities, buying ads, and even organizing real-world rallies through fake grassroots accounts. This was not random trolling. It was a structured information-warfare campaign designed to interfere in U.S. politics while hiding its Russian origin.
That model has evolved, not disappeared. Meta says Russia remains the number one source of covert influence operations it has disrupted since 2017, and many of those networks used fake likes, cross-platform activity, and large webs of spoofed websites. Meta also identified Doppelganger as the largest and most persistent of these operations, built around fake domains that imitate legitimate media and government sites.
U.S. officials say the Russian model now goes beyond troll farms. In September 2024, the Justice Department announced the seizure of 32 domains tied to the Russian government-backed Doppelganger campaign, alleging the network used cybersquatted media lookalikes, fabricated influencers, AI-generated content, paid social ads, and fake profiles to push Kremlin narratives and influence voters in the United States and other countries. Treasury separately said RT executives had covertly recruited unwitting American influencers through a front company to push content aligned with Russian interests.
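As a rough illustration of how defenders flag the cybersquatted lookalikes described above, the sketch below compares suspect domains against known media domains by name similarity. The domain lists, threshold, and scoring are hypothetical assumptions for this article, not the seized Doppelganger infrastructure or any vendor's tooling.

```python
# Rough heuristic sketch: flag domains whose registrable name closely matches a
# known media brand but uses a different top-level domain. Domains and the
# threshold are hypothetical examples.
from difflib import SequenceMatcher

KNOWN_MEDIA = ["washingtonpost.com", "spiegel.de", "lemonde.fr"]
suspects = ["washingtonpost.pm", "spiegel.ltd", "totally-unrelated.org"]

def lookalike_score(suspect: str, legit: str) -> float:
    """Compare the registrable names and ignore the top-level domain."""
    suspect_name = suspect.rsplit(".", 1)[0]
    legit_name = legit.rsplit(".", 1)[0]
    return SequenceMatcher(None, suspect_name, legit_name).ratio()

for suspect in suspects:
    best = max(KNOWN_MEDIA, key=lambda legit: lookalike_score(suspect, legit))
    score = lookalike_score(suspect, best)
    if score >= 0.9 and suspect != best:
        print(f"{suspect} imitates {best} (name similarity {score:.2f})")
```

String similarity alone produces false positives, so analysts typically combine it with registration dates, hosting overlaps, and certificate data before calling anything a lookalike.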
The important point is not just that Russia lies online. It is that Russia repeatedly mixes fake audiences, fake media brands, covert funding, and hacked or laundered material into one package. That is what makes the operation scalable and deniable at the same time.
China Prefers Scale, Laundering, and Persistence
China’s foreign influence operations have often looked different. Instead of one famous troll farm, public reporting points to sprawling, persistent, low-engagement networks that post across platforms in high volume. Google says the PRC-linked DRAGONBRIDGE network kept leaning into U.S. wedge issues, Taiwan, and major news events in 2024, with spammy content and growing use of AI, but still struggled to generate meaningful organic engagement. In 2023 alone, Google disrupted more than 65,000 instances of DRAGONBRIDGE activity across YouTube and Blogger, and more than 10,000 additional instances in the first quarter of 2024.
That low engagement does not make the tactic harmless. It shows the method. China-linked operations often rely on volume, repetition, multilingual posting, and persistence rather than one explosive viral hit. Meta said a Chinese-origin network it removed in 2022 targeted U.S. domestic politics ahead of the midterms and Czechia’s policy toward China and Ukraine, marking a notable shift toward more direct targeting of foreign domestic debate.
Another part of the PRC playbook is content laundering. In late 2024, Google’s Threat Intelligence Group detailed GLASSBRIDGE, an umbrella network of firms operating hundreds of domains that posed as independent news sites in dozens of countries while pushing narratives aligned with the political interests of the People’s Republic of China. That matters because fake news brands can outlive a single takedown and make propaganda look like local journalism.
China’s state-linked campaigns also sit inside a broader information strategy. The EEAS reported that throughout 2024, foreign actors in Sub-Saharan Africa increasingly used diplomatic accounts, state-controlled media, local and regional media, and influencers, alongside offline media engagement, to push anti-Western narratives while presenting Russia and China as reliable alternatives. That is not just spam. It is long-horizon narrative positioning.
Iran Blends Narrative Warfare With Social Engineering
Iran’s operations are often more personal. U.S. intelligence said in September 2024 that Iranian malicious cyber actors sent excerpts of stolen, non-public Trump campaign material to individuals associated with President Biden’s campaign and also continued sending stolen material to U.S. media organizations. That is the classic hack-and-leak model: steal first, then weaponize the material through information channels.
Iran also stands out for social engineering. The FBI warns that foreign intelligence services use professional networking sites to target people with security clearances. Microsoft says the Iran-linked actor Crimson Sandstorm has used fictitious social media accounts to build trust with targets and then deliver malware to exfiltrate data. In plain English: fake online relationships are not only for persuasion. They are also for access.
That makes Iranian operations especially dangerous in sectors like academia, defense, research, telecoms, and policy circles. A fake activist, recruiter, journalist, or researcher can function as both a propaganda node and an intelligence collection tool. The same account that shapes narratives can also steal credentials or map a target’s network.
What Changes From Region to Region
The core methods repeat, but the narratives shift by geography. The EEAS reported that in 2024 the MENA information environment saw manipulated and inauthentic content, hate speech, AI-generated visuals, misleading translations, staged footage, and impersonation, all aimed at distorting perception and eroding trust. In Sub-Saharan Africa, foreign and domestic actors exploited elections, security crises, and governance disputes, with increasing use of diplomatic networks, state-controlled media, and influencers.
That is the real pattern: foreign operators rarely invent social fractures from nothing. They plug into local anger, fear, identity politics, war narratives, or distrust that already exist. The message changes by country. The machinery does not.
Bots and AI Matter, But Not in the Way Hype Suggests
AI has made this cheaper and faster. GAO lists artificial intelligence, including deepfakes, as one of the ways foreign governments spread disinformation. The EEAS also documented AI-generated visuals and other synthetic tactics in current information manipulation.
But the evidence does not support lazy panic. Meta said the feared wave of AI-enabled election disinformation in 2024 did not materialize in a significant way on its services and that the overall impact was modest and limited in scope. It also said most covert influence networks it disrupted struggled to build authentic audiences and often used fake likes or followers to look bigger than they were. Brookings made a similar point in 2024: there is little evidence so far that AI-enabled foreign influence has had a meaningful effect on elections, even though the threat is real and evolving.
That is the sober conclusion. AI is an accelerator, not a magic wand. It helps operators make more junk faster, localize messages, fake visuals, and test narratives cheaply. It does not automatically make propaganda persuasive. Human grievances, trusted messengers, and political context still matter more.
What These Campaigns Are Really Trying to Achieve
The goal is usually broader than “make people believe one lie.” Foreign campaigns try to exhaust the audience, pollute the information environment, discredit institutions, and turn every controversy into a trust crisis. Russia has long used spoofed sites, state messaging, and weaponized social media; China has used high-volume fake accounts and disguised media ecosystems; Iran has paired influence work with social engineering and stolen material. Different styles, same strategic aim: weaken rivals from the inside without crossing the threshold of open war.
That is why “did it change millions of minds?” is too narrow a question. A campaign can still succeed if it distracts journalists, pressures political campaigns, harasses dissidents, amplifies cynicism, or forces institutions to spend time cleaning up manufactured chaos. Carnegie’s review of the field makes the deeper problem clear: disinformation is hard to define, harder to measure, and even harder to counter cleanly.
What Readers Should Watch For
The warning signs are usually obvious once you know the pattern; a simple triage sketch follows the list:
- sudden accounts with strong political opinions but thin personal history
- “news” sites with familiar branding but strange domains or no real ownership trail
- identical narratives pushed across many platforms at once
- emotionally loaded AI visuals, staged clips, or badly translated content dropped into a fast-moving crisis
- fake recruiters, journalists, researchers, or activists trying to build rapport unusually fast
- leaks or “exclusive documents” promoted through anonymous accounts and suspicious media fronts
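As a minimal sketch, a few of those signs can be turned into a crude triage score for a single account. The fields, weights, and thresholds below are illustrative assumptions, not a validated model or any platform's feature set.

```python
# Toy triage score built from the warning signs above. A high score means
# "look closer", not "foreign operation"; thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class AccountSnapshot:
    age_days: int           # how long the account has existed
    posts_per_day: float    # average posting rate
    political_share: float  # fraction of posts on hot-button political topics
    has_history: bool       # filled-out profile and older, non-political activity

def red_flag_score(acct: AccountSnapshot) -> int:
    score = 0
    if acct.age_days < 90 and acct.posts_per_day > 20:
        score += 1  # brand-new but hyperactive
    if acct.political_share > 0.9:
        score += 1  # posts about almost nothing except divisive politics
    if not acct.has_history:
        score += 1  # thin personal history behind strong opinions
    return score

suspect = AccountSnapshot(age_days=30, posts_per_day=60.0,
                          political_share=0.97, has_history=False)
print(red_flag_score(suspect))  # 3 of 3 flags on this toy example
```

Genuine new users and passionate partisans trip these flags all the time, which is exactly why a score like this is a prompt to look closer, not a verdict.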
Conclusion
Social media did not create foreign interference. It industrialized it. Today’s state campaigns can mix trolls, bots, fake media, covert influencers, AI-generated content, and espionage into one cheap, deniable system built to manipulate attention and exploit existing fractures. Russia, China, and Iran do not use identical methods, but public reporting shows they all treat digital platforms as tools for pressure, persuasion, access, and disruption.
The blunt truth is this: the most dangerous foreign operation is not always the one that goes viral. It is the one that looks ordinary enough to blend in, credible enough to be shared, and targeted enough to reach the right people at the right moment. That is how online manipulation stops looking like propaganda and starts behaving like strategy.