Executive Summary
This document provides a comprehensive analysis of the RESIST 3 framework, a structured, evidence-based approach designed for government communicators to build societal and individual resilience against information threats. Developed by Dr. James Pamment, Director of the Lund University Psychological Defence Research Institute, the framework addresses the escalating global threat that manipulated, false, and misleading information poses to democratic societies, national security, and public safety.
The core of RESIST 3 is a six-step process that guides organizations through the lifecycle of an information threat: Recognise, Early Warning, Situational Insight, Impact Analysis, Strategic Communication, and Tracking Effectiveness. This latest iteration enhances the previous version by placing a greater focus on strengthening societal resilience, acknowledging the unprecedented speed and scale of threats amplified by emerging technologies like Artificial Intelligence (AI), and refining guidance on vulnerability assessment to better understand and protect an organization’s priorities.
Key takeaways include the critical need to distinguish between misinformation, disinformation, and malinformation (MDM); the importance of moving beyond simple monitoring to generate actionable insights using frameworks like ABCDE (Actor, Behaviour, Content, Degree, Effect); and the necessity of a prioritized, risk-based approach to communication responses. The framework advocates for a combination of proactive measures (e.g., public resilience building), reactive responses (e.g., debunking), and long-term capacity-building to create a robust, whole-of-society defense against information manipulation.
1. The Evolving Threat Landscape and the Need for RESIST
We live in an increasingly volatile world where the lines between physical and online threats are blurred. With 5.52 billion internet users and 5.22 billion social media users globally, the information environment is a primary battleground. Disinformation poses a direct threat to democratic societies, capable of compromising national security, inciting civil unrest, and eroding public trust. This is compounded by falling levels of trust in traditional media and governments; in the UK, only about 30% of people report trusting the government.
The RESIST 3 framework was developed to address this challenge, providing communicators with the tools to reduce the impact of manipulated information in a manner consistent with democratic values, such as freedom of expression. Its fundamental aim is to build resilience—the ability to withstand, adapt to, and recover from the adversity posed by information threats.
Defining Key Concepts
The framework provides precise definitions for different types of problematic information and activities:
- Mis/Dis/Mal (MDM): This acronym covers three related issues that are often protected by freedom of speech and not illegal when spread by real people in authentic discussions.
- Misinformation: Verifiably false information shared without an intent to mislead.
- Disinformation: Verifiably false information shared with an intent to deceive and mislead.
- Malinformation: Truthful information twisted or taken out of context to deliberately mislead.
- Information Threats: These represent a step beyond MDM, involving deliberate and often sophisticated efforts to manipulate, harm, or coerce others. They are characterized by activities like the creation of coordinated, inauthentic networks that no longer represent the speech of individuals.
- Foreign Information Manipulation and Interference (FIMI): A specific type of information threat defined as a coordinated, deliberate effort by foreign state or non-state actors to manipulate and disrupt a target country’s political processes and public opinion through deceptive and coercive means.
2. The Six-Step RESIST 3 Framework
RESIST 3 is a modular and adaptable conceptual framework that provides a consistent, six-step process for identifying and tackling information threats. Each step can be used as a standalone tool or as part of a broader, integrated capability.
- Recognise: Identify mis- and disinformation and assess information threats.
- Early Warning: Monitor risks to protect organizational priorities and key audiences.
- Situational Insight: Transform raw data into actionable insights for timely responses.
- Impact Analysis: Assess the impact of threats to prioritize and escalate responses.
- Strategic Communication: Implement effective proactive and reactive communication strategies.
- Tracking Effectiveness: Evaluate communications and processes for continuous improvement.
Advancements in RESIST 3
This version marks a significant step forward from its 2021 predecessor in three key areas:
- Enhanced Focus on Societal Resilience: Strategic communication is positioned as a primary tool for building long-term public trust and better equipping citizens to withstand threats.
- Integration of Emerging Technologies: It acknowledges that AI-generated content and bot campaigns can spread false narratives at unprecedented scale, while also emphasizing that technology is the most powerful tool for identifying and countering these same threats.
- Developed Vulnerability Assessment: There is a stronger emphasis on understanding an organization’s own strengths, weaknesses, and priorities as a key step before countering external threats.
3. Detailed Analysis of the Six Steps
Step 1: Recognise
This initial step focuses on identifying the components of manipulated messages, understanding the narratives they support, recognizing the behavior of those who spread them, and weighing their severity.
Identifying Messages, Narratives, and Behavior:
- Messages: The building blocks of narratives (e.g., a social media post, meme, or flyer). The framework provides the FIRST indicators to analyze manipulative content:
- Fabrication: Manipulated content like forged documents or altered images.
- Identity: Disguised or misleading sources, such as fake social media accounts.
- Rhetoric: Use of an aggravating tone or false arguments to provoke a reaction.
- Symbolism: Exploiting data, events, or history out of context to support an unrelated goal.
- Technology: Exploiting technological advantages, such as bots or coordinated accounts, to trick or mislead.
- Narratives: The stories that shape perceptions of an issue. Communicators should recognize common manipulative narrative types:
- Polarisation: Creating narratives where there can be no middle ground, often using strong emotions.
- Social Proof: Using misleading local incidents as “proof” for a broader ideological narrative.
- Grievances: Exploiting genuine community grievances to incite anger.
- Conspiracies: Connecting new events to pre-existing conspiracy theories.
- Behavior: Assessing whether an account’s actions suggest inauthenticity. Indicators include suspicious account details, automation (identical posts), coordinated timing, trolling, crowding out genuine debate, targeting vulnerable communities, and doxing.
The Role of Artificial Intelligence (AI): AI tools enable threat actors to produce and distribute MDM at a greater scale, adapt it across more media formats (text, images, video, audio), seed it to more platforms, translate it into more languages, and tailor it to niche audiences. The “CopyCop” case study illustrates how Russian-led operations used AI to create inauthentic media outlets and amplify disinformation to sow geopolitical divisions.
Assessing Severity: A matrix can be used to evaluate an actor’s goals, actions, methods, and the effects of their activity to determine the overall severity and potential for harm.
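As an illustration of how such a matrix might be operationalized, the sketch below combines ratings for the four dimensions the framework names (goals, actions, methods, effects) into an overall severity band. The 0–3 scale, the weighting, and the band thresholds are assumptions for this sketch only; RESIST 3 does not prescribe numeric scores.

```python
# Illustrative only: RESIST 3 describes a severity matrix over an actor's
# goals, actions, methods, and effects, but does not define numeric scoring.
# The 0-3 scale and the band cut-offs below are assumptions.
def severity_score(goals: int, actions: int, methods: int, effects: int) -> str:
    """Combine four 0-3 ratings into an overall severity band."""
    for rating in (goals, actions, methods, effects):
        if not 0 <= rating <= 3:
            raise ValueError("each rating must be between 0 and 3")
    total = goals + actions + methods + effects  # simple unweighted sum
    if total >= 9:
        return "severe"
    if total >= 5:
        return "moderate"
    return "limited"

# Example: a hostile actor with clear goals and coordinated methods
# but limited observed effects so far.
band = severity_score(goals=3, actions=2, methods=3, effects=1)
```

In practice an organization would calibrate the scale and thresholds against its own risk appetite rather than reuse these illustrative numbers.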
Step 2: Early Warning
This step focuses on establishing a risk-based monitoring system to protect priorities and provide advanced warning of emerging threats.
- Focused Monitoring: Rather than broad monitoring, efforts should be focused on risks identified in documents like the UK’s National Risk Register (NRR). Known risks should be analyzed for their potential MDM impact.
- Risk Assessment: A systematic matrix can be used to define risks, associated narratives, likely actors, and potential impact across areas like public safety, reputation, and policy implementation.
- Utilizing Technology: A range of commercial and off-the-shelf tools, including AI-powered systems, can support monitoring by verifying content, tracking narratives, and flagging suspicious activity.
Step 3: Situational Insight
Effective early warning systems must produce actionable insights that answer the question, “So what?”
- Insight Reports: Monitoring data should be distilled into concise insight reports (daily, weekly, or ad hoc) for senior leaders and policy advisors. These reports should include a top-line summary, recommendations, and analysis of key narratives, trends, and audience engagement.
- The ABCDE Briefing Framework: Adopted by NATO, this method structures short-form briefings for non-specialists to quickly grasp an information threat:
- Actor: Who is involved?
- Behaviour: What activities are they exhibiting?
- Content: What are they creating and distributing?
- Degree: How far and to what extent is it spreading?
- Effect: What is the overall impact and who is affected?
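To make the structure concrete, the five ABCDE questions can be captured as a simple record so that briefings are consistently filled in. This is a minimal sketch, not part of the framework itself; the field names simply mirror the five questions, and the one-line summary format is an assumption.

```python
from dataclasses import dataclass

# Hypothetical sketch: a record type mirroring the ABCDE briefing questions.
# Nothing here is prescribed by RESIST 3; it only enforces that every
# briefing answers all five questions.
@dataclass
class ABCDEBriefing:
    actor: str      # Who is involved?
    behaviour: str  # What activities are they exhibiting?
    content: str    # What are they creating and distributing?
    degree: str     # How far and to what extent is it spreading?
    effect: str     # What is the overall impact and who is affected?

    def summary(self) -> str:
        """One-line top-line summary for a non-specialist reader."""
        return (f"{self.actor} is {self.behaviour}, spreading {self.content} "
                f"({self.degree}); effect: {self.effect}")

# Example briefing entry.
briefing = ABCDEBriefing(
    actor="an inauthentic account network",
    behaviour="posting in coordinated bursts",
    content="AI-generated articles",
    degree="trending in two regions",
    effect="eroding trust in local authorities",
)
```

Encoding the questions as required fields means an incomplete briefing fails loudly at creation time rather than reaching a decision-maker with gaps.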
Step 4: Impact Analysis
This stage involves systematically assessing threats to prioritize resources and determine if, when, and how to respond.
- Structured Analysis: Using the findings from the “Recognise” step, communicators can weigh the evidence to categorize the threat as MDM, Harmful Speech, or FIMI. This helps determine the required level of escalation and coordination.
- Expressing Uncertainty: As analysis often relies on incomplete information, it is crucial to express confidence levels in assessments. The framework suggests a simple High [H], Medium [M], or Low [L] confidence rating for propositions.
- Prioritisation Thresholds: The framework proposes a tiered system to guide action, ensuring responses are proportional to the threat level.
| Priority Level | Description | Recommended Actions |
| --- | --- | --- |
| High | Significant risk to the public with high media likelihood. Requires immediate attention and escalation. | Make senior staff and other government bodies aware. Prepare for a rapid, cross-government response. |
| Medium | Negative effect on a policy area or departmental reputation; trending online with potential for harm. Requires a response. | Make senior and policy advisors aware. Investigate and prepare press lines based on known facts. |
| Low | Potential to affect public perceptions but with limited circulation. The debate should be monitored, but intervention is not required. | Share insight within the communications team. Conduct a baseline analysis and track any changes. |
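The tiering logic above can be sketched as a small decision function. The boolean signal names (`public_safety_risk`, `trending_with_harm`) are assumptions introduced for this illustration; in a real deployment they would come from the impact-analysis assessments, not hard-coded flags.

```python
# Illustrative sketch of the tiered prioritisation thresholds.
# Signal names are hypothetical, not defined by RESIST 3.
def prioritise(public_safety_risk: bool, trending_with_harm: bool) -> str:
    """Map assessed threat signals to a priority tier."""
    if public_safety_risk:
        # High: escalate - senior staff, rapid cross-government response.
        return "High"
    if trending_with_harm:
        # Medium: respond - brief advisors, prepare press lines.
        return "Medium"
    # Low: monitor - baseline analysis, track changes.
    return "Low"
```

Ordering the checks from most to least severe guarantees the proportionality the framework calls for: a threat is always assigned the highest tier whose criteria it meets.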
Step 5: Strategic Communication
If a decision is made to act, a range of communication options are available. The goal is to deliver “the truth well told” and avoid inadvertently amplifying MDM.
- Proactive Communication (Building Resilience):
- Public Information: Providing evidence-based, factual content on high-risk policy areas.
- Public Resilience Building: Improving media literacy to empower individuals to fact-check information.
- Building Trust: Adopting transparent, timely, and whole-of-society communication approaches.
- Counter-Brand Campaigning: Imposing a reputational cost on adversaries who persistently spread MDM.
- Reactive Communication (Responding to Threats):
- No Communication Action: A strategic decision to only monitor the situation.
- Debunking: Exposing false information, which carries the risk of amplifying the original narrative.
- Counter-Narrative: Promoting a factual narrative without directly referencing the harmful one.
- Policy Response Communication: Explaining policy levers used to address the threat (e.g., sanctions).
- Crisis Communications: Delivering accurate, timely, and trusted information during unfolding events.
- Capacity-Building Mechanisms:
- Developing relationships with trusted voices and partners.
- Adopting a whole-of-society approach by collaborating with the private sector, media, and civil society.
- Using frameworks like the Government Communications OASIS model (Objectives, Audience/Insight, Strategy/Ideas, Implementation, Scoring/Evaluation) to ensure a strategic campaign mindset.
Step 6: Tracking Effectiveness
Evaluation is a continuous process of learning and adaptation, crucial for improving responses to evolving threats. It involves two distinct aspects: evaluating the communications themselves and evaluating the response process.
Evaluating Communications: The Government Communications Evaluation Cycle provides a framework for setting objectives and measuring impact across six stages:
| Evaluation Stage | Focus | Example Metrics for MDM Response |
| --- | --- | --- |
| 1. Inputs | Evidence-based planning | Resources allocated, evidence used, content created, channels selected. |
| 2. Outputs | Audience experience | Reach of counter-disinformation content, engagement levels, channel performance. |
| 3. Outtakes | Audience beliefs/feelings | Understanding of messages, trust in official sources, use of media literacy techniques. |
| 4. Outcomes | Audience behavior | Information-sharing behaviors, use of fact-checking resources, reporting of content. |
| 5. Impact | Linking inputs to outcomes | Reduced spread of MDM, enhanced public trust, strengthened democratic discourse. |
| 6. Learning | Strategic insights | Lessons on what worked, identification of new techniques and emerging threats. |
Evaluating Response Processes: Effective communication requires an efficient delivery mechanism. Organizations must map and regularly evaluate their response processes by assessing:
- Speed and Efficiency: Time from identification to assessment and response.
- Decision-Making: Clarity of roles, consistency in assessments, and effectiveness of escalation.
- Collaboration: Information sharing between internal teams and external partners.
- Resources: Adequacy of staff, training, tools, and budgets.
- Learning: Flexibility of procedures and integration of feedback loops.
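The "Speed and Efficiency" measure above lends itself to a simple concrete metric: the elapsed time from identification to response. A minimal sketch, assuming the organization timestamps both events (the function and field names are hypothetical, not part of RESIST 3):

```python
from datetime import datetime, timedelta

# Illustrative sketch: compute the identification-to-response lag that the
# framework suggests evaluating. Timestamp sourcing is assumed.
def response_lag(identified: datetime, responded: datetime) -> timedelta:
    """Elapsed time from threat identification to communication response."""
    if responded < identified:
        raise ValueError("response cannot precede identification")
    return responded - identified

# Example: a threat identified at 09:00 and answered at 13:30 the same day.
lag = response_lag(datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 13, 30))
```

Tracked over many incidents, this lag gives a baseline against which process changes (clearer escalation paths, better tooling) can be evaluated.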
4. Conclusion
The RESIST 3 framework offers an adaptive and comprehensive system for government communicators to build resilience against information threats. It recognizes that countering manipulated information is an ever-evolving challenge that requires a sophisticated, multi-faceted, and whole-of-society approach. By systematically applying the six steps—from recognition and analysis to strategic response and evaluation—organizations can better protect democratic values, ensure public safety, and maintain the integrity of the information environment.