
Disinformation as a Catalyst for Cyberwar: Legal Frameworks, State Practices, and Pathways to Stability

Introduction

The contemporary global security landscape is increasingly shaped by the pervasive influence of disinformation, a phenomenon that, while historically present in warfare and politics, has been dramatically amplified by the digital age. Disinformation is precisely defined as inaccurate information intentionally disseminated to deceive and cause serious harm, distinguishing it from misinformation, which refers to the accidental spread of inaccurate information.1 This encompasses not only knowingly false content but also the subtle manipulation of true information to create a deceptive impression, all with malicious intent.4 The rapid evolution of the internet and social media has exponentially increased the scale, speed, and precision with which such deceptive narratives can propagate globally.1 Further complicating this environment, advancements in Artificial Intelligence (AI) enable the creation of highly realistic synthetic media, such as deepfakes, which are nearly indistinguishable from authentic content, posing profound new threats to public trust and democratic processes.11

A critical aspect of understanding disinformation for legal and policy responses lies in recognizing the intrinsic link between the “intent” behind its dissemination and the “harm” it is designed to cause. The consistent emphasis across various international documents and scholarly analyses on “intent to deceive” and “intent to cause harm” as core definitional elements of disinformation underscores a fundamental shift in regulatory focus.1 This focus directs the regulatory challenge away from merely the falsity of content, which could inadvertently infringe upon freedom of expression, towards the malicious purpose driving its spread.1 The concept of “harm” in this context is broad, extending beyond physical damage to encompass the erosion of public trust, the subversion of democratic processes, the violation of human rights, and the destabilization of societal cohesion.1 The complex interplay between proving malicious intent and establishing a causal link to these often ambiguous, non-physical consequences presents a significant hurdle for precise legal regulation and enforcement.5

The relationship between disinformation and cyberwarfare is not merely coincidental but deeply symbiotic, forming a dangerous causal loop. Disinformation campaigns are increasingly deployed in coordination with both physical and cyber operations, serving as a long-favored military tactic to achieve strategic objectives.5 Their primary goals include manipulating public opinion, attracting allies, weakening adversaries, sowing discord among populations, and deceiving military forces.3 In this dynamic, disinformation functions as a direct precursor to cyberattacks by eroding trust in institutions, weakening societal cohesion, and creating a fertile ground for exploitation. For instance, fabricated emails or deepfake messages, enabled by disinformation tactics, can trick individuals into revealing sensitive information or bypassing security protocols, thereby directly facilitating phishing, fraud, or system breaches.25 Conversely, cyber operations serve as powerful force multipliers, amplifying the reach and impact of disinformation campaigns.2 This creates a reinforcing cycle: disinformation generates vulnerabilities that cyberattacks exploit, and successful cyberattacks can, in turn, further spread or enable disinformation. This dynamic blurs the traditional lines between “information warfare,” which focuses on the manipulation of information, and “cyberwarfare,” which targets computer systems and networks, as cyber means become integral to information manipulation and vice-versa.2

This report undertakes a comprehensive analysis of this intricate relationship between disinformation and cyberwarfare. It aims to elucidate the international legal frameworks applicable to these phenomena, examine current state practices, and evaluate collaborative efforts by the international community. The report will highlight the inherent challenges in applying existing legal principles to novel digital realities and propose actionable pathways for enhancing global stability and preventing conflict escalation in the information environment.

I. The Evolving Threat Landscape of Disinformation and Cyber Operations

Understanding Disinformation: Definitions, Intent, and Impact

Disinformation is fundamentally characterized by its deliberate intent to deceive and cause harm.1 Its tactics frequently involve crafting false narratives, manipulating emotions such as fear or anger, and exploiting existing societal or political divisions to achieve specific objectives.3 These campaigns are often designed to create confusion, deepen societal divisions, and destabilize targeted societies.28

The impact of disinformation is far-reaching and multifaceted. It affects a broad spectrum of human rights, undermines public policies, and significantly amplifies tensions during emergencies or armed conflicts.1 For instance, false information about safety zones or humanitarian aid in conflict areas can lead civilians into life-threatening situations.21 Beyond immediate physical dangers, disinformation can lead to severe psychological distress, foster radicalization, and directly incite violence.14 Furthermore, it erodes public trust in democratic institutions, governance processes, and the integrity of information itself, thereby weakening societal resilience.3

Beyond the more obvious physical or economic harms, a deeper understanding reveals that disinformation's most potent impact lies in its psychological manipulation of target populations.2 This deliberate targeting of the human mind constitutes “cognitive warfare,” aiming to alter attitudes, beliefs, and ultimately, the will of an adversary.33 By exploiting cognitive biases and heuristics, disinformation can sow confusion, induce decision paralysis, and undermine societal cohesion, thereby rendering populations more vulnerable to other forms of attack.3 This signifies that disinformation is not merely a preparatory step for kinetic or cyber attacks but a distinct weapon in its own right, directly engaging the human element of national security. The challenge for international law, therefore, extends to addressing these non-kinetic, psychological harms that can have profound real-world consequences without resorting to undue censorship, which itself poses risks to fundamental freedoms.1

The Role of Cyber Operations in Disinformation Campaigns

Cyber operations are instrumental in facilitating and amplifying disinformation campaigns through several mechanisms:

  • Mass Amplification and Global Reach: Disinformation campaigns, often augmented by automated botnets and fake accounts, exploit social media platforms to disseminate false narratives rapidly and at global scale.2 Perpetrators exploit platform recommendation algorithms to efficiently target content to specific demographic or ideological groups, maximizing the impact of their deceptive messages.3 This creates an illusion of widespread support or opposition while concealing the true origin of the message.12 A simple detection heuristic for this kind of coordinated amplification is sketched after this list.
  • Creation of Synthetic Media: Advances in AI allow for the generation of highly realistic but fabricated photos, videos (deepfakes), and audio recordings.11 This synthetic media makes disinformation increasingly difficult to distinguish from authentic content, raising significant concerns about its potential to undermine public trust and fuel online extremism.11 Such content can be used to manipulate public opinion or even influence democratic processes.3
  • Targeting Critical Infrastructure: Malicious cyber activities, sometimes integrated with disinformation campaigns, can target critical infrastructure such as energy grids, transportation networks, and financial systems.38 Such attacks can cause severe physical damage, widespread disruption, or even casualties; the Colonial Pipeline ransomware attack, for example, disrupted fuel supplies across much of the U.S. East Coast.41 During such events, disinformation can be used to spread panic, sow misdirection, or justify the attacks.
  • Espionage and Data Weaponization: Cyber espionage facilitates the collection of sensitive information, which can then be weaponized to influence governmental decisions, sow discord among competitors, or undermine public confidence.44 Concerns exist regarding hidden backdoors in technology products, particularly those from state-controlled companies, which could be exploited for espionage and subsequently for cyber warfare purposes.44 This access to data can provide significant economic, political, and military advantages.44
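The amplification mechanics described in the first bullet can be made concrete with a toy example. The following sketch is not drawn from any cited source or platform interface; it is a minimal, hypothetical illustration (invented field names, window size, and threshold) of a heuristic commonly used to flag coordinated inauthentic behaviour: many distinct accounts posting near-identical text within a short time window. Real detection pipelines combine many behavioural signals (posting cadence, account age, follower networks) rather than any single rule.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Minimal illustrative sketch (hypothetical data model): flag clusters of
# accounts that publish near-identical text within a short time window,
# a common heuristic for coordinated amplification by botnets/fake accounts.

def normalize(text: str) -> str:
    """Crude normalization so trivially edited copies still match."""
    return " ".join(text.lower().split())

def find_coordinated_clusters(posts, window_minutes=10, min_accounts=5):
    """
    posts: iterable of dicts with keys 'account', 'text', 'timestamp' (datetime).
    Returns (normalized_text, accounts) pairs where at least `min_accounts`
    distinct accounts posted the same text within `window_minutes`.
    """
    by_text = defaultdict(list)
    for p in posts:
        by_text[normalize(p["text"])].append(p)

    clusters = []
    window = timedelta(minutes=window_minutes)
    for text, group in by_text.items():
        group.sort(key=lambda p: p["timestamp"])
        start = 0
        for end in range(len(group)):
            # shrink the window from the left until it spans <= window_minutes
            while group[end]["timestamp"] - group[start]["timestamp"] > window:
                start += 1
            accounts = {p["account"] for p in group[start:end + 1]}
            if len(accounts) >= min_accounts:
                clusters.append((text, accounts))
                break  # one flag per text is enough for this sketch
    return clusters

if __name__ == "__main__":
    now = datetime(2025, 1, 1, 12, 0)
    demo = [{"account": f"bot{i}", "text": "Breaking: the dam has failed!",
             "timestamp": now + timedelta(seconds=30 * i)} for i in range(6)]
    for text, accounts in find_coordinated_clusters(demo):
        print(f"Possible coordinated amplification: {len(accounts)} accounts -> {text!r}")
```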

Disinformation as a Hybrid Threat and its Escalatory Potential

Disinformation is a core component of “hybrid warfare,” a strategy that deliberately blends conventional military and non-military tactics, as well as covert and overt means, to destabilize societies and blur the lines between peace and conflict.47 Hybrid threats are characterized by their speed, scale, and intensity, facilitated by rapid technological change and global interconnectivity.48

Disinformation possesses significant escalatory potential by fueling hatred, exacerbating polarization, and inciting civil unrest.19 The inherent ambiguity of cyberattacks and disinformation, coupled with the persistent challenges in attributing these activities to specific state or non-state actors, complicates the ability of targeted states to respond proportionately.50 This ambiguity significantly increases the risk of miscalculation and unintended escalation, as states may struggle to discern the true nature and origin of an attack, potentially leading to overreactions.

Disinformation, particularly when cyber-enabled, frequently operates within the “grey zone” – below the traditional threshold of armed conflict.52 This operational space poses a significant challenge for the application of conventional international law, which often requires a clear “use of force” or “armed attack” to justify a kinetic response.50 However, the cumulative and cascading effects of disinformation, especially when combined with cyberattacks on critical infrastructure, can transcend the digital realm and result in severe physical consequences.50 This can potentially trigger a state's inherent right to self-defense or lead to conventional retaliation. The critical aspect here is the existence of a distinct escalatory pathway: disinformation campaigns can destabilize societies and create vulnerabilities, which are then exploited by cyberattacks. If these cyberattacks reach a sufficient level of severity, causing significant damage, injury, or death, they can be deemed an “armed attack,” thereby justifying a kinetic response and potentially escalating to full-scale conventional or even nuclear conflict.52 This highlights the urgent need for clearer international legal thresholds and robust response mechanisms for activities in this “grey zone.”

II. International Legal Frameworks Applicable to Cyber-Enabled Disinformation

Applicability of Existing International Law to Cyberspace

There is a broad and increasingly firm consensus among states that existing international law, including the foundational principles of the UN Charter and International Humanitarian Law (IHL), applies to cyberspace.38 This widespread agreement is rooted in the principle that international law is “tech-neutral,” meaning its rules apply irrespective of the technology used.68

UN Charter Principles:

  • Sovereignty: States generally affirm that the principle of sovereignty extends to state conduct in cyberspace and encompasses jurisdiction over cyber infrastructure located within their territory.68 A hostile cyber operation or its effects on a state's territory, if attributable, can constitute a violation of sovereignty.70 However, some states, notably the UK, maintain a contested interpretation, arguing that international law does not recognize a specific rule of sovereignty that would be automatically violated by remote cyber intrusions per se.73
  • Non-Intervention: This well-established norm of customary international law prohibits states from coercively intervening in the internal or external affairs of other states.68 Digital interference, such as manipulating election results or disrupting governmental functions, can constitute prohibited intervention if it is coercive and impacts a state's political, economic, or social system.69 A wide-scale and targeted disinformation campaign leading to civil unrest could potentially violate this principle, as it aims to coerce a state's decisions within its domaine réservé.69
  • Prohibition on the Use of Force (Article 2(4)): Article 2(4) of the UN Charter requires all Members to refrain in their international relations from the threat or use of force against the territorial integrity or political independence of any state.50 While this prohibition applies regardless of the means employed, it is generally understood to be limited to armed force or to operations that produce effects comparable in scale and severity to armed force.53 A cyber operation that causes, or is reasonably likely to cause, physical damage to property, loss of life, or injury to persons, or that permanently disables critical infrastructure, would fall under this prohibition.40 Debate continues over whether intangible damage or purely economic harm, without physical consequences, can also constitute a “use of force”.68
  • Self-Defense (Article 51): Article 51 of the UN Charter codifies the inherent right of individual or collective self-defense if an “armed attack” occurs against a Member State.50 The critical challenge lies in defining the threshold at which a cyber operation qualifies as an “armed attack.” General consensus suggests it must be comparable in severity and impact to a kinetic armed attack, resulting in significant death, injury, or substantial material damage or destruction.50 Importantly, the right to self-defense does not necessitate a response using the same means; a kinetic response to a cyberattack is permissible if the threshold of an armed attack is met.58

International Humanitarian Law (IHL) in Cyber Conflict:

IHL applies to cyber activities conducted in connection with an armed conflict, or where the cyber activities themselves reach a level of violence sufficient to be characterized as an armed conflict.61 The fundamental objective of IHL is to limit the effects of armed conflict and to protect those who do not, or no longer, participate in hostilities, particularly civilians.61

  • Principles of Distinction, Proportionality, and Necessity: These foundational IHL principles are crucial for regulating cyber operations during armed conflict. The principle of distinction mandates that attacks must be directed only against legitimate military objectives, distinguishing between combatants and non-combatants/civilians.50 Proportionality prohibits attacks expected to cause incidental civilian harm (loss of life, injury, or damage to civilian objects) that would be excessive in relation to the anticipated concrete and direct military advantage.50 The principle of military necessity justifies measures needed to defeat the enemy as quickly and efficiently as possible, provided they are not prohibited by the law of war.76 Applying these principles to cyber operations requires careful consideration of the inherent interconnectivity between military and civilian networks and the potential for non-kinetic effects to cause incidental harm, which should be included within the meaning of “incidental harm” for proportionality assessments.61

A critical tension exists within IHL concerning information manipulation during armed conflict, particularly regarding the distinction between a “ruse of war” and “perfidy.” Historically, IHL has been permissive towards forms of information manipulation, such as propaganda, considering them legitimate “ruses of war”.5 These tactics are allowed if they are intended to mislead an adversary or induce reckless action but do not violate other applicable rules of international law.5 This traditional permissiveness is increasingly challenged by the unprecedented scale, speed, and precision of modern digital disinformation, especially when it directly targets civilian populations.19 In contrast, acts of “perfidy” – feigning protected status (e.g., surrender or civilian status) to gain a military advantage – are explicitly prohibited because they exploit IHL's safeguards and undermine their integrity.5 The crucial distinction lies in whether disinformation exploits these protections or, more gravely, incites war crimes or violence against civilians, which directly violates the fundamental IHL principles of distinction and proportionality.21 The evolving use of disinformation to directly target civilians, rather than solely combatants, marks a troubling shift, as it can lead to life-threatening decisions for individuals (e.g., false information about safety zones or evacuation routes) and severely undermine humanitarian efforts by discrediting aid organizations.19 This highlights a significant gap in how traditional IHL, primarily designed for kinetic warfare, fully addresses the non-kinetic yet profoundly harmful effects of digital disinformation on civilian populations.

Challenges of Attribution and State Responsibility in Cyber-Enabled Disinformation

Attributing internationally wrongful acts to a specific state is a complex problem in international law, a complexity significantly amplified when cyber operations are involved due to their inherent nature.78 The technical difficulties in tracing the origin of cyber operations and definitively determining authorship pose substantial challenges for legal attribution.50 This is because cyber operations can be conducted remotely, anonymously, and often involve routing through third countries, making definitive identification difficult.26

State responsibility arises under international law if cyber operations are conducted by state organs, by persons or entities exercising governmental authority, or by non-state actors acting under the state's instructions, direction, or control.70 However, the precise degree of control required for attributing the conduct of non-state actors to a state (e.g., “effective control” versus “overall control”) remains a subject of ongoing debate among legal scholars.60 The threshold for state liability under Article 8 of the Articles on Responsibility of States for Internationally Wrongful Acts (ARSIWA) and the “effective control” test is notably high, leading to discussions about whether it should be extended to include “overall control”.78

While allegations of wrongful acts should be substantiated, international law does not impose a legal obligation to publicly disclose the evidence upon which an attribution is based.60 States frequently withhold such evidence to protect sensitive intelligence methods, further complicating transparency and accountability.5 This practice, while understandable from a national security perspective, contributes to a persistent and inherent difficulty of attributing cyber operations, particularly those involving disinformation, to a specific state actor.50 This creates a profound “attribution gap.” This gap directly impedes the invocation of state responsibility under customary international law.60 Without clear, widely accepted, and often publicly verifiable attribution, it becomes exceedingly difficult for victim states to apply countermeasures, impose sanctions, or hold responsible states accountable for internationally wrongful acts.60 This pervasive lack of accountability can embolden malicious actors, fostering a permissive environment for state-sponsored cyber-enabled disinformation campaigns, and significantly increasing the risk of escalation due to unaddressed provocations and a breakdown in deterrence.52
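As an aside on the technical side of this problem, the sketch below illustrates, under wholly hypothetical assumptions (invented indicator names, weights, and thresholds), why technical attribution typically produces graded confidence assessments rather than proof: analysts weigh independent indicator families, any of which can be spoofed in a false-flag operation, so even a strong combined score supports only a rebuttable judgment. This technical uncertainty feeds the legal “attribution gap” discussed above.

```python
# Purely illustrative sketch of why technical attribution yields graded
# confidence rather than proof: independent indicator families are weighted
# and combined into a qualitative assessment. Indicator names, weights, and
# thresholds are hypothetical and not taken from any real methodology.

INDICATOR_WEIGHTS = {
    "infrastructure_overlap": 0.35,   # shared C2 servers, registration patterns
    "tooling_similarity": 0.25,       # malware code reuse, build artefacts
    "ttp_match": 0.20,                # tactics, techniques, and procedures
    "targeting_pattern": 0.10,        # victims consistent with known interests
    "linguistic_artefacts": 0.10,     # language settings, apparent working hours
}

def attribution_assessment(observed: dict[str, float]) -> tuple[float, str]:
    """
    observed maps indicator name -> strength in [0, 1] for a candidate actor.
    Returns a weighted score and a qualitative confidence band. Indicators can
    be spoofed (false flags), which is why the output is never 'certain'.
    """
    score = sum(INDICATOR_WEIGHTS[k] * min(max(v, 0.0), 1.0)
                for k, v in observed.items() if k in INDICATOR_WEIGHTS)
    if score >= 0.7:
        band = "high confidence (still rebuttable)"
    elif score >= 0.4:
        band = "moderate confidence"
    else:
        band = "low confidence"
    return score, band

if __name__ == "__main__":
    evidence = {"infrastructure_overlap": 0.8, "tooling_similarity": 0.6,
                "ttp_match": 0.7, "linguistic_artefacts": 0.3}
    score, band = attribution_assessment(evidence)
    print(f"Candidate actor score: {score:.2f} -> {band}")
```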

Human Rights Law and Freedom of Expression in the Context of Disinformation

Human rights law, notably Article 19 of the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights (ICCPR), serves as a crucial framework for protecting freedom of expression and the right to seek, receive, and impart information.1 This protection extends to a wide range of speech, including critical commentary, irony, satire, parody, humor, and even erroneous interpretations of facts.1

Restrictions on freedom of expression are permissible only in exceptional and narrowly defined circumstances. Such restrictions must be prescribed by law, serve a legitimate purpose (e.g., protection of rights or national security), and be necessary and proportionate to achieve that purpose.1 They must not be used to unduly stifle legitimate speech or exacerbate societal ills.1

However, a clear prohibition exists under international law (Article 20(2) ICCPR) against propaganda for war or any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence.1 This specific prohibition reflects a recognition of the direct link between certain forms of speech and the potential for severe harm, including atrocity crimes.14

A core challenge for the international legal framework is the inherent tension between upholding freedom of expression and effectively countering harmful disinformation.1 The UN and other international bodies consistently advocate for responses rooted in human rights, emphasizing that measures should promote digital literacy, transparency, and independent media rather than imposing overbroad restrictions or censorship.1 This approach suggests a strategic shift from direct content prohibition (except for clear incitement to violence) towards building societal resilience and empowering individuals to critically evaluate information, thereby fostering a more robust information ecosystem.1 The difficulty is particularly pronounced with state-sponsored disinformation, where the malicious intent to deceive and cause harm is present, but direct legal prohibitions under existing frameworks are often limited, creating a “normative gap” below the threshold of “use of force”.5 This gap necessitates a nuanced approach that balances the protection of fundamental freedoms with the imperative to mitigate the real-world harms caused by intentional manipulation.

Principle | Core Definition/Purpose | Application to Cyber-Enabled Disinformation | Key Challenges/Ambiguities | Relevant Sources
Sovereignty | The authority of a state to exercise exclusive power over its territory, free from external interference. | Extends to state conduct in cyberspace and jurisdiction over cyber infrastructure within its territory. Hostile cyber operations or their effects on a state's territory can violate sovereignty. | Debate on whether remote cyber intrusions per se violate sovereignty; defining the threshold of interference. | 68
Non-Intervention | Prohibits states from coercively intervening in the internal or external affairs of other states. | Digital interference, such as manipulating elections or disrupting governmental functions, can constitute prohibited intervention if coercive and impacting a state's political, economic, or social system. Wide-scale disinformation campaigns causing civil unrest may violate this. | Defining “coercion” in the digital realm; establishing a causal link between disinformation and specific outcomes. | 68
Prohibition on Use of Force (Art. 2(4) UN Charter) | Prohibits the threat or use of armed force against the territorial integrity or political independence of any state. | Applies to cyber operations that produce effects comparable in scale and severity to armed force, such as physical damage, loss of life, injury, or permanent disablement of critical infrastructure. | Whether intangible damage, purely economic harm, or psychological effects without physical consequences constitute “use of force.” | 68
Self-Defense (Art. 51 UN Charter) | Inherent right of individual or collective self-defense if an “armed attack” occurs. | A cyber operation qualifies as an “armed attack” if its severity and impact are comparable to a kinetic armed attack, resulting in significant death, injury, or substantial material damage/destruction. The response need not be cyber. | Defining the precise threshold for a cyber operation to constitute an “armed attack.” | 50
Distinction (IHL) | Mandates that attacks must be directed only against legitimate military objectives, distinguishing between combatants and non-combatants/civilians. | Cyberattacks must target military objectives and not be indiscriminate. Civilian cyber infrastructure is presumed not to be used for military action in case of doubt. | Interconnectivity of military and civilian networks; identifying dual-use infrastructure. | 50
Proportionality (IHL) | Prohibits attacks expected to cause incidental civilian harm (loss of life, injury, or damage to civilian objects) that would be excessive in relation to the anticipated concrete and direct military advantage. | Requires assessing potential incidental harm to civilians from cyber operations, including non-kinetic effects, in relation to military advantage. | Quantifying and predicting civilian harm from non-kinetic cyber effects; assessing “excessive” harm. | 50
Necessity (IHL) | Justifies measures needed to defeat the enemy as quickly and efficiently as possible, provided they are not prohibited by the law of war. | Cyber operations must be militarily necessary and not prohibited by IHL. | Balancing military advantage with humanitarian considerations in cyber operations. | 76
Freedom of Expression (IHRL) | Protects the right to seek, receive, and impart information and ideas through any media, regardless of frontiers. | Applies to online content and information dissemination. Responses to disinformation must promote and protect this right, avoiding undue censorship. | Balancing freedom of expression with the need to counter harmful disinformation; risk of overbroad restrictions. | 1
Prohibition of Incitement (IHRL) | Prohibits propaganda for war or any advocacy of national, racial, or religious hatred that constitutes incitement to discrimination, hostility, or violence (Article 20(2) ICCPR). | Applies to disinformation that crosses the high threshold of inciting discrimination, hostility, or violence, or that constitutes propaganda for war. Such content must be prohibited by law. | Establishing intent to incite; defining the legal threshold for incitement in digital contexts. | 1

III. National Doctrines and State Practices in the Information Environment

Comparative Analysis of Key State Approaches

States globally are developing diverse doctrines and practices to navigate the complexities of disinformation and cyberwarfare, reflecting varied strategic priorities and interpretations of international law.

United States:

The United States firmly asserts that existing international law applies to cyberspace, a position that has gained broad acceptance within the international community.67 Its cyber strategy emphasizes a “defend forward” approach, aiming to disrupt malicious cyber activity at its source, even when it falls below the threshold of armed conflict.59 This proactive posture is designed to counter threats before they reach U.S. networks or critical infrastructure. A central tenet of U.S. strategy is building “digital solidarity” with allies and partners, focusing on mutual assistance to victims of malicious cyber activity, promoting human rights in the digital sphere, and strengthening norms of responsible state behavior.88 The National Cybersecurity Strategy (2023) further seeks to rebalance cybersecurity responsibility, shifting the burden for defense away from individual users and small organizations towards more capable entities within the public and private sectors.88 The Global Engagement Center (GEC) is explicitly tasked with countering foreign state and non-state propaganda and disinformation efforts that threaten U.S. national security interests and those of its allies.2 The U.S. government views foreign disinformation as a significant national security threat, capable of weakening democracies and increasing political instability and conflict.93

The U.S. doctrine reveals a strategic evolution towards a proactive, offensive-oriented, and collaborative cyber defense. This approach signifies a willingness to engage in offensive cyber operations to disrupt threats at their source, even if they do not immediately constitute an armed attack, thereby moving beyond purely defensive measures.59 This proactive stance is intrinsically linked with the concept of “digital solidarity” and a strong emphasis on international cooperation with allies and partners.88 This suggests a strategy of collective deterrence and pre-emption against a broad spectrum of cyber threats, including disinformation campaigns. Furthermore, the strategic objective of rebalancing cybersecurity responsibility and fostering public-private collaboration 88 indicates a recognition that comprehensive cyber defense is a “whole-of-society” endeavor, requiring integrated efforts across governmental agencies, the private sector, and civil society.

Russia:

Russia's Information Security Doctrine (2016) defines information security broadly, encompassing the protection of the individual, society, and the state against internal and external information threats, including those that compromise strategic stability and information sovereignty.95 The doctrine explicitly acknowledges information warfare as a real problem with the potential to escalate into military conflict, emphasizing the need for international legal mechanisms to address it.96 Russia views information technologies as global and transboundary, prioritizing the countering of their use as weapons for terrorist and extremist purposes.66 A key strategic objective is the protection of its own information sphere from external influences and countering criticism of the Russian Federation, which it perceives as a threat to its spiritual and moral values, particularly among youth.96 Russia employs disinformation as a sophisticated tool within its influence operations, frequently to sow domestic chaos in adversary nations or undermine trust in their institutions, as seen in the 2016 U.S. presidential elections.23

Russia's approach to information security and cyber operations is characterized by a dual strategy: aggressive external information warfare combined with stringent internal control over its information space. While Russia actively leverages disinformation and influence operations in cyberspace to achieve political and military objectives abroad, it simultaneously strives for uncompromising control over its domestic cyber environment.97 This internal control, often enforced through legislation like the Yarovaya Law, restricts the flow of “undesirable” information to its population, even at the expense of civil rights like privacy.97 This dual approach highlights a perceived asymmetry: Russia exploits the openness of democratic societies to spread disinformation while maintaining a closed, controlled information environment domestically to prevent foreign influence.97 This strategy demonstrates a clear intent to manipulate the global information environment to its advantage, while simultaneously building resilience against similar tactics from external actors within its own borders.

China:

China's approach to cyberspace is heavily influenced by its concept of “cyber sovereignty,” asserting its right to control the internet within its borders.44 This stance is reflected in laws like the National Security Law (2015) and the Cybersecurity Law (2017), which grant the government broad powers to conduct security reviews, demand access to encryption keys and source code, and compel companies to cooperate with government oversight.44 These laws raise significant concerns about potential hidden backdoors in technology products manufactured by Chinese companies, which could be exploited for espionage, economic advantage, or cyber warfare.44 China's military doctrine, “informatized warfare,” laid out in its 2008 Defence White Paper, integrates information-based weapons and forces, including battlefield management systems and precision-strike capabilities, with a focus on achieving information superiority.100 This includes both information offense (attacking enemy information systems) and information defense (preventing destruction of its own systems).100 China engages in sophisticated social media influence operations and disinformation campaigns, which have shown increasing sophistication, often aimed at downplaying negative events or manipulating public opinion on sensitive issues like the COVID-19 pandemic.102

China's “cyber sovereignty” doctrine and its implementation through extensive legal and technological controls reveal a strategic imperative to achieve information dominance and maintain internal stability. The emphasis on controlling data within its borders, mandating local data storage, and restricting cross-border data transfers 101 demonstrates a clear intention to exert absolute authority over its digital space, viewing it as an extension of its physical territory. This approach, while framed as national security, also serves to facilitate pervasive surveillance and censorship, limiting freedom of expression for its citizens.99 The integration of information warfare into its military doctrine, coupled with the potential for state-mandated backdoors in technology, indicates a comprehensive strategy to leverage the digital realm for both defensive purposes and offensive influence operations, including economic espionage and political manipulation on a global scale.44 This holistic approach highlights a state's efforts to shape the information environment to its strategic advantage, both domestically and internationally.

European Union (EU):

The EU views Foreign Information Manipulation & Interference (FIMI), including disinformation, as a growing security and foreign policy threat.17 Since 2015, the EU has significantly built up its capabilities to identify, analyze, and respond to FIMI, particularly in response to Russian disinformation campaigns.17 The EU's approach goes beyond merely addressing disinformation content, focusing instead on the manipulative behavior of FIMI actors.17 The European External Action Service (EEAS) has developed the “EU FIMI Toolbox,” a comprehensive, whole-of-society approach based on four pillars: situational awareness (e.g., Rapid Alert System), resilience building (e.g., supporting independent media, digital literacy), disruption & regulation (e.g., Digital Services Act), and external action (e.g., international cooperation, sanctions).17 The Digital Services Act (DSA) introduces legal obligations to combat disinformation for online intermediaries and platforms, especially Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), requiring them to conduct risk assessments and mitigate systemic risks.7 The Rapid Alert System (RAS) facilitates information sharing and coordinated responses to cross-border disinformation campaigns among EU institutions and Member States.107

The EU's comprehensive and multi-faceted approach to countering FIMI and disinformation reflects a recognition of the systemic nature of these threats to democratic processes and societal cohesion. The EU's emphasis on distinguishing between misinformation and disinformation based on intent to harm 104 and its focus on the manipulative behavior of actors, rather than solely content, allows for a more targeted and legally robust response that aims to protect fundamental freedoms while addressing malign influence.17 The integration of the Code of Practice on Disinformation into the Digital Services Act framework 105 signifies a move towards legally binding accountability for platforms, pushing them to take greater responsibility for mitigating systemic risks. This collaborative and regulatory approach, coupled with external diplomatic action and sanctions against malign actors 17, demonstrates a commitment to upholding international norms of responsible state behavior in the information space through a combination of soft power (resilience building) and hard power (disruption and sanctions).

United Kingdom (UK):

The UK's National Cyber Strategies (NCS) have evolved from a security-focused approach to a broader vision of “cyber power,” defined as the ability to protect and promote national interests in and through cyberspace.73 The UK aims to be a “leading responsible and democratic cyber power,” leveraging cyber diplomacy and soft power to export its values and norms.73 The UK government is highly critical of Russia, China, and North Korea for cyber operations that disrupt critical infrastructure and for harbouring cybercriminals.73 The UK subscribes to a contested understanding of sovereignty in cyberspace, arguing that it can engage in cyber operations that, in its interpretation, do not violate other states' sovereignty and therefore do not violate international law.73 Under the international law of countermeasures, the UK government asserts that a state injured by an internationally wrongful cyber act may take responsive measures that would otherwise be unlawful.73 The UK's Integrated Review (2021, refreshed 2023) emphasizes science and technology to reinforce national security and grow cyber power, recognizing the risk of AI accelerating disinformation and interference in political processes.47

The UK's evolving doctrine of “cyber power” reflects a strategic ambition to project influence and defend national interests in the digital realm, but it also highlights a deliberate ambiguity in its legal interpretations, particularly concerning sovereignty and offensive cyber operations during peacetime. The UK's contested understanding of sovereignty, which suggests that certain remote cyber intrusions may not per se violate international law, allows for greater operational flexibility for its National Cyber Force.73 This approach, while aiming to enhance the UK's ability to act in the grey zone below the threshold of armed conflict, may also be perceived as a threat by other states due to the lack of clear legal boundaries. The UK's focus on information operations and its recognition of disinformation as a component of hybrid warfare 11 further underscore its intent to engage in the information environment as a domain of strategic competition. This combination of an expansive “cyber power” concept with legal ambiguities creates a dynamic where the UK seeks to lead in cyber capabilities while navigating the complex international legal landscape.

Estonia:

Estonia holds a clear position that existing international law applies in cyberspace, both in times of peace and war, and emphasizes that states must act responsibly in this domain.56 Estonia is closely associated with the “Tallinn Manual,” an independent academic analysis initiated at the request of the NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) in Tallinn.56 The Tallinn Manual and its subsequent editions (2.0 and 3.0) analyze the application of international law, including jus ad bellum, International Humanitarian Law (IHL), sovereignty, state responsibility, and the law of neutrality, to cyber warfare and cyber operations.62 The Tallinn Manual 2.0, for example, identifies 154 “black letter” rules governing cyber operations and provides extensive commentary on each.71 Estonia actively supports discourse on international law through initiatives like the Tallinn Workshops on International Law and Cyber Operations, which bring together experts to discuss topics such as state responsibility, attribution, use of force, and countermeasures.56

Estonia's consistent advocacy for the applicability of existing international law to cyberspace and its central role in the development of the Tallinn Manual demonstrate its significant contribution to shaping the normative framework for responsible state behavior in the digital realm. By hosting and supporting initiatives like the Tallinn Manual, Estonia has positioned itself as a leader in clarifying how established legal principles, originally conceived for kinetic warfare, translate to the unique challenges of cyber operations. This normative leadership is crucial for reducing legal ambiguity, fostering predictability in state behavior, and ultimately enhancing stability in cyberspace. The ongoing work on Tallinn Manual 3.0, which seeks to revise existing content and introduce discussions on emerging topics like investment and trade law, further solidifies Estonia's commitment to the continuous evolution and refinement of international cyber law.64

IV. International Cooperation and Future Pathways for Stability

Multilateral Initiatives and Norm Development

The international community, particularly through the United Nations, has engaged in sustained efforts to develop norms for responsible state behavior in cyberspace. Since 1998, the UN General Assembly has addressed information security, leading to the establishment of various intergovernmental processes.54 Key among these are the Groups of Governmental Experts (GGEs) and the Open-Ended Working Groups (OEWGs).54

The GGEs, starting in 2004, have studied threats posed by ICTs and how to address them, with four groups agreeing on substantive reports that have been welcomed by all UN Member States.54 The 2015 GGE report, for instance, made significant progress by proposing 11 voluntary, non-binding norms of responsible state behavior in cyberspace and explicitly referencing four principles of international law (humanity, necessity, proportionality, and distinction) as applicable to state conduct in cyberspace.38 These norms lay the groundwork for collective expectations and aim to reduce risks to international peace and security.114

The OEWG, established in 2018 and open to all Member States, has also played a crucial role, adopting a consensus report in 2021 that reaffirmed the framework of responsible state behavior.54 The work of both the GGEs and OEWGs has focused on existing and emerging threats, the application of international law to ICTs, norms of responsible behavior, confidence-building measures, and capacity building.39

The cumulative efforts of the UN GGEs and OEWGs have established a foundational and evolving framework for responsible state behavior in the use of Information and Communications Technologies (ICTs). This framework, built upon successive consensus reports from 2010, 2013, 2015, and 2021, has solidified the acceptance that international law, particularly the UN Charter and its principles, applies to cyberspace.54 This cumulative framework provides a common understanding of the normative landscape, recommending 11 voluntary, non-binding norms of responsible state behavior and specific confidence-building and capacity-building measures.116 The ongoing nature of these discussions, with a new OEWG meeting regularly through 2025 54, demonstrates a continuous commitment to adapt and refine these norms as technology evolves and new threats emerge. This iterative process, despite its challenges in achieving binding agreements, plays a crucial role in shaping international expectations and guiding state conduct in the digital realm.

Confidence-Building Measures (CBMs)

Confidence-Building Measures (CBMs) are well-proven tools in international relations designed to promote peaceful uses, transparency, stability, and reduce the risk of misunderstanding, escalation, and conflict.117 In cyberspace, the potential for deniability, potency, and low cost of malicious cyber activities makes CBMs particularly vital.113

CBMs aim to increase trust and understanding between states.119 Examples include establishing “hotlines” between governments or militaries, improving transparency in doctrine, and exchanging visits of military officers.117 Concrete regional initiatives by organizations such as ASEAN, OAS, and OSCE demonstrate the willingness of states to develop and implement CBMs tailored to their specific contexts.113 These regional efforts can serve as models for CBMs at the UN level.113

The development and implementation of CBMs in cyberspace are crucial for mitigating the risks of miscalculation and inadvertent escalation, especially given the inherent ambiguities of cyber operations. By fostering trust, transparency, and predictability, CBMs can defuse tensions and guide state behavior towards more stable international relations.113 A key CBM is the creation of a global network of Points of Contact (PoCs) at policy, diplomatic, and technical levels, which would facilitate effective communication during crises and enable information sharing, including technical data and the nature of requests.113 This systematic approach to communication and information exchange helps clarify intentions and prevent misunderstandings that could otherwise lead to conflict escalation. Furthermore, CBMs emphasize the importance of multistakeholder engagement, including the private sector, academia, civil society, and the technical community, recognizing their significant contributions to Internet resilience and reducing the chances of miscalculation.117
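To make the Points-of-Contact idea more tangible, the sketch below models, under purely illustrative assumptions, how a PoC directory and a structured cross-border incident notification might be represented so that policy, diplomatic, and technical contacts can be reached quickly during a crisis. The record fields, contact levels, and routing logic are hypothetical and are not taken from any OEWG or regional CBM text.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal

# Purely illustrative data model for a Points-of-Contact (PoC) directory and a
# structured cross-border incident notification. Field names and levels are
# hypothetical; actual CBM arrangements are agreed politically, not in code.

Level = Literal["policy", "diplomatic", "technical"]

@dataclass
class PointOfContact:
    state: str
    level: Level
    organisation: str
    channel: str          # e.g. an email address or secure-line identifier
    available_24_7: bool = False

@dataclass
class IncidentNotification:
    reporting_state: str
    affected_state: str
    summary: str                       # nature of the request / observed activity
    technical_indicators: list[str] = field(default_factory=list)
    requested_action: str = "information sharing"
    sent_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def route(notification: IncidentNotification, directory: list[PointOfContact],
          level: Level = "technical") -> list[PointOfContact]:
    """Return the contacts in the affected state at the requested level."""
    return [c for c in directory
            if c.state == notification.affected_state and c.level == level]

if __name__ == "__main__":
    directory = [
        PointOfContact("State A", "technical", "National CERT", "cert@example.org", True),
        PointOfContact("State A", "diplomatic", "Ministry of Foreign Affairs", "mfa-cyber@example.org"),
    ]
    note = IncidentNotification(
        reporting_state="State B", affected_state="State A",
        summary="Botnet traffic observed transiting infrastructure in State A",
        technical_indicators=["203.0.113.0/24"],
    )
    for contact in route(note, directory):
        print(f"Notify {contact.organisation} via {contact.channel}")
```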

Challenges to International Cooperation and Treaty Development

Despite the recognized need for international cooperation, significant challenges impede the development of legally binding instruments and effective arms control measures in cyberspace. An overarching cybersecurity agreement or treaty attempting to address the full range of conflict, including crime, trade issues, espionage, and military action, would be impractical due to its broad scope.40 Focused agreements on specific issues are considered more achievable.40

Key obstacles to cyber arms control and effective regulation include:

  • Dual-Use Nature of Technology: Many information and communication technologies (ICTs) have dual-use capabilities, serving both civilian and military purposes, making it difficult to define and regulate “cyberweapons”.120 A cyberweapon often becomes a weapon only through the malicious code, specific vulnerability, and intended effect, rather than its inherent design.120 This lack of clear, uniform definitions for terms like “cyberweapon” significantly complicates the discussion of legal regulations.120
  • Proliferation and Constant Technological Progress: The rapid pace of technological advancement and the widespread availability of cyber tools make it difficult to constrain their reach and prevent proliferation.120
  • Importance of the Private Sector: Cyberspace is predominantly influenced by non-state actors, including companies and organized criminals, who are often key cyber defenders and aggressive actors.120 This necessitates inclusive, multistakeholder approaches, but also complicates state-centric legal frameworks.120
  • Difficulties in Attribution and Verification: The inherent challenges in attributing cyberattacks to specific state or non-state actors make verification of compliance with any arms control treaty remarkably difficult.120 This lack of clear attribution undermines accountability and deterrence.

The persistent challenges in defining cyber weapons, attributing attacks, and verifying compliance create fundamental impediments to establishing a robust international legal framework for cyber arms control. The dual-use nature of ICTs means that regulating specific technologies or codes is often unworkable, as civilian tools can be weaponized with malicious intent.120 This necessitates a focus on the effects of cyber operations rather than the tools themselves. Furthermore, the difficulty of reliably attributing cyberattacks to a specific state actor, often due to the use of proxies, third-party systems, or false flags 52, directly undermines the ability to enforce any legally binding treaty. Without timely and accurate attribution, it becomes challenging to impose sanctions or ensure accountability, which can perpetuate a cycle of unaddressed malicious activity and increase the risk of miscalculation and escalation.52 This complex interplay of technical characteristics and legal ambiguities makes achieving a meaningful consensus on cyber arms control profoundly difficult among states with differing strategic interests and legal interpretations.81

Academic and Policy Proposals for a Robust Framework

Recognizing the evolving nature of threats and the limitations of existing frameworks, various academic and policy proposals advocate for new approaches to manage disinformation and cyber conflict:

  • Development of a Cyber-Specific Treaty: Many legal scholars and practitioners advocate for a new, cyber-specific treaty that would create binding obligations for states, clearly defining what constitutes an act of cyber aggression and establishing mechanisms for accountability and redress.79 Such a treaty would need to incorporate adaptable mechanisms, such as built-in review clauses or specialized oversight bodies, to remain relevant amidst rapid technological innovation.46
  • International Court for Cybersecurity: To facilitate attribution and accountability, some authors propose establishing an international court specializing in cyberwar and cybercrime.79 Such a court could help create a global accountability system, resolve disputes, impose sanctions, and generate uniform jurisprudence on these complex issues.79
  • Strengthening International Alliances and Partnerships: Robust international cooperation and digital solidarity are crucial for mitigating risks, especially for smaller and developing nations vulnerable to state-sponsored cyberattacks.79 This includes sharing advanced cyber defense technologies and expertise, as exemplified by initiatives like the Global Forum on Cybersecurity (GFC).79
  • Global Rapid Response Network: Proposals include creating a global “rapid response” network involving governments, NGOs, and technology companies to provide immediate technical and legal assistance to affected countries during cyber incidents, helping to mitigate damage and restore critical infrastructure.79
  • Promotion of Education and Global Cyber Awareness: Increasing cyber-risk awareness and training at all levels of society, from basic education to professional training, is seen as an effective preventive measure against human vulnerabilities exploited by disinformation.79 This includes promoting digital literacy and critical thinking skills to help individuals identify and debunk disinformation.1
  • Multistakeholder Approaches and Data Transparency: A holistic legal approach emphasizing empirical data and interdisciplinary collaboration is advocated to address the structural mismatch between existing legal paradigms and the novel nature of digital disinformation operations.9 Transparent access to platform data for vetted researchers is considered critical to understanding the scale and impact of disinformation and grounding evolving international law in factual evidence.9

The fragmented legal response to disinformation and cyber operations, often reflecting a structural mismatch between existing legal paradigms and the novel nature of digital interference, necessitates a holistic legal approach. This approach emphasizes the need for empirical data and interdisciplinary collaboration to effectively combat digital manipulation and preserve the integrity of the information ecosystem.9 The critical challenge lies in the fact that disinformation operations are often designed to exploit areas of law that remain underdeveloped or ill-suited to address low-intensity digital interference.9 Therefore, proposals for structural reform, such as mandating privacy-preserving data access for vetted researchers, are essential to enable systematic study of the scale and impact of disinformation, which has largely remained unmeasured.9 This collaborative and evidence-based approach, integrating insights from law, policy, and technology, is crucial for developing an international legal order capable of adapting to the realities of online harm and ensuring accountability without stifling legitimate expression.9
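The call for privacy-preserving researcher access to platform data can be illustrated with a toy example. The sketch below shows one widely used technique, answering aggregate queries with calibrated Laplace noise in the spirit of differential privacy, instead of releasing raw user-level records; the dataset fields, query, and epsilon value are hypothetical and do not represent any platform's actual interface.

```python
import math
import random

# Illustrative sketch: instead of granting raw user-level access, a platform
# answers aggregate queries from vetted researchers and adds calibrated
# Laplace noise, in the spirit of differential privacy. The records, the
# query, and the epsilon value below are hypothetical.

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_count(records, predicate, epsilon: float = 1.0) -> float:
    """
    Return a noisy count of records matching `predicate`. A counting query has
    sensitivity 1, so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

if __name__ == "__main__":
    # Hypothetical moderation log: each record notes whether a post was
    # labelled as part of a coordinated disinformation campaign.
    records = [{"labelled_disinfo": random.random() < 0.02} for _ in range(10_000)]
    estimate = noisy_count(records, lambda r: r["labelled_disinfo"], epsilon=0.5)
    print(f"Researcher-facing estimate of labelled posts: {estimate:.1f}")
```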

Conclusion

The intersection of disinformation and cyberwarfare presents a complex and evolving threat to international peace and security. Disinformation, characterized by its malicious intent and capacity to cause widespread harm, has been dramatically amplified by digital technologies, enabling psychological manipulation and cognitive warfare. When combined with cyber operations, it acts as a powerful force multiplier, capable of destabilizing societies, eroding trust, and even precipitating kinetic conflict. The “grey zone” in which many cyber-enabled disinformation campaigns operate poses a significant challenge, as the ambiguity of attribution and the difficulty in defining thresholds for “use of force” or “armed attack” can lead to miscalculation and unintended escalation.

While there is broad consensus that existing international law, including the UN Charter and International Humanitarian Law, applies to cyberspace, significant ambiguities persist. The nuanced distinction between permissible “ruses of war” and prohibited “perfidy” in the context of digital information manipulation remains a critical area of legal development, particularly concerning the protection of civilians from harmful narratives. Furthermore, the inherent “attribution gap” in cyber operations profoundly impedes the invocation of state responsibility, undermining accountability and deterrence. Balancing the fundamental human right to freedom of expression with the imperative to counter harmful disinformation represents a core tension, necessitating responses rooted in human rights that prioritize digital literacy, transparency, and independent media over broad censorship.

National doctrines reflect diverse approaches to this evolving threat, ranging from the U.S. “defend forward” and digital solidarity strategy to Russia's dual approach of external information warfare and internal cyber control, and China's emphasis on “cyber sovereignty” and informatized warfare. The EU's comprehensive FIMI Toolbox and the UK's evolving “cyber power” doctrine demonstrate varied regional and national efforts to adapt. Estonia's leadership in developing the Tallinn Manual highlights the crucial role of normative clarification.

Moving forward, international cooperation remains paramount. Multilateral initiatives through the UN GGEs and OEWGs have established a cumulative framework of responsible state behavior and confidence-building measures, which are essential for fostering predictability and reducing the risk of conflict. However, significant challenges persist in developing legally binding instruments, particularly due to the dual-use nature of technology, the difficulty in defining cyber weapons, and the persistent attribution challenges. Academic and policy proposals advocate for a holistic legal approach, including cyber-specific treaties, international cyber courts, strengthened alliances, rapid response networks, and enhanced digital literacy. Ultimately, effective mitigation of disinformation leading to cyberwar requires sustained international collaboration, continuous normative development, and a commitment to transparency and accountability across all stakeholders.

Created with Google Gemini and subject to further review.
