Sfetcu, Nicolae (2025), Challenges and Risks of Artificial Intelligence in Electronic Warfare, IT & C, 4:1, 96-118, https://www.internetmobile.ro/challenges-and-risks-of-artificial-intelligence-in-electronic-warfare/
Abstract
Artificial Intelligence (AI) has emerged as a disruptive force in electronic warfare (EW), enabling advanced signal processing, threat recognition, and real-time decision-making. However, the integration of AI into EW systems introduces a range of challenges and risks. These include data scarcity and bias, system robustness, adversarial manipulation, legal and ethical concerns, and the complexities of human–machine teaming in high-stakes environments. This article outlines the key challenges and risks associated with the development and deployment of AI-enabled EW systems, reviews contemporary research, and suggests potential solutions to mitigate these risks, drawing on both technical innovations and policy frameworks.
Keywords: challenges, risks, artificial intelligence, electronic warfare, adversarial attacks
IT & C, Volume 3, Issue 4, December 2024, pp. 96-118
ISSN 2821-8469, ISSN-L 2821-8469
URL: https://www.internetmobile.ro/challenges-and-risks-of-artificial-intelligence-in-electronic-warfare/
© 2025 Nicolae Sfetcu. Responsibility for the content, interpretations, and opinions expressed rests solely with the authors.
Challenges and Risks of Artificial Intelligence in Electronic Warfare
Ing. fiz. Nicolae SFETCU[1], MPhil
nicolae@sfetcu.com
[1] Researcher – Romanian Academy – Romanian Committee for the History and Philosophy of Science and Technology (CRIFST), Division of History of Science (DIS), ORCID: 0000-0002-0162-9973
1. Introduction
Electronic warfare (EW) refers to the use of the electromagnetic spectrum to sense, deceive, or disrupt enemy operations while protecting friendly forces’ ability to use the spectrum effectively. Over the past decade, the increasing sophistication of adversarial EW capabilities and the exponential growth of data in modern battlefields have prompted interest in integrating AI techniques into EW systems.
In today’s battlespace, adversaries leverage increasingly sophisticated waveforms, jamming techniques, and deceptive tactics, necessitating complex, near-real-time decision-making. AI, particularly machine learning (ML), has emerged as a potent tool for enhancing EW across multiple domains, including signal detection, classification, threat forecasting, and adaptive countermeasures (Department of Defense 2018). By enabling rapid signal processing, adaptive jamming, and autonomous decision-making, AI is revolutionizing this domain of military strategy focused on controlling the electromagnetic spectrum (EMS), strengthening capabilities for sensing, attacking, and protecting communications systems.
Despite its immense potential, AI’s inherent vulnerabilities, such as reliance on large training datasets and susceptibility to adversarial manipulation, raise significant concerns. For instance, in contested or denied environments, where adversaries may deliberately craft misleading signals or degrade system inputs, AI algorithms are prone to errors that could trigger unintended escalations or degrade friendly capabilities. Furthermore, the “black box” nature of many AI models, especially deep neural networks, can complicate human oversight and legal accountability—issues that carry particular weight in military contexts.
2. AI-Driven Electronic Warfare
Artificial intelligence/machine learning (AI/ML) is a critical force multiplier in electronic warfare (EW) and can be a highly effective tool when applied in areas such as signal recognition, emission and signal control, emitter classification, threat recognition, and jamming identification. From tactical situational awareness and management, threat recognition and classification, and emission and signature control tactics, to over-the-horizon targeting and non-kinetic effects delivered through non-organic EW capabilities, AI/ML will be an enormous advantage for force lethality (Gannon 2023).
One AI/ML technique suited to EW integration is Q-learning, a type of reinforcement learning algorithm (Li et al. 2021). Most EW capabilities do not require complex models to integrate with AI/ML.
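As a concrete illustration, the sketch below implements tabular Q-learning for jamming-technique selection in a toy environment. The radar modes, technique count, reward function, and the hidden "ground truth" preference table are invented for this demonstration and are not drawn from Li et al. (2021) or any fielded system; only numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

N_RADAR_MODES = 4      # observed radar states (e.g., search, acquisition, track, guidance) -- assumed
N_JAM_TECHNIQUES = 5   # candidate jamming techniques -- assumed
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

Q = np.zeros((N_RADAR_MODES, N_JAM_TECHNIQUES))

def observe_effect(mode, technique):
    """Stand-in for real feedback: reward 1 if the radar is pushed back
    toward search. A hidden preference table emulates which technique
    works best against each mode; a real system would measure this."""
    best = (mode * 2 + 1) % N_JAM_TECHNIQUES          # hidden ground truth
    success = rng.random() < (0.8 if technique == best else 0.2)
    next_mode = 0 if success else min(mode + 1, N_RADAR_MODES - 1)
    return (1.0 if success else 0.0), next_mode

mode = 0
for step in range(5000):
    # epsilon-greedy action selection
    if rng.random() < EPSILON:
        technique = int(rng.integers(N_JAM_TECHNIQUES))
    else:
        technique = int(np.argmax(Q[mode]))
    reward, next_mode = observe_effect(mode, technique)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[mode, technique] += ALPHA * (reward + GAMMA * Q[next_mode].max() - Q[mode, technique])
    mode = next_mode

print("Learned technique per radar mode:", Q.argmax(axis=1))
```

The update inside the loop is the standard Q-learning step; in a real cognitive-EW system the reward would come from measured radar behavior rather than a simulated preference table.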
AI/ML can also improve the technique of “fingerprinting” transmitters by analyzing transmitter parameters and pulse modulation. These parameters, however, often change over time or are swapped between platforms, which poses a challenge (Gannon 2023).
AI is generally defined as a computing system capable of cognition at or above the human level. Current artificial intelligence systems are classified as “narrow AI,” trained to perform specific tasks.
The application of artificial intelligence in conjunction with EW has the potential to bridge the gap between desired warfighting capability and acquired skills. AI is already considered essential for mobile EW systems deployed with battlefield formations (De Spiegeleire, Maas, and Sweijs 2017), owing to AI’s capacity for effective decision support, handling large volumes of data, maintaining situational awareness, visualizing evolving scenarios and generating appropriate responses, and achieving better self-control, self-regulation, and autonomous action through its inherent computational and decision-making capabilities (Gulhane 2018).
In EMS operations, the goal is to respond immediately, and with AI and ML the computer decides the next steps. Because the system adapts, even the people responsible for it cannot predict its exact behavior. If a radar tries to track a jet, for example, the adversary’s countermeasures can cause it to fail; using ML, that radar will repeatedly try new approaches until it succeeds (Friedrich 2020).
Modern AI systems are capable of processing information to make predictions and achieve goals (Tegmark 2018). As a result, these systems are transforming the foundations of the defense industry, national security, and global warfare.
The inclusion of AI in EW systems makes them very effective as autonomous systems (Allen and Massolo 2020). For military applications, AI-supported data processing systems are already in use, and intelligent communication attributes have become indispensable (Liao, Li, and Yang 2018).
AI is already implemented through numerous applications in EW:
- Signal intelligence and machine learning: AI applications in signal processing for improved information gathering; machine learning algorithms for pattern recognition and anomaly detection in vast data sets.
- Cyber operations and AI: Using artificial intelligence for proactive cyber defense and response; autonomous cyber capabilities and their implications for offensive operations.
- Electronic attack and AI: Electronic attack strategies based on artificial intelligence to disrupt enemy communication systems; adaptive and self-learning algorithms for real-time adjustments in electronic attack scenarios.
AI-based EW systems can be classified into those that affect the operational level of warfare and those that affect the strategic level. At the operational level, AI can help achieve tactical objectives, support planning, reduce uncertainty, and prepare effectively, for example by (Sharma, Sarma, and Mastorakis 2020):
- Collecting, interpreting, and analyzing information (Davis 2019).
- Increasing the power of simulations and gaming tools in war games.
- Guiding unmanned vehicles such as UAVs on the battlefield, or combining them with robots for autonomous operation (Brooks 2018).
At the strategic level, AI can assist in organizing the order of battle, assigning forces, shaping war strategies, making decisions about scale and escalation, sharing and interpreting information, defining the scope and nature of war, and assessing the consequences of deploying assets (Sharma, Sarma, and Mastorakis 2020). AI can thus be used in intelligence, surveillance and reconnaissance (ISR), targeting and navigation, or probing, mapping and hacking computer networks (Davis 2019). AI techniques can also be used to detect adversary RF signals and predict threats (Microwaves 2019).
The growing complexity and volume of electromagnetic signals on the modern battlefield necessitate automated and intelligent approaches to signal detection, classification, and response. AI techniques—particularly machine learning methods—are appealing for EW due to their ability to process large amounts of high-dimensional data and extract patterns that may be imperceptible to human operators. For instance, deep neural networks (DNNs) can discern subtle differences in radar signatures or communication waveforms, facilitating faster threat recognition and enabling systems to adapt in real time.
An important driver of AI adoption in EW is the emphasis on decision superiority. In congested and contested electromagnetic environments, speed is a decisive factor (Department of Defense 2018). AI algorithms can accelerate the observe-orient-decide-act (OODA) loop by automating routine signal processing tasks. AI-driven EW systems have attracted growing interest from defense agencies worldwide. Key applications include:
Signal Classification and Identification
Machine learning can classify radar signatures, identify jamming techniques, and detect spoofing or intrusion attempts more accurately than manual approaches (Kosko 1986). Machine learning algorithms can process massive streams of radar, communication, and telemetry data to automatically categorize signals, identify anomalies, and detect emerging threats. Sophisticated feature extraction techniques—such as wavelet transforms and autoencoders—help isolate critical information from clutter (Varshney and Alemzadeh 2017).
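A minimal sketch of this pipeline, using synthetic waveforms rather than real emitter data: two toy signal classes (a pulsed tone and a linear FM chirp) are converted to log-spectrogram features and classified. The sample rate, waveform parameters, and noise levels are illustrative assumptions; scipy and scikit-learn are assumed available.

```python
import numpy as np
from scipy import signal
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
FS = 1e6  # sample rate (Hz) -- assumed

def synth_signal(kind, n=4096):
    """Toy emitters: a pulsed CW tone vs. a linear FM chirp, in noise."""
    t = np.arange(n) / FS
    if kind == 0:   # pulsed CW: tone gated by a square wave
        x = np.sin(2 * np.pi * 100e3 * t) * (signal.square(2 * np.pi * 1e3 * t) > 0)
    else:           # LFM chirp sweeping 50-200 kHz
        x = signal.chirp(t, f0=50e3, f1=200e3, t1=t[-1])
    return x + 0.5 * rng.standard_normal(n)

def features(x):
    """Time-averaged log spectrogram -> fixed-length feature vector."""
    _, _, sxx = signal.spectrogram(x, fs=FS, nperseg=256)
    return np.log(sxx + 1e-12).mean(axis=1)

X, y = [], []
for _ in range(200):
    k = int(rng.integers(2))
    X.append(features(synth_signal(k)))
    y.append(k)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```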
Intelligent Resource Allocation
AI can optimize the use of limited resources, such as power and bandwidth, by predicting adversary behavior and dynamically scheduling jamming or deception activities (W. Zhang et al. 2023). AI can facilitate cognitive EW, where systems perceive and learn from the electromagnetic (EM) environment, adjust transmission parameters, and reconfigure defenses autonomously. This includes dynamic scheduling of surveillance tasks, channel hopping to evade detection, and real-time selection of countermeasures.
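One way to make “dynamic allocation of limited jamming resources” concrete is as a small linear program: given a per-channel disruption gain (which in practice would be estimated from predicted adversary behavior), allocate a fixed power budget across channels. The gains, budget, and per-channel cap below are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Expected disruption per watt on each channel -- illustrative numbers,
# not field data; in practice these come from a predictive model.
gain = np.array([0.9, 0.4, 0.7, 0.2, 0.6])

TOTAL_POWER = 10.0       # watts available to the jammer -- assumed
PER_CHANNEL_CAP = 4.0    # hardware limit per channel -- assumed

# Maximize gain . p  ==  minimize -gain . p,
# subject to sum(p) <= TOTAL_POWER and 0 <= p_i <= PER_CHANNEL_CAP.
res = linprog(
    c=-gain,
    A_ub=np.ones((1, gain.size)),
    b_ub=[TOTAL_POWER],
    bounds=[(0, PER_CHANNEL_CAP)] * gain.size,
    method="highs",
)
print("power per channel:", np.round(res.x, 2))
```

As expected for a linear objective, the solver fills the highest-gain channels to their caps until the budget is exhausted; richer formulations would add nonlinear effects and time dynamics.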
Adaptive Countermeasures
AI-driven EW platforms can learn from the real-time electromagnetic environment and adapt their responses to evolving threats (Xu et al. 2020). Reinforcement learning (RL) allows EW systems to optimize jamming strategies in response to real-time feedback from the environment. A RL agent can choose frequency bands, power levels, or deception waveforms that maximize disruption while minimizing resource consumption (Busoniu et al. 2017).
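A lightweight cousin of full reinforcement learning, sketched below, treats frequency-band selection as a multi-armed bandit solved with UCB1: the agent balances exploiting bands that have disrupted the adversary before against exploring untried ones. The per-band disruption probabilities are hidden, randomly generated stand-ins for real environmental feedback.

```python
import numpy as np

rng = np.random.default_rng(2)

N_BANDS = 8                                  # candidate frequency bands -- assumed
true_p = rng.uniform(0.1, 0.9, N_BANDS)      # hidden per-band disruption probability

counts = np.zeros(N_BANDS)                   # times each band was tried
values = np.zeros(N_BANDS)                   # running mean of observed disruption

for t in range(1, 2001):
    if counts.min() == 0:
        band = int(np.argmin(counts))        # play every band once first
    else:
        # UCB1: pick the band with the best optimism-adjusted estimate
        ucb = values + np.sqrt(2 * np.log(t) / counts)
        band = int(np.argmax(ucb))
    reward = float(rng.random() < true_p[band])   # feedback from the environment
    counts[band] += 1
    values[band] += (reward - values[band]) / counts[band]

print("best true band:", true_p.argmax(), "| most-played band:", counts.argmax())
```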
Predictive Analytics for Strategic Planning
Machine learning models can analyze historical data and intelligence reports to forecast adversarial EW tactics or communication patterns. Predictive insights can inform resource allocation, platform deployment, and risk assessment, enabling more proactive strategies.
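As a toy version of such forecasting, the sketch below fits a first-order Markov transition matrix to a history of adversary channel usage and predicts the most likely next channel. The “history” is generated from an assumed transition matrix; real systems would use logged intercepts and likely richer sequence models.

```python
import numpy as np

rng = np.random.default_rng(3)
N_CH = 4

# Toy history of which channel an emitter used at each step -- simulated data.
P_true = np.array([[.7, .2, .1, 0.],
                   [.1, .6, .2, .1],
                   [0., .2, .7, .1],
                   [.3, 0., .1, .6]])
history = [0]
for _ in range(999):
    history.append(int(rng.choice(N_CH, p=P_true[history[-1]])))

# Estimate the transition matrix from counts (add-one smoothing).
T = np.ones((N_CH, N_CH))
for a, b in zip(history, history[1:]):
    T[a, b] += 1
T /= T.sum(axis=1, keepdims=True)

current = history[-1]
print("predicted next channel:", T[current].argmax(),
      "with probability", round(float(T[current].max()), 2))
```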
* * *
Advantages and opportunities of integrating artificial intelligence in electronic warfare:
- Speed and efficiency:
- The ability of AI to process large amounts of data in real time.
- Enhanced decision-making capabilities for faster response in dynamic electronic environments.
- Precision and accuracy:
- Improved targeting through AI-based analytics.
- Collateral damage reduction through precise electronic warfare tactics.
Despite these promises, operationalizing AI in EW presents significant difficulties. Factors such as the paucity of labeled training data, the vulnerability of ML models to adversarial perturbations, and the rapid tempo of EW engagements make AI deployment in this domain both critical and highly sensitive.
At the same time, the reliance on complex AI and ML models introduces numerous vulnerabilities. These vulnerabilities can be exploited by adversaries using adversarial attacks, while system failures in operational environments can produce cascading strategic consequences.
3. Technical and Operational Challenges
Despite its potential, AI is not infallible. Technical limitations, such as the inability to generalize across diverse scenarios, can hinder the effectiveness of AI in EW. For instance, an AI system trained on specific types of electronic signals may struggle to adapt to novel or evolving threats.
Moreover, AI systems can exhibit unpredictable behavior, especially in complex and dynamic environments. A study by Amodei et al. (2016) highlights the challenges of ensuring the safety and reliability of AI systems in real-world applications (Amodei et al. 2016). In EW, such unpredictability could lead to unintended consequences, including collateral damage or mission failure.
The Electromagnetic Spectrum Superiority Strategy, published by the US in October 2020, addresses the many challenges faced in securing, maintaining access to, using, and maneuvering within the electromagnetic spectrum (Gannon 2023).
The future of AI in EW is directly related to the ability to design autonomous systems with independent reasoning based on knowledge and expertise. Most military UAVs currently require significant human intervention to execute their missions: current operational systems are more automated than autonomous. Significant global research and development efforts are under way, although cost and organizational issues still limit the operational implementation of autonomous systems.
The future of warfare is tied to AI, despite the problems specific to the military establishment, such as the difficulty of developing and testing safe and controllable autonomous systems and the reduced attractiveness of the aerospace and defense sector, whose funding is lower than that of private industry.
Future AI capabilities include general AI (able to perform tasks outside its original domain or programming) and artificial superintelligence (exceeding human cognitive abilities in all spheres of operation, at speeds far beyond human capability), introducing challenges for traditional operational planning methods.
EW systems based on artificial intelligence are susceptible to erroneous data inputs, with adverse effects (Davis 2019). Accidentally striking the wrong targets can have strategic, social, and political implications.
One of the advantages of AI-enabled EW systems is rapid decision-making, yet this can become a disadvantage if it needlessly forces the outcome of a conflict before the crisis can be managed through peace talks between the parties.
Machine learning cannot reliably predict the exact outcome of an event, which can mislead decision-makers, jeopardizing their own forces and the proper handling of the situation.
AI-powered information warfare through fake news, fabricated images, and deepfakes can have adverse effects, distorting the public’s and leaders’ perception of the conflict.
Other possible vulnerabilities in the integration of artificial intelligence in electronic warfare:
- Vulnerabilities and exploitation:
- Potential vulnerabilities in AI systems that adversaries can exploit.
- The risk of AI-led attacks on critical infrastructure.
- Unintended consequences:
- Potential unintended consequences of AI algorithms in dynamic electronic warfare scenarios.
- Escalation risk due to autonomous AI decision-making.
Data Scarcity and Quality
A fundamental challenge in AI-enabled EW is the availability of comprehensive and representative datasets for training and validation. Unlike commercial AI domains that benefit from large public datasets, EW operates under strict secrecy, limiting data sharing and standardization. Furthermore, real-world data collection is expensive, classified, and often incomplete. Models trained on small or biased datasets risk poor generalization, leading to degraded performance in operational environments.
Limited and Classified Datasets: High-fidelity EW data often contains classified or sensitive information, restricting its sharing outside select government agencies or contractors. This confidentiality impedes the development of large, open datasets typically used to train and benchmark advanced AI algorithms. As a result, models may be trained on simulated or incomplete data, which may not reflect the full complexity of real-world EM environments.
Data Bias and Representativeness: Even within secure repositories, training sets might be skewed toward common scenarios (e.g., jamming from known adversaries) while neglecting rare or novel threat signatures. Biased training data can lead to overfitting, diminishing model generalizability. Consequently, systems may fail to recognize or appropriately counter new or unanticipated EW tactics.
Real-Time Constraints
Electronic warfare demands rapid decision-making to detect and counter threats in fractions of a second. Large AI models require significant computational resources, which may not be available in embedded or resource-constrained EW systems; deep neural networks with millions of parameters, for instance, may be unsuitable for edge deployment if the hardware cannot support rapid inference. Additionally, on-device processing at the tactical edge must handle uncertain or incomplete data while minimizing latency. Balancing model complexity, computational efficiency, and energy requirements is essential for real-world deployment, since deployed platforms may rely on battery power or generator systems that limit the feasible computational load.
Explainability and Transparency
AI systems, especially deep learning models, are often criticized for their “black box” nature: their decision-making processes are not easily interpretable by humans. In EW, where split-second decisions can have life-or-death consequences, this lack of explainability poses a significant risk, since transparency is critical for trust, troubleshooting, and legal accountability. Military operators may struggle to trust AI-driven recommendations if they cannot understand how the system arrived at a particular conclusion, and operators and commanders must understand the rationale behind such recommendations, particularly when lethal or high-stakes decisions are at hand. Enhancing explainability without sacrificing model performance remains an open research area, with methods such as saliency maps and local interpretable model-agnostic explanations (LIME) providing potential paths forward (Ribeiro, Singh, and Guestrin 2016).
A report by the Defense Advanced Research Projects Agency (DARPA, 2018) highlights the importance of developing explainable AI (XAI) for defense applications (Gunning 2019). Without transparency, there is a risk of over-reliance on AI systems, which could lead to catastrophic errors in high-stakes scenarios.
Robustness and Reliability
AI algorithms, particularly deep learning models, are sensitive to noise and small perturbations in input data (Papernot et al. 2017). Adversaries could exploit these sensitivities by injecting carefully crafted signals or jamming patterns, causing misclassification or system malfunction. Ensuring robust performance under high electronic noise, degraded communications, and intentional adversarial interference remains a key technical hurdle.
Noise Tolerance: EW environments are inherently noisy, with overlapping signals, frequency hopping, and deceptive bursts. Deep learning models, particularly convolutional neural networks (CNNs) trained on spectrogram images, are often sensitive to variations or noise (M. Zhang, Diao, and Guo 2017). Small perturbations to the input signal can degrade performance or lead to misclassifications.
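This fragility can be made concrete with a linear stand-in for a signal classifier: for a linear score, the smallest worst-case (FGSM-style) per-feature perturbation that flips the decision can be computed in closed form, and it is typically tiny relative to the signal itself. The weights and input below are random placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(4)

# A linear classifier score w.x standing in for a trained signal classifier.
w = rng.standard_normal(256)     # model weights -- stand-in, not a fielded model
x = rng.standard_normal(256)     # feature vector of a detected signal

score = w @ x                    # positive -> "threat", negative -> "benign"

# FGSM-style worst-case L-infinity perturbation: each feature moves by eps
# against the score. The smallest eps that flips the decision is
# |score| / ||w||_1, which is usually tiny compared to the features.
eps_flip = abs(score) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps_flip * np.sign(w) * 1.01   # 1% safety margin

print("clean score:      ", round(float(score), 3))
print("adversarial score:", round(float(w @ x_adv), 3))     # sign has flipped
print("perturbation size: %.4f vs. mean |feature| = %.3f"
      % (eps_flip, np.abs(x).mean()))
```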
Environmental Uncertainty: AI models for EW must operate in unpredictable, hostile environments where communication links may be intermittently lost, and adversaries actively seek to degrade sensor inputs. Real-time data may also be incomplete, requiring the AI system to make decisions with partial information—a scenario that demands robust uncertainty modeling and adaptive learning mechanisms (Watkins and Dayan 1992).
Integration with Legacy Systems
Many military organizations operate extensive legacy EW infrastructure that predates modern AI. Integrating advanced AI tools with these legacy systems can be both technically complex and costly. Compatibility with older hardware and software, standardization of interfaces, and ensuring secure data exchange channels are significant concerns. Moreover, incremental upgrades must be carefully managed to avoid creating new vulnerabilities or system instabilities. NATO’s 2020 report on EW modernization stresses that mismatched systems could create operational gaps, undermining the effectiveness of AI enhancements (Giordano 2024).
Most defense forces operate a complex tapestry of legacy EW platforms, sensors, and command-and-control infrastructures developed over decades. Retrofitting AI into these systems entails:
- Hardware Constraints: Older systems may have limited computational resources, memory, or connectivity, hindering real-time AI inference.
- Software Incompatibilities: Existing software may not support modern ML frameworks or languages, requiring extensive refactoring or modularization.
- Interoperability Gaps: In coalition operations, different branches or allied nations may have varying standards and protocols, complicating data exchange and joint AI workflows.
4. Operational and Strategic Risks
Data Dependency and Adversarial Manipulation
AI systems rely on vast datasets to train algorithms for threat detection and response. In dynamic EW environments, where signal patterns evolve rapidly, models trained on historical data may fail to adapt, leading to degraded performance (DARPA 2022). Worse, adversaries can exploit this dependency through data poisoning: injecting deceptive signals to mislead AI into misclassifying threats or ignoring genuine attacks. For example, adversarial machine learning techniques could generate “fake” radar signatures, causing AI-driven systems to target phantom objects (Szegedy et al. 2014). Adversaries can also craft subtle modifications to signals that remain imperceptible to human operators but cause AI models to fail catastrophically (Papernot et al. 2015); adversarial perturbations in radar waveforms, for instance, could cause an AI-based threat classifier to ignore an enemy missile or misclassify it as benign.
Vulnerability to Adversarial Attacks
One of the most pressing challenges of AI in EW is its susceptibility to adversarial attacks. AI systems, particularly those based on machine learning (ML), rely on data to make decisions. Adversaries can exploit this by feeding manipulated or poisoned data into the system, causing it to malfunction or produce incorrect outputs. For example, an AI-driven radar system could be deceived by adversarial signals, leading to misclassification of threats or false alarms.
According to a study by Kurakin et al. (2017), adversarial attacks can significantly degrade the performance of AI models, even with minimal perturbations to input data (Kurakin, Goodfellow, and Bengio 2017). In the context of EW, such vulnerabilities could compromise mission-critical operations and put military personnel at risk.
AI systems can be targeted by adversarial machine learning techniques, where small perturbations in input data can lead to misclassification or other types of malfunction (Kurakin, Goodfellow, and Bengio 2017). In EW contexts, an adversary might craft deceptive signals to mislead threat recognition algorithms, potentially disabling or degrading defensive measures. This could lead to:
- Spoofing and Deception: By injecting erroneous data, adversaries can manipulate AI-driven EW systems to misidentify or ignore critical threats.
- Model Inversion: Attackers can glean information about the AI model by probing it with queries, enabling them to reconstruct sensitive data or tailor attacks more effectively.
Cybersecurity Vulnerabilities
AI-enabled EW systems are prime targets for cyberattacks. Compromised algorithms could enable adversaries to hijack jamming systems, disrupt communications, or exfiltrate sensitive tactical data. A RAND Corporation study (2024) highlights that AI’s complexity obscures vulnerabilities, making it difficult to secure against sophisticated intrusions (Black et al. 2024). For instance, a hacked AI model might deactivate friendly electronic countermeasures during critical operations, leaving forces exposed (Márquez-Díaz 2024). In addition to real-time perturbations, attackers can insert malicious data during the training process—known as data poisoning. Poisoned models may behave normally under typical conditions but fail or behave maliciously under certain triggers (Biggio et al. 2018). This risk is especially pronounced if training data is collected or aggregated from third-party sources.
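A minimal illustration of training-time poisoning on a toy two-class “emitter” dataset: the attacker flips the labels of the benign samples that sit closest to the threat class, dragging the learned decision boundary and degrading test accuracy. The data, class geometry, and 20% poisoning budget are all invented for the demonstration; scikit-learn is assumed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(9)

# Toy "signal feature" data: class 0 = benign emitters, class 1 = threats.
n = 300
X = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
y = np.hstack([np.zeros(n), np.ones(n)])

# Poisoning: flip the labels of the benign samples nearest the threat
# class, biasing the boundary into benign territory.
benign_idx = np.where(y == 0)[0]
closest = benign_idx[np.argsort(-X[benign_idx, 0])[:60]]   # 20% of benign set
y_pois = y.copy()
y_pois[closest] = 1

# Fresh test data drawn from the same clean distributions.
Xte = np.vstack([rng.normal(-1.0, 1.0, (n, 2)), rng.normal(1.0, 1.0, (n, 2))])
yte = np.hstack([np.zeros(n), np.ones(n)])

acc_clean = LogisticRegression().fit(X, y).score(Xte, yte)
acc_pois = LogisticRegression().fit(X, y_pois).score(Xte, yte)
print("test accuracy, clean training:    %.2f" % acc_clean)
print("test accuracy, poisoned training: %.2f" % acc_pois)
```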
The reliance on advanced infrastructure, such as cloud computing and high-bandwidth communication channels, makes AI systems vulnerable to cyberattacks. A study by Taddeo and Floridi (2018) emphasizes the need for robust cybersecurity measures to protect AI systems in military contexts (Taddeo and Floridi 2018).
Autonomous Escalation and Arms Races
The speed of AI decision-making could inadvertently escalate conflicts, and the integration of AI into EW could accelerate an already intense global arms race. Nations may feel compelled to develop and deploy increasingly sophisticated AI-driven EW systems to outmatch adversaries and maintain a strategic advantage, a destabilizing cycle of innovation and counter-innovation that increases the likelihood of conflict. For example, an AI tasked with neutralizing enemy radars might preemptively strike ambiguous targets, provoking retaliation. Paul Scharre, in Army of None (2018), warns that such scenarios could fuel an AI arms race in which adversaries deploy increasingly autonomous systems, reducing human oversight and increasing miscalculation risks (Scharre 2018). This surge in AI-enabled weapons raises the stakes of conflict, as systems with high degrees of autonomy can inadvertently accelerate engagements (Altmann and Sauer 2017, 5).
A report by the Center for a New American Security (Horowitz and Scharre 2020) warns that the militarization of AI could undermine global security by creating new avenues for competition and confrontation. The proliferation of AI in EW could also lower the threshold for conflict, as nations may be more willing to engage in limited strikes if they believe AI can minimize risks.
Automated decision-making systems in EW could escalate conflicts by reacting faster than human operators can intervene. A small miscalculation—such as misidentifying a civilian communications channel as hostile—might trigger an overreaction with dire consequences (Scharre 2018). Although speed is a strategic advantage, it heightens the risk of accidental or unintended engagements, especially in dense and unpredictable electromagnetic environments.
Fratricide and Collateral Damage
AI’s ability to distinguish between hostile and civilian signals remains imperfect. In congested EMS environments, misidentification could lead to fratricide—disabling allied systems—or collateral damage, such as disrupting emergency communication networks. A 2023 IEEE study notes that machine learning models exhibit higher error rates in cluttered spectral environments, raising reliability concerns.
5. Ethical and Legal Concerns
The use of AI in EW raises ethical and legal questions, particularly regarding accountability and the potential for autonomous decision-making in combat situations. If an AI system mistakenly targets civilians or friendly forces, determining responsibility becomes complex. Additionally, the deployment of AI in EW could escalate conflicts, as adversaries may perceive such technologies as destabilizing.
The US is investing the most time and energy in assessing how AI will impact the EW field, exploring issues such as ethics. According to Col. P. J. Maykish, USAF, who served as director of analysis for the National Security Commission on Artificial Intelligence, “Ethics is a major consideration for U.S. AI development. It comes down to three issues: civil liberties, human rights and privacy.” (Friedrich 2020) China and Russia do not share these concerns, and Maykish recommends a “coalition of nations focused on common values.” He also warns that if the US does not take into account the growth of AI and machine learning (ML) in other nations, it could fall behind in development compared to the achievements of others (Friedrich 2020).
Ethical considerations in approaching electronic warfare through the lens of artificial intelligence:
- Autonomous decision making:
- The ethical implications of allowing AI to make decisions in electronic warfare.
- Balancing human oversight with AI autonomy to prevent unintended consequences.
- Responsibility and accountability:
- Establishing clear lines of responsibility for actions based on artificial intelligence.
- Legal and ethical frameworks for addressing AI incidents in electronic warfare.
According to a report by the United Nations Institute for Disarmament Research (UNIDIR 2019), the lack of international norms and regulations governing AI in military applications creates a legal gray area. This ambiguity could lead to unintended consequences, including the misuse of AI-driven EW systems.
Ethical dilemmas:
- Collateral damage: AI-driven jamming or spoofing can affect civilian communication networks inadvertently.
- Bias and discrimination: Biased training data may lead to disproportionate targeting or prioritization of certain signals.
- Moral disengagement: Automating combat decisions can create a psychological distance from the consequences of military actions, potentially leading to an erosion of ethical standards.
The growing autonomy of AI-enabled EW systems raises substantial ethical and legal questions. International laws of armed conflict do not fully address autonomous electronic or cyber actions, especially where attribution and intent can be obscured. Concerns include:
- Accountability: Determining who is responsible for decisions made or influenced by AI becomes more complex, particularly if AI models behave unpredictably or if their training data was biased.
- Discrimination: AI-driven EW systems must discriminate between legitimate and non-legitimate targets. Biased or erroneous data can blur these distinctions, leading to humanitarian or diplomatic repercussions.
Accountability and Attribution
When AI autonomously executes EW actions, accountability becomes murky. International Humanitarian Law (IHL) requires distinguishing combatants from civilians, but AI lacks the contextual judgment to adhere to these principles consistently. The International Committee of the Red Cross (ICRC 2024) emphasizes that unclear accountability frameworks could lead to unchecked violations of IHL.
Traditional rules of engagement assume human decision-makers can be held accountable. But with AI in the loop, the question arises: should accountability lie with the operators, the system developers, or the commanders? (Schmitt 2017)
Autonomous EW systems blur the lines of responsibility. In cases where AI-driven defenses misidentify friendly signals as hostile and trigger a response, determining accountability can be extremely complex. Policies and legal frameworks are still evolving, leaving significant gaps in governance.
Compliance with International Humanitarian Law (IHL)
Although EW primarily targets infrastructure and communication links rather than personnel, the risk of collateral damage or escalation remains. Accidental targeting of civilian channels, for instance, can breach IHL (Scharre 2018). Ensuring AI systems respect the principles of distinction, proportionality, and necessity is non-trivial, especially when encountering ambiguous signals in crowded spectrum environments.
Escalation Risks
AI systems capable of rapid, autonomous responses may inadvertently escalate conflicts if they misinterpret benign signals as hostile or overreact to minor provocations. A single misclassification could provoke a large-scale conflict or compromise allied communications.
Human–Machine Teaming
Striking the right balance between human oversight and AI autonomy in EW remains challenging. Operators must trust AI-driven recommendations but also retain the ability to override or audit decisions (Gunning 2019). Overreliance on AI risks deskilling human operators, who may lose proficiency in manual EW tactics. Additionally, “black box” AI systems, which lack explainability, could erode trust. A RAND survey (2024) found that 65% of military operators distrust AI recommendations without transparent reasoning (Black et al. 2024).
In high-pressure combat scenarios, the interplay between human operators and AI systems can be fraught with challenges. Overreliance on AI can erode operator skills, while under-trusting AI may negate its advantages (Gunning 2019). Designing intuitive user interfaces, providing adequate training, and establishing clear command-and-control relationships are vital to ensure that human operators remain in the decision loop where necessary.
AI systems, while powerful, are not infallible. There is a risk that human operators may become overly reliant on automated decision-making tools:
- Skill degradation: Operators may lose proficiency in manual EW techniques as they rely heavily on AI.
- System failures: Malfunctions or adversarial manipulation can cause catastrophic outcomes if human intervention is not readily available.
6. Mitigation Strategies
Addressing these challenges and risks requires a multifaceted approach:
- Robust Data Practices: Develop secure protocols for data collection, classification, and sharing to ensure high-quality training datasets.
- Explainable AI (XAI): Invest in research that allows AI systems to provide interpretable outputs, fostering trust and improving transparency.
- Continuous Testing and Validation: Implement rigorous validation frameworks, including adversarial testing, to evaluate AI model performance under various scenarios.
- Human-in-the-Loop Design: Maintain a balance between autonomy and human oversight, ensuring that critical decisions remain under human control.
- Regulatory Frameworks: Collaborate internationally to establish guidelines and treaties that govern the development and deployment of AI-driven EW systems.
Given the breadth of challenges and risks, a multi-pronged approach is required to ensure AI systems in EW achieve their potential safely and effectively.
Multi-Domain Data Fusion and Synthetic Data Generation
To address data scarcity, researchers are exploring synthetic data generation using high-fidelity simulators that replicate real-world EM environments. By introducing varied jamming techniques, radar waveforms, and noise levels, synthetic datasets can supplement limited real-world recordings. Transfer learning methods can then be employed to fine-tune models on smaller, classified real-world datasets to bridge the gap in fidelity.
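The sketch below shows the flavor of such synthetic generation: radar-like chirps with randomized center frequency and bandwidth, optionally overlaid with a toy barrage-noise jammer and labeled for supervised training. All waveform parameters are illustrative; a high-fidelity simulator would additionally model propagation, hardware effects, and realistic jammer behavior.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
FS = 1e6  # sample rate (Hz) -- assumed

def synth_example(n=2048):
    """One synthetic training example: a radar-like chirp with randomized
    parameters, optionally overlaid with a toy jamming waveform."""
    t = np.arange(n) / FS
    f0 = rng.uniform(20e3, 100e3)                 # randomized start frequency
    bw = rng.uniform(50e3, 200e3)                 # randomized sweep bandwidth
    x = signal.chirp(t, f0=f0, f1=f0 + bw, t1=t[-1])
    jammed = rng.random() < 0.5
    if jammed:                                    # barrage-noise jammer, toy model
        x = x + rng.uniform(0.5, 2.0) * rng.standard_normal(n)
    x = x + rng.uniform(0.1, 0.5) * rng.standard_normal(n)   # receiver noise
    return x.astype(np.float32), int(jammed)

dataset = [synth_example() for _ in range(1000)]
print("examples:", len(dataset), "| jammed fraction:",
      sum(lbl for _, lbl in dataset) / len(dataset))
```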
Adversarial Training and Robustness Techniques
To combat adversarial attacks, security researchers have proposed adversarial training, where models are exposed to perturbations or malicious samples during training (Kurakin, Goodfellow, and Bengio 2017). Defensive distillation, gradient masking, and certified defenses are among the tactics that can raise the barrier for attackers. Additionally, uncertainty quantification methods—such as Bayesian neural networks—can help systems recognize when they are uncertain and escalate decisions to human operators.
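A compact sketch of adversarial training on a toy linear classifier: each epoch, inputs are perturbed with an FGSM step against the current model before the gradient update, so the model learns on worst-case rather than clean samples. The dataset, epsilon, and learning rate are illustrative; deep-learning frameworks would replace the hand-written gradients.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy binary task standing in for "threat vs. benign" features in 32-D.
n, d = 400, 32
X = np.vstack([rng.normal(-1.0, 1.0, (n, d)), rng.normal(1.0, 1.0, (n, d))])
y = np.hstack([np.zeros(n), np.ones(n)])

w = np.zeros(d)
EPS, LR = 0.3, 0.1   # perturbation budget and learning rate -- assumed

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

for epoch in range(200):
    # Inner FGSM step: the gradient of the logistic loss w.r.t. the input
    # is (p - y) * w, so each sample moves EPS along its sign.
    p = sigmoid(X @ w)
    X_adv = X + EPS * np.sign(np.outer(p - y, w))
    # Outer step: ordinary gradient descent on the perturbed batch.
    p_adv = sigmoid(X_adv @ w)
    w -= LR * X_adv.T @ (p_adv - y) / len(y)

# Evaluate on clean data and under a fresh FGSM attack.
acc_clean = ((sigmoid(X @ w) > 0.5) == y).mean()
X_att = X + EPS * np.sign(np.outer(sigmoid(X @ w) - y, w))
acc_attack = ((sigmoid(X_att @ w) > 0.5) == y).mean()
print("accuracy on clean data: %.2f | under FGSM attack: %.2f"
      % (acc_clean, acc_attack))
```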
Lightweight and Real-Time Inference
Hardware-driven optimizations, including Field-Programmable Gate Arrays (FPGAs) and Graphics Processing Units (GPUs) designed for edge computing, can accelerate AI inference in resource-constrained environments. Model pruning, quantization, and knowledge distillation further reduce computational overhead without excessively compromising accuracy (Han, Mao, and Dally 2015).
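To illustrate the last two techniques, the sketch below applies magnitude pruning and uniform 8-bit quantization to a random weight matrix and measures the output error. A random matrix has none of the redundancy of a trained network, so the pruning error shown here overstates what trained models typically suffer; the thresholds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.standard_normal((256, 256)).astype(np.float32)   # stand-in layer weights

# Magnitude pruning: zero the 50% of weights with the smallest |value|.
# (Trained networks tolerate far higher sparsity than this random stand-in.)
threshold = np.quantile(np.abs(W), 0.50)
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0).astype(np.float32)

# Uniform 8-bit quantization: a single scale factor maps floats to int8.
scale = np.abs(W_pruned).max() / 127.0
W_q = np.round(W_pruned / scale).astype(np.int8)   # store only W_q + scale
W_deq = W_q.astype(np.float32) * scale             # dequantize at inference

x = rng.standard_normal(256).astype(np.float32)
rel_err = np.abs(W @ x - W_deq @ x).mean() / np.abs(W @ x).mean()
print("weights kept: %.0f%%" % (100 * (W_q != 0).mean()))
print("mean relative output error vs. original layer: %.3f" % rel_err)
```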
Explainable and Interpretable AI
Explainable AI (XAI) methods, such as saliency maps, local interpretable model-agnostic explanations (LIME), or attention mechanisms, can provide insight into the model’s decision process (Ribeiro, Singh, and Guestrin 2016). Implementing these techniques in EW contexts can foster trust among human operators and facilitate rapid troubleshooting, making it clearer why certain signals are flagged as threats.
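A deliberately simple occlusion-based saliency sketch: zero out one spectral bin at a time and record how much the model’s score drops. The “model” here is a linear scorer that secretly keys on four bins, standing in for a trained classifier; the point is that the saliency vector recovers exactly those bins, telling the operator which part of the spectrum drove the decision.

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in "model": a linear scorer over 64 spectral bins. In practice this
# would be the trained classifier whose decision needs explaining.
w = np.zeros(64)
w[20:24] = 3.0                       # the model secretly keys on bins 20-23

def model_score(x):
    return float(w @ x)

x = rng.standard_normal(64) + 1.0    # the input being explained

# Occlusion saliency: zero one bin at a time and record the score change.
base = model_score(x)
saliency = np.array([base - model_score(np.where(np.arange(64) == i, 0.0, x))
                     for i in range(64)])

top = np.argsort(-np.abs(saliency))[:4]
print("most influential spectral bins:", sorted(top.tolist()))  # ~ [20, 21, 22, 23]
```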
Policy Frameworks and Ethical Governance
Policymakers and military leaders must establish clear guidelines that define acceptable levels of AI autonomy and accountability in EW systems. Proposed measures include:
- Human-in-the-Loop Requirements: Stipulations that certain high-stakes decisions (e.g., electronic attacks that could cause collateral damage) require human confirmation.
- Standards and Certification: Developing standardized testing protocols to evaluate AI algorithms’ performance, robustness, and compliance with legal and ethical norms.
- International Collaboration: Cooperative agreements and confidence-building measures could clarify red lines for AI-driven EW, reducing the risk of accidental escalation.
Human–Machine Collaboration and Training
Rather than fully automating EW tasks, AI can be integrated into a human–machine teaming framework where the system assists operators, improves situational awareness, and streamlines routine tasks. Training programs must evolve to equip operators with the skills to interpret AI outputs, identify anomalies, and understand algorithmic limitations.
To address these challenges, experts propose:
- Robust Testing: Rigorous adversarial testing of AI models in simulated EW environments (DARPA 2022).
- Human-AI Collaboration: Maintaining human oversight for critical decisions, as advocated by the U.S. Department of Defense’s AI Ethical Principles (Department of Defense 2020).
- International Norms: Developing treaties to regulate autonomous EW systems, akin to efforts for lethal autonomous weapons (ICRC 2024).
7. Conclusion and Future Directions
AI holds significant promise for enhancing electronic warfare capabilities by delivering faster, more adaptive responses to rapidly evolving threats. However, realizing these benefits requires overcoming critical technical, operational, and strategic challenges. Addressing data scarcity through secure data-sharing frameworks, investing in robust model architectures to counter adversarial attacks, and ensuring explainability and transparency are among the pressing areas of research. Furthermore, it is vital to develop coherent governance and ethical frameworks that can accommodate the complex nature of AI-driven EW and its potential to escalate conflicts inadvertently.
To mitigate these risks, policymakers, researchers, and military leaders must collaborate to develop transparent, secure, and ethically sound AI systems. International cooperation and the establishment of clear norms and regulations will also be essential to ensure the responsible use of AI in electronic warfare.
Future work should explore deeper integration of AI with other emerging technologies like quantum computing for cryptographic resilience, microelectronics for energy-efficient model execution, and advanced sensors for higher-fidelity data capture. Additionally, formal frameworks for risk assessment, accountability, and escalation control are essential for preventing unintended consequences in high-stakes environments.
Future research should focus on:
- Robust and Secure AI Architectures and System Design: Methods for adversarial training, uncertainty quantification, and on-the-fly reconfiguration in contested environments. Ongoing research into adversarially resilient architectures, robust training techniques, and real-time inference optimizations.
- Hybrid Human–Machine Teaming: Approaches that optimize collaboration between human operators and AI systems, including new training paradigms and user interface designs. Systems that allow human operators to comprehend and control AI-driven actions, reducing both mistrust and overdependence.
- Policy and Legal Frameworks and Multi-disciplinary Governance: International agreements and protocols that address AI autonomy, accountability, and transparency in EW deployments. Involvement of legal experts, ethicists, and policymakers to ensure responsible use and compliance with international norms.
- Computational Efficiency and Collaborative Data Ecosystems: Hardware advancements and algorithmic optimizations to enable real-time inference under constrained resources. Mechanisms for sharing and validating EW data in secure environments or through advanced simulation and synthetic data generation.
By proactively identifying and mitigating these challenges and risks, military organizations can harness the transformative potential of AI in EW while maintaining stability, security, and adherence to international norms.
In sum, while AI offers transformative potential for electronic warfare, realizing its benefits requires a careful balance of innovation, security, ethics, and governance. Through concerted research efforts, tested policy mechanisms, and enduring international cooperation, AI can serve as a force multiplier in EW without compromising safety, stability, or humanitarian principles.
Bibliography
- Adadi, Amina, and Mohammed Berrada. 2018. “Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI).” IEEE Access 6:52138–60. https://doi.org/10.1109/ACCESS.2018.2870052.
- Allen, John, and Giampiero Massolo. 2020. The Global Race for Technological Superiority: Discover the Security Implications. Edited by Fabio Rugge. Milan: Ledizioni.
- Altmann, Jürgen, and Frank Sauer. 2017. “Autonomous Weapon Systems and Strategic Stability.” Survival 59 (5). https://www.tandfonline.com/doi/abs/10.1080/00396338.2017.1375263.
- Amodei, Dario, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané. 2016. “Concrete Problems in AI Safety.” arXiv. https://doi.org/10.48550/arXiv.1606.06565.
- Biggio, Battista, Konrad Rieck, Davide Ariu, Christian Wressnegger, Igino Corona, Giorgio Giacinto, and Fabio Roli. 2018. “Poisoning Behavioral Malware Clustering.” arXiv. https://doi.org/10.48550/arXiv.1811.09985.
- Black, James, Mattias Eken, Jacob Parakilas, Stuart Dee, Conlan Ellis, Kiran Suman-Chauhan, Ryan J. Bain, et al. 2024. “Strategic Competition in the Age of AI: Emerging Risks and Opportunities from Military Use of Artificial Intelligence.” RAND Corporation. https://www.rand.org/pubs/research_reports/RRA3295-1.html.
- Brooks, Risa. 2018. “Technology and Future War Will Test U.S. Civil-Military Relations.” War on the Rocks. November 26, 2018. https://warontherocks.com/2018/11/technology-and-future-war-will-test-u-s-civil-military-relations/.
- Busoniu, Lucian, Robert Babuska, Bart De Schutter, and Damien Ernst. 2017. Reinforcement Learning and Dynamic Programming Using Function Approximators. CRC Press.
- DARPA. 2022. “AI Forward.” https://www.darpa.mil/research/programs/ai-forward.
- Davis, Zachary. 2019. “Artificial Intelligence on the Battlefield: Implications for Deterrence and Surprise.” PRISM 8 (2): 114–31.
- De Spiegeleire, Stephan, Matthijs Maas, and Tim Sweijs. 2017. Artificial Intelligence and the Future of Defense. The Hague: The Hague Centre for Strategic Studies.
- Department of Defense. 2018. “2018 DoD Artificial Intelligence Strategy.” https://media.defense.gov/2019/Feb/12/2002088964/-1/-1/1/DOD-AI-STRATEGY-FACT-SHEET.PDF.
- ———. 2020. “DOD Adopts Ethical Principles for Artificial Intelligence.” U.S. Department of Defense. https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/.
- Friedrich, Nancy. 2020. “AI and Machine Learning Redefine the EW Landscape.” Microwave Journal, December 8, 2020. https://www.microwavejournal.com/articles/35107-ai-and-machine-learning-redefine-the-ew-landscape.
- Gannon, Brian P. 2023. “Implement AI in Electromagnetic Spectrum Operations.” U.S. Naval Institute. August 1, 2023. https://www.usni.org/magazines/proceedings/2023/august/implement-ai-electromagnetic-spectrum-operations.
- Giordano, Paolo. 2024. “The Age of AI: Navigating the Convergence of Emerging Technologies.” NATO’s ACT. January 31, 2024. https://www.act.nato.int/article/the-convergence-of-emerging-technologies/.
- Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. 2016. Deep Learning. MIT Press.
- Gulhane, Amit, and Tejaswi Singh. 2018. “8 Key Military Applications for Artificial Intelligence.” MarketResearch.com Blog. https://blog.marketresearch.com/8-key-military-applications-for-artificial-intelligence-in-2018.
- Gunning, David. 2019. “DARPA’s Explainable Artificial Intelligence (XAI) Program.” In Proceedings of the 24th International Conference on Intelligent User Interfaces, ii. IUI ’19. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3301275.3308446.
- Han, Song, Huizi Mao, and William J. Dally. 2015. “Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.” arXiv. https://doi.org/10.48550/arXiv.1510.00149.
- Horowitz, Michael, and Paul Scharre. 2020. “AI and International Stability: Risks and Confidence-Building Measures.” CNAS. 2020. https://www.cnas.org/publications/reports/ai-and-international-stability-risks-and-confidence-building-measures.
- ICRC. 2024. “Weapons and International Humanitarian Law.” https://rcrcconference.org/app/uploads/2024/10/CoD24_R3-Res-Weapons-and-IHL-EN.pdf.
- Kosko, Bart. 1986. “Fuzzy Cognitive Maps.” International Journal of Man-Machine Studies 24 (1): 65–75. https://doi.org/10.1016/S0020-7373(86)80040-2.
- Kurakin, Alexey, Ian Goodfellow, and Samy Bengio. 2017. “Adversarial Machine Learning at Scale.” arXiv. https://doi.org/10.48550/arXiv.1611.01236.
- Li, Huiqin, Yanling Li, Chuan He, Jianwei Zhan, and Hui Zhang. 2021. “Cognitive Electronic Jamming Decision-Making Method Based on Improved Q -Learning Algorithm.” International Journal of Aerospace Engineering 2021 (December):1–12. https://doi.org/10.1155/2021/8647386.
- Liao, Xiaofeng, Bo Li, and Bo Yang. 2018. “A Novel Classification and Identification Scheme of Emitter Signals Based on Ward’s Clustering and Probabilistic Neural Networks with Correlation Analysis.” Computational Intelligence and Neuroscience 2018 (November):e1458962. https://doi.org/10.1155/2018/1458962.
- Márquez-Díaz, Jairo Eduardo. 2024. “Benefits and Challenges of Military Artificial Intelligence in the Field of Defense.” Computación y Sistemas 28 (2): 309–23. https://doi.org/10.13053/cys-28-2-4684.
- Microwaves & RF. 2019. “BAE Bets on Use of Artificial Intelligence in Electronic Warfare.” July 15, 2019. https://www.mwrf.com/markets/defense/article/21849838/bae-systems-bae-bets-on-use-of-artificial-intelligence-in-electronic-warfare.
- Papernot, Nicolas, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. 2017. “Practical Black-Box Attacks against Machine Learning.” arXiv. https://doi.org/10.48550/arXiv.1602.02697.
- Papernot, Nicolas, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z. Berkay Celik, and Ananthram Swami. 2015. “The Limitations of Deep Learning in Adversarial Settings.” arXiv. https://doi.org/10.48550/arXiv.1511.07528.
- Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. 2016. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” arXiv. https://doi.org/10.48550/arXiv.1602.04938.
- Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. W. W. Norton & Company.
- Schmitt, Michael N. 2017. Tallinn Manual 2.0 on the International Law Applicable to Cyber Operations. Cambridge University Press.
- Sharma, Purabi, Kandarpa Kumar Sarma, and Nikos E. Mastorakis. 2020. “Artificial Intelligence Aided Electronic Warfare Systems- Recent Trends and Evolving Applications.” IEEE Access 8:224761–80. https://doi.org/10.1109/ACCESS.2020.3044453.
- Szegedy, Christian, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. “Intriguing Properties of Neural Networks.” arXiv. https://doi.org/10.48550/arXiv.1312.6199.
- Taddeo, Mariarosaria, and Luciano Floridi. 2018. “Regulate Artificial Intelligence to Avert Cyber Arms Race.” Nature 556 (7701): 296–98. https://doi.org/10.1038/d41586-018-04602-6.
- Tegmark, Max. 2018. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Vintage Books.
- UNIDIR. 2019. “The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk, Volume I, Euro-Atlantic Perspectives.” SIPRI. https://www.sipri.org/publications/2019/research-reports/impact-artificial-intelligence-strategic-stability-and-nuclear-risk-volume-i-euro-atlantic.
- Varshney, Kush R., and Homa Alemzadeh. 2017. “On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products.” arXiv. https://doi.org/10.48550/arXiv.1610.01256.
- Watkins, Christopher J. C. H., and Peter Dayan. 1992. “Q-Learning.” Machine Learning 8 (3): 279–92. https://doi.org/10.1007/BF00992698.
- Xu, Jianliang, Huaxun Lou, Weifeng Zhang, and Gaoli Sang. 2020. “An Intelligent Anti-Jamming Scheme for Cognitive Radio Based on Deep Reinforcement Learning.” IEEE Access 8 (January):202563–72. https://doi.org/10.1109/ACCESS.2020.3036027.
- Zhang, Ming, Ming Diao, and Limin Guo. 2017. “Convolutional Neural Networks for Automatic Cognitive Radio Waveform Recognition.” IEEE Access 5:11074–82. https://doi.org/10.1109/ACCESS.2017.2716191.
- Zhang, Wenxu, Tong Zhao, Zhongkai Zhao, Dan Ma, and Feiran Liu. 2023. “Performance Analysis of Deep Reinforcement Learning-Based Intelligent Cooperative Jamming Method Confronting Multi-Functional Networked Radar.” Signal Processing 207 (June):108965. https://doi.org/10.1016/j.sigpro.2023.108965.
Open Access article distributed under the terms of the Creative Commons Attribution-ShareAlike 4.0 license (CC BY-SA 4.0) (https://creativecommons.org/licenses/by-sa/4.0/).