In the third decade of the twenty-first century, artificial intelligence is no longer just a tool in daily life; it has become a major actor on the battlefield, where autonomous systems make life-and-death decisions without direct human oversight.
This phenomenon, known as “algorithmic killing”, represents a qualitative shift in the history of warfare, raising existential questions about the future of armed conflict, the limits of human decision-making, and the ethics of war in the era of digital transformation.
It is estimated that more than 30 countries are currently developing autonomous weapons systems. These systems rest on three pillars: multi-source sensor networks, machine-learning algorithms capable of analyzing patterns and predicting behavior, and autonomous execution systems that make decisions without referring back to a human commander or operator.
What distinguishes these systems is their ability to operate in complex environments where it is difficult for a human to make accurate decisions at the required speed, as in counter-terrorism operations or high-intensity urban combat.
Perhaps the most prominent examples of these technologies in use were the precision operations that targeted senior Iranian figures in mid-2025.
Evidence has shown that the platforms used were not merely conventional drones but intelligent combat systems capable of tracking targets for weeks, collecting and analyzing vast quantities of data, and choosing the optimal moment to carry out a strike based on precise calculations that account for variables such as location, weather, civilian movement, and even the expected media impact. The accuracy of some of these operations reportedly reached about 92%, an indication of how far this technology has developed.
The most dangerous aspect of these systems lies in their ability to learn continuously. They do not operate according to a fixed program; they evolve over time, modifying their decisions based on previous experience and new data. This makes them more efficient, but it also makes their behavior less predictable, and it may even change radically in a short period.
Today’s algorithm may behave completely differently a week later, even though the task is the same. Here a deep ethical and legal problem arises: who bears responsibility if these systems make a mistake that kills civilians? Is it the programmer? The operator? The state? Or has artificial intelligence become a new legal actor with neither identity nor responsibility?
In this context, the Israeli experience is a prominent example of the transition from traditional deterrence to what might be called “algorithmic deterrence”. By integrating the capabilities of Unit 8200, which specializes in electronic warfare, with startups in data analysis and behavioral forecasting, systems have been developed that can monitor and analyze threats and carry out surgical assassinations before those threats crystallize into an actual danger.
This strategy does not aim to respond to an attack but to prevent the threat from emerging in the first place, through something resembling “computational killing”.
Notably, such operations require no direct human presence in the field; they are managed from highly equipped digital centers, where the target is tracked and the “targeting conditions” are verified before the strike is executed within a fraction of a second.
This transformation is not exclusive to Israel or the United States. China, Russia, and Turkey, along with emerging regional powers, have entered the race to develop algorithmic command and combat systems.
In some cases, artificial intelligence networks have been built that can coordinate autonomous combat units on land, in the air, and at sea without direct human supervision, based on real-time analysis of data from multiple sensors and live intelligence sources.
These capabilities make military decisions faster than any human response, but in return they pose grave dangers: what if competing algorithms clash in the field? Could a war erupt because of a computational error? And what happens when the decision to attack rests with a system that understands neither diplomacy nor intent?
More dangerous still is the transfer of this technology to non-state actors. With the spread of open-source programming tools and the falling cost of drones, an armed group, or even an individual with technical skills, can design a crude algorithm that targets a specific adversary based on a faceprint or a digital signature.
This trend opens the door to the democratization of digital killing: war is no longer the exclusive domain of armies but an open arena for mercenaries, hackers, and rogue actors.
War is no longer only physical; it has become psychological and informational. In modern operations, cyberattacks powered by artificial intelligence aim to destroy morale by spreading disinformation, fabricating fake images and recordings, and using fake accounts to sow confusion and doubt within enemy ranks.
It is a “soft war” that strikes the target’s awareness before his body, and reshapes the political and security decision-making environment from within.
All of these developments occur in the absence of a clear international legal framework that regulates the use of deadly algorithms.
The current agreements, especially the Geneva Conventions, were drafted at a time when war was a purely human act. Today there is no binding agreement that regulates the use of lethal autonomous systems, obliges states to disclose their combat algorithms, or even holds developers accountable.
There are calls to establish a “Digital Geneva Convention”, but so far the major powers refuse to subject these technologies to any restriction that could limit their strategic superiority.
Despite attempts by some researchers to embed moral values in algorithms, their failure to capture human complexity keeps these attempts limited.
An algorithm does not understand the difference between a child and a combatant hiding among civilians; it analyzes probabilities and executes when a certain “threat” threshold is exceeded. Ethics, in this case, becomes a mathematical variable rather than a human principle.
In this new world, the human being becomes a variable in an equation: no longer the one who makes the decision, but the one who bears its consequences. A person’s end may be calculated in a predictive report that only artificial intelligence reads.
Here lies the most dangerous dilemma: if we do not set clear limits on what the machine may do, we will find ourselves living in a time when killing is managed at the press of a button, without memory, without regret, and without anyone responsible.
Humanity has moved beyond the stage of smart guided weapons into the era of weapons that think and decide on their own. In the Russian-Ukrainian conflict, for example, drones were not merely vehicles for delivering explosives; they became autonomous combat systems capable of analyzing the battlefield environment and making tactical decisions without human intervention.
This radical transformation raises existential questions: who holds true sovereignty when war decisions pass from military commanders to algorithms?
Cyberwar adds another dimension to the problem. Algorithms are no longer limited to physical killing; they now extend to character assassination. Deepfake techniques allow frighteningly realistic fabrication of audio and video, which can be used to destroy the reputations of public figures or spread chaos in societies.
In Iran, we have seen how these technologies can become deadly psychological weapons, capable of undermining social stability without a single bullet being fired.
The future raises even more worrying scenarios: what if military artificial intelligence systems evolve to the point of managing entire strategies without human intervention? What if these systems begin to develop tactics of their own that conflict with human political goals?
The real danger lies not only in the precision of these weapons but in the loss of control over them.
In conclusion, what we face is not merely a technical development but a turning point in the evolution of humanity itself. Deadly algorithms force us to redefine the relationship between human and machine, between power and responsibility, and between war and justice.
If the international community does not move quickly to formulate new rules restricting this force, the coming wars will not be between armies but between algorithms, and we, quite simply, will be digital targets.
The opinions in this article do not necessarily reflect the editorial position of Al Jazeera.
