As algorithms determine people's fates, the Gaza Strip has become a testing ground for the power of artificial intelligence in war, and for how this advanced technology is used in decisions of life and death.
Artificial intelligence has become part of increasingly brutal and complex military conflicts. Over the past months, human rights and media reports have documented Israel's use of advanced artificial intelligence systems in its war on the Gaza Strip, raising serious concern about the repercussions of these developments for global peace and security.
As reported in the "Smart Life" programme broadcast on Tel Aviv Tribune 360, Gaza is today a living laboratory for new technologies, where artificial intelligence is used to analyse data, recognise faces and identify targets: tools that were not available in this form in previous conflicts.
Systems such as Lavender and Gospel analyse massive amounts of personal data and generate target lists for airstrikes, a practice linked to a significant rise in civilian casualties.
As deployed by Israel in Gaza, these AI systems raise ethical and legal questions about their accuracy and transparency. A widely discussed report in the Israeli press revealed that an Israeli soldier spends less than 20 seconds reviewing a target produced by the Lavender system before pressing the button to approve a strike.
The use of artificial intelligence has not been limited to identifying targets; it also extends to comprehensive surveillance and analysis of displaced Palestinians' biometric data.
The Israeli occupation army uses advanced facial recognition systems, built on technologies developed by Israeli companies in cooperation with Google Photos, to track Palestinians. This technology can identify individuals even in crowded areas or from unclear images captured by drones.
On the basis of data that may be inaccurate, the Israeli occupation army arrests, and in some cases kills, Palestinians.
The dangers of AI technologies are not limited to military use. The role played by major technology companies in this context has raised growing concern. Companies such as Google and Amazon have supplied cloud services and advanced machine-learning technologies to the Israeli government through contracts presented as projects to improve urban life. This has prompted questions about these companies' responsibility for human rights violations.
In war, artificial intelligence emerges as a technological tool that legitimises atrocities and intensifies the cruelty of oppression and genocide.
The full episode is available on the Tel Aviv Tribune 360 platform.