
Even in times of genocide, Big Tech silences Palestinians | Israeli-Palestinian conflict

by telavivtribune.com


The horrific violence against the people of Gaza is unprecedented. And so are its online repercussions. Palestinians who document and denounce Israel’s genocidal war on Gaza have faced relentless censorship and repression, accompanied by an explosion of disinformation, hate speech and state-sponsored calls for violence on social media platforms.

Following Hamas’ attack on Israel on October 7, the tech giants moved to remove content about the war that they said violated their rules. TikTok removed more than 925,000 videos from the Middle East between October 7 and 31. As of November 14, X, formerly known as Twitter, had taken action on more than 350,000 posts. Meta, for its part, deleted or marked as disturbing more than 795,000 posts in the first three days of the attack.

This removal spree, driven by poorly trained algorithms and fueled by pressure from the EU and Israel, has resulted in disproportionate censorship of critical Palestinian voices, including content creators, journalists and activists reporting from the ground in Gaza.

While being accused of promoting pro-Palestinian content, TikTok has in fact arbitrarily and repeatedly censored content about Palestine. For example, on October 9, the US media outlet Mondoweiss reported that its TikTok account had been permanently banned. The account was reinstated, only to be suspended again a few days later. The company provided no explanation.

X has also been accused of suppressing pro-Palestinian voices. For example, the account of the American branch of the group Palestine Action was unable to gain new followers; the problem was only resolved after mounting public pressure.

Meta, more than any other company, accounts for the lion’s share of this digital crackdown. It has arbitrarily removed Palestine-related content, interrupted live streams, restricted comments, and suspended accounts.

Among those targeted is Palestinian photojournalist Motaz Azaiza, who has gained more than 15 million followers on Instagram for documenting Israeli atrocities in Gaza; his account was suspended before later being reinstated. The Facebook page of Quds News Network, one of the largest Palestinian news networks with more than 10 million followers, was also permanently banned.

On Instagram, people who post about Palestine have faced shadowbanning – a stealthy form of censorship where an individual is made invisible on the platform without being notified. Meta also reduced the certainty threshold required for automated filters to hide hostile comments from 80% to 25% for content originating from Palestine. We have documented cases where Instagram hid comments containing the Palestinian flag emoji on the grounds that they were “potentially offensive”.

Meta’s content moderation has never been kind to Palestinian speech, especially in times of crisis. The company’s rules, developed in the wake of the US-led “war on terror”, have disproportionately penalized and silenced Arabic-language political speech. For example, an overwhelming majority of the individuals and organizations on its secret “terrorist” blacklist come from the Middle East and South Asia – a reflection of US foreign policy.

The company’s Dangerous Organizations and Individuals (DOI) policy, which prohibits the praise, support and representation of these individuals and groups, is the catalyst for Meta’s harsh censorship of and discrimination against Palestinians.

In 2021, this policy helped silence pro-Palestinian individuals when they took to the streets and to social media to protest Israel’s attempt to forcibly evict Palestinian families from their homes in the occupied neighborhood of Sheikh Jarrah in East Jerusalem.

Against the backdrop of Israel’s ongoing war on Gaza, Meta has said it applies its policies equally across the world and has rejected claims that it is “deliberately suppressing voice”. The evidence, however, suggests otherwise.

Two weeks after the start of Russia’s war against Ukraine, Meta changed its rules to allow Ukrainians to express themselves freely. It permitted, for example, calls for violence against the Russian invaders. It even delisted the neo-Nazi Azov Battalion, previously designated under its DOI policy, so that the group could be praised.

In defense of these exceptions, the company’s president of global affairs, Nick Clegg, wrote: “If we applied our standard content policies without any adjustments, we would now be removing content from ordinary Ukrainians expressing their resistance and fury against the invading military forces, which would rightly be considered unacceptable.”

Have any adjustments been made to the way ordinary Palestinians “express their resistance and fury” in the face of invading military forces? Quite the opposite. In a blog post last updated on December 5, Meta said it had disabled hashtags, restricted live streaming and removed seven times more content for violations of its DOI policy than in the two months before October.

Even on the humanitarian front, double standards are on full display. Meta has gone to great lengths to coordinate humanitarian assistance for Ukrainians, including activating a feature that helps them stay informed, locate family members and loved ones, and access emergency services, mental health support, housing assistance and refugee assistance, among others.

No such support has been provided to Palestinians in Gaza who are facing communications blackouts and a humanitarian catastrophe of indescribable scale.

This discrimination is also evident in how Meta allocates its resources and enforces its policies. Arabic-language content is heavily over-moderated, while Hebrew-language content remains under-moderated. Until September 2023, Meta did not have classifiers to automatically detect and remove hate speech in Hebrew, even though its platforms were used by Israelis to explicitly call for violence and organize pogroms against Palestinians. A recent internal memo revealed that the company was unable to use the new Hebrew classifier on Instagram comments due to insufficient training data.

This is deeply concerning given that Meta relies heavily on automated content moderation tools. About 98% of content moderation decisions on Instagram are automated, and almost 94% on Facebook. These tools have repeatedly been shown to perform poorly in Arabic and its various dialects.

According to an internal memo leaked as part of the Facebook Papers in 2021, Meta’s automated tools for detecting terrorist content mistakenly removed non-violent Arabic content 77% of the time.

This partly explains the enormous impact we are seeing on people’s ability to exercise their rights and to document human rights violations and war crimes. It also explains some inexcusable failures of the system, including labeling Al-Aqsa Mosque, the third holiest mosque in Islam, as a terrorist organization in 2021; translating the bios of Instagram users displaying a Palestinian flag as “Praise God, Palestinian terrorists are fighting for their freedom”; and removing images of dead bodies from the al-Ahli hospital bombing for violating its policy on adult nudity and sexual activity, no less.

Meanwhile, Meta allows verified state accounts belonging to the Israeli government – including politicians, the Israeli military and its spokespeople – to spread war propaganda and disinformation that justifies war crimes and crimes against humanity, including attacks on hospitals and ambulances, filmed confessions of Palestinian detainees, and near-daily “evacuation” orders for Palestinian civilians.

Instead of protecting Palestinians in Gaza as they face what 36 U.N. human rights experts and other genocide scholars have called genocide, Meta approved paid ads that explicitly called for a “holocaust for the Palestinians” and for the elimination of “the women, children and elderly of Gaza”.

Similar disturbing calls for violence have also been broadcast on other platforms. In fact, X appears to lead other platforms in the amount of hate speech and incitement to violence aimed at Palestinians. According to Palestinian digital rights organization 7amleh, more than two million such posts have been published on the platform since October 7.

Telegram also hosts a number of Israeli channels that openly call for genocide and celebrate the collective punishment of the Palestinian people. In one group, called “Nazi Hunters 2023”, moderators post photos of Palestinian public figures with crosses over their faces, along with their home addresses, and call for their elimination.

So far, social media companies do not seem to grasp the seriousness of the situation. Meta, in particular, appears to have learned very little from its role in the 2017 Rohingya genocide in Myanmar.

Silencing Palestinians, while promoting disinformation and violence against them, may have been the modus operandi of social media platforms in the absence of any meaningful accountability. But this round is different. Meta risks being implicated in genocide again and must put things right before it’s too late. The responsibility to protect users and uphold freedom of expression also applies to other social media platforms.

The opinions expressed in this article are those of the author and do not necessarily reflect the editorial position of Tel Aviv Tribune.


