Are you chatting with an AI-powered pro-Israel superbot?


By the end of 2023, almost half of all internet traffic was bots, according to a study by US cybersecurity firm Imperva.

Bad bots reached the highest level Imperva has recorded, accounting for 34% of internet traffic, while good bots made up the remaining 15%.

This is partly due to the growing popularity of artificial intelligence (AI) to generate text and images.

According to Baydoun, the pro-Israeli bots they found are primarily aimed at sowing doubt and confusion about a pro-Palestinian narrative rather than getting social media users to trust them.

Bot armies – thousands or even millions of malicious bots – are used in large-scale disinformation campaigns to influence public opinion.

As bots become more advanced, it becomes more difficult to differentiate between bot content and human content.

“The ability of AI to create these larger bot networks… has a hugely deleterious effect on truthful communication, but also on freedom of expression, because they have the ability to drown out human voices,” said Jillian York, Director for International Freedom of Expression at the Electronic Frontier Foundation, a non-profit digital rights group.

The evolution of bots

The first bots were very simple, operating according to predefined rules rather than the sophisticated AI techniques used today.

In the early to mid-2000s, as social networks like MySpace and Facebook grew, social media bots became popular because they could automate tasks such as quickly adding “friends”, creating user accounts and automating posts.

These early bots had limited language-processing capabilities: they understood and responded to only a narrow range of commands or predefined keywords.

“Before, online bots, especially in the mid-2010s… mostly regurgitated the same text, over and over again. The text… would very obviously be written by a bot,” Semaan said.

In the 2010s, rapid advances in natural language processing (NLP), a branch of AI that allows computers to understand and generate human language, enabled bots to do more.

During the 2016 US presidential election between Donald Trump and Hillary Clinton, a study by researchers at the University of Pennsylvania found that a third of pro-Trump tweets and almost a fifth of pro-Clinton tweets during the first two debates came from bots.

Then a more advanced type of NLP emerged: large language models (LLMs), which use billions or even trillions of parameters to generate human-like text.
