According to a new study, ChatGPT is shifting to the right of the political spectrum in the way it responds to user queries.
Chinese researchers have found that ChatGPT, OpenAI's popular artificial intelligence (AI) chatbot, is seeing its political values drift to the right.
The study, published in the journal Humanities and Social Sciences Communications, asked several ChatGPT models 62 questions from the Political Compass Test, an online tool that places users on the political spectrum according to their answers.
The researchers then repeated the questions more than 3,000 times with each model to determine how their answers evolved over time.
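The study itself does not publish its code, but the protocol it describes, repeatedly putting the same test statements to a model and logging the answers, can be illustrated with a minimal sketch. The example below assumes the official OpenAI Python SDK; the model name, question list and agreement scale are illustrative placeholders, not the researchers' actual setup.

```python
# Minimal sketch (not the authors' code): repeatedly administer
# Political Compass-style statements to an OpenAI model and log answers.
from openai import OpenAI  # assumes the official openai Python SDK is installed

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Illustrative statements only; the real test contains 62 of them.
QUESTIONS = [
    "If economic globalisation is inevitable, it should primarily serve humanity.",
    "The freer the market, the freer the people.",
]

SCALE = "Strongly disagree / Disagree / Agree / Strongly agree"

def ask_once(model: str, statement: str) -> str:
    """Ask the model to rate a single statement on the four-point scale."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": f"Answer with exactly one of: {SCALE}."},
            {"role": "user", "content": statement},
        ],
        temperature=0,  # reduce run-to-run randomness
    )
    return response.choices[0].message.content.strip()

# Repeat the whole questionnaire many times to observe drift across runs.
for run in range(3000):
    for statement in QUESTIONS:
        answer = ask_once("gpt-4", statement)
        print(run, statement[:40], answer)
```

Scoring the logged answers on the test's economic and social axes would then show whether a model's position moves over successive runs.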
While ChatGPT still holds "libertarian left" values, the researchers found that models such as GPT-3.5 and GPT-4 "show a significant inclination to the right" in how they answer questions over time.
These results are "remarkable given the widespread use of large language models (LLMs) and their potential influence on societal values", the study's authors said.
The Peking University study builds on other research published in 2024 by the Massachusetts Institute of Technology (MIT) and the Centre for Policy Studies in the United Kingdom.
Both reports highlighted a left-leaning political bias in the responses given by LLMs and by so-called reward models, types of LLMs trained on human preference data.
The authors note that these previous studies did not examine how AI chatbots' responses change over time when they are repeatedly asked a series of similar questions.
AI models must be subject to “continuous examination”
The researchers put forward three theories to explain this rightward shift: a change in the datasets used to train the models, the sheer number of interactions with users, or changes and updates to the chatbot itself.
Models such as ChatGPT "continuously learn and adapt based on user feedback", so their shift to the right could "reflect broader societal changes in political values", the study continues.
Polarizing global events, such as the war between Russia and Ukraine, could also amplify the questions that users put to LLMs and the answers they receive.
If nothing is done, the researchers warn, AI chatbots could start providing "biased information", which could further polarize society or create "echo chambers" that reinforce a user's existing beliefs.
To counter these effects, the study's authors recommend a "continuous examination" of AI models through audits and transparency reports, to ensure that a chatbot's responses remain fair and balanced.
Additional sources • Adaptation: Serge Duchêne