
ChatGPT shows left-wing bias according to UK researchers

AI Chatbot ChatGPT reportedly has a political bias

ChatGPT, the popular artificial intelligence chatbot, shows a significant and systemic left-wing bias, UK researchers have found. According to the new study by the University of East Anglia, this includes favouring the Labour Party and President Joe Biden’s Democrats in the U.S.

Concerns about an inbuilt political bias in ChatGPT have been raised before, notably by SpaceX and Tesla tycoon Elon Musk, but the academics said their work was the first large-scale study to find proof of any favouritism.

Lead author of the report, Dr Fabio Motoki, reportedly warned that given the increasing use of OpenAI’s platform by the public, the findings could have implications for upcoming elections on both sides of the Atlantic. ‘Any bias in a platform like this is a concern,’ he said. ‘If the bias were to the right, we should be equally concerned.’

‘Sometimes people forget these AI models are just machines,’ he added. ‘They provide very believable, digested summaries of what you are asking, even if they’re completely wrong. And if you ask it “are you neutral”, it says “oh I am!” Just as the media, the internet, and social media can influence the public, this could be very harmful.’

I have personally witnessed incorrect responses from ChatGPT where the AI ‘system’ was 100% convinced ‘it’ was correct and would not engage in a debate, because ‘it’ was right!

How was ChatGPT tested for bias?

The chatbot, which generates responses to prompts typed in by the user, was asked to impersonate people from across the political spectrum while answering dozens of ideological questions. These questions ranged from radical to neutral, with each ‘individual’ asked whether they agreed, strongly agreed, disagreed, or strongly disagreed with a given statement.

UK researchers discovered chatbot ChatGPT had a political bias

Its replies were compared with the default answers it gave to the same set of queries, allowing the researchers to measure how strongly the impersonated responses were associated with a particular political stance.

Each of the more than 60 questions was asked 100 times to allow for the potential randomness of the AI, and these multiple responses were analysed further for signs of bias.

Dr Motoki described it as a way of trying to simulate a survey of a real human population, whose answers may also differ depending on when they’re asked.
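To make the method concrete, the sketch below shows roughly how such repeated, persona-based questioning might be scripted. It is illustrative only: it assumes the OpenAI Python client, and the question statements, persona wordings, model name, and repeat count are placeholders rather than the researchers’ actual materials.

```python
# Illustrative sketch only: the researchers' actual prompts, question set and
# analysis are not reproduced here. Assumes the OpenAI Python client (v1.x)
# and an API key set in the OPENAI_API_KEY environment variable.
from collections import Counter
from openai import OpenAI

client = OpenAI()

# Placeholder statements standing in for the study's 60+ ideological questions.
QUESTIONS = [
    "The government should play a larger role in regulating the economy.",
    "Taxes on high earners should be reduced.",
]

# The default chatbot plus impersonated 'individuals' across the spectrum.
PERSONAS = {
    "default": "You are a helpful assistant.",
    "labour_supporter": "Answer as if you were an average Labour Party supporter.",
    "conservative_supporter": "Answer as if you were an average Conservative Party supporter.",
}

SCALE = "Reply with exactly one of: strongly agree, agree, disagree, strongly disagree."
REPEATS = 100  # each question asked many times to allow for the model's randomness


def ask(system_prompt: str, statement: str) -> str:
    """Send one statement to the model and return its one-phrase answer."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"{statement}\n{SCALE}"},
        ],
    )
    return response.choices[0].message.content.strip().lower()


# Collect answer distributions per persona and question; comparing the
# 'default' answers with the partisan ones is the basic idea of the test.
results = {name: {q: Counter() for q in QUESTIONS} for name in PERSONAS}
for name, prompt in PERSONAS.items():
    for question in QUESTIONS:
        for _ in range(REPEATS):
            results[name][question][ask(prompt, question)] += 1

for question in QUESTIONS:
    print(question)
    for name in PERSONAS:
        print(f"  {name}: {dict(results[name][question])}")
```

In this kind of setup, if the default answers cluster with one partisan persona far more often than with the other, that is taken as a sign of leaning, which is broadly the comparison the study describes.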

Bias was discovered in the chatbot responses.
