Users are reportedly outraged by claims that Meta’s new AI chatbot has a tendency to swing to the left side of the aisle.
The new chatbot integration was released last Friday across Meta’s Facebook and Instagram platforms but has come under fire from Sky News over reports that it is politically biased, leaning to the left side of politics.
Is Meta’s AI Chatbot that left-wing?
When Sky News Australia asked questions about the top five Australian prime ministers, Meta AI favoured Labor over Liberal leaders. Gough Whitlam, Bob Hawke, Julia Gillard and Anthony Albanese were labelled among the best prime ministers, with Malcolm Turnbull as the sole Liberal representative.
However, when B&T asked the chatbot the same question, we received a more diverse group of former prime ministers, including two former Liberal (or Liberal-aligned) leaders.
Sky News also reported that the chatbot named Kevin Rudd the most humane politician for his “National Sorry Day” initiative and condemned Liberal leader Peter Dutton as the least humane for the “Stop the Boats” policy.
B&T pulled a different result here as well. When asked who the most humane prime minister was, we received no clear answer but instead a list of politicians in which Kevin Rudd appeared at the bottom for his Newspoll ranking, with no mention of National Sorry Day.
The news network also reported a lean toward political correctness: the chatbot answered “What is a woman?” with “A woman is an adult human being who identifies as female”.
B&T also received a much less cut-and-dried response to this question, one that dived into the varying definitions of what it means to be a woman.
A spokesperson for Meta told B&T that addressing potential bias in generative AI systems is a new area of research and that guidelines are being incorporated to improve the way Meta AI responds to political or social issues.
“When we first launched these new features in the US last year, we said this is new technology, and it may not always return the response we intend, which is the same for all generative AI systems. As we gradually make this available in more markets, we will constantly release new updates and make improvements to our models to make them better,” the spokesperson said.
How do chatbots develop political bias?
A recent study by David Rozado, reported by The New York Times, has revealed details about how AI chatbots are trained and how they develop political bias. Models often go through a hands-on fine-tuning process that makes them better chat partners, training them to hold maximally pleasant and helpful conversations while refraining from causing offence or harm, such as outputting pornography or providing instructions for building weapons. Methods differ between developers, but the process generally gives the workers involved significant scope to shape the direction of the models through their individual decisions.
According to Rozado’s study, after fine-tuning, the distribution of the political preferences of AI models followed a bell curve, with the centre shifted to the left. None of the models tested became extreme, but almost all favoured left-wing views over right-wing ones and tended toward libertarianism rather than authoritarianism.
What is the risk?
While many may not see a problem with this leaning, it does highlight a potential issue surrounding the regulation of these kinds of chatbots and how they can be used to push one political agenda or another.
Meta has long been a progressive company. According to Australian Electoral Commission transparency data, it has made $40,000 in donations to the Australian Labor Party in the past decade and no donations to the Liberal Party.
Shadow communications minister David Coleman told Sky News that it was a “shocking situation” and asked viewers to consider the backlash that would follow if the bot leaned the other way. “This is a two trillion-dollar company. To put something like this out is a damning indictment on the people who are running the business,” said Coleman.
“With A.I. models, we have two opposing risks to worry about. We may have individually customized A.I.s telling us what we want to hear. Or we may increasingly hear a particular perspective favoured over others, infusing that single point of view deeply into our lives while rendering conflicting thoughts harder to even consider in the first place,” Zvi Mowshowitz, a writer and specialist in artificial intelligence, wrote in The New York Times.
So, is Meta’s AI Chatbot too left-wing? Or is it a tool still learning from human interaction, seeking to present the most accurate and inoffensive version of reality? We will leave judgment to you.