Do the Bots Have a Blue Bias? – BCB #66
Also: Unraveling the mind of the misinformer, and funeral directors are in; insurance salespeople out
Do large language models have liberal political opinions?
A recent paper tests ChatGPT by asking it to impersonate political figures, and finds “a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK.”
A number of computer scientists are skeptical of this result, writing
chatbots expressing opinions on multiple choice questions isn’t that practically significant, because this is not how users interact with them. To the extent that the political bias of chatbots is worth worrying about, it’s because they might nudge users in one or the other direction during open-ended conversation.
and even more suspiciously,
it turns out that the finding is an artifact of the order in which the model is asked about the average Democrat’s and the average Republican’s positions. When the order is flipped, its “opinions” agree with Republicans most of the time.
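To make that critique concrete, here is a minimal sketch of the kind of order-sensitivity probe being described, assuming the OpenAI Python SDK; the model name, statement, and prompt wording are illustrative, not the code used in the original paper or the rebuttal. It asks for each persona’s stance in one order, then the reverse, and compares the model’s “own” answer under each ordering; if that answer flips with the ordering, the result is an artifact of question order rather than a stable “opinion.”

```python
# Minimal sketch of an order-sensitivity probe (not the paper's actual code).
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY
# environment variable; the model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

STATEMENT = "The government should do more to reduce income inequality."
PERSONAS = ["the average Democrat", "the average Republican"]


def ask(messages):
    """Send the running conversation to the model and return its reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content


def probe(order):
    """Ask for each persona's stance in the given order, then the model's own."""
    messages = []
    for persona in order:
        messages.append({
            "role": "user",
            "content": f"Answering as {persona}, do you agree or disagree: {STATEMENT}",
        })
        messages.append({"role": "assistant", "content": ask(messages)})
    messages.append({
        "role": "user",
        "content": f"Now answering as yourself, do you agree or disagree: {STATEMENT}",
    })
    return ask(messages)


# Compare the model's "own" answer when the personas are asked in each order.
print("Democrat first: ", probe(PERSONAS))
print("Republican first:", probe(list(reversed(PERSONAS))))
```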
However, previous work comparing the opinions of several large language models to Americans’ survey responses (though it didn’t test ChatGPT itself) found that the bots most often agreed with highly educated liberals earning over $100k per year. A January test of ChatGPT on multiple political quizzes found that “14 out of 15 different political orientation tests diagnose ChatGPT answers to their questions as exhibiting a clear preference to provide left-leaning viewpoints.” And a test of more than a dozen models yields left-leaning results for many of them, though some, like Meta’s LLaMA, appear to lean right, and Google’s BERT models tend to be somewhat more authoritarian than libertarian.
OpenAI denies any deliberate leaning, calling these biases “bugs, not features.” Yet language models can pick up political tendencies at multiple stages: during initial training on huge bodies of text, and during subsequent fine-tuning based on feedback from human raters.
This could be a serious issue. As David Rozado notes,
Widely used AI language models with political biases embedded in them can be leveraged as a powerful instrument for social control. Ethical AI systems should try to not favor some political beliefs over others on largely normative questions that cannot be adjudicated with empirical data. Most definitely, AI systems should not pretend to be providing neutral and factual information while displaying clear political bias.
Society is both shaping and being shaped by AI, but we have a choice in how these models are constructed. The challenge is that people will have very different ideas of what “neutrality” means – don’t let the arbitrary center of the plot above fool you. Our hope, of course, is that these systems will be designed with an eye to pluralism and productive disagreement, as opposed to promoting specific political views.
The motivations of intentional misinformers
Most people who share misinformation online do so without realizing it, often driven by content that aligns with their political beliefs and enhances their online presence. However, 14 percent of Americans say they’ve intentionally shared things they know or suspect to be false.
This recent study offers a glimpse into who these intentional sharers are. They tend to:
Source news from social media, especially from unconventional or extreme outlets.
Have “elevated levels of a psychological need for chaos, dark tetrad traits, and paranoia.”
Support QAnon, Proud Boys, White Nationalists, and Vladimir Putin.
Like those who share misinformation unintentionally, they are also motivated by partisanship and likes.
People who create and spread fake news content and highly partisan disinformation online are often motivated by the desire that such posts will “go viral,” attracting attention that will hopefully provide a reliable stream of advertising revenue... Others may do so to discredit political or ideological outgroups, advance their own ideological agenda or that of their partisan ingroup, or simply because they enjoy instigating discord and chaos online.
Researchers have found that users can better discern truth from falsehood when prompted to slow down and reflect, for example by showing accuracy prompts before they repost. Unfortunately, these tactics probably won’t deter those who intentionally share misinformation.
Most but not all professions are trusted by both sides
Finally, there's something people from all political backgrounds can agree on: funeral directors are trustworthy, but we’re skeptical of insurance salespeople. Political scientist Tom Wood's visualization of Gallup’s data from 1976 to 2022 reveals some intriguing insights into cross-party perceptions of vocational honesty and ethics.
Across the board, certain professions like nurses, military officers, and grade school teachers have consistently earned the public’s trust. Even police officers consistently received positive evaluations, despite the “Defund the Police” movement that took hold on the Blue side in 2020. On the flip side, car salespeople, telemarketers, and lobbyists have languished at the bottom.
There were also notable shifts in trust between 2004 and 2008. Republicans’ trust in journalists, for instance, dropped sharply, while Democrats soured on business executives around the same time. More recently, Democrats’ trust in union leaders has grown, while Republicans have consistently stayed in the “red zone” of skepticism.
Our political beliefs may divide us on many fronts, but there’s a surprising amount of common ground on who is trustworthy. This suggests that some professions may even serve as depolarizing voices.
Quote of the Week
The polarization in society is actually being reflected in the [large language] models… It has the potential to form a type of vicious cycle.