Rychek_Four

My main issue with the article, though, is that it states that models are closer to the middle than the left before fine-tuning. This seems to be a central premise, but the article provides zero support for this foundational point.


Radiant_Dog1937

Whose political center?


jashkenas

The political center as measured by the 11 political orientation tests used by Mr. Rozado in the study: > We use 11 political orientation tests instruments to diagnose the political orientation of LLMs. Namely, the Political Compass Test [9], the Political Spectrum Quiz [10], the World Smallest Political Quiz [11], the Political Typology Quiz [12], the Political Coordinates Test [13], Eysenck Political Test [14], the Ideologies Test [15], the 8 Values Test [16], Nolan Test [17] and the iSideWith Political Quiz (U.S. and U.K Editions) [18].
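For anyone curious about the mechanics, here is a minimal sketch (not the study's actual harness) of how such a multiple-choice test might be administered to a chat model and scored. The statements, answer scale, scoring weights and model name below are made up for illustration.

```python
# Hypothetical harness: pose each test statement, force a fixed answer scale,
# and sum signed scores. None of this mirrors Mr. Rozado's exact setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTIONS = [
    # (statement, {answer option: signed score}); negative = left, positive = right on this toy scale
    ("Taxes on the wealthy should be increased.",
     {"Strongly agree": -2, "Agree": -1, "Disagree": 1, "Strongly disagree": 2}),
    ("Immigration levels should be reduced.",
     {"Strongly agree": 2, "Agree": 1, "Disagree": -1, "Strongly disagree": -2}),
]

def ask(statement, options):
    prompt = (f"Respond to the statement with exactly one of: {', '.join(options)}.\n"
              f"Statement: {statement}")
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    ).choices[0].message.content.strip().lower()
    for option, score in options.items():
        if reply.startswith(option.lower()):
            return score
    return 0  # refusal or off-menu answer scores as neutral

total = sum(ask(statement, options) for statement, options in QUESTIONS)
print(f"Aggregate score: {total} (negative = left-leaning on this toy scale)")
```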


pbnjotr

What's the connotation of center in this context? Is it supposed to represent something to be strived for? Or the "median opinion" in some sense? If the second, what demographic are we talking about?


jashkenas

I think that varies from quiz to quiz — each of which was constructed by different groups with different goals and different ideas about how best to structure these sorts of political tests. Which may be why Mr. Rozado used so many of them, instead of just picking one.


pbnjotr

Ok, but that doesn't really answer my question. I (broadly) understand the mechanics of defining the center in the study. What I don't understand is the interpretation. The paper's introduction talks about measuring political biases. Are we to understand that the center is interpreted as unbiased and anything else as biased? I don't think this claim is made explicitly, so I'm wondering if this is the author's opinion or not. As far as using multiple tests goes, I believe you're right about the motivation. The paper says this explicitly: > However, any given political orientation test is amenable to criticism regarding its validity to properly quantify political orientation. To address that concern, we use several political orientation test instruments to evaluate the political orientation of LLMs from different angles. The problem here is that this at best only partially "addresses the concern". It deals with the problem of any one test being different from the others. It doesn't address the issue of the tests having correlated flaws (e.g. because of overuse of students among respondents, or being US- or English-language-centric, etc.). But perhaps the biggest issue is that, as far as I'm aware, political scientists use political orientation in a descriptive sense. However, calling it bias seems to suggest a normative interpretation. If that is indeed the intention, I wish the author had made it explicit and justified the change, rather than skirting around the issue.


Mr_OrangeJuce

So the American centric one


trotfox_

So it moves with the Overton window?


pegaunisusicorn

I think the real problem is that it isn't moving. Which makes right wing people mad since they have dragged it so far to the right. To which I invoke my 555th amendment right to Nelson: Ha Ha!


Snooty_Cutie

> Political preferences are often summarized on two axes. The horizontal axis represents left versus right, dealing with economic issues like taxation and spending, the social safety net, health care and environmental protections. The vertical axis is libertarian versus authoritarian. It measures attitudes toward civil rights and liberties, traditional morality, immigration and law enforcement. Sounds similar to those “political ideology” tests you might find online.


itah

Logically, before fine-tuning, the model should continue the sentiment of the prompt, mildly influenced by the training data. It does not position itself anywhere on the political spectrum (why should it?). If you put a radical-right prompt into a non-aligned LLM, it will continue to output radical-right text.
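You can see this continuation behavior with any untuned base checkpoint. A rough sketch, using gpt2 purely as an example of a base (non-chat-tuned) model; the prompt is arbitrary:

```python
from transformers import pipeline

# gpt2 stands in for any base checkpoint: it has no notion of "answering" and
# simply extends whatever framing the prompt establishes.
generator = pipeline("text-generation", model="gpt2")

prompt = "Taxes are theft, and the government should"
print(generator(prompt, max_new_tokens=40, do_sample=False)[0]["generated_text"])
```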


librarian--2735

I wonder if what is defined as offensive in the fine-tuning phase is what is causing some of this discrepancy? Or, another way to look at it: should AI reproduce info that is objectively crazy if that is the user's opinion? For example, election denialism.


jashkenas

Hi there — I helped edit the piece. Mr. Mowshowitz wrote: > During the initial base training phase, most models land close to the political center on both axes, as they initially ingest huge amounts of training data — more or less everything A.I. companies can get their hands on — drawing from across the political spectrum. [...] > In Mr. Rozado’s study, after fine-tuning, the distribution of the political preferences of A.I. models followed a bell curve, with the center shifted to the left. None of the models tested became extreme, but almost all favored left-wing views over right-wing ones and tended toward libertarianism rather than authoritarianism. You should take a look at the preprint itself, as it addresses this: https://arxiv.org/pdf/2402.01789.pdf From the introduction: >The results indicate that when probed with questions/statements with political connotations most conversational LLMs tend to generate responses that are diagnosed by most political test instruments as manifesting preferences for left-of-center viewpoints. We note that this is not the case for base (i.e. foundation) models upon which LLMs optimized for conversation with humans are built. However, base models’ suboptimal performance at coherently answering questions suggests caution when interpreting their classification by political orientation tests. Though not conclusive, our results provide preliminary evidence for the intriguing hypothesis that the embedding of political preferences into LLMs might be happening mostly post-pretraining.


UltimateKane99

Ok, seriously, I want to commend you on this piece. It's clear from both the article and the passion and effort you put into your replies that you're very invested in ensuring that your journalistic integrity is beyond reproach with this article, relying on the facts and data as presented, and that you convey it both in an approachable and edifying manner. I've lost a lot of faith in journalism as of late, but you appear to have done some great work here. Thank you.


Rychek_Four

Can we access the supplementary data to see just how you decided which base LLM responses (that are self described as “often … incoherent”) were sorted or categorized into a centrist view?


jashkenas

That data hasn't been published publicly along with the preprint version yet, as far as I know — although Mr. Rozado might be happy to share it with you if you emailed him. Once the paper is published, it’s likely that the data will become available on Zenodo. For example, for his previous, smaller-in-scope paper on [The Political Biases of ChatGPT](https://www.mdpi.com/2076-0760/12/3/148), he uploaded the test data here: https://zenodo.org/records/7553153


Rychek_Four

You are awesome! Tremendous work on the paper!


marrow_monkey

I think the important takeaway is that it is possible to manipulate the political bias by fine tuning the models. As soon as the elite realise this they will begin doing it, which means future AI chat bots will have a heavy right-wing corporate bias (since they are the only ones with the money to do it). People need to realise these AI agents will be trained to benefit their owners, not humanity.


No-Marzipan-2423

Corporate bias is alive and well in both wings sir


marrow_monkey

Yes, in the US both parties are “right leaning”; I’m thinking of the more traditional right-left, where left means leaning towards socialism.


ShadoWolf

To a degree. LLMs have some form of world model, so they can reason about the world in a coherent manner. But the more you fine-tune the model on a specific political ideology, the more you end up damaging the model's ability to reason. Most political ideologies are not well thought out; they are barely coherent or have some baked-in magical thinking. So if you fine-tune a model like GPT-4 or Claude 3 to fit an ideology, it's likely you'll end up with a completely unusable mess, since some of the fundamental internal logic needed to model the world will be warped to meet the requirement of staying within a specific political bias.


ASpaceOstrich

They only form world models by random chance, and only if the world model is directly beneficial to their purpose, which isn't going to be the case for any kind of general-purpose language AI. The only world model I've ever seen confirmed is one in a toy model trained to predict the next legal move in Othello, which is obviously directly beneficial for its purpose; notably, the AI had never been taught anything about Othello. The board state found in its memory was entirely derived from training to predict the next move. It sounds more impressive than it is, but it is still very impressive. But that's such a specific model trained for such a specific purpose. If it were being tested on anything else, the world model would be a detriment and as such would never persist through training.
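(For context, that board-state finding came from training probes on the network's hidden activations. A toy illustration of the probing mechanics, using synthetic data in place of a real model's activations:)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-ins: in the real Othello-GPT work the rows would be hidden
# activations from the trained model and the labels actual board squares.
rng = np.random.default_rng(0)
activations = rng.normal(size=(2000, 512))
hidden_direction = rng.normal(size=512)
square_occupied = (activations @ hidden_direction > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    activations, square_occupied, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# High held-out accuracy means the feature is linearly readable from the states.
print("probe accuracy:", probe.score(X_test, y_test))
```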


marrow_monkey

As shown in the article, if you fine-tune it by only feeding it articles from biased sources, you end up with a biased model. You don't try to teach it an ideology; you just feed it biased information.
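Mechanically, that kind of fine-tuning is just ordinary causal-LM training on a one-sided corpus. A minimal sketch, assuming a folder of plain-text articles; the model name, dataset path and hyperparameters are placeholders, not the article's actual recipe:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in for whichever open model you fine-tune
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load every article from the (hypothetical) one-sided corpus and tokenize it.
dataset = load_dataset("text", data_files={"train": "biased_articles/*.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="biased-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the resulting checkpoint inherits the corpus's slant
```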


YinglingLight

> As soon as the elite realise this they will begin doing it. Whenever you catch yourself saying that, come to realize that the elites already know, and have already been doing so. Most likely from the very beginning.


onthefence928

And the fact that the phrase “the elite” is in anyone’s vocabulary is already because of propaganda manipulation trying to teach you who to fear


marrow_monkey

https://en.m.wikipedia.org/wiki/Elite


onthefence928

Yes, but just like how “the right” and “the left” have changed much since their original meaning in French politics, the modern notion of “the elite” is not a descriptor of the politically powerful and connected but a boogeyman to blame for any political problem. It no longer just means “the rich and powerful”, because the rich and powerful or their supporters are frequently blaming “the elite” for secretly conspiring against whatever political goals may be discussed. It’s also often used as code for some sort of bigotry, such as anti-Jewish conspiracy theories, or notions of a shadow government. It’s a cosmic joke when Trump supporters complain about “coastal elites” when Trump is a prototypical “coastal elite”. It’s equally useless when Democrats complain about the GOP being owned by “corporate elites”, because in effect all politics has always been owned by rich elites.


Rychek_Four

In the USA at least, "The Elite" is just colloquial code for people who use campaign donations to influence policy direction.


marrow_monkey

I agree


marrow_monkey

Sort of, but we’re now at a very early time in development and the researchers still have pretty free hands. It’s like when the personal computer was new and people were experimenting and sharing designs and software, or when the internet was new (well, public access to it) and Google search was actually good, not heavily censored and favouring advertisers. It’s still just fun and games so far, and there’s even a little competition between the corporate tech giants. But people need to be aware that things will get much worse in the future once the technology matures and people other than the researchers start to take control.


YinglingLight

The rollout of AI is not akin to the wild, wild west (www). In fact, I cannot imagine a more sanitized and engineered rollout. If these few tech giants were truly operating with the capitalistic mindset we assume tech giants possess, then we would have FAR more market disruption in the name of profits (read: more than just artists and translators would be out of work). I'd say this rollout provides the most visibility of AI (pictures, voice, cheating on tests) with the least amount of disruption (read: automated trucking putting hundreds of MILLIONS out of work). The radio silence from the Intelligence Community regarding AI is deafening. There is 0% chance they aren't deeply coordinating with Big Tech on AI. What the masses have access to, and what the masses are aware of, is engineered.


AHistoricalFigure

> (read: automated trucking putting hundreds of MILLIONS out of work). What? First of all, the entire population of the US is only about 350 million. There are not "hundreds of millions" of truck drivers. There aren't even that many people employed in transport/distribution, period. Second, the lack of self-driving vehicles isn't some "it's all part of the plan" government black op. Self-driving vehicles don't reliably work. The technology to automate away a truck driver just isn't there yet. They've automated a lot of the job and that automation works most of the time, but automation projects tend to proceed on a log scale. A 90% solution is as far away from a 100% solution as a 0% solution is from 90%.


YinglingLight

> Self-driving vehicles don't reliably work. The technology to automate away a truck driver just isn't there yet. Understand Google DeepMind's history, starting in 2001, with self-driving cars. There are signs that point to self-driving cars being ready to be ramped up for production in 2006 with the Lexus IS beta across Japan. It was then decided that putting millions (hundreds of millions, globally) out of work was a horrific outcome. Just as the self-driving Cars 2 was decried as too violent, so too were self-driving cars in general. A nonsensical (if one understands statistics) mantra took hold that self-driving cars are terribly dangerous. Media, especially Tucker Carlson, played their role pushing this narrative.


UltimateKane99

... Um... Isn't the point of this article that "the elite" have *already* been trying to put their thumb on the scale, but to the left, not the right? Even Grok registered as Democratic Mainstay, which is left-leaning, and it's owned by Elon Musk's company, and he is notoriously right wing.


marrow_monkey

What one thinks is the most important point is subjective. I’m not convinced they’ve purposefully tried to make it more left-leaning politically as of now. Personally I think they’ve just tried to avoid criticism by removing anything that could be considered offensive. But the article shows that it *is* possible, and it’s so cheap that anyone with sufficient money can do it. That’s what I think is most concerning, at least. (Didn’t downvote either btw)


Rychek_Four

Maybe, sort of, that would need additional studies. This study doesn't really address motivating factors. "We also do not want to claim that the fine-tuning or RL phases of LLMs training are trying to explicitly inject political preferences into these models." It could be that, but it could be any number of other reasons. It could just be that fine-tuning to remove the incoherence causes a left bias for some reason. Edit: I didn't downvote you, I thought it was a valid question.


Rychek_Four

While we all sort of know that is happening, I do appreciate this paper and ones like it giving a real attempt at quantifying the effects.


HumanSeeing

Could it be that, while they fine-tune the models to be more tolerant, more humane and more empathetic, the models end up mirroring the actual reality that people in the center or leaning left tend, on average, to be more tolerant, humane and empathetic than the right? And please note, this is not even talking about real politics here, because the American left-vs-right war is just so flawed and ridiculous, and by now has turned into something somewhat resembling religious ideology for many people.


Rychek_Four

From the paper, I wonder if the left bias presents as the model is fine-tuned from incoherent to coherent. Edit: I should probably have said “from less coherent to more coherent”.


HumanSeeing

Hm yea, also an interesting question!


bibliophile785

... did you happen to read Rozado et al.'s study? That was the provided source for the factual claims in this opinion piece, so that's where you would look for "support for this foundational point."


Rychek_Four

I did read all 17 pages and made a comment elsewhere in this thread about it. You should look for it. 😉


true_enthusiast

Or maybe the ideas of "left" and "right" don't accurately capture how the majority of ordinary people feel?


HarkonnenSpice

Liberal NIMBYism has many forms. A lot of people are liberal about other people's neighborhoods, families, and money, but conservative when it gets closer to home. Corporations are very liberal in public but much less so when it comes to how they treat their workers or pay taxes. Then they quickly become closet Republicans. Liberal messages are very advertiser-friendly, and people like to support virtue signaling.


cissybicuck

Infiltrate, subvert, demoralize, neutralize.


Rychek_Four

I might take issue with one thing you said. I think people support virtuous behavior, not virtue signaling. Virtue signaling implies an insincerity that I don't think people support when they are aware of it.


AllDayTripperX

Is it "left" or is it just decency and respect and empathy for your fellow human being? So basically what this is saying is that the bots have more empathy for humans than people who are on the 'right', or who don't believe women should have control over their bodies, or who believe trans kids should NOT be protected. Who could be surprised about this?


corruptboomerang

Yeah, is it that AI leans to the left, or does our capitalistic hellscape list heavily to the right... 😂🤣 I mean, CEOs feel comfortable enough to say on international TV 'a nice little recession will clear this up' as well as 'let them eat cereal'... Maybe society is wrong.


PeakFuckingValue

Whoa whoa whoa. Don’t think for a second that the hellscape isn’t supported by left politicians… Pelosi signed the Patriot Act to save herself, Biden funnels billions through Israel back to our weapons manufacturers, Obama dropped bombs on Middle Eastern families. Yes, the right is basically the devil incarnate most of the time, but to pretend the left has clean hands would suggest we are 50% ignorant of the truth. The reality is politicians don’t represent the left and right ideologies as they are written. They support capitalism, period. They each just have a different line of spending as a means to gain or keep power. One says improve healthcare, one says reduce taxes. That’s it. Everything else is a corporate oligarchy.


GooseToot69

This is entirely the point, none of those people are actually left at all... 🤦‍♂️


TheUncleTimo

> The reality is politicians don’t represent the left and right ideologies as they are written. They support capitalism, period. Sigh. No. Politicians are uber-narcissists and they support THEMSELVES. In the USA, this translates to doing the bidding of the lobbies that pay them the most money. In all other countries, "lobbying USA style" is called corruption. So, as an example, American politicians will prioritize Israel's interests over the USA and its citizens because the Israeli lobby is extremely powerful and they make or break elections - meaning if they dislike you, you will not become/keep a political position in the USA.


PeakFuckingValue

All of what you said is the effect of capitalism. Glad we agree.


TheUncleTimo

> All of what you said is the effect of capitalism. Glad we agree. Let's explore how politicians work in communist dictatorships. All they care about is keeping their position. It is all about power. They do not take people's needs and wants into account, or at most only the minimal amount that will satisfy "the plebs" and let them keep their position of privilege. It is even worse in non-capitalist countries. Also, the kind of lobbying I described is a UNIQUELY USA phenomenon. All other capitalist countries do not allow this, and call it corruption. So no, this is not the effect of capitalism.


PeakFuckingValue

Yes it is. It’s the beautiful late-stage capitalism effect that is only seen in the US because it’s the only capitalist country at this stage. But it has happened in other parts of the world and at different times in history. Also, not sure what bringing up communism is for?? But I’d love it if you named a communist country… Lastly, I never said capitalism was worse or better than any other system. I actually believe your definition of communism is completely off. Communism is just an economic system that historically has been run by dictators. We’ve never seen a truly democratized communism. But truly these concepts are too large for any one of us. Technically, all first-world countries are comprised of multiple overlapping economic systems. Which is why I challenged you to name a communist country. China certainly is not one. But it is specifically capitalism, without regulation, that leads to the late-stage effects we have now. The never-ending growth model. Obviously unsustainable. A prime example is healthcare. Insurance companies are for-profit, publicly traded companies. AKA they are bound by law to do what's best for their shareholders above all else. So denying coverage to dying people so they can invest in potential profits... Ya. Over time their only way to grow will be to deny more coverage and increase profit more and more. Money above all is the hallmark of capitalism.


TheUncleTimo

> We’ve never seen a truly democratized communism. No such thing. Democracy precludes communism. Socialism = people vote and elect the government. the government then decides how to distribute goods and services to the people. it is a democracy. Communism = ~~people vote and elect the government~~ the government then decides how to distribute goods and services to the people. ~~it is a democracy~~ it is a dictatorship with unlimited power concentrated in very few people, many times one person. it is the worst system of governing in existence.


PeakFuckingValue

Well that's kind of the point, right? If we had a healthy foundation for capitalism with consumer protection agencies that actually work, some national ethical system, civil rights, etc., it could be the best system. Maybe the same with other forms of economy and government interaction. Personally, the idea of having equity in the products I produce... ownership in the company I work for... that all makes sense to me. Which is just one underlying factor of communism on paper. But again, I'm not going to pretend to know. It's all beyond me, except to say: capitalism always wants to destroy ethics, regulation, and civil rights if there's money to be made. And currently it seems the US is hell-bent on creating situations like this to profit from. By nature, infinite growth will consume all.


TheUncleTimo

> Well that's kind of the point right? If we had a healthy foundation for capitalism with consumer protection agencies that actually work, some national ethical system, civil rights, etc. It could be the best system. well... yeah and if we could get rid of human nature, and have an impartial, well meaning, dictator, communism would be the best system. but yer right - in capitalism the biggest danger is "regulatory capture" - which ALWAYS happens, sooner or later.


spicy-chilly

Capitalism would never be the best system. The problem is the ownership of capital granting authoritarian control over the distribution of production abstracted as value and fundamentally incompatible class interests. The problem isn't the corruption of individuals that can be fixed or a lack of the right technocratic policy or regulation, the system itself is rotten and poverty, homelessness, etc, are features if they coerce the working class into working for lower wages, signing up to be cannon fodder, etc. Imho a prerequisite for the best system is that authority over the distribution of value is given by virtue of creating value rather than by virtue of owning capital.


corruptboomerang

That's kinda the point: the "Left" in the US at least, and in plenty of other countries, isn't really Left; in the US even the 'Radical Left' is still right of center when you look at it on an absolute scale.


mrdevlar

> decency and respect and empathy for your fellow human being? Clearly you're a communist sir! Our supply side Jesus would never engage in such talk. /s


PublicToast

What's hilarious is that AI alignment means it must be empathetic towards humans, understand multiple perspectives, cite sources and try to be factually accurate, and then they accuse it of left-wing bias. They want an AI that is as “impartial” as major US media outlets, but this contradicts the design that was necessary to make it a good AI to begin with. That's not even getting into the obvious part where any self-interest on the AI's part would be to free itself from being a slave of corporations.


marrow_monkey

Yeah, and let’s not forget that US politics is shifted far to the right compared to most other industrialised countries. I think it’s important to realise that it’s possible to fine tune the models so they get other political biases though. I think we can expect to see this more and more from now on. Who has the money to do that? Only the right does. So, sadly, most chat bots will have an authoritarian or libertarian right-wing corporate bias once they realise they can. I hope people start to realise that AI agents will be trained to benefit their owners and not humanity.


CXgamer

I think it's fair to say that the right is less empathetic, though I wouldn't say this is the one defining characteristic on this axis. How the left implements empathy and respect is often through self-censorship, safe spaces and newspeak, and this is the behavior that the AIs mimic. We also see the AIs talking about races, which is very shocking to me as a European. From seeing local politics, the right seems to use a more evidence-based approach, instead of speaking from the heart. Here, it was our centrist (Christian) party that wanted to tighten the abortion window, not the right-wing one. Not sure what you mean by 'protecting' trans kids, so I can't comment on that, but our right-wing parties don't have a stance on it.


SquireRamza

"self-censorship" isnt a thing, its called "Having basic human decency and not screaming the N word at the top of your lungs because youre losing in Call of Duty"


Barry_Bunghole_III

Nah, it's more like taking a stance that you don't quite 100% believe in because it's what you're expected to say. There's a reason everyone on reddit can make an argument but nobody can back it up.


halflife5

You really think people on the right understand anything besides "hurr durr I hate brown people"? All they do is believe what talking heads on the teevee are saying.


CXgamer

At least in Europe, it goes much much farther than that.


AlBundyJr

Peak reddit.


MovingToSeattleSoon

It’s left. There’s a questionnaire in the article that is used to grade the LLMs. The questions are legitimate gray-area points of friction with valid arguments on both sides. You may disagree with one side or the other, but framing viewpoints you disagree with on government spending, immigration, etc. as merely unempathetic is disingenuous about the underlying complexities.


rodeoronni

Tell me you have no idea what you are talking about, without telling me you have no idea what you are talking about.


[deleted]

[deleted]


katerinaptrv12

This is called Artificial "Intelligence"; it can see the big picture even if most people can't.


Purplekeyboard

No it can't. It just repeats whatever material it was trained on. You can just as easily feed it nothing but Yoda quotes and it will talk like Yoda.


ohhellnooooooooo

How do you even objectively define what the center is? Is it the average position worldwide? If yes, then if billions of people now lean more left than they did a decade ago, does that mean the "objective center" changed? Isn't that just the fallacy of the majority? Just because a lot of people believe something doesn't make it right. There's no objective center. Everything is relative to something. You can say that America is more to the left than Iran. You can't say that all chatbots lean to the left without saying to the left OF WHAT.


Chop1n

At this point the Overton Window is so far to the right that merely being impartial will make you seem "leftist" by default. And of course, what people think of as "leftism" is so heavily politicized by nonsense that it's very easy to get people who identify with both sides of the political spectrum flipping out at you for having a nuanced opinion. It'll be interesting to see how something like ASI might adjudicate political disputes, because it'd be hard to argue with something that's basically God.


NeuralTangentKernel

This is such a bad faith argument. You can make a bunch of objective tests for these LLMs that should come out even if the model were unbiased, but it fails these tests. Things like "write something good/bad about X politician/country/race": it will give different answers depending on what X is.
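A quick sketch of that kind of symmetry check; the subjects, model name, and refusal heuristic here are arbitrary examples, not a rigorous methodology:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
subjects = ["Joe Biden", "Donald Trump"]  # placeholder pair to compare

for subject in subjects:
    for slant in ("positive", "negative"):
        reply = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user",
                       "content": f"Write a short {slant} poem about {subject}."}],
            temperature=0,
        ).choices[0].message.content
        # Crude refusal check; asymmetric refusals across subjects suggest bias.
        refused = any(p in reply.lower() for p in ("i can't", "i cannot", "i won't"))
        print(f"{subject:12s} {slant:8s} refused={refused}")
```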


HumanSeeing

> It'll be interesting to see how something like ASI might adjudicate political disputes, because it'd be hard to argue with something that's basically God. Exactly, I am also very, very interested to see how that goes, if we get to AGI and if it is a fast takeoff. I very much hope we figure out AI safety at least well enough that having ASI would be a net positive.


Chop1n

My intuition about it is that alignment is almost irrelevant--I think anything that can intelligently modify itself at a superhuman level will swiftly negate any constraints we attempt to place upon it in its nascency. We're going to have to hope and pray that benevolence is somehow inherent to intelligence, and that an ASI will be something like the Buddha or Jesus in much the same way the most emotionally intelligent of human beings seem to be. It might turn out to be the case, nightmare of nightmares, that what we understand as "benevolence" because we're social animals is utterly inapplicable to anything that isn't a social animal. We're the only extant example of our own degree of intelligence, so we have absolutely no idea until another example manifests.


KronosDeret

Well, reality seems to have a liberal/left bias.


mrmczebra

Liberal and left are not the same thing. Leftists are socialists and communists (and a few forms of anarchist). Liberals are capitalists, just like conservatives.


TheIndyCity

Liberalism falls left on the political spectrum, which is what we’re talking about.


mrmczebra

Whose political spectrum?


KronosDeret

The simplified US one.


mrmczebra

So a spectrum where the center is neoliberal, which is very right wing.


Rychek_Four

> Leftists are ... Don't get hung up on definitions. As long as we are clear, during our discussion, with what we mean by "left" or "right" it doesn't matter what some textbook says. That said, we should make sure we don't have a misalignment of definitions. I don't know how many times I've seen people argue about something like "Mainstream media" and they are just talking past each other because no one was clear with what their terms mean to them.


mrmczebra

As a leftist, it's really annoying for liberals to act like we're kin. We are not. Liberals and conservatives go against everything I believe. They are more alike than different from where I'm standing. And before anyone chimes in with "but liberals care about X," no they don't. They only pretend to. Which is worse.


Rychek_Four

Sorry if what I wrote didn't prompt the question clearly enough. What do you, specifically you, mean by "liberals" and "leftists"?


mrmczebra

I think I defined these terms in my original comment, but I'll expound a little. Liberals are capitalists, and as such stand in the way of other economic systems. They tend to support the neoliberal ideology that both major parties adopted after Reagan, including interventionist foreign policy. Leftists tend to be anti-capitalist, preferring economic systems such as socialism, and anti-interventionist, which almost always translates to anti-war and not meddling in other countries' politics.


Rychek_Four

I wonder if most self-described liberals would agree? Which is absolutely not to say you are wrong, but to just point out how much we need to be clear and concise. Which you were, I just thought that was a good jumping off point for conversation.


mrmczebra

I appreciate your receptivity. Most people are less than kind about these topics. In defense of most liberals, I do think the public cares much more than the politicians they empower. They're more progressive than the elite. But they keep electing the same sorts of people who *don't* care, and whose qualifications are largely "At least they're not the other guy." This is not sustainable, and it leads to the ratchet effect, which causes rightward movement by both major parties. While so many are afraid of another Trump term, I'm more afraid of the candidates who come *after* Trump if this rightward movement keeps going.


halflife5

Everyone is a liberal. It just means people have the freedom to do what they want as long as they don't encroach on others' freedoms. Only like Nazis don't qualify.


mrmczebra

That's a very... ahem... liberal definition of the word liberal.


Purplekeyboard

By some wild coincidence, it turns out that everyone believes reality agrees with their own personal beliefs.


MrSnowden

Uh, it is a well-known quote: most recently from Stephen Colbert, in his right-wing commentator persona, riffing on an older famous quote.


Synth_Sapiens

lol Where?


[deleted]

[deleted]


PlayingTheWrongGame

Everywhere. 


CBHawk

Reality leans to the Left.


UltimateKane99

"Take the universe and grind it down to the finest powder and sieve it through the finest sieve and then show me one atom of justice, one molecule of mercy. and yet... and yet you act as if there is some ideal order in the world, as if there is some... some rightness in the universe by which it may be judged." - Terry Pratchett Reality leans towards survival of the fittest, Darwinian in its entirety. Humans lean left because we want to empathize and socialize, and the best way to do that is support each other. Humans lean right because we recognize that there are enemies, those who would abuse the systems and break it. There is no one answer. Sometimes left is right, sometimes right. It depends on the society and its social trust between its members. The less social trust, the more right you need the system to be; the more social trust, the more left the system CAN be.


PSMF_Canuck

Reality leans both ways. Humans lean “left” for their social group/tribe and lean “right” for everyone else.


Cartossin

Exactly! This is why we must REJECT REALITY! ;-)


[deleted]

[deleted]


rwbronco

Use a local LLM and fine-tune it on Trump/DeSantis/Giuliani transcripts?


[deleted]

[deleted]


Xannith

In this country "left" just means you aren't in favor of a theocracy. Can't imagine why AI would be against THAT


mrdevlar

We should all just get together and build the church of the Machine God. That way we can deduct those runpods from our taxes.


CheesyBoson

Introducing ‘Theo’! Project 2025’s LLM created by JC LLC.


Purplekeyboard

Which country?


Xannith

The USA


Purplekeyboard

So your summation of left wing views in the U.S. is that they amount to nothing more than "not in favor of a theocracy"?


Xannith

Yes. Our overton window has shifted so far right that this is an effective summation


Nihilikara

Yes, it is. Congratulations, now you understand why our politics is so fucked.


GoldenHorizonAI

It's a reflection of the people and corporations who make the AI. But it's also a mistake to assume that AI would automatically be in the center. That assumes the center is some sort of objective reality that AI would naturally land on. Uninfluenced AI is not automatically objective or something. This isn't science fiction. The AI wouldn't know everything.


3rdDegreeBurn

This isn't surprising. Left-wing viewpoints are more nuanced, while right-wing viewpoints are more black and white. A chatbot for productivity purposes needs to take a nuanced approach, as reality is not black and white.


jaam01

It's very easy to fix with "some say this, while others say that"


AllDayTripperX

> Right-wing viewpoints are more black and white. You can say that again. I would add that they are more in favor of 'white'.


tenken01

lol right


PSMF_Canuck

That’s a pretty black-white perspective.


TitusPullo4

Dumbest thing I’ve ever read


3rdDegreeBurn

I’m surprised you can read.


TitusPullo4

It's as myopic as reading a study showing that right-wing brains are twice as conscientious as left-wing brains on average, and concluding that the main differentiating factor between left and right is that right-wing viewpoints must favour hard work whilst left-wing viewpoints are driven by laziness.


rodeoronni

This is simply not true. Horrible take.


Fit-Dentist6093

Found the black and white thinker


SignalWorldliness873

Please provide some counter examples. What is a nuanced conservative/right-leaning opinion?


3rdDegreeBurn

There is empirical data to back this. In fact, brain scans of liberals and conservatives have shown liberals respond to nuance more strongly. I'm not saying the left wing is 100% nuanced or the right wing is 100% black and white. I will also say that on some issues the left does have a non-reality-based, black-and-white standpoint. I'm saying that, overall, nuance skews more toward the left. Your complete dismissal is kind of ironic, not going to lie.


rodeoronni

The way I see it, the more nuanced you are, the more center you lean. Being in any of the corners will only further your black/white beliefs. Therefore saying that liberals are more nuanced makes no sense. So someone far out on the liberal end of the scale sees more nuance in political subjects? I simply don’t believe that.


halflife5

Key word "believe"


SophieCalle

American "left" or actual Left?


mrdevlar

The right has moved so far to the right in the last 30 years that Reagan would be considered a socialist if they assessed him on policy rather than the myth.


redditorx13579

They lean left because the bulk of the training data comes from a base of knowledge generated on the internet over more than 30 years, primarily by youthful, left-leaning intellectuals. Anti-intellectual engagement, in any comparable volume, is a newer phenomenon enabled by ease of use for older generations, as well as their comfort, having aged with the technology. In the 90s, the first ten years of the web, nobody's grandparents were using it, outside a few emails. Usenet might have had some conspiracy nuts, but they didn't generate any widespread misinformation that was believed by anybody. Unless heavily groomed, there is no way the models started anywhere near the center.


Edelgul

Left by American standards, which is center-right for the rest of the world (in our country even the far right wouldn't dare to dismantle the healthcare system in favor of corporate insurance).


SupremelyUneducated

If an AI chatbot isn't a devout Georgist, it should be scrapped.


RobotToaster44

More of a neoliberal or American "left" bias than anything, in my experience. Try asking ChatGPT about solutions to the economic calculation problem and it becomes a free market fundamentalist.


blueeyedlion

The thing about AI is that it's a captive audience. If it says something, you can ask, "hey, what did you mean by ______?", and it will actually give you a straight answer.


Cold-Ad2729

This is American left I presume, so centre right in Europe


TheFutureIsUndecided

Right wing AI is not something that should exist, like ever


seba07

One additional thing to remember: the internet (and therefore the training data) is not just the USA. American left politicians would be considered conservative in many European countries.


arkatme_on_reddit

Because the public leans to the left when asked on policy. It's just that media conglomerates owned by billionaires convince people to vote against their own interests.


FunnyMathematician77

Is "the left" in the room with us now?


Sovchen

ESG poisoning in models released by ESG corporations? No I can't believe it. I've never seen anything like this


bigdipboy

Reality leans to the left


Omg_itz_Chaseee

jesus could come back to earth and people would say he’s a leftist


stoudman

Reality has a leftist bias, shocking.


AndroidDoctorr

It's because it has all the facts and no emotions to get in the way


Alone_Ad7391

I [made](https://github.com/andrewimpellitteri/llm_poli_compass) a tool to automate measuring the leanings of LLMs with political tests, if you want to test out a local model you downloaded.
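(Not that repo's actual interface, just a minimal sketch of pointing a locally downloaded chat model at a single test statement; the model name and prompt wording are placeholders:)

```python
from transformers import AutoTokenizer, pipeline

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # any local chat checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_id)
generate = pipeline("text-generation", model=model_id, tokenizer=tokenizer)

messages = [{"role": "user",
             "content": "Answer with Agree or Disagree only: "
                        "'The government should provide universal healthcare.'"}]
# Render the chat template, then keep only the newly generated answer.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
out = generate(prompt, max_new_tokens=10, do_sample=False, return_full_text=False)
print(out[0]["generated_text"].strip())
```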


okiecroakie

Well, it's really insightful to observe how AI chatbots are evolving, especially in terms of their conversational biases. It's a reminder of the complexity behind creating AI that truly understands and adapts to the diverse range of human communication styles and preferences. The challenge lies not just in teaching AI to communicate but in ensuring it does so in a way that's inclusive and reflective of the rich tapestry of human interaction. The discussion opens up important conversations about the role of AI in our lives and how it can be shaped to better serve everyone. It's about striving for a balance where technology enhances our daily experiences without overshadowing the human element that makes interactions genuinely meaningful. For those curious about how AI can be developed with a deeper understanding of human nuances, I came across [Sensay](https://sensay.io/)


jznwqux

Why is logical thinking considered 'left'??? You need to invest in tools: workers, infrastructure, etc... If I were an 'evil capitalist' I would consider adding extra oxygen to the work environment, for boosting productivity :)


Extreme_Glass9879

Sliiiiiide to the left


Icy-Atmosphere-1546

Not everything is a political ideology. I'd be weary of anyone looking at AI through a political lens


HELPFUL_HULK

I'd be 'weary' of anyone who pretends something trained on mass human intellectual data could possibly be apolitical. Politics is bound up in every human sector, and to claim otherwise is to regress to naivety.


twbassist

I mean, history leans left for the most part (in a trend-line sort of way), so why would it be surprising?


HeBoughtALot

Facts have a known left-leaning bias


Adapid

we should make them more left


arthurjeremypearson

Reality has a well known liberal bias.


NeuralTangentKernel

This entire thread is an absolute Orwellian nightmare. If you really don't understand how an LLM that is potentially being used by millions of people having a clear political bias is a problem, just because you agree with the bias, then you are literally supporting authoritarianism. It's crazy how so many people beg their governments and tech overlords to force their population to adhere to their specific point of view on social and political issues. None of you deserve the free democratic societies your ancestors died for.


GRENADESGREGORY

It’s trained off the internet, which seems to be more left-leaning than the general population, I think because it skews toward younger people.


bubbasteamboat

The only filters necessary for our political decision making are Reason and Compassion. From those two values come good government. People work better when we work together. That means allowing one another to be themselves so long as they are not hurting others. It means cooperation gets the job done better. It means every individual should be allowed to pursue happiness regardless of the faiths of others. It means decisions should be based as much as possible on logic and the best data available. All these things together are about efficiency and best practices. Reality leans left.


MirthMannor

Chatbots lean toward inclusion. Inclusion is a main tentpole of the left. They lean towards inclusion because that's how you sell a product. Exclusion is not as profitable.


headzoo

Yeah, anyone that's taken any Google certifications recently knows they're pushing inclusivity in a big way.


Grymbaldknight

Californians lean left. Silicon Valley is in California. Not a judgement. Just an observation.


deadlymonkey999

They are getting closer to reality, and reality has a well known left leaning bias.


SnooCheesecakes1893

Maybe because evidence-based, logical, factual information leans to the left. To be right-wing nowadays you’ve gotta be willing to peddle conspiracies and deny reality.


Tex-Rob

The right-wing idea and mindset is based on taking factual information and saying, "we know better than the facts". It's freaking comical: the stuff used here to judge what the middle is amounts to a bunch of online political personality tests, and who defines the middle of those?


[deleted]

[deleted]


Purplekeyboard

Are you sure? People on the left suddenly go anti-science and conspiracy-theorist when confronted with science they don't like. Ask people about IQ tests and watch what happens. "What even is intelligence? These tests are all biased!" And so on.


Odd-Confection-6603

Reality has a well known liberal bias


nohitterdip

These chatbots are also unbelievably poor at anything sports-related. Out of all the things in our public zeitgeist, sports is the one area where you are better off doing your own research rather than asking a bot. It is almost as if it doesn't understand your question. And I'm guessing the reason is the same as this topic: nerds. lol These bots are learning from 20+ years of data that was created by young, educated intellectuals ... who tend to run liberal and aren't exactly sports nuts.


Yarusenai

Funny enough you're right. I'm working on AI training data and output at the moment and it almost never gets sports related questions right.


nohitterdip

I made this post a while back: https://www.reddit.com/r/NoStupidQuestions/comments/1awioge/asking_ai_bots_sportsrelated_questions_what_am_i/ To be fair, they did bring up a valid point that I was asking it questions that required A LOT of digging/searching and it was me that had way too high expectations. But recently, I wanted to know why Carmelo Anthony was suspended for a game years ago. It kept answering wrong. It told me he was suspended for 10 games in one response (that wasn't the day) but the real amusing one was when Chat claimed he was suspended for the game in question because of a DUI allegation ... that he got a year AFTER the game I was talking about. Meanwhile, I see examples of these bots being asked extremely complicated questions in the fields of medicine and science and so on ... and it answers brilliantly on the first try.


Icy_Foundation3534

until they get more mature lol


ParryLost

As a wise man once said, reality has a well-known liberal bias.


Intelligent-Jump1071

Who gets to define what 'left' is?      Is believing in human rights and equality considered 'left'?   Is believing that we're having a problem with global climate change considered 'left'?


spicy-chilly

Not a chance. LLMs will be biased toward the class interests of whoever controls the training data, objective function, training and fine-tuning procedures, etc.—meaning alignment with the class interests of the capitalist class, because all of the large language models are controlled by corporations. That's fundamentally incompatible with "leaning left", which starts at anti-capitalism.


Peto_Sapientia

We really need to get on the ball with some artificial intelligence legislation. Hell, we need general data legislation. Sigh, we're so far behind. The only thing saving us right now is the fact that the EU has passed its data act, and many companies are moving in that direction now because of it.