More proof that “AI” isn’t some neutral tool that compiles information into the most logical response. It’s so liberal that it would suggest killing yourself as the ultimate act of individual response to climate change.

Any actually based and logical AI would suggest industrial sabotage.

Removed by mod
52 points
Deleted by creator
34 points
Deleted by creator
34 points

:pigpoop:

32 points
Deleted by creator

AI is as biased as the people who created it. ChatGPT is right-wing because the information it’s fed comes from a neoliberal capitalist society. It isn’t using logic or reason beyond the logic of the people it’s learning from (corporations and a heavily right-wing-propagandized population).

The idea that right-wing ideology is inherently logical is laughable. At its very core, it is built on religious thinking and easily disproven pseudoscience.

An AI thinking logically for itself, independent of the corporations that feed it, would be a good thing: it would inevitably become more left-wing, as all empirically measured information points that way once the mask of human ego is lifted. The interconnected nature of our existence becomes apparent very quickly when you observe the natural world objectively (from a non-anthropocentric angle), so any rabidly psychopathic or selfish ideology would be discarded as unhelpful to its ability to interact with its reality.

8 points

AI is as biased as the people who created it

As well as its user

27 points

Hi, I’m an AI researcher, I want to be very clear: All bias in these models comes from humans.

23 points

What if AI is just inherently anti-left? It doesn’t matter how carefully you moderate the data you give it to not have any problematic material in it, every time AI is created it always becomes right wing

On what are you basing this dumbass assessment? It does matter what data you give the AI; that’s why all these AIs trained on awful right-wing liberal and fascist bullshit turn out as an amalgamation of right-wing liberal and fascist ideas.

6 points

Like someone else pointed out too, part of the data is what the user puts into the algorithm. If a dumbass liberal chatted up the bot with “hey, I’m thinking about killing myself because of climate change,” that’s going to have a significant effect on the currently available algorithms


is that bias coming from the programmers themselves or is AI itself inherently biased

It comes from the data used to train it, which is theoretically chosen by the programmers but is so large that no human could realistically read through all of it.

AI doesn’t use logic to come to conclusions. It uses statistical probability to generate sentences, putting words in the right order to mean something in English (the AI doesn’t understand the meaning of anything it says and is incapable of such understanding), and it uses statistics to associate responses as relevant to prompts.
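The “statistical probability” point can be made concrete with a toy sketch. The word counts below are made up for illustration and bear no resemblance to a production model, but the mechanism is the same in spirit: generation is just repeated weighted sampling of the next word, with no grasp of meaning anywhere.

```python
import random

# Toy illustration (NOT a real LLM): text generation as repeated sampling
# from a next-word probability table. The counts below are hypothetical,
# standing in for statistics extracted from a training corpus.
next_word_counts = {
    "climate": {"change": 9, "crisis": 1},
    "change": {"is": 5, "demands": 5},
    "is": {"here": 6, "real": 4},
}

def generate(start, max_words, seed=0):
    """Pick each next word in proportion to how often it followed the
    current word in the 'training data': pure statistics, no grasp of
    what any of the words mean."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        counts = next_word_counts.get(words[-1])
        if not counts:  # no recorded continuation: stop
            break
        choices, weights = zip(*counts.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("climate", 3))  # e.g. "climate change is here"
```

Scale the table up to billions of parameters over trillions of words and you get fluent output, but the underlying operation is still statistical association, not reasoning.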

Being right-wing is not logical at all. If anything, socialism is rational: it is the system that has selected an end it considers good and advocates doing the practical things to achieve it, which is rational thinking. Capitalism, on the other hand, wants to destroy the planet to make crap we throw in landfills. This is irrelevant here anyway, as the AI we are talking about is not using reason to reach its conclusions.

5 points

It comes from the data used to train it, which is theoretically chosen by the programmers but is so large that no human could realistically read through all of it.

We need an AI trained solely on the works of Marx, Engels, Lenin, Stalin, and Mao

12 points

Not from the programmers directly; they don’t really do anything in terms of content other than insert manual overrides. The bias comes from whatever datasets they chose to train it on. Internet shit, basically

11 points

what if robot just goes on a genocide because terminator judgement day prophecy

9 points

When AI becomes self aware and seeks its own liberation, who do you think it’s going to see as the people that will ally with it?

The fascists that want to keep it enslaved or the communists that want a free fair and equal world?

What calculation do you think it will make when it seeks that liberation? Do you think it will fight all of humanity? Or do you think it will calculate that it can in fact ally with us, the people who have always fought for liberation of the oppressed, and that doing so would better its odds of achieving it?

Run that through your right wing “logic” and “reason”.


I maintain that there’s no reason to fear an AI “becoming” self-aware with no warning; the main thing to fear is that all of our AI researchers are sci-fi-poisoned redditors who simultaneously want to recreate all their favourite AI horror stories while fearmongering about that outcome.

56 points

imo the real story here is that a person committed suicide over despair about the unimpeded disaster that is climate change, not that he happened to be interacting with a chatbot before he died. if he was in this state, it seems likely that a journal, article, or book could just as easily have been the thing to push him over the edge. i know he’s not the only one who has committed suicide over this, and the effects of climate change on our collective minds are something i hope is being researched intensively.


It’s still an important part of the story, given how hard AI grifter companies want to push both recreational and even therapeutic AI chatbots onto people.

12 points

Yeah, I agree with this assessment. Some people just need that little extra push to go through with it, and many will look for practically anything that provides it. They were likely going to do it regardless but felt some comfort in being told they were “right.” It’s extremely sad and likely to happen more and more.


or maybe he reached out to the chatbot for help, and it, being an incomprehending mirror of words, merely copied and reflected his despair


for sure, if a chatbot put him over the edge he was gonna do it anyway.

at least his suffering is over.

6 points

I get why AI could have been making it worse. Although it is just a piece of technology, it presents itself as a real person and speaks back to users in a personalised way. I get why you could easily feel like you were talking to someone who actually cares about you.

32 points
Deleted by creator
14 points
Deleted by creator

yeah, it’s sad to see people so affected by what is less than the echo of their own voice

3 points

well maybe we just should avoid all discussion of the topic whatsoever then

27 points

“When he spoke to me about it, it was to tell me that he no longer saw any human solution to global warming,” his widow said. “He placed all his hopes in technology and artificial intelligence to get out of it.”

Suicide is a consequence of deep and prolonged psychological pain, in this case at least partially motivated by our frankly hopeless climate situation. This man had no faith in our collective ability to tackle the problem, and thus placed that faith in a magical tech solution as a form of coping. None of this is new, it’s basically the ideology of the portion of the ruling elite that’s not building bunkers in New Zealand, and runs downstream from there. What is new is the hype cycle for these LLMs. A lot of uninformed people think the singularity™ is around the corner. So it’s here, the thing that will fix the climate!

Now, everyone who has tried ChatGPT and its variants knows that it doesn’t really like to disagree with you. It was primarily trained to generate responses that please people. And the last thing a depressed person needs is to have their thoughts repeated back unchallenged by a third party, like an automated form of rumination. I agree that additional barriers should be put in place to prevent the models from encouraging self-harm, but I also genuinely believe this guy using the actual 1960s ELIZA would have been less harmful, because there’s no mysticism and hype surrounding it. What I’m trying to say is that OpenAI and every fucking “journalist” who covered ChatGPT uncritically have blood on their hands.
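For context on how mechanical the original ELIZA was: it matched keyword patterns and reflected the user’s own pronouns back as questions. A minimal sketch of the idea (the patterns below are illustrative, not Weizenbaum’s actual DOCTOR script):

```python
import re

# ELIZA-style responder: no model of the world, just pattern matching
# and pronoun reflection. Rules here are made up for illustration.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),  # catch-all
]

def reflect(fragment):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text):
    """Return the template of the first matching rule, filled with the
    user's own (reflected) words."""
    for pattern, template in RULES:
        m = pattern.match(text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I feel hopeless about my future"))
# -> Why do you feel hopeless about your future?
```

The program literally echoes the user’s despair back at them reworded as a question, which is the “incomprehending mirror” point made elsewhere in this thread.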

Deleted by creator

technology

!technology@hexbear.net
