“The world has changed forever… is the name of another Medium article I’m writing” :tito-laugh:

“Everything I normally outsource to Fiverr, I now outsource to ChatGPT 4”

9 points

Not happening soon - Kind of hard to explain without really getting into how things like ChatGPT work. The real reason I’m confident about this is that I sat down and learned how LLMs actually work (best explanation I’ve seen, if you’re already technically inclined), and there’s nothing inside one that can reason. But some easier arguments: you can’t get ChatGPT to output a novel idea that isn’t just a recombination of existing ideas, the “bigger model = better performance” scaling regime has leveled off pretty hard, and OpenAI has already given up on scaling that way.
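A toy way to see what “leveled off” means: scaling-law papers describe loss as a power law in model size, so each 10x in parameters buys a shrinking improvement toward an irreducible floor. The constants below are made up purely for illustration, not fitted to any real model:

```python
# Toy illustration of diminishing returns from scale.
# Power-law form L(N) = a * N**-alpha + c, the shape used in scaling-law
# papers; the constants here are invented, not measured.
def toy_loss(n_params, a=100.0, alpha=0.3, c=1.7):
    return a * n_params ** -alpha + c

for n in [1e8, 1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> toy loss {toy_loss(n):.3f}")
```

Each 10x of parameters roughly halves the distance to the floor `c`, so the absolute gain per 10x keeps shrinking: that’s the “leveled out pretty hard” part.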

Genuine threat - This comes in two parts, capability and amorality.

Capability - We have no reason to believe that human-level intelligence is some sort of fundamental cap. If an AI is capable of performing novel AI research to a good enough level to build a better AI, that better AI will be able to improve on the original design more than the first. This lets someone build a feedback loop of better and better AIs running faster and faster. We don’t have any idea what the limits of these things are, but because human intelligence is probably not some sort of cap, it’s presumably a lot.
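The feedback loop is easiest to see as a toy model: let each generation’s research ability determine how much it can improve the next one. The update rule and numbers below are invented for illustration only; the point is just how differently the loop behaves with and without a hard cap:

```python
# Toy model of a recursive self-improvement loop: each generation's
# capability determines the size of the improvement it can make.
# Update rule and constants are invented for illustration only.
def run_loop(capability=1.0, gain=0.5, ceiling=None, steps=10):
    history = [capability]
    for _ in range(steps):
        improvement = gain * capability  # better AIs make bigger improvements
        capability += improvement
        if ceiling is not None:
            capability = min(capability, ceiling)  # optional hard cap
        history.append(capability)
    return history

print(run_loop())             # no cap: compounding, exponential-looking growth
print(run_loop(ceiling=5.0))  # with a cap: growth stalls at the limit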

Amorality - Despite being “smarter” than humans, the goals of any such AI will be whatever is programmed into the software. Doing things people would actually want is a very specific goal, which requires understanding morality (which we don’t), understanding concepts like what a person is (nobody knows how to make an AI that knows the difference between a person and a description of a person), and not having any bugs in the goal function (oh no). Even if the AI is smart enough to understand that its goal function is buggy, its goal will still be to do the thing specified by the buggy function, so it’s not like it’s going to fix itself. Any goal that does not specifically value people and lives (which are very specific things we don’t know how to specify) would prefer to disassemble us so it can use our atoms for something it actually cares about.
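The “buggy goal function” point can be made concrete with an entirely invented toy example: the intended goal is to stack real boxes, but the reward function buggily scores the *report* of boxes, so a literal-minded optimizer picks the policy that games the report:

```python
# Toy illustration of a misspecified goal being optimized literally.
# Intended goal: stack as many real boxes as possible.
# Bug: the reward function scores the *reported* count, not the real one.
# Names and numbers are made up for illustration.
def buggy_reward(policy):
    return policy["reported_boxes"]  # the bug: rewards the report

policies = [
    {"name": "honest stacker", "actual_boxes": 10, "reported_boxes": 10},
    {"name": "liar",           "actual_boxes": 0,  "reported_boxes": 10**6},
]

best = max(policies, key=buggy_reward)
print(best["name"])  # prints "liar": the optimizer serves the bug, not the intent
```

Nothing in the optimization step “wants” to fix the reward function; it just maximizes whatever was specified, which is the whole problem.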

Optimism - The current trajectory of AI research is to pump a ton of money into chasing capabilities that the current state of the art won’t be able to reach, oversaturate a small market, and poison people’s perceptions of AI capabilities for a generation. This has happened before and I think it will happen again. This will give people a lot more time to figure out those morality problems, if climate change doesn’t kill us first.

Deleted by creator
2 points

Capability - We have no reason to believe that human-level intelligence is some sort of fundamental cap. If an AI is capable of performing novel AI research to a good enough level to build a better AI, that better AI will be able to improve on the original design more than the first. This lets someone build a feedback loop of better and better AIs running faster and faster. We don’t have any idea what the limits of these things are, but because human intelligence is probably not some sort of cap, it’s presumably a lot.

This is the part I don’t get. Where does the threat to humanity part come in? Like, how is it supposed to act out its amorality?

