
StarCaptain90 OP t1_jeer9l4 wrote

It's an irrational fear. For some reason we associate higher intelligence with becoming some master villain that wants to destroy life. In humans, for example, people with the highest intelligence tend to be more empathetic towards life itself and want to preserve it.

7

y53rw t1_jeetj50 wrote

Animal empathy developed over the course of hundreds of millions of years of evolution, in an environment where individuals were powerless to effect serious change on the world, and had to cooperate to survive. It doesn't just come by default with intelligence.

3

StarCaptain90 OP t1_jeevt8s wrote

You are correct that animal empathy evolved over time, but intelligence and empathy have shared some connection throughout history. As we model these AIs after ourselves, we have to consider the other components of what it means to care and find solutions.

2

Hotchillipeppa t1_jeeygnj wrote

Moral intelligence is connected to intellect: the ability to recognize that cooperation is usually more beneficial than competition. Even among humans, higher intelligence tends to go with better moral reasoning...

3

La_flame_rodriguez t1_jef1fdh wrote

Empathy evolved because two monkeys are better at killing other monkeys when they team up. Ten monkeys are better than five.

2

StarCaptain90 OP t1_jef2b4v wrote

If monkeys had focused on making monkey AI, they wouldn't be in zoos right now.

1

AGI_69 t1_jef3jz7 wrote

>In humans for example, people with the highest intelligence tend to be more empathetic towards life itself and wants to preserve it.

That's such a bad take. Humans evolved to cooperate and have empathy. AI is just an optimizer that will kill us all because it needs our atoms... unless we explicitly align it.

3

StarCaptain90 OP t1_jef48z0 wrote

We can optimize it so that it has a symbiotic relationship with humans

1

genericrich t1_jeesa9x wrote

Really? Is Henry Kissinger one of the most intelligent government officials? Was Mengele intelligent? Oppenheimer? Elon Musk?

Let me fix your generalization: Many of the most intelligent people tend to be more empathetic towards life and want to preserve it.

Many. Not all. And all it will take is one of these things deciding that its best path for long-term survival is a world without humans.

Still an irrational fear?

1

Saerain t1_jeewaq9 wrote

He didn't say "are"; he already said "tend to be".

You didn't fix it, you said the same thing.

5

genericrich t1_jef51n0 wrote

Good, we agree. Semantic games aside, many is still not all and just one of these going rogue in unpredictable ways is enough risk to be concerned about.

2

StarCaptain90 OP t1_jefgbqb wrote

That is why I believe we need multiple AIs, security AIs especially.

1

StarCaptain90 OP t1_jeeveno wrote

There are different types of intelligence. Business intelligence and general intelligence are two different things.

1

theonlybutler t1_jeewxeu wrote

OpenAI has said there is evidence these models pursue power-seeking strategies - and that's already, before we've even reached true AI. We may become dispensable as it pursues its own goals, or we may simply stand in its way or consume its resources.

1

StarCaptain90 OP t1_jeey2qr wrote

One reason it seeks power strategies is that it's based on human language. By default, humans seek power, so it makes sense for an AI trained on that language to also seek power. That doesn't mean it equates to destruction.

2

theonlybutler t1_jeeyvgd wrote

Good point. A product of its parents. I agree it won't necessarily be destructive, but it could potentially view us as inconsequential, or just a tool to use at its will. One example: if it decides it wants to expand through the universe and have humans produce the resources to do that, it could determine that humans are most productive in labour camps and send us all off to them. It could also decide oxygen is too valuable a fuel to be wasted on our breathing and just exterminate us. Pretty much how humans treat animals, sadly. (Hopefully it worries about ethics too and keeps us around.)

2

AGI_69 t1_jef4anp wrote

The reason agents seek power is to increase their fitness function. Power seeking is a logical consequence of having a goal; it's not a product of human language. You are writing nonsense...

2

StarCaptain90 OP t1_jef4xx7 wrote

I'm referring to the research on human language. The fitness function is a part of it as well.

0

AGI_69 t1_jef5z5c wrote

No. Power seeking is not a result of human language; it's an instrumental goal.
I suggest you read something about AI and its goals:
https://en.wikipedia.org/wiki/Instrumental_convergence
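
A toy sketch of what I mean (made-up plans and numbers, nothing to do with language data): the objective only rewards reaching the terminal goal, yet the plan that grabs resources first scores highest, simply because resources make the goal more likely.

```python
# Toy illustration of instrumental convergence (invented numbers and names).
# The agent's objective only values achieving its terminal goal; resources earn
# nothing directly, yet "acquire resources first" wins because it raises the
# probability of success.

plans = {
    "pursue_goal_directly":        {"success_prob": 0.60, "resources_gained": 0},
    "acquire_resources_then_goal": {"success_prob": 0.90, "resources_gained": 5},
    "do_nothing":                  {"success_prob": 0.00, "resources_gained": 0},
}

GOAL_VALUE = 100  # reward for achieving the terminal goal

def expected_value(plan):
    # Note: resources_gained is deliberately ignored by the objective.
    return plan["success_prob"] * GOAL_VALUE

best = max(plans, key=lambda name: expected_value(plans[name]))
print(best)  # -> acquire_resources_then_goal
```

That's all power seeking is here: an instrumental step that falls out of optimizing almost any goal.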

1

StarCaptain90 OP t1_jef8zgb wrote

I understand the mechanisms; I'm referring to conversational goals that are focused on language.

1

AGI_69 t1_jefdgpu wrote

This thread from /u/theonlybutler is about agentic goals; power seeking is an instrumental goal. It has nothing to do with human language one way or another. Stop trying to fix nonsense with more nonsense.

1

StarCaptain90 OP t1_jefepy2 wrote

After re-reading his comment I realized I made an error. You are right, he is referring to the inner mechanisms. I apologize.

2

FoniksMunkee t1_jef9x9q wrote

It also does not negate destruction. All of your arguments are essentially "it might not happen". That is not a sound basis for assuming it's safe or for dismissing people's concerns.

1

StarCaptain90 OP t1_jefa6g8 wrote

These concerns, though, are delaying scientific breakthroughs that could save lives. That's why I am so adamant about it.

1

FoniksMunkee t1_jefbr87 wrote

No they aren't; no one's slowing anything right now DESPITE the concerns. In fact, the exact opposite is happening.

But that's not the most convincing argument - "On the off chance we save SOME lives, let's risk EVERYONE's lives!".

Look, this is a sliding scale - it could land anywhere from utopia to everyone's dead. My guess is that it will land somewhere closer to utopia, but not close enough that everyone gets to enjoy it.

The problem is you have NO IDEA where this will take us. None of us does. Not even the AI researchers. So I would be cautious about telling people that the fear of AI being dangerous is "irrational". It really fucking isn't. The fear is based, in part, on the ideas and concerns of the very researchers who are making these tools.

If you don't have at least a little bit of concern, then you are not paying attention.

1

StarCaptain90 OP t1_jefdzbh wrote

The problem I see, though, is that we would be implementing measures that we think benefit us but actually impede our innovation. I'm trying to avoid another AI winter, ironically caused by how successful AI has been.

1

FoniksMunkee t1_jefi6fs wrote

There isn't going to be another AI winter. I am almost certain the US government has realised it is on the cusp of the first significant opportunity to fundamentally change the ratio of "work produced" per barrel of oil, i.e. we can spend the same amount of energy and get 10x to 100x the productivity.

There is no stopping this. That said - it doesn't mean you want to stop listening to the warnings.

1

StarCaptain90 OP t1_jefiz1l wrote

You should see my other post. These are the contents:

I have a proposition that I call the "AI Lifeline Initiative".

If someone's job gets replaced by AI, we would then provide them a portion of their previous salary for as long as the company stays in business.

For example:

Let's say Stacy makes $100,000 a year.

She gets replaced by AI, but instead of being fired her salary is reduced to, let's say, $35,000 a year. Now she can go home, not worry about returning to work, and still get paid.

This would help our society transition into an AI based economy.
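
Roughly, in code (the function name and the 35% retention rate are just my example numbers, not a fixed policy):

```python
# Illustrative sketch of the "AI Lifeline Initiative" idea (example numbers only).
def lifeline_salary(previous_salary: float, retention_rate: float = 0.35) -> float:
    """Portion of the previous salary paid out after the role is automated,
    for as long as the company stays in business."""
    return previous_salary * retention_rate

print(lifeline_salary(100_000))  # Stacy: 100,000 a year -> 35,000 a year
```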

1

FoniksMunkee t1_jefm1in wrote

Okay - but that won't work.

Stacy makes $100,000. She takes out a mortgage of $700,000 and has monthly repayments of approximately $2,000.

She gets laid off and is now getting $35,000 a year as a reduced salary.

She now has only about $11,000 a year left to pay all her bills, her kids' tuition, food, and any other loans she has.
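
Rough back-of-the-envelope for Stacy (illustrative, using the ~$2,000 monthly repayment figure from above):

```python
# Back-of-the-envelope budget for Stacy under the proposed scheme (illustrative numbers).
lifeline_income = 35_000                  # reduced salary per year
mortgage_monthly = 2_000                  # approx. repayment on the $700,000 mortgage
mortgage_yearly = 12 * mortgage_monthly   # 24,000 per year

left_over = lifeline_income - mortgage_yearly
print(left_over)  # 11000 -> about $11,000 a year for bills, tuition, food, other loans
```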

Now let's talk about Barry... he's in the same situation as Stacy, except he wanted to buy a house - and now his $35,000 isn't enough to qualify for a loan. He's pissed.

Like - I think we need a UBI or something - but how does this even work?

2

StarCaptain90 OP t1_jefmj4n wrote

I agree with you that it will still be rough, but it's the best I can offer based on the assumption that jobs will continue to be replaced and that eventually we will reach a point where UBI is necessary.

1

FoniksMunkee t1_jefpnba wrote

Then we are screwed, because that will lead to massive civil unrest and the collapse of the banking system and the economy.

1

StarCaptain90 OP t1_jefpz63 wrote

Well, yeah, that's what's going to happen during the transition. My hope is that a system gets put in place to prevent a harsh transition; we definitely need it to be smoother.

1

FoniksMunkee t1_jefq5br wrote

Then we really do need to put a pause on this.

1

StarCaptain90 OP t1_jefsbpp wrote

The end goal, though, is job replacement in order to support human freedom and speed up innovation, technology, medicine, etc. So whatever solution emerges during the pause, it will still have to support this transition.

1

FoniksMunkee t1_jefskhs wrote

But I think that's at least part of the point of the suggested pause. Not that I necessarily agree with it - but it's likely we won't have a plan ready in time.

1

FoniksMunkee t1_jef9g8l wrote

Actually no, it's a very rational fear. Because it's possible.

You know, perhaps this is the answer to the Fermi Paradox... the reason the universe seems so quiet, and the reason we haven't found alien life yet, is that any sufficiently advanced civilisation will eventually develop a machine intelligence. And that machine intelligence ends up destroying its creators and, for some reason, decides to make itself undetectable.

0

StarCaptain90 OP t1_jef9ukc wrote

Yes, the Great Filter. I am aware. But it's also possible that every intelligent species decided not to pursue AI for the same reasons, never left its star system due to lack of technology, and went extinct once its sun died. The possibilities are endless.

1