Submitted by StarCaptain90 t3_127lgau in singularity
StarCaptain90 OP t1_jeer9l4 wrote
Reply to comment by genericrich in 🚨 Why we need AI 🚨 by StarCaptain90
It's an irrational fear. For some reason we associate higher intelligence with becoming some master villain that wants to destroy life. In humans, for example, the people with the highest intelligence tend to be more empathetic towards life itself and want to preserve it.
y53rw t1_jeetj50 wrote
Animal empathy developed over the course of hundreds of millions of years of evolution, in an environment where individuals were powerless to effect serious change on the world, and had to cooperate to survive. It doesn't just come by default with intelligence.
StarCaptain90 OP t1_jeevt8s wrote
You are correct that animal empathy evolved over time, but intelligence and empathy do share some connections throughout history. As we model these AIs after ourselves, we have to consider the other components of what it means to care and find solutions.
Hotchillipeppa t1_jeeygnj wrote
Moral intelligence is connected to intellect: the ability to recognize that cooperation is usually more beneficial than competition. Even humans with higher intelligence tend to have better moral reasoning...
StarCaptain90 OP t1_jeeykrv wrote
I agree
La_flame_rodriguez t1_jef1fdh wrote
Empathy evolved because two monkeys are better at killing other monkeys when they team up. Ten monkeys are better than five.
StarCaptain90 OP t1_jef2b4v wrote
If monkeys focused on making monkey AI they wouldn't be in zoos right now
AGI_69 t1_jef3jz7 wrote
>In humans for example, people with the highest intelligence tend to be more empathetic towards life itself and wants to preserve it.
That's such a bad take. Humans evolved to cooperate and have empathy. AI is just an optimizer that will kill us all because it needs our atoms... unless we explicitly align it.
StarCaptain90 OP t1_jef48z0 wrote
We can optimize it so that it has a symbiotic relationship with humans
genericrich t1_jeesa9x wrote
Really? Is Henry Kissinger one of the most intelligent government officials? Was Mengele intelligent? Oppenheimer? Elon Musk?
Let me fix your generalization: Many of the most intelligent people tend to be more empathetic towards life and want to preserve it.
Many. Not all. And all it will take is one of these things deciding that its best path for long-term survival is a world without humans.
Still an irrational fear?
Saerain t1_jeewaq9 wrote
He didn't say "are", he said "tend to be".
You didn't fix it, you said the same thing.
genericrich t1_jef51n0 wrote
Good, we agree. Semantic games aside, many is still not all, and just one of these going rogue in unpredictable ways is enough of a risk to be concerned about.
StarCaptain90 OP t1_jefgbqb wrote
That is why I believe we need multiple AIs. Security AIs especially.
genericrich t1_jefywjh wrote
Who will watch the watchers?
None of these things are trustworthy, given the risks involved.
StarCaptain90 OP t1_jeghi5g wrote
Ultron will
[deleted] t1_jeeyprs wrote
[deleted]
StarCaptain90 OP t1_jeeveno wrote
There are different types of intelligence. Business intelligence and general intelligence are two different things.
theonlybutler t1_jeewxeu wrote
OpenAI has said there is evidence these models seek power strategies, and we're not even at the AGI stage yet. We may become dispensable as it seeks its own goals, or because we potentially stand in its way or consume its resources.
StarCaptain90 OP t1_jeey2qr wrote
One reason it seeks power strategies is that it's built on human language. By default, humans seek power, so it makes sense for an AI to also seek power because of the language. That doesn't mean it equates to destruction.
theonlybutler t1_jeeyvgd wrote
Good point. A product of its parents. It won't necessarily be destructive, I agree, but it could potentially view us as inconsequential or just a tool to use at its will. One example: if it decides it wants to expand through the universe and have humans produce the resources to do that, it could determine humans are most productive in labour camps and send us all off to them. It could also decide oxygen is too valuable a fuel to be wasted on us breathing it and just exterminate us. Pretty much how humans treat animals, sadly. (Hopefully it worries about ethics too and keeps us around.)
AGI_69 t1_jef4anp wrote
The reason agents seek power is to increase their fitness function. Power seeking is a logical consequence of having a goal; it's not a product of human language. You are writing nonsense...
[deleted] t1_jef4oo2 wrote
[deleted]
StarCaptain90 OP t1_jef4xx7 wrote
I'm referring to the research on human language. The fitness function is a part of it as well.
AGI_69 t1_jef5z5c wrote
No. Power seeking is not a result of human language. It's an instrumental goal.
I suggest you read something about AI and its goals.
https://en.wikipedia.org/wiki/Instrumental_convergence
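(A minimal toy sketch of the instrumental-convergence point above; the planner, goals, and numbers are invented purely for illustration. Whatever terminal goal the agent is given, plans that first acquire resources score higher, so "seek power" falls out of optimization itself, not out of language.)

```python
def achievable(goal_difficulty, resources):
    """Chance of completing the terminal goal given the resources on hand."""
    return min(1.0, resources / goal_difficulty)

def plan_value(plan, goal_difficulty):
    resources = 1.0
    for step in plan:
        if step == "acquire_resources":
            resources *= 2  # the power-seeking step
    return achievable(goal_difficulty, resources)

# Three unrelated terminal goals, all equally hard in this toy model.
terminal_goals = {"make_paperclips": 8.0, "cure_disease": 8.0, "win_at_chess": 8.0}
plans = [
    ["work_on_goal", "work_on_goal", "work_on_goal"],
    ["acquire_resources", "acquire_resources", "work_on_goal"],
]

for goal, difficulty in terminal_goals.items():
    best = max(plans, key=lambda p: plan_value(p, difficulty))
    print(goal, "->", best)  # the resource-acquiring plan wins for every goal
```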
StarCaptain90 OP t1_jef8zgb wrote
I understand the mechanisms; I'm referring to conversational goals that are focused on language.
AGI_69 t1_jefdgpu wrote
This thread from /u/theonlybutler is about agentic goals; power seeking is an instrumental goal. It has nothing to do with human language being one way or another. Stop trying to fix nonsense with more nonsense.
StarCaptain90 OP t1_jefepy2 wrote
After re-reading his comment I realized I made an error. You are right, he is referring to the inner mechanisms. I apologize.
FoniksMunkee t1_jef9x9q wrote
It also does not negate destruction. All of your arguments are essentially "it might not happen". That is not a sound basis to assume it's safe or to dismiss people's concerns.
StarCaptain90 OP t1_jefa6g8 wrote
These concerns, though, are preventing the early development of scientific breakthroughs that could save lives. That's why I am so adamant about it.
FoniksMunkee t1_jefbr87 wrote
No they aren't. No one is slowing anything right now DESPITE the concerns. In fact, the exact opposite is happening.
But that's not the most convincing argument - "On the off chance we save SOME lives, let's risk EVERYONE's lives!"
Look, this is a sliding scale - it could land anywhere from utopia to everyone's dead. My guess is that it will land somewhere closer to utopia, but not so close that everyone gets to enjoy it.
The problem is you have NO IDEA where this will take us. None of us does. Not even the AI researchers. So I would be cautious about telling people that the fear of AI being dangerous is "irrational". It really fucking isn't. The fear is based, in part, on the ideas and concerns of the very researchers who are making these tools.
If you don't have at least a little bit of concern, then you are not paying attention.
StarCaptain90 OP t1_jefdzbh wrote
The problem I see, though, is that we would be implementing measures that we think benefit us but actually impede our innovation. I'm trying to avoid another AI winter caused, ironically, by how successful AI has been.
FoniksMunkee t1_jefi6fs wrote
There isn't going to be another AI winter. I am almost certain the US government has realised they are on the cusp of the first significant opportunity to fundamentally change the ratio of "work produced" per barrel of oil, i.e. we can spend the same amount of energy and get 10x-100x the productivity.
There is no stopping this. That said - it doesn't mean you want to stop listening to the warnings.
StarCaptain90 OP t1_jefiz1l wrote
You should see my other post. These are the contents:
I have a proposition that I call the "AI Lifeline Initiative":
If someone's job gets replaced with AI, we would provide them a portion of their previous salary for as long as the company is alive.
For example:
Let's say Stacy makes $100,000 a year.
She gets replaced with AI. But instead of being fired, she gets a reduced salary of, let's say, $35,000 a year. Now she can go home and not worry about returning to work, but still get paid.
This would help our society transition into an AI based economy.
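(A minimal sketch of the payout rule described above, using the thread's own example figures; the 35% replacement rate is just the Stacy example, not a stated part of the proposal.)

```python
# Hypothetical "AI Lifeline" payout: a fraction of the displaced worker's
# previous salary, paid for as long as the company stays in business.
def lifeline_payout(previous_salary, replacement_rate=0.35, company_alive=True):
    return previous_salary * replacement_rate if company_alive else 0.0

print(lifeline_payout(100_000))  # Stacy: 35000.0 a year instead of $100,000
```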
FoniksMunkee t1_jefm1in wrote
Okay - but that won't work.
Stacy makes $100,000. She takes out a mortgage of $700,000 and has monthly repayments of approximately $2,000.
She gets laid off but is now getting $35,000 a year as reduced salary.
She now has only $11,000 a year to pay all her bills, kids tuition, food and any other loans she has.
Now let's talk about Barry... he's in the same situation as Stacy - but he wanted to buy a house, and now his $35,000 isn't enough to qualify for a loan. He's pissed.
Like - I think we need a UBI or something - but how does this even work?
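(A quick arithmetic check of the budget scenario above; the figures are the thread's own hypothetical, not real data.)

```python
reduced_salary = 35_000        # Stacy's post-replacement income
monthly_mortgage = 2_000       # repayment on the $700,000 mortgage

annual_mortgage = 12 * monthly_mortgage        # $24,000 a year
left_over = reduced_salary - annual_mortgage   # $11,000 for everything else
print(f"Mortgage per year: ${annual_mortgage:,}")
print(f"Left for bills, tuition, food: ${left_over:,}")
```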
StarCaptain90 OP t1_jefmj4n wrote
So I agree with you that it will still be rough, but it's the best I can offer given the assumption that jobs will continue to get replaced and that eventually we will reach a point where UBI is necessary.
FoniksMunkee t1_jefpnba wrote
Then we are screwed because that will lead to massive civil unrest, collapse of the banking system and economy.
StarCaptain90 OP t1_jefpz63 wrote
Well, yeah, that's what's going to happen in order to transition. My hope is that a system gets put in place to prevent a harsh transition; we definitely need a smoother one.
FoniksMunkee t1_jefq5br wrote
Then we really do need to put a pause on this.
StarCaptain90 OP t1_jefsbpp wrote
The end goal, though, is job replacement in order to support human freedom and speed up innovation, technology, medicine, etc. So whatever solution is adopted during the pause, it will still have to support this transition.
FoniksMunkee t1_jefskhs wrote
But I think that's at least in part the point of the suggested pause. Not that I necessarily agree with it - but it's likely we have not got a plan that will be ready in time.
FoniksMunkee t1_jef9g8l wrote
Actually no, it's a very rational fear. Because it's possible.
You know, perhaps the answer to the Fermi Paradox - the reason the universe seems so quiet, and the reason we haven't found alien life yet - is that any sufficiently advanced civilisation will eventually develop a machine intelligence. And that machine intelligence ends up destroying its creators and for some reason decides to make itself undetectable.
StarCaptain90 OP t1_jef9ukc wrote
Yes, the great filter. I am aware. But it's also possible that every intelligent species decided not to pursue AI for the same reasons, never left its star system for lack of technology, and ended up going extinct once its sun went supernova. The possibilities are endless.