Accomplished_Diver86
Accomplished_Diver86 t1_j0yzg5l wrote
Reply to comment by Emu_Fast in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
I agree. This was more of a sentiment than a conclusion.
Already buckled.
Accomplished_Diver86 t1_j0yndi2 wrote
Reply to comment by Capitaclism in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
You are assuming AGI has an ego and goals of its own. That is a false assumption.
Intelligence =/= Ego
Accomplished_Diver86 t1_j0xxb6u wrote
Reply to comment by Capitaclism in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
Well, I agree to disagree with you.
Aligned AGI is perfectly possible. While you are right that we can’t fulfill everyone’s desires, we can however democratically find a middle ground. This might not please everyone, but it would please the majority.
If we do it like this, there is a value system in place the AGI can use to say 1 is right, 2 is wrong. Of course we will have to make sure it won’t go rogue over time (becoming more intelligent). So how? Well, I always say we build the AGI to only want to help humans based on its value system (what is helping? Defined by the democratic process everyone can partake in).
Thus it will fear itself and not want to go in any direction where it would revert from its previous value system of „helping humans“ (yes, defining that is the hard part, but it’s possible).
Also, we can make it value only spitting out knowledge rather than taking actions itself. Or we make it value checking back with us whenever it wants to take an action.
Point is: If we align it properly there is very much a Good AGI scenario.
Accomplished_Diver86 t1_j0ugu2y wrote
Reply to comment by nutidizen in Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
Didn’t say so. I would be happy to have a well-aligned AGI. Just saying that people put way too much emphasis on the whole AGI thing and completely underestimate deep learning AIs.
But thanks for the 🧂
Accomplished_Diver86 t1_j0trf8v wrote
Reply to Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
How is anyone supposed to answer this in a logical way? How can we know if the hardware is good enough if we don’t even have AGI yet?
Reliable answers are not possible - speculation only.
Accomplished_Diver86 t1_j0tr8i0 wrote
Reply to Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
A bit too optimistic. AFAIK current deep learning technology (which is also used in Stable Diffusion and other AI programs such as ChatGPT) is fundamentally flawed when it comes to awareness.
There is hope we will just hit the jackpot in some random-ass way, but I wouldn’t bet my money on it. We probably need a whole revamp of how AIs learn.
But still. The question remains: Do we even need AGI? We can accomplish so many feats (healthcare, labor, UBI) with just narrow AI / deep learning, without the risks of AGI.
People always glorify AGI as if it’s either we get AGI or society stays exactly where it is. Narrow AI / deep learning will revolutionize the world, and that’s a given.
Accomplished_Diver86 t1_j0c6z16 wrote
Reply to Can modern computer architecture make up what it lacks in number of neurons(assuming we are trying to model the brain) by it's speed? by Dat_koneh-98
Hm good question. I don’t know for sure.
One problem that could arise from this, however, is that a neuron not only acts as a transmitter - its position is data in itself.
So if you reuse the neurons for multiple actions at the same time, you might be losing information compared to a human brain, because the neuron is the same one and not locally situated somewhere else.
That’s all I got though, and I am not an expert. Just my thoughts, as I know we don’t understand the brain well enough to even answer your question reliably.
Accomplished_Diver86 t1_j05vo0d wrote
I wouldn’t mind not working. Fuck that. I just want my head to be plugged in the matrix and enjoy my VR Waifus. For real tho. I view my job simply as a formality that allows me to do the shit I want to do in my free time.
Just plug me into the matrix already
Accomplished_Diver86 t1_izw4dnr wrote
Reply to comment by Shelfrock77 in An Odious but Plausible Solution to the Alignment Problem. by OldWorldRevival
Whatever you say Grandma. Let’s get you back into the retirement home.
Accomplished_Diver86 t1_izw21oa wrote
Reply to comment by Shelfrock77 in An Odious but Plausible Solution to the Alignment Problem. by OldWorldRevival
Grandma it is time to take your pills
Accomplished_Diver86 t1_izujzaq wrote
Reply to comment by __ingeniare__ in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Sure, but you are still forgetting the first part of the picture. Expansion means movement. You will have a time when it is good, but not good in all domains. This will resemble what we call AGI.
Humans are good, just not in all the domains and ranges you wish we were. It’s the same thing with AI.
TLDR: Yes but no
Accomplished_Diver86 t1_izuikfz wrote
Reply to comment by __ingeniare__ in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
As you have said (I will paraphrase): „We will build a dumb ASI and expand its range of domains.“
My argument is that ASI inherently has a greater range of domains than AGI.
So if we expand it, there will be a point where the range of domains is human-like (AGI) but not ASI-like.
TLDR: You cannot build a narrow ASI and scale it. That’s not an ASI but a narrow AI.
Accomplished_Diver86 t1_izuhyen wrote
Reply to comment by blueSGL in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Yeah, sure. I agree, that was the point. Narrow AI could potentially see what it takes to make AGI, which would in turn free up resources.
All I am saying is that it would take a ton of new resources to make the AGI into an ASI.
Accomplished_Diver86 t1_izuho09 wrote
Reply to comment by __ingeniare__ in AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Yeah, well, that I just don’t agree with.
Accomplished_Diver86 t1_izuefw6 wrote
Reply to AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Disagree. I know what your point is, and I would agree were it not for the argument that AGI needs fewer resources than ASI.
So we stumble upon AGI. Whatever resources it needs to get to AGI, it will need a lot more of them to get to ASI. There are real-world implications to that (upgrading hardware etc.).
So the AGI would first have to get better hardware to get better, and then need even more hardware to get better than that. All this takes a lot of time.
Of course, if the hardware is there and the AGI is basically just very poorly optimised, then sure, it could optimise itself a bit and use the now-free hardware resources. I just think that’s not enough.
An ASI will not just need to upgrade from a 3090 to a 4090. It will probably need so much hardware that it will take weeks, if not months or years.
For all intents and purposes, it will first need to invent new hardware to even get enough hardware to get smarter. And not just one generation of new hardware, but many.
Accomplished_Diver86 t1_iyzqrty wrote
Reply to What are your predictions for 2023? How did your predictions for 2022 turn out? by Foundation12a
Bro, life will be wild when we can generate videos.
Accomplished_Diver86 t1_j1t2sz4 wrote
Reply to favorite youtube channels about this sub's topic? by Freds_Premium
Hold on to your papers. What a time to be alive