Accomplished_Diver86

Accomplished_Diver86 t1_j0xxb6u wrote

Well, I agree to disagree with you.

Aligned AGI is perfectly possible. While you are right that we can’t fulfill everyone’s desires, we can democratically find a middle ground. This might not please everyone, but it will please the majority.

If we do it like this, there is a value system in place the AGI can use to say option 1 is right and option 2 is wrong. Of course we will have to make sure it won’t go rogue over time as it becomes more intelligent. So how? Well, I always say we build the AGI to only want to help humans based on its value system (what counts as helping? Defined by the democratic process everyone can partake in).

Thus it will guard against itself and not want to drift in any direction that departs from its existing value system of "helping humans" (yes, defining that is the hard part, but it is possible).

We can also make it value only outputting knowledge rather than taking actions itself. Or we make it value checking back with us whenever it wants to take an action.
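To make that last point a bit more concrete, here is a minimal sketch in Python of what a "check back with us" gate could look like. It is purely illustrative (the function names are made up), but the idea is that the system only proposes actions, and a human has to approve each one before anything is executed.

```python
# A minimal, purely illustrative sketch of the "check back with us" idea:
# the system may only *propose* actions, and nothing is executed without
# explicit human approval. Function names here are hypothetical.

def approve(description: str) -> bool:
    """Ask a human operator to approve or reject a proposed action."""
    answer = input(f"AGI proposes: {description!r} -- approve? [y/N] ").strip().lower()
    return answer == "y"

def execute(description: str) -> None:
    """Stand-in for actually carrying out an approved action."""
    print(f"Executing: {description}")

if __name__ == "__main__":
    proposed = ["summarize today's research papers", "order more GPUs"]
    for action in proposed:
        if approve(action):
            execute(action)
        else:
            print(f"Rejected: {action}")
```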

Point is: if we align it properly, there very much is a good AGI scenario.

2

Accomplished_Diver86 t1_j0tr8i0 wrote

A bit too optimistic. AFAIK, current deep learning technology (which also underlies Stable Diffusion and other AI programs such as ChatGPT) is fundamentally flawed when it comes to awareness.

There is hope we will just hit the jackpot in some random-ass way, but I wouldn’t bet my money on it. It probably needs a whole revamp of how AIs learn.

But still. The question remains: Do we even need AGI? We can accomplish so many feats (healthcare, labor, UBI) with just narrow AI / deep learning, without the risks of AGI.

People always glorify AGI as if it were all or nothing: either we get AGI or society stays exactly where it is. Narrow AI / deep learning will revolutionize the world, and that’s a given.

74

Accomplished_Diver86 t1_j0c6z16 wrote

Hm good question. I don’t know for sure.

One problem that could arise from this, however, is that the neuron does not only act as a transmitter; its position is data in itself.

So if you reuse the same neurons for multiple actions at the same time, you might be losing information compared to a human brain, because the neuron stays the same instead of being situated somewhere else locally.

That’s all I’ve got, though, and I am not an expert. Just my thoughts; as far as I know, we don’t understand the brain well enough to even answer your question reliably.

2

Accomplished_Diver86 t1_izujzaq wrote

Sure, but you are still forgetting the first part of the picture. Expansion means movement. There will be a stage where it is good, but not yet good in all domains. That will resemble what we call AGI.

Humans are good, just not in all the domains and ranges you might wish we were. It’s the same with AI.

TLDR: Yes but no

1

Accomplished_Diver86 t1_izuikfz wrote

As you have said (I will paraphrase): "We will build a dumb ASI and expand its range of domains."

My argument is that ASI inherently has a greater range of domains than AGI.

So if we expand it, there will be a point where the range of domains is human-like (AGI) but not yet ASI-like.

TLDR: You cannot build a narrow ASI and scale it. That’s not an ASI but a narrow AI.

1

Accomplished_Diver86 t1_izuefw6 wrote

Disagree. I know what your point is, and I would agree were it not for the argument that AGI needs fewer resources than ASI.

So we stumble upon AGI. Whatever resources it needed to get to AGI, it will need a lot more of them to get to ASI. There are real-world implications to that (upgrading hardware, etc.).

So the AGI would first have to get better hardware to improve, and then even more hardware to improve beyond that. All of this takes a lot of time.

Of course, if the hardware is already there and the AGI is basically just very poorly optimised, sure, it could optimise itself a bit and use the hardware resources it frees up. I just think that’s not enough.

An ASI will not just need to upgrade from a 3090 to a 4090. It probably needs so much hardware that acquiring it would take weeks, if not months or years.

For all intents and purposes, it will first need to invent new hardware just to get enough compute to become smarter. And not just one generation of new hardware, but many.

5