Comments

stupidimagehack t1_j0r80i3 wrote

ChatGPT took 5 days to reach a million users. I suspect the singularity will be like that, only with a bigger impact.

28

FrugalityPays t1_j0szaco wrote

Oh no, it took our jobs!

Wait, I’m still on payroll and am just now directing AGI… I’m okay with this

2

SteppenAxolotl t1_j0r52dl wrote

No. There are first-principles reasons why the physical world can't change instantly; human civilization is very large and will take much time to alter.

What is the Technological Singularity?

16

SmithMano t1_j0stsh2 wrote

I think his point is that once you have an AI that can do anything, it will be able to improve on itself so fast that it will just surpass us in no time.

2

SirDidymus t1_j0qwzqs wrote

I don’t see why not. Anything with the power and agency to exponentially better itself has no foreseeable limits.

15

Agreeable_Bid7037 t1_j0r7rek wrote

I have a feeling AI will be the one to make AGI itself, with some help from humans, of course.

11

Not-Banksy t1_j0rqfv7 wrote

I hear this cited a lot, but isn’t that exactly what a motivated human brain with an internet connection is?

There’s still an X factor that intelligence alone can’t supply to generate discovery.

4

SirDidymus t1_j0tb1b3 wrote

Far from it. A human brain is physically limited, while an AGI has the capacity to expand to meet its requirements. That same X factor was deemed impossible for machines in the arts, too, in the very recent past.

1

Heizard t1_j0r11jy wrote

The singularity is the moment when we can no longer predict or calculate what comes next.

We can't predict technological progress at this point, so in a sense we are in a singularity already.

But yes, AGI will make the future even less predictable.

12

randomwordglorious t1_j0r29r3 wrote

Exactly. If you were to fall into a black hole, you wouldn't be able to notice when you crossed the event horizon, the point at which your collapse toward the singularity was inevitable. I think we've crossed the AI event horizon.

17

ihateshadylandlords t1_j0rb5fz wrote

To me, AGI is just a program with the IQ of your average person. I don’t see how that will lead to a singularity.

8

EulersApprentice t1_j0rpi6m wrote

There are other advantages that computers inherently have over people that aren't captured by IQ: speed, direct thought-access to calculators and computational resources, and the ability to run at full capacity 24/7 without needing time to sleep or unwind.

11

SeaBearsFoam t1_j0sbqxx wrote

Exactly. An AGI with the same level of intelligence and computational capacity as a human would still have significant advantages over humans. Like:

Hardware:

Speed. The brain’s neurons max out at around 200 Hz, while today’s microprocessors (which are much slower than they will be when we reach AGI) run in the GHz range, on the order of 10 million times faster than our neurons. And the brain’s internal communications, which can move at about 120 m/s, are horribly outmatched by a computer’s ability to communicate optically at the speed of light. (A rough calculation of these ratios appears after this list.)

Size and storage. The brain is locked into its size by the shape of our skulls, and it couldn’t get much bigger anyway, or the 120 m/s internal communications would take too long to get from one brain structure to another. Computers can expand to any physical size, allowing far more hardware to be put to work, a much larger working memory (RAM), and a long-term memory (hard drive storage) that has both far greater capacity and precision than our own.

Reliability and durability. It’s not only the memories of a computer that would be more precise. Computer transistors are more accurate than biological neurons, and they’re less likely to deteriorate (and can be repaired or replaced if they do). Human brains also get fatigued easily, while computers can run nonstop, at peak performance, 24/7.

Software:

Editability, upgradability, and a wider breadth of possibility. Unlike the human brain, computer software can receive updates and fixes and can be easily experimented on. The upgrades could also span to areas where human brains are weak. Human vision software is superbly advanced, while its complex engineering capability is pretty low-grade. Computers could match the human on vision software but could also become equally optimized in engineering and any other area.

Collective capability. Humans crush all other species at building a vast collective intelligence. Beginning with the development of language and the forming of large, dense communities, advancing through the inventions of writing and printing, and now intensified through tools like the internet, humanity’s collective intelligence is one of the major reasons we’ve been able to get so far ahead of all other species. And computers will be way better at it than we are. A worldwide network of AI running a particular program could regularly sync with itself so that anything any one computer learned would be instantly uploaded to all other computers. The group could also take on one goal as a unit, because there wouldn’t necessarily be dissenting opinions and motivations and self-interest, like we have within the human population.
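To put rough numbers on the speed point above, here's a back-of-the-envelope sketch in Python. The constants are just the illustrative figures from this comment (200 Hz neurons, 120 m/s axons, light-speed optics); the specific ~2 GHz clock is my own assumption for a "typical" processor:

```python
# Back-of-the-envelope speed comparison using the figures quoted above.
# All constants are rough illustrative assumptions, not measurements.

NEURON_FIRING_RATE_HZ = 200        # approximate max firing rate of a biological neuron
CPU_CLOCK_HZ = 2e9                 # a typical ~2 GHz processor clock (assumed)
AXON_SIGNAL_SPEED_M_S = 120        # signal speed along brain axons
LIGHT_SPEED_M_S = 3e8              # optical communication, roughly the speed of light

clock_ratio = CPU_CLOCK_HZ / NEURON_FIRING_RATE_HZ
signal_ratio = LIGHT_SPEED_M_S / AXON_SIGNAL_SPEED_M_S

print(f"Clock speed ratio:  {clock_ratio:.1e}x")   # ~1.0e+07, i.e. ~10 million times
print(f"Signal speed ratio: {signal_ratio:.1e}x")  # ~2.5e+06, i.e. ~2.5 million times
```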

7

mocha_sweetheart t1_j0sci4r wrote

Thanks for the thoughts on this topic. Also, remember it won’t have the human brain’s biases and inefficiencies.

1

nebson10 t1_j0rs78z wrote

Once you make one average-IQ computer, it won't be long before you can make an army of them that work 24/7 basically for free. It's the scaling that's important.

8

LastofU509 t1_j0rv79c wrote

So... we'll have cyber war and WW3 sooner than we expect? Cool, LMAO.

2

nebson10 t1_j0sckfu wrote

I didn't literally mean an army, but you're not wrong.

1

porcelainfog t1_j0soig3 wrote

We can copy and paste it. It’s not just one human equivalent. You create an army.

1

z0rm t1_j0re8wy wrote

Depends on what "immediately" means. The same year? No, of course not. Within 30 years? Probably. An unlikely but possible scenario is that it takes a very long time to achieve AGI and we never see a singularity.

5

Nervous-Newt848 t1_j0s0odj wrote

Once we reach AGI, ASI will come soon after, I would say within 10 to 15 years, because of scaling, as someone else here said. First we will have one AGI machine. After that we will have two, then hundreds, then thousands, then millions of AGI machines working on different problems, 24/7 with no breaks.
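As a toy illustration of that scaling curve, here's a sketch with purely hypothetical numbers: if the count of deployed AGI instances doubled on some fixed cadence, going from one machine to a million takes only about 20 doublings.

```python
import math

# Toy scaling model: the doubling cadence is a purely hypothetical
# assumption, just to show how quickly "one, then two, then hundreds,
# then millions" plays out under repeated doubling.

target_instances = 1_000_000
doublings_needed = math.ceil(math.log2(target_instances))  # log2(1e6) ~ 19.9 -> 20

months_per_doubling = 6  # hypothetical cadence
years_to_millions = doublings_needed * months_per_doubling / 12

print(f"{doublings_needed} doublings to reach {target_instances:,} instances")
print(f"~{years_to_millions:.0f} years at one doubling every {months_per_doubling} months")
```

At that hypothetical cadence, the jump takes about 10 years, which lines up with the 10-to-15-year window above.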

3

mihaicl1981 t1_j0r2nvj wrote

Definitely not. We will stop it, because we fear things we can't control, unless the alignment problem is solved by then. Of course, that will be the main story, but behind closed doors some organisations will try to go further and solve it.

That is the risk, actually. It will be the same as nukes: heavily regulated.

2

LastofU509 t1_j0rvahq wrote

Do we? LOL. Einstein was right. So was Ted.

1

tolgaakman t1_j0rqbdu wrote

Intelligence needs embodiment. It needs a physical body, and the more it is like us, the more we can talk about AGI and the singularity and what have you. To date, all we have are fancy names, hype, and a ton of people who aren't even aware of what intelligence means but talk about the singularity. So don't worry, nothing will happen even in the next 50 years. We will only encounter more mathematical formulas for which nature has no use at all.

2

bustedbuddha t1_j0rykkn wrote

That's a matter of goalpost-moving. We've obviously crossed multiple qualifiers for the "singularity", such as having the processing power to simulate an environment and developing systems beyond human understanding, so the goalposts have been moved by people who are uncomfortable with the idea that the singularity already happened. (Not that it really comforts me, but basically that.)

So the current goalpost for the "singularity" is an active AGI.

2

jj_HeRo t1_j0sdf22 wrote

Good question: if the AGI has access to resources and is self-aware, I would say that reduces the amount of time greatly.

2

TemetN t1_j0qzrb3 wrote

Improbable. I've increasingly come around to the view that the integration of narrow AI into R&D is the form the building blocks of the singularity will take. Contrastingly, while I expect AGI by the middle of the decade, I think it's more likely to be along the weak definition (à la Metaculus, or its historical meaning). What this means is we're likely to see a gradual increase in the pace of technological advancement, but it's likely to start in parallel with AGI.

1

96suluman OP t1_j0rbwca wrote

I think it will be between 2023 (yes, really; one person wants to do it next year) and 2060.

2

TemetN t1_j0rdzzr wrote

I've tried to game out a long timeline, and it just doesn't work. Even presuming a horribly destructive world war, or running into an as-yet-unseen bottleneck, I can't game out anything close to that. The problem with assuming such a thing is that what's been individually demonstrated to date indicates we should reach AGI shortly. So while I could see it being delayed to later in the decade, anything past that seems like a one-percent-or-less situation.

1

EulersApprentice t1_j0rp4sj wrote

Probably not instantly, but I wouldn't guess it'd take too too long. Maybe a few years at most.

1

President-Jo t1_j0rvtud wrote

ASI is likely what’s going to trigger the version of the singularity you’re referring to. AGI could very well be what creates an ASI.

1

BinaryFinary98 t1_j0s1mia wrote

It is hard to say, but it may have the capacity to happen very quickly, yes. If it is able to improve its own mechanism of intelligence, making itself capable of improving further in a positive feedback loop, then it could potentially happen very quickly.
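A minimal sketch of that feedback loop, with all parameters hypothetical: if each generation improves not only the system's capability but also the improvement process itself, growth compounds faster than a plain exponential.

```python
# Minimal sketch of recursive self-improvement as a positive feedback loop.
# The update rules and constants are hypothetical illustrations, not a model
# of any real system.

capability = 1.0        # starting capability, in arbitrary units
improvement_rate = 0.1  # fraction by which each generation improves itself

for generation in range(1, 11):
    # Better systems make better improvers: each pass improves both the
    # capability and the rate at which the next pass can improve it.
    capability *= 1 + improvement_rate
    improvement_rate *= 1 + 0.5 * improvement_rate
    print(f"generation {generation:2d}: capability {capability:6.2f}, "
          f"rate {improvement_rate:.3f}")
```

Because the rate itself grows on every pass, the trajectory is super-exponential; that runaway is the intuition behind the singularity happening "very quickly".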

1

Ortus12 t1_j0swkua wrote

Yes.

AGI is a relative term, as no intelligence is completely general, not even human intelligence. But when people use it, they usually mean intelligence that is at least as capable and general as that of most humans.

When AGI gets to this point, it will be vastly more intelligent than humans because of all the advantages that computers already have over human brains. It will be able to rewrite its own code, perform tasks for pay and use that money to buy more server farms and solar farms, optimize existing hardware, and design more cost-effective chips.

That is what we refer to as the "intelligence explosion" singularity. It's a feedback loop that starts when AGI reaches human capabilities.

1

Hailtothething t1_j0t3md8 wrote

Maybe when quantum capabilities are added in.

1

LastofU509 t1_j0ru0fw wrote

No. It's a wet dream. The best-case scenario leads to human-race wipeout, lol.

−1