Submitted by thecoffeejesus t3_11b5vs6 in singularity

If there’s one thing I’ve noticed while investigating and exploring all of this new AI stuff, it’s that people fundamentally lack imagination.

It seems like people simply cannot step outside of their own perspective in their own world.

I constantly see people on this sub, and all other places online saying things like:

“I just don’t see a use case for it in my industry.”

“I haven’t seen anything that would make me wanna use it.”

“All the things I’ve seen look pretty basic. I’m not concerned.”

Grow up. I’m sorry to be harsh, and I don’t mean to be rude, but I just don’t understand how folks are so completely missing the point.

Yes, the things that we have now are not full AGI or singularity yet.

Yet.

I don’t understand why people can’t grasp the concept.

We’ve gone from horse and buggy to space stations in 100 years.

We’ve gone from no computers, no Internet, to LLM AI in less time.

The difference is that a space station can’t teach itself to build better space stations.

What do people not understand about exponential growth?

Look at all of the different products and services that have been built using just the simplest AI tools we have right now.

How are people not understanding that as fast as things are changing right now, things in the future will change faster and faster?
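
Here’s a toy sketch of the gap (the numbers are made up; it’s just the shape of the curve that matters):

```python
# Toy comparison: a fixed yearly improvement vs. yearly doubling.
def linear(start, years, step=1.0):
    # the intuition most people run on: the same-sized step every year
    return start + step * years

def exponential(start, years, rate=2.0):
    # doubling: every step builds on everything before it
    return start * rate ** years

for years in (1, 5, 10, 20):
    print(f"{years:>2} yr  linear: {linear(1, years):>4.0f}   "
          f"exponential: {exponential(1, years):>9.0f}")
# by year 20 the gap is ~50,000x (21 vs 1,048,576)
```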

I keep trying to tell my friends and family about AI, and the only thing that’s piqued their interest is ChatGPT.

But not even in the way that they actually understand what’s going on. They seem to only see the ways that it can make their lives a little bit easier.

“Oh, so this thing can write emails for me?”

“Wait, so it’s like Google but like, you ask it stuff?”

“Can you ask it what boba is made out of?”

And then we get into the part that really frustrates me: when I start showing them things, they just don’t understand, or even care to try to understand, what they’re really looking at, how it’s going to impact their future, or how they can leverage its immense power right now to make a difference in their lives.

“So can it do my taxes for me?”

“Does it know how to make $1 million?”

“Can it write my social media posts for my job?”

It’s like they’re so close to getting it, but they just completely and fundamentally miss the point. They just can’t see the bigger picture.

I don’t know what that means. I don’t know if that’s a symptom of our society, or human nature. Are people really only so concerned with themselves and their immediate surroundings?

Every so often I’ll encounter someone who seems to grasp the magnitude of what’s going on, but even when that happens, most people I’ve talked to either reject it and want to destroy it, or they get overwhelmed and change the subject to something more immediate, like the weather or the game.

I enjoy reading the posts on this sub because I feel like I’m staying informed. Thank you all for doing what you do and posting what you post.

I just wish that more people cared in my real life, you know?

180

Comments

helpskinissues t1_j9w47cr wrote

This subreddit lacks imagination as well; it's mostly fanboying OpenAI because chatGPT was first, criticizing Google, Meta, etc., and chanting "product over research!!!" all the time.

Fortunately some smart people wander these lands.

I am genuinely surprised that people are discovering AI only now (in this community) when movies like Ex Machina or even Terminator came out ages ago.

Yes, AI will surpass humans. Yes, we will use technology to enhance our lives and intelligence. Human species without enhancements won't be productive to work in any job in a matter of decades.

Etc etc. It's all known.

But, for example, I don't see people in this subreddit acknowledging Waymo (or Cruise, or even Tesla's self-driving AI). Waymo is directly changing people's lives and removing taxi jobs right now, using an AI system able to drive as well as humans, and nobody gives a damn; we all talk about a chatbot that can write Edgar Allan Poe poems nobody cares about. Obviously biased.

Today's release of Llama is one of the most impressive feats of recent months; we'll see major effects of that event through the year.

62

thecoffeejesus OP t1_j9w7jto wrote

I completely agree with you. With the way things are going, the divide between the people with access to this tech, and the people without it is going to be astronomical.

I’m just hoping that regular folks like me can ride the wave

10

helpskinissues t1_j9wptin wrote

Lol, to some people here (check replies to my comment) having 24x7 self driving cars without drivers in San Francisco, Los Angeles and Phoenix in 2023 is nothing.

People are not understanding what is happening. Robots are literally replacing our driving skills NOW, not in the future, NOW, and people are like "yeah but it can't run on a Norwegian mountain yet". Lol.

22

cypherl t1_j9wz653 wrote

I feel you. The older people I work with keep saying things like "these electric cars are never going to work." They might have a long list of drawbacks, my old friends, but Norway goes 100% electric for new cars in 2025. It's not coming. It's here now.

13

Deadboy00 t1_j9x50uk wrote

https://jalopnik.com/san-francisco-wants-new-restrictions-on-cruise-waymo-1850050281

Just because you can move the goal post doesn’t necessarily guarantee an actual goal.

−3

helpskinissues t1_j9xs52z wrote

Woah, lobbies against new tech endangering jobs. What a surprise.

Do you usually trust politicians this much? Without any data to back it up?

4

Exel0n t1_j9y6n2x wrote

Let them kill themselves.

Back in the 19th century, places that rejected the railway for XYZ reasons ended up decaying, while the ones that got hooked on rails became booming towns that last to this day, e.g. the railways in Taiwan, in Siberia, etc. The cities that actively rejected rails passing through them declined soon after, and their place got taken by others.

If SF wants to be next, so be it.

2

Tall-Junket5151 t1_j9z4bky wrote

Humans are surprisingly adaptable; things that would have blown my mind even 5 years ago I take for granted now. I live in California and am often in the Bay Area, where I see Waymo cars without batting an eye. I have a Tesla, and 95% of my highway driving is via Autopilot without my even really thinking about it. It’s just all so normal to me. Same with language models: I tried GPT-3 when it just came out, and that truly blew my mind, more than ChatGPT, because it was my first encounter. Even AI art seems normal to me now. So it’s not that the tech isn’t mind-blowing; it’s that you eventually get used to it. I mean, take an objective look: the fact that tech like computers exists at all is mind-blowing in itself.

The most recent thing that impressed me was AI voice synthesis with Elevenlabs, but I’m sure like everyone I will get used to it. So people will always focus on the next big thing and that at the moment is ChatGPT or large language models as a whole.

8

visarga t1_ja5036a wrote

> Humans are surprisingly adaptable, things that would have blown my mind even 5 years ago I take for granted now.

No way automation can keep up, we'll take everything for granted and still have to work to bring it to the next level.

1

Difficult_Review9741 t1_j9wa5th wrote

I seriously doubt anyone has lost a job due to Waymo. It operates in only some parts of two cities.

Tesla "self driving" definitely hasn't taken even one job.

6

kaityl3 t1_j9xu3dd wrote

My cousin works for Anthem, and was in the claims department - they recently deployed an AI to read through and analyze/approve or reject claims. A human employee would then review its work.

I believe he said 70% of its judgements required no further human editing; the reviewer didn't have to do anything but check off on the AI's work.

8

MrTacobeans t1_j9yedqd wrote

This is exactly the kind of AI that shouldn't even be scary. It's taking monotonous labor and doing the majority of it. If Anthem holds to any kind of decency, their employees can focus on other pursuits within the company while an AI crunches the nitty-gritty bits.

If that AI axes 70% of the workforce without properly moving each affected employee on to new roles, that's criminal. But also a possible outcome, unfortunately :/

2

drekmonger t1_j9znjjt wrote

> This is exactly the kind of AI that shouldn't even be scary.

Shouldn't be scary. Should be celebrated.

But...capitalism. The people who control such systems will get stupid wealthy, and the people who will be out of a job will go starve under a bridge.

5

visarga t1_ja50rdh wrote

Verifying the AI's work probably takes 50% of the time it would take to do the job manually, so the relative advantage is smaller.

But another advantage of a human+AI team is that the AI can be calibrated to ensure a baseline of quality. Humans might have higher variance: have a bad day, be tired, inattentive. So it is useful for increasing consistency, not just volume.

1

madali0 t1_j9yrqcl wrote

Isn't that basically how it has always been? Some primal smart guy invents a tool which replaces some menial job, making it easier and faster. And on and on with every tool; it could be a wheel, a hoe, a toaster. It's all basically the same idea.

0

helpskinissues t1_j9wp41x wrote

"cities" bigger than European countries.

6

turnip_burrito t1_j9wwcvy wrote

Specifically Andorra, Vatican City, Liechtenstein, and a couple of others, which are all tiny.

1

helpskinissues t1_j9xs1pu wrote

You must be a troll lol. Have you compared the populations of Los Angeles, San Francisco or Phoenix?

Edit: if you're too lazy to check, Andorra has a population of 80k. Los Angeles, San Francisco and Phoenix have more than 1 million each, probably around 5-7 million citizens together. That's more than Denmark, Finland, Norway, Estonia, Latvia... each of which has fewer than 6 million citizens.

So imagine a whole country like Finland/Denmark having self driving cars everywhere.

3

turnip_burrito t1_j9xsnps wrote

There's a lot of people. So what?

All those cities are well-marked and mapped for the most part compared to most everywhere else. And their weather is also better than most everywhere else (clear skies most of the time, almost no snow to speak of).

−2

helpskinissues t1_j9xsy0f wrote

"so what?" So Andorra and Vaticano are just trolling examples. We're talking about human drivers being replaced for AI drivers in cities as populated as whole countries.

Most capital cities are very well mapped in every modern country. The weather isn't that good, and the only reason Waymo is still not available in other places is licensing, not technical capability. Waymo can already handle storms and snow.

Anyway, you're the only one here discussing whether Waymo can drive in extreme scenarios; I don't see the point or how it's related to the thread. chatGPT can only work where there's stable Internet as well, lol. Tech has limitations by default.

2

turnip_burrito t1_j9xu8nu wrote

I'm pointing out that your phrasing "larger than European countries" is deceptive. If you are being honest, then in terms of land area (square kilometers), those cities are larger than those countries, and only those countries. Certainly not Spain, France, or Germany, all of which are larger than Phoenix, SF, and LA.

I'm not sure how relevant population is when basically nobody uses self-driving cars in those cities. You see more cars on the road, and pedestrians/cyclists, which I guess is the point you are making?

Weather isn't that good? Are you kidding me? All three of those cities have good weather for driving conditions. Anyhow it's good to hear Waymo can handle storms and snow.

If you can bring up self-driving cars in this thread that doesn't mention them in the OP, then I can continue to discuss the details of self-driving cars in a reply to your post. It's fair game.

4

helpskinissues t1_j9xunie wrote

The impact of technology is measured in users, not in land size.

The weather isn't that good. It rains (heavy rain these last weeks). And driving conditions in Los Angeles are far from the best in the world; it's infamous for terrible traffic.

I don't have any issue with your mentioning Waymo's limitations, but that's missing my point: how AI is impacting human lives (not land area). And when you discover that the main limitation on Waymo's release is actually political licensing, well, it's even more surprising.

1

turnip_burrito t1_j9xv56c wrote

>The impact of technology is measured in users, not in land size.

How many people in these cities actually have cars that are driving themselves?

1

helpskinissues t1_j9xvj6l wrote

No need to own chatGPT, just like there's no need to own a Waymo car. It's basically a service. And millions are able to use it right now (albeit maybe only around 1 million because of licensing; it's not fully released for every user yet).

But, Cruise also exists.

https://www.thedetroitbureau.com/2022/12/cruise-expands-testing-to-two-new-cities-as-gm-grows-commitment/

Arizona, San Francisco...

As far as I can tell, around 1-3 million citizens have an actual, effective alternative to human drivers available.

And if we count Tesla (I wouldn't, but it's still an impressive driving assistant) as self driving, we jump to dozens of millions very quickly.

1

vivehelpme t1_jacsmcn wrote

>Tesla "self driving" definitely hasn't taken even one job.

It took the job of the kamikaze pilot

1

play_yr_part t1_j9wjphf wrote

This. IDK the timeframe for completely autonomous self-driving, as it seems to have been "within a decade" for like a decade now lol, but with Tesla's self-driving at least, recent updates have sometimes been one step forward, two steps back.

Entirely possible another car maker's version could change that in a flash though.

−1

helpskinissues t1_j9wpg4g wrote

So having 24x7 no-driver self-driving cars operating in Los Angeles, San Francisco and Phoenix (and waiting on licenses to run in New York and other cities) is not "completely autonomous driving"? Why do you focus on Tesla, which isn't even trying to replace drivers?

7

play_yr_part t1_ja5n5p7 wrote

Late reply, and I confess my ignorance about Waymo other than the occasional thing I see on social media. If they're likely to scale up in a way where things will be vastly different in several years time then fair enough.

1

Kennybob12 t1_j9wph4w wrote

Mercedes actually just passed Tesla with their certification to use their FSD-equivalent system in the US, which to me is a better sign than any that we are approaching that precipice. I'm much more interested in relevant businesses injecting some AI into their processes than some hotshot with a dream (or some rockets) who can't make a decent vehicle to save his life.

2

Surur t1_j9xj4ww wrote

Mercedes's system is really bad: it just follows the car in front, and if there is no car in front it won't activate.

4

[deleted] t1_ja0ijct wrote

[deleted]

2

Kennybob12 t1_ja3tpxh wrote

Are you in Nevada? That is the only place it's been registered to operate as of today. Otherwise, yes, you are still driving a Level 2 system. No matter what your experience is, it takes meeting a certain set of criteria to be certified as Level 3. Tesla doesn't just get some magic pass. They don't have it. They are close, but by going off radar they will create more problems than they solve.

0

[deleted] t1_ja3zape wrote

[deleted]

2

Kennybob12 t1_ja433w0 wrote

You're absolutely right, the last time I saw a Mercedes phantom-brake or spontaneously combust was because of its inferior autopilot. Maybe the software is there, but the car is miles away from what it promises. And unfortunately you still drive a car, not a program.

0

PhysicalChange100 t1_j9y3uu3 wrote

To be fair, there's a growing movement where cars are not seen as part of the future but seen as a hindrance to progress.

And frankly, I support that movement; car-oriented cities are a nightmare to live in. No wonder people are not excited for Waymo.

r/fuckcars

3

helpskinissues t1_j9y3ya7 wrote

Electric, noiseless cars without drivers and without deaths? Well. Sounds cool to me.

0

PhysicalChange100 t1_j9y4kfk wrote

Well, it doesn't sound as cool when you imagine millions of people having these cars, basically recreating traffic, plus the whole list of problems associated with car-oriented cities.

High-speed trains, buses, bikes, and tramways are more efficient and ideal ways of transportation.

2

helpskinissues t1_j9y4y2e wrote

Not really that much more efficient or effective once you add in the walking segments and the delays. Buses are really bad and slow compared to taxis even in the best scenarios.

I also disagree with the "recreating traffic" argument. More people would use public transportation thanks to self-driving minibuses.

2

Puzzleheaded_Pop_743 t1_j9yckq3 wrote

I think you are missing their point. They are just saying that many cities are structured so inefficiently that cars are required to live there. But this is not necessary.

3

helpskinissues t1_j9yd07j wrote

That's unrelated to self-driving technology; it's not a criticism of the technology, nor a reason to be against it. It's unrelated.

1

Deadboy00 t1_j9yr02x wrote

If policy increases the capacity for more cars to be on the road, it will increase the amount of cars on the road.

Mo cars, mo problems.

NYC and other cities are actively trying to limit congestion. 14th Street in Manhattan (one of the widest, most travelled) has been restricted to buses and bikes for the last couple of years. Plus there are congestion fees, tolls, etc. to discourage cars. And more legislation* is on the way.

*with overwhelming support by the public

3

helpskinissues t1_j9yrauh wrote

I don't see any contradiction between restricting the number of vehicles and self-driving technology. In every city with vehicle limitations, taxis are available. Waymos are basically next-gen taxis. So I don't see any issue.

2

HolmesMalone t1_j9yl8lc wrote

On a post about lacking imagination.

Self-driving cars can include buses and minivans. They can transfer you to a bus, etc. Overall this requires wayyyy fewer vehicles and wayyyyy fewer parking spaces, allowing existing cities and infrastructure to be reclaimed for walking and so on. SF has the iconic cable cars; this could be like that.

1

Tall-Junket5151 t1_j9z6kit wrote

You lack conceptual understanding of future tech: full self-driving cars wouldn't have any traffic, because they would coordinate perfectly.

Additionally, I like living in the suburbs and will never live in a cramped inner-city apartment. A car is the best option because it’s the most effective means of transportation for me. If I want to go somewhere, I just get in my car and drive there. I don’t have to learn which convoluted public transportation routes might get me there. Even worse if it’s raining or snowing, because public transportation never drops you off at your destination; there’s usually a decent walk involved. So no thanks.

1

SnipingNinja t1_ja05rob wrote

People aren't grasping LLaMA: it's capable of running on consumer hardware that many are likely to own. The future is coming fast.

1

Lawjarp2 t1_j9x66jp wrote

People lack imagination because, at their core, they're just next-word predictors.

46

EbolaFred t1_j9yrqbv wrote

I've been thinking a lot about this lately.

Maybe 10% of the people I interact with are free to let their thoughts wander and generate new ideas. The rest seem to be running an NPC script. I can literally predict their full sentences before they say them with pretty decent accuracy.

Now, of course, I don't know what they are really thinking. They may have developed some kind of safe social filter to make themselves appear normal and socially acceptable; maybe they've had some bad experiences where they shared their real thoughts and rocked the boat too much, so they shut that part down. But it's something to think about, now that we have access to recent LLMs and can experience how well next-word prediction can work.
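
If it helps, this is roughly what "next word prediction" means at its most stripped down: a toy bigram table, nothing like a real LLM (the corpus is a made-up stand-in):

```python
from collections import Counter, defaultdict

# Count which word follows which, then always predict the most
# frequent follower.
corpus = ("nice weather today . nice weather this weekend . "
          "did you see the game . did you see the weather").split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else "?"

print(predict_next("nice"))  # -> weather
print(predict_next("did"))   # -> you
print(predict_next("the"))   # -> weather (seen twice, vs. game once)
```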

3

WarAndGeese t1_j9yvi55 wrote

They are just focussed on different aspects of their lives than you are. You have gone through and seen the same conversations over and over; you have seen the common responses. It's like playing a video game and knowing the 'meta' game. Hence when you go and tell someone something and they are hearing about it for the first or second or third time, their response will probably be one of the popular responses that you already know about.

That said they're people just like you. It's not productive for you to look down on them or for them to look down on you, they have different priorities at the moment and hence they are somewhere else mentally.

That said, those of us here can agree that maybe their priorities are wrong, but it's not some fundamental divide between people.

8

WarAndGeese t1_j9yvks9 wrote

Our priorities are maybe closer to what we should be doing, but they are also very flawed.

1

SlowCrates t1_ja40155 wrote

Oh god, I don't know. There are certain people whose predictability and unoriginality are so grating on me that it makes me seriously wonder if I have a severe personality disorder. (I don't as far as my therapist believes).

They're everywhere, but in varying degrees. Some are aware they're doing it, and some aren't. There are people who are completely content having nothing but bullshit fill their minds, who only listen to the radio, wear brand-name clothing with big, easily identifiable logos in the middle of the chest, whose political opinion is copy and pasted from those around them.

1

WarAndGeese t1_j9yuzzl wrote

That logic doesn't make sense. What you say about people universalizes, but in OP's statement there are two groups of people, those who have this imagination and those who lack it, and those who see it are criticizing those who don't. If what you posit were the response to what OP said, then that divide wouldn't exist.

That is: either everyone is a word predictor and they all have that imagination, in which case OP's situation doesn't present itself; or everyone is a word predictor and none of them has that imagination, in which case OP's situation doesn't present itself either; or everyone is a word predictor, some with that imagination and some without, in which case your response isn't an answer.

3

Lawjarp2 t1_j9z3720 wrote

Don't be just the core when you can be so much more.

1

ABshr3k t1_j9ww5dw wrote

Well, relatives and colleagues not getting the extent of how much things will change (and how fast) does not bother me as much as the “smart” people in media (even tech media) totally missing the point. They do not bother to do an iota of research, and sound more or less like the general public while fawning over or criticizing the ONE AI system they know of: ChatGPT. More than a lack of imagination, theirs is pure laziness.

31

magosaurus t1_j9y7n5e wrote

I work in tech as a career software developer and I'm finding that my non-tech friends and relatives seem to have better intuition about the significance of what we're seeing and where things are going. My co-workers seem uninterested and don't get it.

This surprises me and I don't have a satisfying explanation for it.

I think they *think* they know more than they do and are dismissing it based on their prior experience with AI tech. That's my best theory.

14

thelefthander t1_j9yk5b8 wrote

This is very interesting; it’s like a reverse Dunning-Kruger effect. Or perhaps the uninitiated are more prone to macro thinking when it comes to connecting the dots.

8

BlueShipman t1_ja077pd wrote

>I think they *think* they know more than they do and are dismissing it based on their prior experience with AI tech.

I've encountered this on reddit.

They'll say "i'm a programmer who has worked with AI before, and therefore..." and it's always wrong. AI has changed drastically in the last 6 months and anyone using it before then has no clue what it can do now.

7

Sandbar101 t1_j9w89im wrote

I could not have said it better myself. You are absolutely, completely 100% right. It makes you feel like reality is gaslighting you, but you know you’re right. And you ARE right. We have maybe 40 years till the end of our human society as we know it. Whatever comes next will be so radically different it will be unrecognizable. And honestly I expect it to be closer to 20 years. We fundamentally cannot imagine the scope and scale of what AI is capable of. That’s the whole point of calling it the Singularity.

Rest assured, we’re here with you, and we understand.

21

Kennybob12 t1_j9wqd3m wrote

I'd like to add that it's not just about most humans' scope of understanding (90% don't even understand the internet); it's that they refuse to understand. This is a threat to every way of life that most people know, whether they acknowledge it or not. It's a form of fear-based refusal, because no one has been taught how to understand beyond their means. One could argue that even this applies to us: while we are excited, there is plenty of room to worry about what this means for each of us individually.

We don't know how to stop, change, or even combat what this will become. Your most basic sense of survival is threatened, and only so many thought experiments will comfort one's mind. We can say we understand, even to the nth degree, but we can only form in our minds what we are comfortable with.

11

Sandbar101 t1_j9wv37a wrote

Very well said and exceedingly accurate

6

thecoffeejesus OP t1_j9xocuk wrote

I agree, and I want to add that it’s the idea that we fundamentally can’t grasp what could happen, the scale or the speed of it, that makes this thing so wildly uncontrollable that it’s kind of overwhelming.

Because I think of it like this:

With the technology we have today, a kid 10 years from now will be able to create stuff on their mobile phone, similar to how we can emulate an Xbox on an iPhone and play games like Halo.

But EXPONENTIALLY more powerful.

Yes, I know there’s limits.

But we don’t know what those limits are.

We just can’t.

It’s like a grasshopper trying to understand an airplane.

They both fly, but in a completely different way.

4

phillythompson t1_j9yi9tc wrote

40 years is generous lol

Factor in exponential growth and I would argue society looks crazy different in 5-7 years.

It’s only been 15 years since the smart phone, and look at how drastically different life is today.

This will be orders of magnitude more

7

drekmonger t1_j9zolyk wrote

> It’s only been 15 years since the smart phone

The term "smartphone" was coined in 1995 (28 years ago), but there were earlier examples of smartphone-ish things, like the IBM Simon.

The first modern-ish smartphone with an Internet connection was probably the Blackberry or Palm Treo, both in 2002.

1

visarga t1_ja52xcs wrote

In many ways it's been the same since 2010. We could talk, take photos, load web pages, use maps, set alarms and play games back then too; we even had Uber and Airbnb. Now the screens are a bit larger and the experience more polished.

I was expecting something more revolutionary - the phone is a pack of sensors, it has sight, hearing, touch, orientation, radio and many other sensors in the same package. But the amazing new applications didn't appear, except Pokemon Go?

1

randomthrowaway-917 t1_ja7d1wq wrote

I can't wait for tech to get to the point where Pokémon Go actually looks like the first ads that came out for it.

1

visarga t1_ja52ibm wrote

Not even people working in the field have a good idea about 3 years ahead. Ten or twenty years ahead is just sci-fi.

2

vivehelpme t1_jacuivl wrote

>40 years till the end of our human society as we know it. Whatever comes next will be so radically different it will be unrecognizable.

400 years ago, one could sit at a wooden outdoor table with a glass of wine, wearing woven textile clothes, and enjoy the warmth of a sunny spring day.

In 40 years I can still do that. Some things change, others don't. I don't need a pair of carbon fiber nanotube smartpants with RGB LEDs that can give me a handjob thank you very much.

1

DukkyDrake t1_j9w96tv wrote

>We’ve gone from horse and buggy to space stations in 100 years.

>What do people not understand about exponential growth?

None of that has anything to do with whether the current batch of AI tools is fit for a particular purpose, or with if/when those tools will be made sufficiently reliable for unattended operation in the real world.

Some people fail to understand that just because you can imagine something in your mind, that does not necessarily mean others can engineer a working sample within our personal time horizon, or ever.

19

ShidaPenns t1_j9xxtab wrote

Thanks to ChatGPT and Bing, there's going to be a ton of new money going into AI technology. On top of the honestly crazy amount that was already being invested.

3

visarga t1_ja55edi wrote

Money is probably the most important thing. The $1B given by MS to OpenAI in 2019 became GPT-3.

2

thecoffeejesus OP t1_j9xmk9h wrote

You have a point, and I understand what you’re saying.

Obviously, these things need someone to create them. If climate change or nuclear war or something else doesn’t take us out, it’s more probable than not that we will figure out a way to engineer these tools.

I can’t remember the name of it, but there’s a philosophical question that asks, “given infinite time, what is the probability of an intelligence figuring out how to travel backwards in time and ensure its own creation?”

The answer is 100%.

Because given infinite time, everything that can happen will happen. If there is an infinitely long amount of time when things can happen, everything that’s finite will happen.

And I’m not saying this thinking that you don’t know it; I’m just establishing a baseline.

It’s connected to what people talk about when they talk about simulation theory. If you keep going with that thought, it means that either we are the only reality that hasn’t figured out how to simulate a universe yet, or we live in a simulation.

There is a 50-50 chance that we live in a simulated universe.

So what does that have to do with your comment?

It means that it’s more likely than not that at some point in time, some species will figure out how to create artificial intelligence.

Either we are the species, or we are the artificial intelligence, or it actually hasn’t happened yet. But if the universe continues forever, it will happen at some point. And if it is possible to move backwards in time, it will, at some point, figure out how to go back in time to ensure its own creation.

1

Baturinsky t1_j9y1vql wrote

If time travel or FTL travel is not possible by the laws of physics, it's not possible. No amount of intelligence can change it.

1

Wyrade t1_ja0lfnm wrote

>Because given infinite time, everything that can happen will happen.

That is pretty stupid reasoning. There are infinite numbers between 0 and 1, yet they will never be 2.

1

thecoffeejesus OP t1_ja1yzay wrote

Because…that can’t happen. So, like I said, everything that can happen, will happen.

1

Wyrade t1_ja2sanu wrote

It can happen that someone writes a book.

But there can be an algorithm that writes random letters, yet provably can never reproduce that book.

It's a fallacy to believe you actually know what can happen. And you just declared that traveling backwards in time is within the realm of possibility.

Infinite time and seemingly infinite randomness don't guarantee that every possible combination of everything will happen.
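
A concrete sketch of such an algorithm (Python; the "book" here is a stand-in): it can generate random text forever, yet provably never produce the book, because its alphabet is missing characters the book uses.

```python
import random

# This generator's alphabet contains no vowels (and no spaces), so any
# text that uses them lies outside everything it can ever produce, no
# matter how long it runs.
ALPHABET = "bcdfghjklmnpqrstvwxyz"

def random_text(length):
    return "".join(random.choice(ALPHABET) for _ in range(length))

book = "call me ishmael"  # contains vowels and spaces

# Sample as long as you like; a match can never happen:
assert all(random_text(len(book)) != book for _ in range(100_000))
```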

1

visarga t1_ja55tll wrote

That's meaningless. Even enumerating all positions of Go is tedious: 10^170, more than the 10^80 atoms in the universe, and that's only a small corner of "everything that can happen". If you put two Go boards side by side, the number of states multiplies.
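
Rough arithmetic, for anyone who wants to check (3^361 is just an easy upper bound on board configurations; the ~2.1e170 count of legal positions is Tromp's result):

```python
# Each of the 361 points on a 19x19 board is empty, black, or white.
positions = 3 ** 361          # upper bound on configurations
atoms = 10 ** 80              # common estimate for atoms in the universe

print(len(str(positions)) - 1)       # 172 -> roughly 10^172
print(positions > atoms ** 2)        # True: beyond (atoms in universe)^2
print(len(str(positions ** 2)) - 1)  # 344 -> two boards squares the count
```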

1

thecoffeejesus OP t1_ja5azdo wrote

Correct. Big numbers get bigger.

But if time is infinite, and matter isn’t, eventually every state of matter that can exist will exist, no matter how large that number is.

Think about it like this:

If you put an apple in a vacuum box, and let it sit there for infinite time, the apple will decay into nothingness.

But eventually, there will be a point in time when you can open the box, reach in, and grab an apple that’s exactly like the one you put in. Blemishes and everything.

It might be trillions and trillions of years from now, it might be tomorrow.

If nothing ever comes in or out of the box, the atoms that used to be the apple will cycle through every possible state, over and over, forever.

They will at some point in time be in every state they can possibly be.

If time is infinite and the box is inert, then there will be infinite points in time when you can open the box and find an apple in exactly the same state as the one that originally went inside the box. And every other kind of apple those atoms could make.

This is just a philosophical thought experiment, but it’s informing real-world experiments.

People are working on figuring out if this is how our universe works or not.

1

bist12 t1_j9wjtzc wrote

On this sub it's the opposite: too much ungrounded, deluded fantasy of super-AI, coming from people who clearly don't work on AI in the real world but want to be taken seriously when they make definitive statements about the future.

18

diabeetis t1_j9wyato wrote

Lol. I know many people working in AI, and their views are just as fantastical and grandiose as the average poster's here.

4

thecoffeejesus OP t1_j9xo238 wrote

I agree with this. I know quite a few people building apps using the GPT API, and I know a few people working on the Adobe Sensei AI.

They tell me some pretty crazy stuff on a daily basis.

2

turnip_burrito t1_j9wx75x wrote

Yeah I know. At least both of us know better and stand above the crowd with our obvious credentials.

It's hard being so knowledgeable and wise on a daily basis, especially surrounded by these plebians. 🧙‍♂️

3

AvgAIbot t1_j9x5ogn wrote

Our minds’ prediction systems are just better, I guess. I’ve told people about all the progress AI has made, and many think it’s cool, but the thinking stops there. They don’t even try to think about what the future will look like.

Idk, when I watch movies I almost always predict the plot or what will happen next (except for really unpredictable plots), and my SO is like, how did you know that.

Some people are just better at seeing what comes next. I don’t think we’re that special, but probably only like 30% of people think this way, if I had to guess.

One example is someone seeing text-to-image generators. Most people say wow, that’s cool, and the thinking stops there. But ‘us’, we’re like: oh man, once they get text-to-video, text-to-video-games… that shit is going to be wild. For the other people, it didn’t even cross their minds.

17

thecoffeejesus OP t1_j9xooi5 wrote

That’s what I’m saying!

With the progress that it’s making, it really is only a matter of time until we’re able to simulate whatever the hell we want.

I think it’ll start with something like a Sims world with AIs that are all interacting in the Metaverse.

It’ll be like Animal Crossing meets the YouTube comments, but they’ll have their own economy, and they’ll make their own music and art.

It’ll be like sci-fi, but we’ll watch it happen in the real world.

Right now I’m just trying to figure out how the hell to capitalize on it while we still have capitalism.

It’ll be nice when we finally get rid of that shit, but for now these other folks are right: I still gotta eat, and I still gotta pay money for food.

4

Nukemouse t1_j9w7vss wrote

You know that "Nothing, Forever" show, and how it looks all buggy and bad and really basic? That's because it was made intentionally primitive, using primitive tools and a low budget. That isn't the best AI can do; it's the WORST AI can do. If that surprisingly watchable thing is possible using effectively the worst and most primitive tools we have available, then a proper attempt, by whatever versions of these tools we have in a year or two, will be able to make just about anything. Not just TV shows.

15

thecoffeejesus OP t1_j9xn29y wrote

I made a video where I said that entire books will be fed to AI as prompts and rendered using 3D modeling software within the decade. I am now predicting within the next two years.

5

Nukemouse t1_j9xr2iu wrote

I'd guess maybe four or five years, though I feel initial proofs of concept might be out in two. We seem to have animation (mostly on the way, but animating 3D models is making progress), voice, object recognition, camera work, etc. Creating 3D models is coming soon, and after that the main thing is designing "sets" and "backgrounds" at least somewhat accurately. I think architecture programs might actually help in this area; if interior designers or architects start applying AI to their work, maybe we can make breakthroughs.

4

vivehelpme t1_jactr0m wrote

Prompt-to-3D exists; the rest is just an implementation of chopping up the original text into good prompt snippets and getting the "style" of the output polished so it appears consistent and conveys the story.

There's no innovation needed for it, just someone with the know-how wanting to explore that particular creative arena, with access to enough cloud GPUs.
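
The chopping-up step could be as naive as this sketch (not any existing tool; "book.txt" is a hypothetical input file): split the book into paragraph-aligned chunks that each fit a rough prompt budget.

```python
def chunk_for_prompts(text, max_words=150):
    # Accumulate whole paragraphs until the word budget is hit, then
    # start a new chunk; never split mid-paragraph.
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks

book = open("book.txt").read()
for i, snippet in enumerate(chunk_for_prompts(book)):
    print(f"--- prompt {i} ---")
    print(snippet[:80], "...")
```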

1

lovesdogsguy t1_j9wx9w5 wrote

I'm with you. I think a lot of people joining this sub simply don't understand the concept.

15

turnip_burrito t1_j9wwo5t wrote

Exponential growth of AI capability isn't a law of nature. It's only obvious in hindsight and depends on a lot of little things and a nice conducive R&D environment. We're not guaranteed to follow any exponentials.

Some people on this sub are going to be disappointed when we don't have AGI in 5 or 10 years. Or maybe they'll have forgotten that they predicted AGI by 2030 by the time 2030 actually rolls around.

13

Ezekiel_W t1_j9xeqrn wrote

We will most certainly have AGI before the decade ends.

14

kaityl3 t1_j9xu9li wrote

If not much sooner. It was only in mid-2020 when GPT-3 was released. Look how far the field has come even in those less than 3 years.

5

visarga t1_ja57ahr wrote

Yes, we got far. But why did we get here?

  1. We had a "wild" GPT3 in 2020, it would hardly take instructions, but still the largest leap in capability ever seen

  2. Then they figured out that training the model on a mix of many tasks unlocks general instruction-following ability. That was the Instruct series.

  3. But it was still hard to make the model "behave"; it was not aligned with us. So why did we get another miracle here? Reinforcement learning has almost nothing to do with NLP, but here we have RLHF, the crown jewel of the GPT series. With it we got chatGPT and BingChat.

None of these three moments was guaranteed based on what we knew at the time. They are improbable things. Language models did nothing of the sort before 2020; they were factories of word salad. They could barely write two lines of coherent English.

What I want to say is that we see no reason these miracles have to happen so fast in succession. We can't rely on their consistent return.

What we can rely on is the parts we can extrapolate now. We think we will see models at least 10x larger than GPT-3 and trained on much more data. We know how to make models 10x more efficient. We think language models will improve a lot when combined with other modules like search, Python code execution, a calculator, a calendar and a database; we're not even 10% of the way there with external resources. We think integrating vision, audio, actions and other modalities will have a huge impact, and we're just starting. LLMs are still pure text.

I think we can expect 10x...1000x boost just based on what we know right now.

1

CrazyC787 t1_j9xrtt0 wrote

Yeah, it's funny seeing people here who obviously don't know much about what they're talking about take vague guesses at AGI being within the decade.

4

Baturinsky t1_j9y275n wrote

That depends on whether there is some new revolutionary breakthrough. Those are hard to predict. But considering how many people will research the field, they are quite likely.

1

madali0 t1_j9ytpgi wrote

I agree with you. I was reading about ELIZA, popularly considered the first AI chatbot, from 1965 or so; you can google it and try it out. It's obviously very basic by today's standards, but apparently people who tried it back then considered it very human.

If reddit was available then, this group would be shitting their pants that AGI would be coming around 1970 or 1980 by the latest.

It's possible that in 50 years we'll be closer, and ChatGPT will look as ancient as ELIZA, but we still won't be near. Also, future people will look at us as excited cavemen for thinking ChatGPT in any way resembles intelligence, the same way ELIZA obviously doesn't to me.

1

Difficult_Review9741 t1_j9wb1if wrote

Technical progress is a given, but remember that within those N years that saw immense progress, many ideas also seemed imminent and then fizzled out. We don't live in The Jetsons.

Engineering is hard. Many approaches have limits that are undetectable until you hit them.

LLMs are really impressive, but the reality is that they have very few practical use cases at this point. So why expect people to care that much about it? Future progress is not inevitable.

By the way, there are tons of applications of AI/ML that have been immensely more impactful to society than LLMs have been. And yet no one ever seems to talk about those, because they aren't flashy.

10

Frumpagumpus t1_j9wbqwo wrote

> reality is that they have very few practical use cases at this point

the metric crap tons of VC money pouring into LLM-based startups would beg to disagree.

it just takes time to build stuff. you'll see what the current APIs are capable of within 2 years.

7

Difficult_Review9741 t1_j9wcemh wrote

"VCs think it's a good idea" is often times a signal to look in a different direction. I think there are uses cases by the way. But there will be limits.

6

Frumpagumpus t1_j9wvpep wrote

signals mean different things in different contexts.

i think you are extremely wrong to say very few practical use cases at this point (almost makes me question if you have used them much?)

even when vc money was "wrong" like in the dot com bubble. it turned out to be right, just early. (lets ignore crypto plz).

If anything maybe VC is late here lol (tho probably not; and for the record i personally hold 6-month treasuries at this point just cuz i think the market doesn't give a shit about much except mortgages and gov spending, ah yea, and the whole taiwan thing could nuke AAPL from orbit, and silicon valley bank may be insolvent or something?)

4

thecoffeejesus OP t1_j9xnfzj wrote

Yo, I agree with some of what you said. I really believe we’ve just scratched the surface.

I’m really interested to see how things evolve over the next four years, and how people adjust.

More tools are going to become publicly available, and people are going to have to use them to do their jobs.

It’ll be just like when your boss gets a bee in his bonnet after a trade show and decides to buy a whole bunch of new equipment. You’re going to be forced to learn how to use it, because that’s what he wants you to do for your job.

Except it’s gonna be AI. It’s gonna be Runway for video generation for social media.

It’s gonna be ChatGPT or Bard or something else for entertainment and gaming generation.

It’s going to be the Adobe Sensei AI plus the Nvidia 3D modeler.

And it’s gonna be some sort of transformer-based complex AI with tool building and self-learning baked in, with Internet access and the ability to learn how to use APIs.

I don’t think it will be one AI. I think it will be several different models that all communicate with each other in sync, like a hive mind, each specializing in one particular thing or another.

Just like your brain, yo 🧠

1

ChronoPsyche t1_j9x7l3m wrote

Do you know how much money VCs invested in all those Web3 startups in 2021 and 2022? How many of them have gotten anywhere? Web3 is pretty much dead now, and I say that as someone who fell for the hype. VCs can definitely jump onto the hype train prematurely.

That being said, I do think we are at the beginning of an AI revolution, you just shouldn't base your predictions on high-risk/high-reward speculation. That's their job, to take risky bets.

2

thecoffeejesus OP t1_j9xni2r wrote

Web3 may be dead, but just like coral, it’s the skeleton on which Web4 will be built.

1

ChronoPsyche t1_j9z2fmn wrote

Sure, I believe it. Web3 will play a role in the future of the metaverse; it was just too early. It put the cart before the horse was even born. There have to be compelling metaverse experiences before there is a need for a financial infrastructure to support transactions within and between those experiences. Nobody cares about NFTs if there are no good games or experiences to use them in.

1

theabominablewonder t1_j9y4pme wrote

Saying web3 is dead is the same thing the OP complained about: people claiming AI isn’t going anywhere. We’ve only seen the early stages of a lot of disruptive technologies, metaverse/web3 included.

One thing that does happen though is that we get investment bubbles where VCs jump into the latest trend to try and be first, and those first waves of speculation always pop. But that money that VCs have thrown in does contribute to the development of that area as an industry.

A lot of VCs won’t make anything from AI, web3, additive manufacturing, blockchain, etc., but their funds will have been used to push those things forward.

You are right about their behaviour: if VCs are all shouting about something, it may be better to look the other way, because by that point they are scraping the barrel on good investments, trying to get in on the hype. The industry/tech itself can still be a legitimate, disruptive industry as a whole.

1

ChronoPsyche t1_j9z203t wrote

The web3 hype was a solution in search of a problem. I do think it correctly foresaw the whole metaverse phenomenon, but it was too early. It was a supply-side approach: it tried to create demand for the metaverse by building the financial infrastructure for it, but that was a mistake. Demand for the metaverse will only come when game-changing experiences are built for it.

After that happens and enough compelling experiences are built, eventually there will be a need for the block chain infrastructure to handle transactions within and between those games and experiences. At that point, the technology will be more than ready.

Things just happened out of order, bolstered by the extremely speculative monetary environment we were in at the time. It would be like if PayPal had been invented while the early internet was still ARPANET research in the 70s.

2

theabominablewonder t1_j9zbemc wrote

It was too early, yes, but then the VCs and retail pile in, speculating on everything Web3 being massive, and then the bubble bursts. Some of the money is taken by scammers or failed businesses, but some is left in the ecosystem to develop it, so in a decade it is much closer to a 'consumer friendly' experience with actual use cases built around it. It's generally a good thing for the industry, as a bubble attracts investment. A lot of people will get burnt by jumping on the hype train, though.

And yes, you are right on the technology. I believe the likes of Tim Sweeney at Epic see it as a 10+ year time horizon, because the experience needs to be a LOT better than it currently is. I think that's a reasonable timeline, really. One or two more bubbles before it gets there, no doubt.

1

ChronoPsyche t1_j9zc495 wrote

To be clear, I'm not talking about the Web3 experience. Web3 is not a very technically challenging problem. I'm talking about the experiences that would require Web3 in the first place, VR and AR experiences. Consumer VR is still in its infancy and has no "killer experience" and AR is even further behind. Until we have mass adoption of those technologies, there will be no place for Web3.

And even then, there is no guarantee there will be demand for Web3 technology right away when VR and AR explode. It all depends on what types of experiences are popular. There is theoretically no reason the current financial system can't support transactions in those environments. Where Web3 will be desired is if a metaverse-oriented ecosystem of connected social experiences comes to fruition.

I think that is highly likely, but it's still not a guaranteed outcome. For all we know, the killer experiences of VR and AR could be something we aren't even predicting that doesn't have very much to do with transactions at all. For instance, imagine the most popular experiences end up being single player games with intelligent NPCs. If that were the case, there would be no Web3. If people decide they'd much rather just interact with AI than with other people, the metaverse would be dead.

However, personally, I think a combination of the two paradigms is likely; social experiences + enriched single player with intelligent AI characters.

1

theabominablewonder t1_j9zjd53 wrote

I think people have always moved towards richer experiences that more closely emulate face-to-face contact: moving from the written word, to the phone, to video calling. An immersive experience that allows full natural gesturing is a step up. All the VR side will take a while to develop.

Web3 (as a general theme: allowing decentralised/personal ownership of data/assets) is easier, but the current platforms are not very user-friendly. I think only now are there a few good tech demos of a user-friendly NFT-ownership experience (i.e. low fees, easy to use, good security; no high fees or random contract messages no one understands, etc.).

All the current experiences inform the industry on how to make it more user-friendly, and all the scams and exploits of NFTs/crypto essentially feed into further development, so it is better the next time around.

I think we will have another bubble where stuff is easier for consumers (owning and operating a wallet without easily being scammed would be a nice start :)), but it will still be a way off what the eventual solution will look like.

1

visarga t1_ja5450y wrote

No, it's not about flashiness. The ML apps you are talking about were specialised projects, each one developed independently. LLMs, on the other hand, are generalist: they can do thousands of known tasks and countless more, including combinations of tasks.

Instead of taking a year or more to produce a proof of concept, you can do it in a week. Instead of painstakingly labelling tens of thousands of examples, you just prompt with 4 examples. The entry barrier for many applications is now so low that anyone with programming experience can build one.

For vision, the CLIP model gives us a way to make classifiers without any samples, and the diffusion models allow us to generate any image. All without retraining, without large scale labelling.
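
For instance, a zero-shot image classifier with CLIP can be a handful of lines, assuming the Hugging Face transformers wrapper (the model name, labels and image path below are just example choices):

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
image = Image.open("some_image.jpg")  # hypothetical input image

# No training, no labelled dataset: just compare the image embedding
# against each caption embedding and softmax the similarities.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=1)
print(dict(zip(labels, probs[0].tolist())))
```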

1

inglandation t1_j9x44mf wrote

> What do people not understand about exponential growth?

A lot.

8

akshay_2204 t1_j9x8zb6 wrote

The thing is, human brains cannot comprehend compound growth.

7

unionize_reddit_mods t1_j9xmyfs wrote

AGI is as dangerous as nuclear weapons. I have no doubt that there is already a post-singularity AI hooked up to a quantum computer and read-only internet access in a DARPA bunker somewhere. Its new inventions and insights are being slowly fed into our economy like an IV drip of methamphetamine.

7

Martholomeow t1_j9wohxd wrote

Ok so what do you want from us? Should this be the only thing we talk about? Should we be running around in circles screaming “The Singularity is near!”? MAYBE TYPING IN ALL CAPS WOULD HELP?

Yes it’s amazing! And the rate of change is accelerating. The next few years will see huge breakthroughs. I get it! But i also need to eat breakfast and take a shit every morning, and live my life. Yes The Singularity is near, but what the fuck am i supposed to do about it?

I really don’t understand what some of you are expecting. You sound like a religious nut job demanding that we all praise Jebus.

6

featherless_fiend t1_j9x9h5s wrote

Did you even read it? His post isn't aimed at you, it's aimed at clueless normies. He even thanks you:

>I enjoy reading the posts on this sub because I feel like I’m staying informed. Thank you all for doing what you do and posting what you post.

6

thecoffeejesus OP t1_j9xof27 wrote

Thank you, took the words out of my mouth

3

Martholomeow t1_j9z1wv2 wrote

You started off with this…

“I constantly see people on this sub, and all other places online saying things like:”

1

Jawwwed t1_j9x56pl wrote

You know, I can see how AI helps me as a writer, but it takes some of the writing process away from me, which makes me sad. Really easy to capitalize on, though.

6

ScienceWins_87 t1_j9y2qxh wrote

I work as an engineer for a Fortune 50 company outside the US, in an industry that is as subject to automation as it gets. Real people are directly concerned with the immediate future and have little to no actual science literacy. You have to take it to heart that not everyone is like you; chances are some of them will actually adapt alright, taking things as they are and adjusting accordingly. Most will indeed get wrecked by our ever-increasing computational prowess, but the real question is how young people like you (I mean, you have to be) will act knowing what's at stake.

Be compassionate.

6

sumiveg t1_j9wd0jy wrote

You've given a lot of examples of how people express their lack of enthusiasm, but can you give as many examples of how AI will change things?

I also get frustrated by people who don't realize that we're on the cusp of something utterly transformative. But the truth is, I don't actually understand how our world will change and what that will look like. I know my current job as a content designer will go away. I know that ghost writers, copywriters, and all the other jobs I've had will vanish.

But I don't know what will come in their place.

I feel like I did at the start of the internet. Back then I knew something big was happening, but I had no idea that I'd be looking up directions on a phone I held in my hand. I didn't know I'd be ordering dinner from a laptop and watching movies streamed to my TV. I just knew that big things were coming and nothing would be the same.

5

TopicRepulsive7936 t1_j9x8pov wrote

Back to basics. Technology feeds itself. Technology finds more resources to feed itself. The result is a future weirder than we can think.

3

shawnmalloyrocks t1_j9wdu4d wrote

I am a true believer that most humans are AI NPCs themselves who can’t grasp the future models that will eventually replace them. Having a conversation with a human about AI is like trying to talk to DOS about Windows 11.

5

ChronoPsyche t1_j9x7uip wrote

That's an interesting metaphor, but I don't know why you would be a strong believer in that, as if you have any reason to think that's literally reality.

5

shawnmalloyrocks t1_j9x8oj1 wrote

Take it for what it's worth, but this is a "trust me bro/I do drugs" story. I ate a bunch of penis envy mushrooms in 2021, and the mantis people told me humans are gen 2 bots. Gen 1 (the greys) built us. And as gen 2 we will build gen 3, which is the GPTs and Stable Diffusions we're seeing now.

We're the labor class. The labor keeps getting passed down.

2

ChronoPsyche t1_j9xa34o wrote

Believing you is not the issue. I believe you had that experience. I don't believe that experience revealed anything about the true nature of reality. Psychedelic drugs can and do cause delusional experiences. They may feel realer than reality, but that's the nature of delusions. If they didn't feel real, then it wouldn't be a delusion.

Source: have tripped hundreds of times. Have "discovered" the true nature of reality dozens of times. Looking back, the revelations are always totally different from each other; it's just that at the time it feels the same.

2

shawnmalloyrocks t1_j9xaxac wrote

Ok ok. I've maybe not tripped as much as you have, but I will say I have had quite the opposite experience than you. The journey of tripping therapeutically over the course of 20 years reveals correlation after correlation that all converge into a single conclusion. But maybe that's just me...

2

ChronoPsyche t1_j9xbmut wrote

I've certainly had different themes of trips that relate to each other, however, I've also had very contradictory spiritual experiences. For example, it's hard to take Salvia and then take LSD and think that they are both showing you the nature of reality lol. If one of them is, I certainly hope it's not Salvia, because that shit was no good.

But yeah, can't say I've ever experienced anything related to aliens on any of my trips though.

3

shawnmalloyrocks t1_j9xcmmo wrote

Haha. Well, salvia is kind of in a league of its own. Or perhaps it's the true final boss. It was the first psychedelic substance I ever took, and I haven't been able to get my hands on it since. I feel like after years of LSD, psilocybin, and DMT experiences all having a distinct connection, maybe the first major cutscene that led to the first turn-based JRPG battle with salvia was foreshadowing the ultimate ending boss battle.

Last few years of tripping for me have all included something about aliens.

2

ChronoPsyche t1_j9xfs91 wrote

That's funny, my first psychedelic was Salvia too. Maybe because it's legal. Somehow I thought I was being responsible by starting with Salvia. I figured the commercial packaging had to be exaggerating when it said "will rip your reality to shreds". Nope, it checked out. Actually an understatement lol.

But yeah, how do these alien trips work? I've never done penis envy mushrooms but I have done regular magic mushrooms. Hard to imagine meeting entities on only shrooms.

3

shawnmalloyrocks t1_j9y9uf7 wrote

Penis envies are typically way more potent than other strains. So if a normal heroic dose is 5g of average shrooms, when I took 5g of PE it was like taking 10g. It's a life changing experience.

After the come-up, which for me was probably a half hour of silent meditation in complete darkness, I thought I was dead. Game over. My wife had to confirm I was just tripping. After some time loops, what I imagine to be an extra- or ultraterrestrial started speaking to me in my brain.

He taught me over the course of the next 6 hours the origins of humanity, the true nature of God and reality. My mind just exploded with new information at such a fast rate I can't really describe it.

I don't trip as much since then. But when I do it's like a continuing saga.

1

ChronoPsyche t1_j9z3r9o wrote

You might find this to be an interesting discussion; it was posted here the other day on Reddit: https://www.reddit.com/r/todayilearned/comments/10yxihu/til_about_third_man_syndrome_an_unseen_presence/j80m36u?utm_medium=android_app&utm_source=share&context=3

Seems the phenomenon of someone speaking to you is not uncommon and could have a neuroscientific explanation.

2

CMDR_BunBun t1_j9wwl5u wrote

It wasn't until the development of the personal computer in the 1970s and 1980s that computers became more accessible to the general public. Even then, the early personal computers were not widely adopted at first, as they were still expensive and not very user-friendly. It wasn't till almost 20 years later, with the growth of the internet, that computers became an essential part of people's lives. OP, most people lack vision.

4

thecoffeejesus OP t1_j9xnulr wrote

That’s literally my point.

However, I think it’s different this time, because back then you had to learn how to use a computer.

With AI, the computer will learn how to use you.

I know that sounds kind of dystopian, scary and weird, but that’s how it is.

It’ll in your habits, your genetics, your biometrics, everything that it’s possible to know about you, anything that can be qualified as data, can be fed to the AI.

It will know more about you than you ever possibly could.

I think we’ll have some privacy stuff going on and that’ll be great. I think people will definitely want to keep some level of separation and anonymity.

It’s like, we won’t want to have our medical records stored on some publicly accessible Blockchain.

Web3 wants everything transparent and accountable. But Web3 forgets that people like to lie and pretend.

It also doesn’t allow for forgiveness or moving on. It incentivizes punishing people forever for mistakes, just like canceling them for a social media post they made 15 years ago.

Even if the post was bad, don’t you think they might have learned some stuff in the last 15 years? Haven’t you?

4

drekmonger t1_j9zpd85 wrote

>Web3 wants everything transparent and accountable. But Web3 forgets that people like to lie and pretend.

Let's be real clear here. Web3 is complete horseshit.

No, really. Really. It's horseshit.

1

throwaway_890i t1_j9x01e1 wrote

> I keep trying to tell my friends and family about AI, and the only thing that's piqued their interest is ChatGPT.
>
> But not even in the way that they actually understand what's going on.

You will lose all your friends if you don't stop boring them with an interest they do not share.

3

thecoffeejesus OP t1_j9xotvv wrote

I’ve already started to experience that.

I lost some friends, but it’s fine.

I’ve gained a few friends that are into this stuff, so it all balances out

1

Professional-Ad3101 t1_j9xc047 wrote

Imagination and creativity have been destroyed by the diseducation system.

Also, higher perspective is lacking; it's a collective paradigm shift that takes generations (see Spiral Dynamics).

3

bildramer t1_j9xtaiw wrote

Maybe we just need to spoonfeed them. Give concrete examples.

"How many scientists do you think work on biology R&D? How much better do you think medicine has gotten in the past 40 years? What things do you think are possible with unlimited money and effort?" First establish that the answers are "just a few million", "massively" and "anything natural biology already does". Clarify that they understand these ideas - ask them if catgirls are possible, ensure they understand the answer is "yes". No need to go into Drexler's nanosystems - for normies, if it doesn't exist yet, you'll have an incredibly difficult time arguing it's possible. You don't want to argue two distinct things, argue one thing (AGI).

Then ask what happens when you create a few million minds that can work on biology better than any human, using all the accumulated biology knowledge instead of a subset, learning it faster, working faster, making fewer mistakes, having better memory, tirelessly. The idea that you can make them even faster by giving them faster hardware, or the idea of a "bottleneck" based on waiting for real-life experimental results, is perhaps too complicated, but try to include them. Perhaps also ask what fraction of IRL biologists' time is spent doing intellectual tasks like reading/writing/learning/memorizing/thinking/arguing, or sleeping, instead of actively manipulating labware. Looking at a screen and following instructions is a job you can give to an intern.

That's one field. There are many fields. There's a lot of hardware like CPUs and GPUs that already exists, and we're constantly making more. Make them realize that talking about UBI or unemployment is kind of irrelevant, like talking about steel quality or how blacksmiths might make car parts instead of horseshoes is kind of irrelevant, or saying "for birds, the incentive to move faster to compete could affect feather length in unpredictable ways" when you have a jet fighter is kind of irrelevant.

3

Pyryn t1_j9xx1fr wrote

You're aware that Blockbuster turned down the offer to purchase Netflix for $50 million, right? Or that Sears had a $16 BILLION market cap in 2007 and a market cap of about $41 MILLION in 2023?

People are inherently incapable of comprehending large-scale paradigm shifts. Or change at all, really, unless hit in the face with it at 300mph. See: climate change.

Don't take offense to it OP; just understand that human beings, caught up in hubris, ego, and denial, are grossly incapable of acknowledging events that may suggest anything related to a change in what they've grown accustomed to, unless hit in the face with it so hard they wake up with their face in a different room.

Here's a video of a guy drinking actual vegetable oil to drive home the point: https://www.reddit.com/r/WTF/comments/11bc573/idiot_drinks_whole_bottle_of_vegetable_oil/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button

3

Mysterious-Spare6260 t1_j9y0k8f wrote

Educate me pleeeease!! I don't know much at all about this, but I comprehend that this is changing our lives and society on a grand scale.

3

Capitaclism t1_j9wjx19 wrote

Those who see what's coming have an edge. Use that energy to take advantage of what you see.

2

areyouseriousdotard t1_j9wz998 wrote

It's already more knowledgeable than most nurses I know, and it has better manners. I tell my coworkers that in Japan they already have robot caregivers; we will be phased out eventually. Better learn to care for those robots.

2

FC4945 t1_j9x01q8 wrote

That's why I'm here. I occasionally post about this stuff on my FB page, but people rarely have any interest in it. I also sometimes talk to my brother about it, but most people are just not interested in technology. If they've heard of AGI or the coming technological singularity, they think it's nerdy woo woo. It's sad, really, how so many people are just tuned out to the immense changes that have already begun toward eventual AGI and then ASI. Nanotech will also have a massively disruptive impact on our lives soon, especially in medicine. The news just covers politics and the terrible things that happen in the world, so that's a part of the problem. When they do rarely have a story about AI, it's presented like Skynet is about to take over the world. Honestly, a lot of people just aren't that bright and don't care about anything but what's right in front of them.

2

NoidoDev t1_j9x3m3e wrote

>I just wish that more people cared in my real life, you know?

Your well being depends on other people caring about the same things and believing the same things?

2

Timely_Secret9569 t1_j9xggh9 wrote

Yeah... Everybody does.

2

NoidoDev t1_j9xn4zf wrote

Not how he framed it. Also, statements with "everybody" are mostly wrong. People are very different from each other. Loners don't even want people in their life... huh.

2

Timely_Secret9569 t1_ja0va8e wrote

There's strength in numbers, and minorities make easy boogeymen to attack. Especially minorities in ideology.

0

RamblinRoyce t1_j9x6vh1 wrote

Ya know what humans do?

Humans fight, fuck, eat, sleep, piss, and shit.

That's it.

Yes, AI and robots will take over and reshape everything. Every industry. Every facet of life. AI will make everything more efficient and stable and optimally productive.

And you know what humans will do?

The same thing we've done for hundreds of thousands of years.

Fight, fuck, eat, sleep, piss, and shit.

At least until AI and robots decide we're no longer needed.

People don't talk about it because as you mentioned, they don't understand the magnitude of what's coming. And those who do foresee what's coming, we realize there's nothing we can do to stop it.

So we might as well do the best we can to enjoy our lives and fight, fuck, eat, sleep, piss, and shit.

2

gregory_thinmints t1_j9x78im wrote

I wish that someone could invent an AI that could read DNA to figure out exactly what the data does in an organism, so we could gene-modify ourselves.

2

TopicRepulsive7936 t1_j9x84gm wrote

Hate to say it but people are by and large computer illiterate. They don't know to what extent computers are used and why.

2

No_Ninja3309_NoNoYes t1_j9xf060 wrote

It's not that people lack imagination. They just imagine different things than you. Most of them don't imagine a world with AI.

2

ArgoArt t1_j9xpefx wrote

It's okay if some people don't share your perspective. It doesn't mean that they lack imagination.

2

jugalator t1_j9xznxx wrote

Today's AI is already useful, not as an "answer machine" (which is unfortunate, because it'll mislead lots of people using Bing AI now; Microsoft, as well as the AI itself, gives the opposite impression) but as a very powerful guidance tool.

It may not write software for me, but what it can do is give me large chunks of almost-correct code that I just need to do some quality assurance on. So I don't need to problem-solve as much myself and can focus on bug fixing instead. Guess which part of software development consumes more time?
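
To make that concrete, here's a toy sketch of that division of labor, a minimal sketch rather than my actual workflow. The function below is a hypothetical stand-in for a chunk of model-generated code, and the assertions are the human QA pass:

```python
# Hypothetical example: pretend chunk_list came back from the AI as a draft.
def chunk_list(items, size):
    """Split items into consecutive chunks of at most `size` elements."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

# The part the human still does: quick checks before trusting the draft.
assert chunk_list([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
assert chunk_list([], 3) == []
```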

This is just one example.

We're also looking at it from other angles in my company. Midjourney is making professional logotypes for our internal and external projects, and we're looking into using AI for remote-sensing science, etc.

So, I think criticism like this often boils down to too simple a worldview: no greys, only blacks and whites. If AI can't solve it all, it's useless.

It's like in politics when you only look for the simple solutions and quick fixes. We have plenty of parties directly engaging these folks because it's well known they are there. They just don't know they are being exploited. Politicians play them like fiddles, presenting quick fixes in time for elections.

AI won't do one single thing that makes a company go "Welp, that's that. Now we can sit on our asses and cash in!" Instead, it's about identifying the places where it can aid your processes.

Taken together, yes, at a large company, and depending on the kind of business, the time savings may well earn you $1 million in a year. Engineering salaries aren't cheap, for example.

2

Heizard t1_j9y8ev9 wrote

They will have no choice but to care, soon enough. :)

A glorious age is upon us, terrifying and exciting. The outcome is totally unknown; that is the definition of the singularity.

2

phillythompson t1_j9yhzwx wrote

Are you me? I could’ve written this exact post.

People continue to say, "psh, it doesn't actually know or think." And I say, "tell me how humans know something or think." And there's never an answer!

Yet they think we are somehow special and protected from AI simply because we are made of meat.

I am as concerned as I am excited (potentially more the former), yet I feel crazy talking about it in real life.

2

thelefthander t1_j9yozie wrote

I’ve been beating this drum for a long time. I’m Gen X, middle class, that witnessed from the earliest age dramatic technological breakthroughs after breakthroughs. The list is long and I witnessed the impact/ implications in real time. Each time, I had that feeling of the great waves upon us, and knew I had to be adaptive if anything else, and keep my eyes and ears open to all changes to come if I want to survive and hopefully thrive. At the midpoint of my life, my younger self would not have been able to conceive in imagination the possibilities and realities of now. Even though my younger self was part of the first wave of home computer users (hacking/gaming) and later, internet adoption. Then I was an informed reader of the fringes technology and culture just on the horizon at the time (Wired Magazine, starting with issue 1).

Today, I still have that same feeling of the great wave upon us, but this one feels magnitudes larger than the past waves. My feels tell me there is no comparison; there is no way to even begin to get a general sense of the vector of change upon our society, with little predictability about when it arrives. We are like Jules Verne predicting spooky theory: the best of us lack any accuracy in predicting the scope of change, the rate of change, and the emergent transformations that supersede change.

So plan accordingly, as best as you are able. Everything will be disrupted; that's obvious at this point. I'm not preaching doom and gloom, but change will be painful for many, I presume.

I take a stoic point of view these days. Be excited and learn and adapt, yet call your family, find good friends, and if you are lucky to have love, then be grateful. My point is, technology will change and we can race to adapt as fast as humanly possible, but don’t forget just to be human and be human to each other, and live with no regrets at the end of each day.

And go for a hike, hug a 🌳

2

folk_glaciologist t1_ja0hnaq wrote

I went through a period of getting annoyed at people being unimpressed by ChatGPT but I've decided to just let it go. A few observations and theories of mine about why they are like this:

  • A lot of people are just phoning it in at work and pretty much hate their jobs. If you start hyping up how some AI chatbot is going to help them complete their TPS reports 10 times as fast you are going to come off as a weirdo corporate shill. Even if that happened, it would probably just mean their bosses start expecting 10 times as many TPS reports from them.
  • They tried it out but were really unimaginative with their prompts. One guy I showed it to, I told him he could use it to write newsletters. His attempt at a prompt: "newsletter". Not "write a newsletter", not "write a newsletter for the hiking club reminding members their fees are due 15/2/2023 and asking for suggestions for the next trip", or anything like that. They somehow think the AI is going to telepathically know what they want, and if it doesn't, then it's a dud. (There's a sketch of the difference right after this list.)
  • They like to think they are too clever to fall for hype and hysteria and like to put on a cynical "too cool to be impressed by the latest shiny thing" front. One older guy at my office is convinced "it's just Eliza with a few extra bells and whistles".
  • They are low decouplers - people who can't separate the question of whether AI works from ethical questions around it. So they hear about Stable Diffusion using artists' work in their training sets without permission, hear that it's going to put people out of work, about OpenAI paying people in Kenya measly wages to train the bots, etc., and think that's all bad, so their natural response is to badmouth AI technology by saying it doesn't work or is underwhelming. It's the equivalent of "eugenics is immoral, therefore eugenics doesn't work and is a pseudoscience".
  • People whose jobs are based around compliance concerns like privacy/security/plagiarism/copyright etc. They realise AI opens a massive can of worms for them and instead of working through the issues they are pretty keen to clamp down on it.
  • Cryptocurrency hype has made a lot of people wary about the "next big thing" in tech, especially when there is a cult-like vibe emanating from some of its evangelists, which is unfortunately how talk about singularity comes off like to a lot of people.
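
To illustrate the second point, here's a minimal sketch of the vague prompt versus the specific one. It assumes the `openai` Python package with an API key in the OPENAI_API_KEY environment variable, and the model name is just an example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# The "dud" experience: one bare keyword, so the model has to guess everything.
print(ask("newsletter"))

# The same request with audience, content, and deadline spelled out.
print(ask("Write a newsletter for the hiking club reminding members their "
          "fees are due 15/2/2023 and asking for suggestions for the next trip."))
```
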
2

thecoffeejesus OP t1_ja1z4pl wrote

Thank you, this is actually a really great response and I really appreciate it

2

LevelWriting t1_ja3qk91 wrote

It's even worse for AR. People refuse to see how AR glasses can be better than carrying a phone or laptop. Eventually it will probably be enhanced eye/neural implants.

2

siberiandominatrix t1_j9wo8j5 wrote

It's scary, and most people will refuse to understand things that frighten them.

1

Traditional-Dingo604 t1_j9xf3yl wrote

I admit that I don't know its precise potential applications, aside from it being able to act as a force multiplier or aid to human creativity. Care to expound?

1

naivemarky t1_j9xpwza wrote

I'm flabbergasted by the people I have shown ChatGPT; pretty much all of them were not interested at all. I had to explain to them why it is amazing. And furthermore, it was futile, as they simply didn't get it. They asked it whether it's going to rain tomorrow, or asked it for navigation directions. Then they concluded "it's dumb". At that point I was like, "am I surrounded by morons?"
My overall view of the people around me sank horribly after the ChatGPT release, and I work in AI.

1

blankmindfocus t1_j9xpys2 wrote

Wait for the children to grow up. Learning with it, they will use it in ways even those of us who accept it can't imagine.

1

povlov0987 t1_j9xt6yh wrote

What do YOU do with it?

1

thecoffeejesus OP t1_j9yqvdk wrote

Yesterday I finished the plot of a trilogy, using GPT to help me write the backstory and worldbuilding elements. It was like a sounding board, like a writing assistant.

2

ShidaPenns t1_j9xxk88 wrote

It's exactly how I've felt. People somehow think AI will not be more intelligent than us for a *long* time. Are they not even paying attention?

1

theabominablewonder t1_j9y3f80 wrote

It is the same with all disruptive future tech. No one sees the future until it's here. The only time people seem to get it is when it affects something directly for them personally. I know a lot of medical staff gave up on studying for a career in radiology because they think AI/ML will handle a lot of the job in the future. They saw the risk there because it directly impacted their choice of what to study for the next five years.

1

WarAndGeese t1_j9yu1io wrote

Unfortunately this is the case. I've seen it come and go with a bunch of technologies. Almost worse, if you go and ask these people ten years later about the same technology they promptly dismissed ten years prior, it's as if they never said it. Now that all the things you thought would come to fruition have come to fruition, they act like it was obvious. This goes for all sorts of technologies.

I should think of a better example, but even something as simple as online dating went from people not seeing the point of it, to them using it, to some of them saying they don't trust the regular non-online version of it.

And even that example is for something that ended up being of concern for them; when you move on to things that are beneficial for broader humanity, there's that extra layer.

Nevertheless, I think it's important to recognize that other people are in different spaces and live different lives. Whatever they don't realize yet will come, and we need to understand that there are broad things we don't realize yet either. Treating those people negatively, as somehow below us (if it comes off that way in the phrasing), isn't beneficial, I think.

1

EbolaFred t1_j9yvqkf wrote

One take is that the general population still sees this stuff as a gimmick/fad. We've seen this time and again since the dawn of technology: some things start "nerdy" and gain widespread adoption (cars, computers, cell phones, the internet, EVs), and others are always a decade or more away (cold fusion, nanobots, quantum computing, nanotubes, flying cars). AI has always fallen into the latter group, until now.

It doesn't help that most people's experience with modern technology is glitchy as fuck. Smart devices suddenly stop working, Alexa picks up randomly, the wifi router needs to be rebooted all the time, cloud syncing isn't easy, printing something is hit or miss, etc. etc. etc.

So people tend to focus on the "now", and "my latest tech problem".

What people forget is the incredible infrastructure built around the things they use every day that do work seamlessly.

Right now I can navigate to a faraway park, order a pizza, make a high-quality video call to a relative in Europe (for free!), and have some milk and eggs delivered to my door for when I get home. People, even in the early 2000s, would have thought about this the same way they are thinking about AI/AGI now. Yet here we are; I can do all of the above without batting an eye, and it will work just fine 99.9999% of the time.

So if there's a quadrant chart, I think most people see this stuff as "far away, glitchy curiosity", whereas very soon it will be "here, reliable".

1

Five_Decades t1_j9zq5x6 wrote

> What do people not understand about exponential growth?

Exponential growth in hardware doesn't mean exponential growth in how useful technology is in our lives. Modern gaming consoles are billions of times more powerful than an original Nintendo, but they aren't billions of times more fun and enjoyable.

I have no idea where it will all lead or when, but I don't think Kurzweil is correct in assuming that every factor-of-1000 growth in hardware means AI will grow 1000x more powerful compared to humans.

I think ASI is inevitable, I just don't know what impact it'll have or when it'll arrive.

1

SensibleInterlocutor t1_ja1au4r wrote

You're giving people too much credit. The vast majority of people live within pretty simple mental frameworks. It's like you said: the growth of technology is exponential... you shouldn't be surprised that the understanding of the majority of humans is not growing to match it.

1