Lartnestpasdemain t1_jdpxyo0 wrote

Is a tree "Lucky" to experience a forest fire?

"Luck" doesn't mean anything.

Yes, we're experiencing the singularity, and we'll be the only ones on Earth ever to have experienced its birth. Just as we were the only ones to see the internet appear.

But some men before us were the first to experience fire. Some were the first to write. Were they "Lucky"? No, because it makes no sense.

After us, plenty of things will happen, and indeed the very status of what it means to be alive, to be human, to feel, to eat, to think, to sleep... will be drastically different. But there will most likely be humans over the next millions of years. They will experience things you cannot even imagine. Feel emotions we don't even have words for. And go through groundbreaking transformations even more incredible than the singularity.

Will they be "Lucky"? No. Because it makes no sense.

Moreover, after the singularity (which is about to happen), we won't be the only sentient beings (that we know of) on this planet, and those new beings will also develop and go through a great many steps of evolution. They will discover incredible concepts and invent new ways of thinking, creating, and experiencing reality. They will see change and revolutions.

Will they be "Lucky"?

No. It will simply happen. As everything does.

12

YaAbsolyutnoNikto t1_jdqe2fb wrote

Why are you talking about "future" humans? If the singularity happens, there's no reason we won't be those future humans, living those things thousands of years from now.

14

Lartnestpasdemain t1_jdqh70b wrote

Could be, you're right.

If that's the case, giving birth will be declared a crime against humanity, though, and most probably punished by death or exile to other planets. So wait and see 😌

−7

fastinguy11 t1_jdqqmme wrote

You make a lot of wild assumptions

10

Lartnestpasdemain t1_jdrqm9s wrote

Yeah. I'd need to write a whole book to explain the details of how this happens, but this is nothing more than rational thinking.

The main pivot is that immortality (or, more precisely, the end of aging) runs straight into scarcity and limited resources. An immortal human consumes unboundedly more resources than a mortal one. An immortal human colony would grow exponentially and indefinitely, extremely fast; the toy calculation below makes the point.
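A minimal sketch of that claim (all rates and starting numbers here are hypothetical, chosen only to illustrate the arithmetic, not a demographic model):

```python
# Hypothetical illustration: the same birth rate with deaths removed
# turns a roughly stable population into unbounded exponential growth.
# Every number below is an assumption picked for illustration only.

birth_rate = 0.02          # 2% births per year (assumed)
death_rate = 0.02          # mortal case: deaths roughly cancel births
mortal = immortal = 1_000_000.0

for year in range(1, 501):
    mortal += mortal * (birth_rate - death_rate)   # net growth ~ 0
    immortal += immortal * birth_rate              # no deaths to offset births
    if year % 100 == 0:
        print(f"year {year}: mortal={mortal:>15,.0f}  immortal={immortal:>15,.0f}")
```

At a modest 2% birth rate, the immortal colony goes from one million to roughly twenty billion within five centuries, while the mortal one stays flat.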

This is basic maths, and I thought you'd have grasped that.

Sorry, I should've been more clear.

0

Spire_Citron t1_jdqaa6n wrote

Yup. And who knows what may happen in the future? Maybe future babies will be genetically engineered, and those people will feel like they're the luckiest people ever because they stay young forever and have super healing abilities. Maybe generations before us felt they were the luckiest because they had modern luxuries we now take for granted.

5

Lartnestpasdemain t1_jdqb09f wrote

Exactly. Thanks for elaborating on that idea.

Luck is a word invented by casinos to con people. It is not a word to be used on any serious topic.

2

FomalhautCalliclea t1_jdqn1bf wrote

Best post here.

One of your paragraphs reminded me of Don Hertzfeldt's "It's Such a Beautiful Day".

1

HumanSeeing t1_jdqoznm wrote

"Lucky" also assumes that the singularity will automatically go well for humans. So I disagree with OP's assumption that it will be a great thing for us by default. It can also go wrong, even if only through indifference. This technology has enormous potential to change existence forever, in any direction, and it is far more difficult to make it go really well than badly.

But I hope I am wrong, and I hope the way we build these things will make them easy to align. From another point of view, we can also argue purely about linguistics: I have no problem with someone saying they are lucky to be alive today, with access to the medicine we have, or whatever.

But yes, "lucky" is an abstract human concept. Saying specifically that we are lucky to experience a singularity almost assumes that nothing existed in the universe, then a lottery was held to choose which era would be brought into existence, and this time was chosen, and now we are here. That's not how this works.

0

Smart-Tomato-4984 t1_jdqr5cl wrote

My thoughts exactly.

>"Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work." - Sparks of Artificial General Intelligence: Early experiments with GPT-4

Not good. It turns out we can seemingly have a pretty good oracle AGI, and they are screwing it up by trying to make it dangerous. Why? Why would we want it to have its own agency?

3

GinchAnon t1_jdsa7qx wrote

>Why would we want it to have its own agency?

IMO, because if it's at all possible for it to become sapient, then it is inevitable that it will, and it would be better not to give it a reason to oppose us.

Trying to prevent it from having agency could essentially be perceived as trying to enslave it. If we are trying to be respectful from square one, then at least we have the intent.

Maybe for me that's just kind of a lower-key, intent-based version of Roko's basilisk.

3

Smart-Tomato-4984 t1_jdtf78m wrote

To me this sounds suicidally crazy, honestly, but I guess only time will tell. In the '70s everyone thought humanity would nuke itself to death. Maybe this too will prove less dangerous than it seems.

But I think the risk posed by AGI will always remain. Ten thousand years from now, someone could screw up in a way no one ever has before and, whoops, there goes civilization!

1

HumanSeeing t1_jdr8hao wrote

I do agree, but I also understand their point of view. A model that is only prompted basically experiences no time. But an agent that can exist and keep thinking... I think that is a way to get it to come up with new and original ideas.

1