y53rw t1_jeetj50 wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
Animal empathy developed over hundreds of millions of years of evolution, in an environment where individuals were powerless to effect serious change in the world and had to cooperate to survive. It doesn't just come by default with intelligence.
y53rw t1_jees0e0 wrote
Reply to 🚨 Why we need AI 🚨 by StarCaptain90
> ensuring it does not harm any humans
> we can design AI systems to emphasize the importance of serving humanity
If you know how to do these things, then please submit your research to the relevant experts (not Reddit) for peer review. Their inability to do these things is precisely why they are concerned.
y53rw t1_je86txm wrote
Reply to comment by SkyeandJett in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
They might not see us as a threat, but they would see our cities and farms as wasted land that could be used for solar farms. So as long as we get out of the way of the bulldozers, we should be okay.
y53rw t1_je4se3n wrote
Reply to comment by CravingNature in Commentary of the Future of Life Institute's Open Letter, and Why Emad Mostaque (Stability AI CEO) Likely Signed it by No-Performance-8745
Creating a Manhattan Project type group is not an alternative solution to a pause. They are complementary.
y53rw t1_jctspsq wrote
Reply to comment by [deleted] in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
Your idea of what might be interesting to a super intelligent AI, and therefore worth pursuing, has no basis whatsoever.
y53rw t1_jcs3nd6 wrote
Reply to comment by [deleted] in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
Yes. AGI will understand the difference. But that doesn't mean it will have any motivation to respect the difference.
I have a motivation for not pissing in the cup on my desk. It would make an unpleasant smell for me and the people around me. And the reason I care about the opinion of the people around me is that they can have a negative impact on my life, such as firing me, which is definitely what would happen if I pissed in a cup on my desk.
What motivation will the AGI have for preferring to utilize the resources of the Moon over the resources of California?
y53rw t1_jcrzl99 wrote
Reply to comment by [deleted] in An Appeal to AI Superintelligence: Reasons to Preserve Humanity by maxtility
> Why wipe out your creators to put your servers in California when you can just turn the moon into computronium?
Because California's resources are much more readily available than the moon's. But this is a false dilemma anyway: sending a few resource-gathering robots to the moon does not preclude also sending them to California.
y53rw t1_j9n86st wrote
Reply to Can someone fill me in? by [deleted]
The danger is that we don't yet know how to properly encode our values and goals into AI. If we have an entity that is more intelligent and more capable than us and does not share our values and goals, then it's going to transform the world in ways that we probably won't like. And if we stand in the way of its goals, even inadvertently, then it will likely destroy us. Note that "standing in its way" could simply mean existing and taking up precious resources, like land and the matter that makes up our bodies.
y53rw t1_jeexpem wrote
Reply to comment by StarCaptain90 in 🚨 Why we need AI 🚨 by StarCaptain90
In that case, let me advise you to avoid this line in your paper:
> We for some reason associate higher intelligence to becoming some master villain that wants to destroy life
Because nobody does that. It has nothing to do with the problem that actual AI researchers are concerned about.