phaedrux_pharo t1_jadrn73 wrote

>By what mechanism do we think that will be achievable?

By "correctly" setting up the basic incentives, and/or integration with biological human substrates. Some ambiguity is unavoidable, some risk is unavoidable. One way to approach the issue is from the opposite direction:

What do we not do? Well, let's not create systems whose goals are to deliberately extinguish life on Earth. Let's not create torture bots, and let's not create systems that are "obviously" misaligned.

Unfortunately I'm afraid we've already done so. It's a tough problem.

The only solution I'm completely on board with is everyone ceding total control to my particular set of ethics and allowing me to become a singular bio-ASI god-king, but that seems unlikely.

Ultimately I doubt the alarms being raised by alignment folks will have much effect. Entities with a monopoly on violence are existentially committed to those monopolies, and I suspect they will be the ones to instantiate some of the first ASIs, with obvious goals in mind. So the question of alignment is kind of a red herring to me, since purposefully unaligned systems will probably be developed first anyway.

9

phaedrux_pharo t1_j9cuw8g wrote

I think you should find someone to talk to, get some exercise, eat healthy, and then revisit these ideas from a critical perspective.

To answer your question: No, that isn't the case for me. The singularity might be the end of the road for us - and that's not even a worst case. I don't think it's inevitable, I'm not sure it's likely, but it's definitely an interesting topic.

The pseudo-religious proselytizing is the most boring part of the community.

4

phaedrux_pharo t1_j5g0oee wrote

I think the world would be a pretty awesome place if everyone's biggest problem was that all their basic needs were met and they were maybe kinda bored from all the pleasure and excitement of their lives.

People are suffering and dying needlessly, literally every minute. I'll take some existential angst born from hedonism over that any day.

43

phaedrux_pharo t1_j5ch9xu wrote

>Our perception of time as human when 1 second of time passes will definitely be different than what the Artificial Intelligence will experience.

Ok, sure

>1 Second of human time, will be for the Artificial Intelligence Program or anything similar, would be close to a month or even a few months.

This claim doesn't hold up for me. The entity you're imagining doesn't have any prior experience to relate to; it would simply experience the passage of time, in whatever way it does, as "normal." There wouldn't be any conflict with expectations.

This isn't a situation where something like you, with your lived experience, is suddenly transitioned to a different sensation of passing through time. It's a completely novel entity with its own senses and baselines. I think this presents some interesting questions, just not in the direction you're taking it.

Over-anthropomorphising can be tricky.

18

phaedrux_pharo t1_j57rtfz wrote

Then how do you view the standard examples of the alignment problem, like the paperclip machine or the stamp collector? Those seem like real problems to me: not necessarily the literal specifics of each scenario, but the general idea.

The danger here, to me, is that these systems could possess immense capability to affect the world without even being conscious, much less having any sense of morality (whatever that means). Imagine the speculated capacities of ASI yoked to some narrow, unstoppable set of motivations: this is why, I think, people suggest some analogue of morality, as a shorthand to prevent breaking the vulnerable meatbags in pursuit of creating the perfect peanut butter.

If you agree that AI will inevitably escalate beyond control, how can you be so convinced of goodness? I suppose if we simply stop considering the continuation of humanity as good, then we can sidestep morality... but I don't think that's your angle?

7

phaedrux_pharo t1_j5762er wrote

Does this re-framing help solve the problem? I don't see it.

We might create autonomous systems that change the world in ways counter to our intentions and desires. These systems could escalate beyond our control. I don't see how your text clarifies the issue.

I also doubt that "good" engineers mistake Asimov's laws for anything serious.

7

phaedrux_pharo t1_j2er9wv wrote

Assuming that the mind is a complex set of physically interacting systems operating on causal principles, it should be possible in theory to alter any moment-to-moment lived experience with:

  1. A thorough understanding of those interacting systems and principles
  2. Tools with fine enough effective resolution to precisely influence the physical mechanisms in question

So, maybe. But with that kind of tech I think we're opening an entirely new set of Pandora's boxes, i.e. the ability to start directly hacking our own minds. I'm not against opening that box, but it does scare me.

1