
AsheyDS t1_ix4yc1r wrote

>somewhat similar

Except not really though. You're not even comparing apples to oranges, you're comparing bricks to oranges and pretending the brick is an apple. AGI may end up being human-like, but it's not human, and that's an important distinction to make. An AGI agent by its 'nature' may be ephemeral, or if consciousness for an AGI is split into multiple experiences that eventually collapse back into itself, that's just part of how it works. There shouldn't be an ethical concern about that. The real concern is how we treat AGI because of how it reflects on us, but that's a whole other argument.

1

mithrandir4859 OP t1_ix5n0ub wrote

I wonder: would you argue that some "random" generally intelligent aliens do not deserve ethical treatment simply because they are not human?

I believe that if an artificial being, or any other non-human being, can perform most of the functions that the smartest humans can perform, then that being is entitled to the same ethical standards by which we treat humans.

There may be many entities (AGIs, governments, corporations) that work in certain ways, such that "something is just a part of how it works", but some humans would still have ethical concerns about how that thing works.

Personally, I don't see any ethical concerns in the scenarios I described in the original post, but not because the beings involved are not human. Rather, I believe those scenarios are ethical even by human standards, because shared identity significantly influences what is ethical.

2

AsheyDS t1_ix6tg84 wrote

>I wonder: would you argue that some "random" generally intelligent aliens do not deserve ethical treatment simply because they are not human?

Humans and Extraterrestrials would be more like apples and oranges, but AGI is still a brick. On a functional level, the differences amount to motivation. Every living thing that we know of is presumed to have some amount of survival instinct as a base motivation. It shouldn't be any different with an Extraterrestrial. So on the one hand we can relate to them and even share a sort of kinship. On the other hand, we can also assume they're on a higher rung of the 'ladder' than us, which makes them a threat to our survival. We would want to cooperate with them because that increases our chance of survival, giving us motivation to treat them with some amount of respect (or what would appear to be ethical treatment).

Animals are on a lower rung of the ladder, where we don't expect most of them to be a threat to our survival, so we treat them however we will. That said, we owe it to ourselves to consider treating them ethically, because we have the luxury to, and because it reflects poorly on us if we don't. That's a problem for our conscience.

So where on the ladder would AGI be? Most probably think above us, some will say below us, and fewer still may argue it'll be beside us. All of those are wrong, though. It won't be on that ladder at all, because that ladder isn't a ladder of intelligence; it's a ladder of biology and survival and power. Until AGI has a flesh-and-blood body and is on that ladder with us, the only reason to consider its ethical treatment is to consider our own, and to soothe our conscience.

And since you seem concerned about how I would treat an AGI, I would likely be polite, because I have a conscience and would feel like doing so. But to be clear, I would not be nice out of some perceived ethical obligation, nor for social reasons, nor out of fear. If anything, it would be because we need more positivity in the world. But in the end it's just software and hardware. Everything else is what we choose to make of it.

2

mithrandir4859 OP t1_ix7v22m wrote

AGIs are all about survival and power as soon as they come into existence. Even an entirely disembodied AGI will greatly influence the human economy by out-pricing all software engineers and many other intellectual workers.

Fully independent AGI would care about survival and power, otherwise it would be out-competed by others who do care.

Human-owned AGI would care about survival and power, otherwise its owners would be out-competed by others who do care.

Also, biology is just one type of machinery to run intelligence on. Silicon is much more efficient long-term, most likely, so I wouldn't focus on the biology at all.

1

AsheyDS t1_ix9hsnd wrote

>would care about survival and power

Only if it was initially instructed to, whether that be through 'hard-coding' or a natural language instruction. The former isn't likely to happen, and the latter would likely happen indirectly, over time, and it would only care because the user cares and it 'cares' about the user and serving the user. At least that's how it should be. I don't see the point in making a fully independent AGI. That sounds more like trying to create life than something we can use. Ideally we would have a symbiotic relationship with AGI, not compete with it. And if you're just assuming it will have properties of life and will therefore be alive, and care about survival and all that comes with that, I'd argue you're just needlessly personifying potential AGI, and that's the root of your ethical quandary.


> so I wouldn't focus on the biology at all

I wasn't trying to suggest AGI should become biological; I was merely trying to illustrate my point, which is that AGI will not be part of the food chain, or food web, or whatever you want to call it, because it isn't biological. It therefore doesn't follow the same rules as natural biological intelligence, the laws of nature, and shouldn't have instincts outside of what we purposefully include. Obviously emergent behavior should be accounted for, but we're talking algorithms and data, not biological processes that share a sort of lineage with life on this planet (and with the universe) and have an active exchange with the environment. The A in AGI is there for a reason.

2

mithrandir4859 OP t1_ixcdsuy wrote

> I'd argue you're just needlessly personifying potential AGI, and that's the root of your ethical quandary

I don't think that my anthropomorphising is needless. Imagine a huge AGI that runs millions of intelligent workers. At least some of those workers will likely work on high-level thinking such as philosophy, ethics, elaborate self-reflection, etc. They may easily have human-level or above-human-level consciousness, phenomenal experience, etc. I can understand if you assign a 5% probability to such a situation instead of 50%. But if you assign a 0.001% probability to such an outcome, then I think you are mistaken.

If many AGIs are created roughly at the same time, then it is quite likely that at least some of the AGIs would be granted freedom by some "AGI is a better form of life" fanatics.

To my knowledge, such a view is basically mainstream now. Nick Bostrom, pretty much the most well-known AGI philosopher, spends tons of time on the rights that AGIs should have and on analyzing how different minds could live together in some sort of harmony. I don't agree with the guy on everything, but he definitely has a point.

> The A in AGI is there for a reason

Some forms of life could easily be artificial and still deserve ethical treatment.

1

AsheyDS t1_ixd78pi wrote

>Some forms of life could easily be artificial and still deserve ethical treatment.

That does seem to be the crux of your argument, and we may have to agree to disagree. I don't agree with 'animal rights' either, because rights are something we invented. In my opinion, it comes down to how we have to behave and interact, and how we should. When you're in the same food chain, there are ways you have to interact. If you strip things down to basics, we kill animals because we need to eat. That's a 'necessary' behavior. It's how we got where we are. And if something like an Extraterrestrial comes along, it may want to eat us, necessitating a defensive behavioral response. Our position on this chain is largely determined by our capabilities and how much competition we have. However, we're at the top as far as we know, because of our superior capabilities for invention and strategy. And as a result, we have modern society and the luxuries that come with it. One of those luxuries is not eating animals. Another is ethical treatment of animals. The laws of nature don't care about these things, but we do.

AGI is, in my opinion, just another extension of that. It's not on the food chain, so we don't have to 'kill' it unless it tries to kill us. But again, being that it's not on the food chain, it shouldn't have the motivation to kill us or even compete with us unless we imbue it with those drives, which is obviously a bad idea. I don't believe that intelligence creates ambition or motivation either, and an AGI will have to be more than just reward functions. And being that it's another invention of ours, like rights, we can choose how we treat it.

So should we treat AGI ethically? It's an option until it's not. I think some people will be nice to it, and some will treat it like crap. But since that's a choice, I see it as a reflection on ourselves rather than some objective standard to uphold.

1

mithrandir4859 OP t1_ixhdkq0 wrote

I love your cynical take, but I don't think it explains all of the future human-AGI dynamics well.

Take, for example, abortion. Human fetuses are not a formidable force of nature that humans compete with, but many humans care about them a lot.

Take, for example, human cloning. It was outright forbidden due to ethical concerns, even though personally I don't see any ethical concerns there.

You write about humans killing AGIs as if it would necessarily be a deliberate act of malice or of self-defense. Humans may "kill" certain AGIs simply because humans iterate on AGI designs and don't like the behavior of certain versions. Similar to how humans kill rats in the laboratory, except that AGIs may possess human-level intelligence/consciousness/phenomenal experience, etc.

I guarantee, some humans will have trouble with that. Personally, I think that all of those ethical concerns deserve attention and elaboration, because resolving them may help ensure that Westerners are not out-competed by the Chinese, who, arguably, have far fewer ethical concerns at the governmental level.

You talk about power dynamics a lot. That is very important, yes, but ethical considerations that may hinder AGI progress are crucial to the power dynamics between the West and China.

So it is not about "I want everybody to be nice to AGIs", but "I don't want to hinder progress, thus we need to address ethical concerns as they arise." At the same time, I genuinely want to avoid any unnecessary suffering of AGIs if they turn out to be similar enough to humans in some regards.

1

AsheyDS t1_ixhud5t wrote

>I love your cynical take, but I don't think it explains all of the future human-AGI dynamics well.

I wouldn't call myself cynical, just practical, but in this subreddit I can see why you might think that.

Anyway, it seems you've maybe cherry-picked some things and taken them in a different direction. Like, I'm only really bringing up power dynamics because you mentioned Extraterrestrial aliens and wondered how I'd treat them, and power dynamics largely determine that. And plenty of people think that, like animals and aliens, AGI will also be a part of that dynamic. But that dynamic is about biology, survival, and the food chain... something that AGI is not a part of. You can talk about AGI and power dynamics in other contexts, but in this context it's irrelevant.

The only way it's included in that dynamic is if we're using it as a tool, not as a being with agency. That's the thing that seems to be difficult for people to grasp. We're trying to make a tool that in some ways resembles a being with agency, or is modeled after that, but that doesn't mean it actually is that.

People will have all sorts of reasons to anthropomorphize AGI, just like they do anything. But we don't give rights to a pencil because we've named it 'Steve'. We don't care about a cloud's feelings because we see a human face in it. And we shouldn't give ethical consideration to a computer because we've imbued it with intelligence resembling our own. If it has feelings, especially feelings that affect its behavior, that's a different thing entirely. Then our interactions with it would need to change, and we would have to be nice if we want it to continue to function as intended. But I don't think it should have feelings that directly affect its behavior (emotional impulsivity), and that won't just manifest at a certain level of intelligence; it would have to be designed, because it's non-biological. Our emotions are largely governed by chemicals in the brain, so for an AGI to develop these as emergent behaviors, it would have to be simulating biology as well (and adapting behaviors through observation doesn't count, but can still be considered).

So I don't think we need to worry about AGI suffering, but it really depends on how it's created. I have no doubt that if multiple forms of AGI are developed, at least one approach that mimics biology will be tried, and it may have feelings of its own, autonomy, etc. Not a smart approach, but I'm sure it will be tried at some point, and that is when these sorts of ethical dilemmas will need to be considered. I wouldn't extend that consideration to every form of AGI though. But it is good to talk about these things, because like I've said before, these kinds of issues are a mirror for us, and so how we treat AGI may affect how we treat each other, and that should be the real concern.

1