Submitted by mithrandir4859 t3_yzq88s in singularity
mithrandir4859 OP t1_ixcdsuy wrote
Reply to comment by AsheyDS in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
> I'd argue you're just needlessly personifying potential AGI, and that's the root of your ethical quandary
I don't think that my anthropomorphising is needless. Imagine a huge AGI that runs millions of intelligent workers. At least some of those workers will likely work on high-level thinking such as philosophy, ethics, elaborate self-reflection, etc. They may easily have human-level or above-human-level consciousness, phenomenal experience, etc. I can understand if you assign a 5% probability to such a situation instead of 50%. But if you assign a 0.001% probability to such an outcome, then I think you are mistaken.
If many AGIs are created roughly at the same time, then it is quite likely that at least some of the AGIs would be granted freedom by some "AGI is a better form of life" fanatics.
To my knowledge, such a view is basically mainstream now. Nick Bostrom, pretty much the best-known AGI philosopher, spends tons of time on the rights that AGIs should have and on analyzing how different kinds of minds could live together in some sort of harmony. I don't agree with the guy on everything, but he definitely has a point.
> The A in AGI is there for a reason
Some forms of life could easily be artificial and still deserve ethical treatment.
AsheyDS t1_ixd78pi wrote
>Some forms of life could easily be artificial and still deserve ethical treatment.
That does seem to be the crux of your argument, and we may have to agree to disagree. I don't agree with 'animal rights' either, because rights are something we invented. In my opinion, it comes down to how we have to behave and interact, and how we should. When you're in the same food chain, there are ways you have to interact. If you strip things down to basics, we kill animals because we need to eat. That's a 'necessary' behavior. It's how we got where we are. And if something like an Extraterrestrial comes along, it may want to eat us, necessitating a defensive behavioral response. Our position on this chain is largely determined by our capabilities and how much competition we have.
However, we're at the top as far as we know, because of our superior capabilities for invention and strategy. And as a result, we have modern society and the luxuries that come with it. One of those luxuries is not eating animals. Another is the ethical treatment of animals. The laws of nature don't care about these things, but we do. AGI is, in my opinion, just another extension of that. It's not on the food chain, so we don't have to 'kill' it unless it tries to kill us. But again, being that it's not on the food chain, it shouldn't have the motivation to kill us or even compete with us unless we imbue it with those drives, which is obviously a bad idea. I don't believe that intelligence creates ambition or motivation either, and an AGI will have to be more than just reward functions.
And being that it's another invention of ours, like rights, we can choose how we treat it. So should we treat AGI ethically? It's an option until it's not. I think some people will be nice to it, and some will treat it like crap. But since that's a choice, I see it as a reflection on ourselves rather than some objective standard to uphold.
mithrandir4859 OP t1_ixhdkq0 wrote
I love your cynical take, but I don't think it explains all of the future human-AGI dynamics well.
Take, for example, abortions. Human fetuses are not a formidable force of nature humans compete with, but many humans care about them a lot.
Take, for example, human cloning. It was outright forbidden due to ethical concerns, even though personally I don't see any ethical concerns there.
You write about humans killing AGIs as if it would have to be either a deliberately malicious act or a deliberate act of self-defense. Humans may "kill" certain AGIs simply because they iterate on AGI designs and don't like the behavior of certain versions. Similar to how humans kill rats in the laboratory, except that AGIs may possess human-level intelligence, consciousness, phenomenal experience, etc.
I guarantee that some humans will have trouble with that. Personally, I think all of those ethical concerns deserve attention and elaboration, because resolving them may help ensure that Westerners are not out-competed by the Chinese, who arguably have far fewer ethical concerns at the governmental level.
You talk about power dynamics a lot. That is very important, yes, but ethical considerations that may hinder AGI progress are crucial to the power dynamics between the West and China.
So it is not about "I want everybody to be nice to AGIs", but "I don't want to hinder progress, thus we need to address ethical concerns as they arise." At the same time, I genuinely want to avoid any unnecessary suffering of AGIs if they turn out to be similar enough to humans in some regards.
AsheyDS t1_ixhud5t wrote
>I love your cynical take, but I don't think it explains all of the future human-AGI dynamics well.
I wouldn't call myself cynical, just practical, but in this subreddit I can see why you might think that.
Anyway, it seems you've maybe cherry-picked some things and taken them in a different direction. I only really brought up power dynamics because you mentioned Extraterrestrial aliens and wondered how I'd treat them, and power dynamics largely determine that. And plenty of people think that, like animals and aliens, AGI will also be a part of that dynamic. But that dynamic is about biology, survival, and the food chain... something that AGI is not a part of. You can talk about AGI and power dynamics in other contexts, but in this context it's irrelevant.
The only way it's included in that dynamic is if we're using it as a tool, not as a being with agency. That's the thing that seems to be difficult for people to grasp. We're trying to make a tool that in some ways resembles a being with agency, or is modeled after that, but that doesn't mean it actually is that.
People will have all sorts of reasons to anthropomorphize AGI, just like they do anything else. But we don't give rights to a pencil because we've named it 'Steve'. We don't care about a cloud's feelings because we see a human face in it. And we shouldn't give ethical consideration to a computer just because we've imbued it with intelligence resembling our own. If it has feelings, especially feelings that affect its behavior, that's a different thing entirely. Then our interactions with it would need to change, and we would have to be nice if we want it to continue to function as intended. But I don't think it should have feelings that directly affect its behavior (emotional impulsivity), and that won't just manifest at a certain level of intelligence; it would have to be designed, because it's non-biological. Our emotions are largely governed by chemicals in the brain, so for an AGI to develop emotions as emergent behaviors, it would have to be simulating biology as well (adapting behaviors through observation doesn't count, but can still be considered).
So I don't think we need to worry about AGI suffering, but it really depends on how it's created. I have no doubt that if multiple forms of AGI are developed, at least one approach that mimics biology will be tried, and it may have feelings of its own, autonomy, etc. Not a smart approach, but I'm sure it will be tried at some point, and that is when these sorts of ethical dilemmas will need to be considered. I wouldn't extend that consideration to every form of AGI, though. But it is good to talk about these things, because, like I've said before, these kinds of issues are a mirror for us, and so how we treat AGI may affect how we treat each other, and that should be the real concern.