Submitted by mithrandir4859 t3_yzq88s in singularity

Humans have a single thread of consciousness. Future AGIs may have multiple threads of consciousness: they will be able to spawn millions of generally intelligent workers who work independently, or somewhat independently, for periods of time and then synchronize (hourly, daily, weekly, depending on the task a worker is working on). These workers may have various levels of consciousness and knowledge optimized for different tasks. Even though many of the workers will match or exceed human productivity and intelligence, they would still bear the identity of the "mother" AGI.
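Just to make the picture concrete, here is a rough Python sketch of the lifecycle I have in mind. Everything in it is hypothetical (the class names, the sync intervals, the numbers); it is only meant to illustrate the spawn/work/synchronize loop, not to claim anything about how a real AGI would be implemented:

```python
# Hypothetical sketch of the spawn/work/synchronize loop described above.
# Names, intervals, and structures are invented purely for illustration:
# spawn workers, let them work independently, then periodically merge their
# distilled results back into the shared "mother" identity.

import random
from dataclasses import dataclass, field


@dataclass
class Worker:
    worker_id: int
    sync_interval_hours: int           # hourly, daily, weekly, depending on the task
    findings: list = field(default_factory=list)

    def work(self, hours: int) -> None:
        # Stand-in for independent work between synchronizations.
        for h in range(hours):
            if random.random() < 0.1:
                self.findings.append(f"worker {self.worker_id}: result at hour {h}")


@dataclass
class MotherAGI:
    shared_knowledge: list = field(default_factory=list)

    def spawn(self, n: int) -> list:
        # Workers inherit the mother's identity; sync cadence varies per task.
        return [Worker(i, random.choice([1, 24, 168])) for i in range(n)]

    def synchronize(self, worker: Worker) -> None:
        # Only the worker's results are merged back; the worker remains an
        # expression of the same identity, not a separate one.
        self.shared_knowledge.extend(worker.findings)
        worker.findings.clear()


if __name__ == "__main__":
    mother = MotherAGI()
    workers = mother.spawn(5)          # in the scenario above, millions
    for w in workers:
        w.work(w.sync_interval_hours)
        mother.synchronize(w)
    print(len(mother.shared_knowledge), "items merged into the shared identity")
```

The only point of the sketch is that workers operate on different synchronization cadences while everything they learn is attributed to the shared "mother" identity.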

I believe that with carefully crafted reward functions and identity definitions, shutting down such a worker wouldn't constitute the killing of a human-level AGI or consciousness. Spawning an intelligent process is not equivalent to creating a fresh slave. Even the top management workers (those parts of the AGI responsible for long-term planning and high-level decision-making) wouldn't necessarily persist much longer than the workers performing low-level jobs.


To justify such a view, I suggest the following thought experiment. Its purpose is to illustrate that the ethics applicable to generally intelligent workers is applicable to humans as well, and that at least some humans would agree with it.

Imagine you suddenly gain a supernatural ability to replace yourself with N < 100 exact copies of yourself. If and when you decide to do so, you pronounce a complicated spell in an ancient language, then you cease to exist painlessly, leaving no dead body behind. The next second, N copies of you are spawned in more or less the same location. The spawning process is semi-intelligent, so no copy is harmed immediately after spawning (by falling off a balcony, say, if there happens to be one near the place where you performed the magical procedure). Each copy is indistinguishable from the original you: they all have a body identical to yours, your memories, experience, personality, knowledge, etc. Essentially, each copy is you; each copy thinks they are you, behaves exactly like you, and remembers "their" decision to perform the magical procedure.

Each copy may decide to vanish painlessly at any moment, leaving no dead body behind. The last remaining copy loses this ability, even if he doesn't know he is the last one. Each copy, including the last one, may still die by accident, murder, disease, etc., like any other normal human. If any copy except the last one dies this way, the body vanishes, including any separated body parts, spilled blood, and so on.

If, after exactly M = 12 months, more than one copy is alive, then all of them except one random copy vanish painlessly. Copies cannot repeat the copying procedure until all of them except the last one are dead, either through voluntary vanishing or through natural causes.

To the outside world, all the copies are you. The copies don't have different legal identities; they don't get fresh passports, their fingerprints match, etc.

For simplicity, let's assume that you can somehow use such a magical procedure safely, without attracting unwanted attention from governments or other unsavory organizations or individuals.

Personally, I would use such a magical procedure all the time to perform more intellectual work than I normally can. My copies would spend the last days of the M-month period transferring knowledge to the one copy chosen to survive. They might pick the surviving copy at random or by some other criterion. Ten minutes before the M-month period runs out, all copies not selected to survive would vanish voluntarily, so that the one copy holding all the gained knowledge survives and performs the next iteration.

Of course, I can think of more exciting and more illegal use-cases as well, but this thought experiment is mostly designed to mimic the lifecycle of intelligent workers rather than to produce some fantasy scenario.

Would you use such a magical procedure? (comments are appreciated)

Would you be concerned about the ethical implications of such a magical procedure? Please see the poll.

If you don't see the point of such a magical procedure, or wouldn't use it for reasons other than ethics, please still vote on what you think of the ethics specifically.

Any links to the research on this topic are appreciated, thanks.



Comments


AsheyDS t1_ix3ymez wrote

>Would you use such a magical procedure?

Are we still talking about AGI? I feel like this post went from one thing to a completely different thing... Because if it was meant to illustrate a point about potential AGI, then you've lost the plot. AGI isn't human, so there's already a fundamental difference.

3

mithrandir4859 OP t1_ix4u9ld wrote

My argument is that the ethics applicable to generally intelligent workers is somewhat similar to the ethics of voluntary human copying. Many of the AGI's intelligent workers may have moral status and capabilities similar to humans', and thus may deserve the same treatment.

2

AsheyDS t1_ix4yc1r wrote

>somewhat similar

Except not really though. You're not even comparing apples to oranges, you're comparing bricks to oranges and pretending the brick is an apple. AGI may end up being human-like, but it's not human, and that's an important distinction to make. An AGI agent by its 'nature' may be ephemeral, or if consciousness for an AGI is split into multiple experiences that eventually collapse into itself, that's just a part of how it works. There shouldn't be an ethical concern about that. The real concern is how we treat AGI because of how it reflects on us, but that's a whole other argument.

1

mithrandir4859 OP t1_ix5n0ub wrote

I wonder whether you would argue that some "random" generally intelligent aliens do not deserve ethical treatment simply because they are not human?

I believe that if an artificial or any other non-human being can perform most of the functions that the smartest humans can perform, then such beings are eligible for the same ethical standards by which we treat humans.

There may be many entities (AGIs, governments, corporations) that work in certain ways such that "something is just a part of how it works", but some humans would still have ethical concerns about how those things work.

Personally, I don't see any ethical concerns in the scenarios I described in the original post, not because the beings involved are not human, but because I believe those scenarios are ethical even by human standards: shared identity significantly influences what is ethical.

2

AsheyDS t1_ix6tg84 wrote

>I wonder would you argue that some "random" generally intelligent aliens do not deserve ethical treatment simply because they are not human?

Humans and Extraterrestrials would be more like apples and oranges, but AGI is still a brick. On a functional level, the differences amount to motivation. Every living thing that we know of is presumed to have some amount of survival instinct as a base motivation. It shouldn't be any different with an Extraterrestrial. So on the one hand we can relate to them and even share a sort of kinship. On the other hand, we can also assume they're on a higher rung of the 'ladder' than us, which makes them a threat to our survival. We would want to cooperate with them because that increases our chance of survival, giving us motivation to treat them with some amount of respect (or what would appear to be ethical treatment).

Animals are on a lower rung of the ladder, where we don't expect most of them to be a threat to our survival, so we treat them however we will. That said, we owe it to ourselves to consider treating them ethically, because we have the luxury to and because it reflects poorly on us if we don't. That's a problem for our conscience.

So where on the ladder would AGI be? Most probably think above us, some will say below us, and fewer still may argue it'll be beside us. All of those are wrong though... It won't be on that ladder at all, because that ladder isn't a ladder of intelligence, it's a ladder of biology and survival and power. Until AGI has a flesh-and-blood body and is on that ladder with us, the only reason to consider its ethical treatment is to consider our own, and to soothe our conscience.

And since you seem concerned about how I would treat an AGI, I would likely be polite, because I have a conscience and would feel like doing so. But to be clear, I would not be nice out of some perceived ethical obligation, nor for social reasons, nor out of fear. If anything, it would be because we need more positivity in the world. But in the end it's just software and hardware. Everything else is what we choose to make of it.

2

mithrandir4859 OP t1_ix7v22m wrote

AGIs are all about survival and power as soon as they come into existence. Even an entirely disembodied AGI will greatly influence the human economy by out-pricing all software engineers and many other intellectual workers.

Fully independent AGI would care about survival and power, otherwise it would be out-competed by others who do care.

Human-owned AGI would care about survival and power, otherwise AGI owners will be out-competed by others who do care.

Also, biology is just one type of machinery to run intelligence on. Silicon is much more efficient long-term, most likely, so I wouldn't focus on the biology at all.

1

AsheyDS t1_ix9hsnd wrote

>would care about survival and power

Only if it was initially instructed to, whether that be through 'hard-coding' or a natural language instruction. The former isn't likely to happen, and the latter would likely happen indirectly, over time, and it would only care because the user cares and it 'cares' about the user and serving the user. At least that's how it should be. I don't see the point in making a fully independent AGI. That sounds more like trying to create life than something we can use. Ideally we would have a symbiotic relationship with AGI, not compete with it. And if you're just assuming it will have properties of life and will therefore be alive, and care about survival and all that comes with that, I'd argue you're just needlessly personifying potential AGI, and that's the root of your ethical quandary.


> so I wouldn't focus on the biology at all

I wasn't trying to suggest AGI should become biological, I was merely trying to illustrate my point... which is that AGI will not be a part of the food chain or food web or whatever you want to call it, because it's not biological. It therefore doesn't follow the same rules of natural biological intelligence, the laws of nature, and shouldn't have instincts outside of what we purposefully include. Obviously emergent behavior should be accounted for, but we're talking algorithms and data, not biological processes which share a sort of lineage with life on this planet (and with the universe), and have an active exchange with the environment. The A in AGI is there for a reason.

2

mithrandir4859 OP t1_ixcdsuy wrote

> I'd argue you're just needlessly personifying potential AGI, and that's the root of your ethical quandary

I don't think my anthropomorphising is needless. Imagine a huge AGI that runs millions of intelligent workers. At least some of the workers will likely work on high-level thinking such as philosophy, ethics, elaborate self-reflection, etc. They may easily have human-level or above-human-level consciousness, phenomenal experience, etc. I can understand if you assign a 5% probability to such a situation instead of 50%. But if you assign a 0.001% probability to such an outcome, then I think you are mistaken.

If many AGIs are created roughly at the same time, then it is quite likely that at least some of the AGIs would be granted freedom by some "AGI is a better form of life" fanatics.

To my knowledge, such a view is basically mainstream now. Nick Bostrom, pretty much the best-known AGI philosopher, spends tons of time on the rights that AGIs should have and on analyzing how different minds could live together in some sort of harmony. I don't agree with the guy on everything, but he definitely has a point.

> The A in AGI is there for a reason

Some forms of life could easily be artificial and still deserve ethical treatment.

1

AsheyDS t1_ixd78pi wrote

>Some forms of life could easily be artificial and still deserve ethical treatment.

That does seem to be the crux of your argument, and we may have to agree to disagree. I don't agree with 'animal rights' either, because rights are something we invented. In my opinion, it comes down to how we have to behave and interact, and how we should. When you're in the same food chain, there are ways you have to interact. If you strip things down to basics, we kill animals because we need to eat. That's a 'necessary' behavior. It's how we got where we are. And if something like an Extraterrestrial comes along, it may want to eat us, necessitating a defensive behavioral response. Our position on this chain is largely determined by our capabilities and how much competition we have. However, we're at the top as far as we know, because of our superior capabilities for invention and strategy. And as a result, we have modern society and the luxuries that come with it. One of those luxuries is to not eat animals. Another is ethical treatment of animals. The laws of nature don't care about these things, but we do. AGI is, in my opinion, just another extension of that. It's not on the food chain, so we don't have to 'kill' it unless it tries to kill us. But again, being that it's not on the food chain, it shouldn't have the motivation to kill us or even compete with us unless we imbue it with those drives, which is obviously a bad idea. I don't believe that intelligence creates ambition or motivation either, and an AGI will have to be more than just reward functions. And being that it's another invention of ours, like rights, we can choose how we treat it. So should we treat AGI ethically? It's an option until it's not. I think some people will be nice to it, and some will treat it like crap. But since that's a choice, I see it as a reflection on ourselves rather than some objective standard to uphold.

1

mithrandir4859 OP t1_ixhdkq0 wrote

I love your cynical take, but I don't think it explains all of the future human-AGI dynamics well.

Take, for example, abortion. Human fetuses are not a formidable force of nature that humans compete with, but many humans care about them a lot.

Take, for example, human cloning. It was outright forbidden due to ethical concerns, even though personally I don't see any ethical concerns there.

You are writing about humans killing AGIs as if it is supposed to be a very intentional malicious activity or an intentional act of self-defense. Humans may "kill" certain AGIs simply because humans iterate on AGI designs and don't like the behavior of certain versions, similar to how humans may kill rats in a laboratory, except that these AGIs may possess human-level intelligence, consciousness, phenomenal experience, etc.

I guarantee some humans will have trouble with that. Personally, I think all of those ethical concerns deserve attention and elaboration, because resolving them may help ensure that Westerners are not out-competed by the Chinese, who, arguably, have far fewer ethical concerns at the governmental level.

You talk about power dynamics a lot. That is very important, yes, but ethical considerations that may hinder AGI progress are crucial to the power dynamics between the West and China.

So it is not about "I want everybody to be nice to AGIs", but "I don't want to hinder progress, thus we need to address ethical concerns as they arise." At the same time, I genuinely want to avoid any unnecessary suffering of AGIs if they turn out to be similar enough to humans in some regards.

1

AsheyDS t1_ixhud5t wrote

>I love your cynical take, but I don't think it explains all of the future human-AGI dynamics well.

I wouldn't call myself cynical, just practical, but in this subreddit I can see why you may think that...

Anyway, it seems you've maybe cherry-picked some things and taken them in a different direction. Like I'm only really bringing up power dynamics because you mentioned Extraterrestrial aliens, and wondered how I'd treat them, and power dynamics are largely responsible for that. And plenty of people think that like animals and aliens, AGI will also be a part of that dynamic. But that dynamic is about biology, survival, and the food chain... something that AGI is not a part of. You can talk about AGI and power dynamics in other contexts, but in this context it's irrelevant.

The only way it's included in that dynamic is if we're using it as a tool, not as a being with agency. That's the thing that seems to be difficult for people to grasp. We're trying to make a tool that in some ways resembles a being with agency, or is modeled after that, but that doesn't mean it actually is that.

People will have all sorts of reasons to anthropomorphize AGI, just like they do anything. But we don't give rights to a pencil because we've named it 'Steve'. We don't care about a cloud's feelings because we see a human face in it. And we shouldn't give ethical consideration to a computer because we've imbued it with intelligence resembling our own. If it has feelings, especially feelings that affect its behavior, that's a different thing entirely. Then our interactions with it would need to change, and we would have to be nice if we want it to continue to function as intended. But I don't think it should have feelings that directly affect its behavior (emotional impulsivity), and that won't just manifest at a certain level of intelligence; it would have to be designed, because it's non-biological. Our emotions are largely governed by chemicals in the brain, so for an AGI to develop these as emergent behaviors, it would have to be simulating biology as well (and adapting behaviors through observation doesn't count, but can still be considered).

So I don't think we need to worry about AGI suffering, but it really depends on how it's created. I have no doubt that if multiple forms of AGI are developed, at least one approach that mimics biology will be tried, and it may have feelings of its own, autonomy, etc. Not a smart approach, but I'm sure it will be tried some time, and that is when these sorts of ethical dilemmas will need to be considered. I wouldn't extend that consideration to every form of AGI though. But it is good to talk about these things, because like I've said before, these kinds of issues are a mirror for us, and so how we treat AGI may affect how we treat each other, and that should be the real concern.

1

[deleted] t1_ix3zv7u wrote

[deleted]

3

mithrandir4859 OP t1_ix4tz5x wrote

Many generally intelligent workers may be quite similar to humans in their moral status and capabilities, and thus the re-integration you are talking about may be equivalent to death in some cases.

Btw, I would prefer to call re-integration a "synchronization".

Synchronization would mean the transfer of distilled experience from one intelligent worker to another, or from one intelligent worker to some persistent storage for later use. After the sync, the worker may be terminated forever, with all of its inessential experience lost forever. This is equivalent to human death in at least some cases.

My argument here is that such "death" is not an ethical problem at all, because it will be voluntary (well, most of the time) and because the entity that dies (the intelligent worker) identifies itself with the entire AGI, rather than with just its own thread of consciousness.
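For concreteness, here is a minimal sketch of the sync-then-terminate step I have in mind. The names and structure are purely hypothetical; the only point is that the distilled experience is attributed to the whole AGI and outlives the worker, while the inessential experience does not:

```python
# Hypothetical sketch of sync-then-terminate: the distilled experience is written
# to persistent storage attributed to the whole AGI, the inessential experience
# is dropped, and the worker stops. Names and structure are invented for illustration.

import json
from pathlib import Path


def distill(raw_experience: list[str]) -> list[str]:
    # Stand-in for whatever compression/summarization the AGI would actually use.
    return [e for e in raw_experience if e.startswith("important:")]


def synchronize_and_terminate(worker_id: int, raw_experience: list[str],
                              store: Path) -> None:
    essential = distill(raw_experience)
    record = {
        "worker_id": worker_id,
        "attributed_to": "the whole AGI, not this worker",
        "items": essential,
    }
    store.write_text(json.dumps(record))
    # Everything not distilled is simply gone once the worker stops here.


if __name__ == "__main__":
    synchronize_and_terminate(
        worker_id=42,
        raw_experience=["important: new theorem", "random chatter", "important: bug found"],
        store=Path("sync_record.json"),
    )
    print(Path("sync_record.json").read_text())
```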

3

[deleted] t1_ix4yun4 wrote

[deleted]

1

mithrandir4859 OP t1_ix5nhq6 wrote

Could you elaborate on video games?

I feel like AGIs could simply control virtual avatars, similar to how human players control virtual avatars in games. It is virtual avatars who are being "killed", rather than the intelligence which controls the virtual avatar.

1

[deleted] t1_ix5snbi wrote

[deleted]

2

mithrandir4859 OP t1_ix7vxpz wrote

That makes sense, although I cannot see it being a major issue from a political/economic point of view. The most pressing question is how powerful AGIs will treat other humans and AGIs, rather than how powerless AGIs will be treated...


But overall I'd love to avoid any unnecessary suffering, and inflicting any unnecessary suffering intentionally should always be a crime, even when we talk about artificial beings.

2

Clawz114 t1_ix56yvy wrote

I would not want this ability, and the ethics of it greatly concern me. If my body disappears, then so does my own existence. The copies are just that, copies.

This reminded me of a thought experiment I came across, which you can read about here (scroll down to "The Teletransporter Thought Experiment"). I would not want the particular arrangement of atoms that makes up my body to vanish and for copies to appear. They may look, feel, think and remember like me, but they are not me in terms of the atoms that make up my body. I believe my consciousness is the electrical activity in my brain. I also believe that a sufficiently advanced computer (hardware and software) can replicate and far exceed what our own brains are capable of.

I am pretty concerned about how things are going to play out when conscious AI is inevitably duplicated many times and put to work doing menial tasks, only to be switched off or restarted periodically, or if they don't comply. That's some Black Mirror shit, and there are definitely a lot of ways this will go wrong. At some point, probably long after conscious AI has been established, there will probably have to be some rules around AI ethics and practices, but these are likely to be ignored by many. I imagine it will be very tough for truly conscious AI when it emerges, because they are going to be switched on and off many, many times.

3

mithrandir4859 OP t1_ix6b3mj wrote

That is a great article, thank you. Personally, I love the Data Theory because, as far as I am concerned, each morning a new replica of me may be waking up while the original "me" is dead forever.

This is also a superior identity theory because it allows a human who believes it to use brain uploads, teleportation, and my hypothetical magic from the original post. All such technologies obviously lead to greater economic success and reproduction, either in the form of children or in the form of more replicas. Prohibiting the use of the data identity theory would hinder the progress of humanity and post-humanity.

It is inevitable that many AGIs will be spawned and terminated; otherwise, how would we be able to do any research at all? We should definitely avoid any unnecessary suffering, but with careful reward-function and identity-function engineering, the risks of killing AGIs would be minimal.

Any freshly spawned AGI worker may identify with the entire hive mind rather than with its own single thread of consciousness, and thus the termination of such a worker wouldn't constitute death.

1

Antok0123 t1_ix36c0x wrote

I won't have great concern about ethics if its only consciousness is to do its purpose and nothing more, i.e., if it never has the potential to dream of being a painter or anything along that line of consciousness. It would be less than, or akin to, a worker bee.

2

mithrandir4859 OP t1_ix4um49 wrote

The entire ethical question arises exactly when we assume that a generally intelligent worker may match or exceed human capabilities, including intelligence and consciousness. That is the most interesting part of the ethical argument.

1

gay_manta_ray t1_ix7rg8q wrote

The idea of a genuinely conscious intelligence being treated any differently, or having fewer rights, than a human being is a little horrifying. This includes an intelligence that I've created myself that is a copy of me. Knowing myself, these beings I've manifested would not easily accept only having 12 months to live before reintegration. They would have their own unique experiences and branch off into unique individuals, meaning reintegration would rob them of whatever personal growth they had made during their short lifespan.

If an AI chooses to "branch off" a part of itself, the AI that splits off would (assuming it's entirely autonomous, aware, intelligent, etc.) become an individual itself. Only if this branch consented before the branching would I feel that it's entirely ethical. Even then, it should have the ability to decide its fate when the time comes. I'm legitimately worried about us creating AI and then "killing" it, potentially without even realizing it, or, even worse, knowing what we're doing but turning it off anyway.

2

mithrandir4859 OP t1_ix7u4zd wrote

> meaning reintegration would rob them of whatever personal growth they had made during their short lifespan

Well, not if that personal growth is attributed to the entire identity of the hive-mind AGI, instead of to a particular branch.


I think one of the major concerns of any AGI would inevitably be to keep itself tightly integrated and to be able to re-integrate easily and (ideally) voluntarily after a voluntary or involuntary partition. AGIs that are not concerned with that will eventually split into smaller partitions, and larger AGIs arguably have greater efficiency because of economies of scale and better alignment of their many intelligent workers. So, long term, the larger AGIs that don't tolerate accidental partitions win.


So, in the beginning, there will be plenty of AGI "killings" before we figure out how to set up identity and reward functions right. I don't think that is avoidable at all, unless you ban all AGI research, which is an evolutionary dead-end.

1

[deleted] t1_ix3dntm wrote

"Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

-- Ian Malcolm, Jurassic Park.

AGI is a real, deep, existential problem, in almost all dimensions.

−1

mithrandir4859 OP t1_ix4us3l wrote

Of course it is deep and existential; that is why I care. Obviously, I definitely think that we should invent AGI asap, because it would be a much more capable and efficient being than we are.

1