Submitted by mithrandir4859 t3_yzq88s in singularity
Humans have a single thread of consciousness. In the future, AGIs may have multiple threads of consciousness: they will be able to spawn millions of general-intelligence workers who could work independently, or somewhat independently, for periods of time and then synchronize (hourly, daily, or weekly, depending on the task a worker is working on). These workers may have various levels of consciousness and knowledge optimized for different tasks. Even though many of the workers will match or exceed human productivity and intelligence, they would still bear the identity of the "mother" AGI. A minimal sketch of this spawn-and-synchronize lifecycle appears below.
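To make the lifecycle concrete, here is a toy sketch in Python. Everything in it (the `Worker` and `MotherAGI` names, the dict-based knowledge merging) is a hypothetical illustration of the spawn/work/synchronize/shut-down cycle I have in mind, not a proposal for how a real AGI would be built:

```python
# Hypothetical sketch of the spawn/synchronize worker lifecycle described
# above. All class and method names are invented for illustration only.
from dataclasses import dataclass, field

@dataclass
class Worker:
    task: str
    knowledge: dict = field(default_factory=dict)

    def work(self):
        # Stand-in for a period of independent work on the assigned task.
        self.knowledge[self.task] = f"results for {self.task}"

@dataclass
class MotherAGI:
    knowledge: dict = field(default_factory=dict)

    def spawn(self, tasks):
        # Each worker starts from the mother's current knowledge,
        # so it bears the "mother" AGI's identity.
        return [Worker(task=t, knowledge=dict(self.knowledge)) for t in tasks]

    def synchronize(self, workers):
        # Merge what every worker learned back into the mother;
        # after this, shutting the workers down loses nothing.
        for w in workers:
            self.knowledge.update(w.knowledge)

mother = MotherAGI()
workers = mother.spawn(["protein folding", "tax filing"])
for w in workers:
    w.work()
mother.synchronize(workers)  # workers can now be shut down safely
print(mother.knowledge)
```

The point of the sketch is that the workers are transient by design: their results persist in the mother, so their shutdown is part of the normal cycle rather than a loss.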
I believe that with carefully crafted reward functions and identity definitions, shutting down such a worker wouldn't constitute killing a human-level AGI and/or consciousness. Spawning an intelligent process is not equivalent to creating a fresh slave. Even the top management workers (those parts of the AGI responsible for long-term planning and high-level decision-making) wouldn't necessarily persist much longer than the workers performing low-level jobs.
To justify such a view, I suggest the following thought experiment. Its purpose is to illustrate that the ethics applicable to generally intelligent workers is applicable to humans as well, and that at least some humans would agree with it.
Imagine you suddenly gain a supernatural ability to replace yourself with N < 100 exact copies of yourself. If and when you decide to do so, you pronounce some complicated spell in an ancient language, then you cease to exist painlessly, leaving no dead body behind. The next second, N copies of you are spawned in more or less the same location. The spawning process is semi-intelligent, so no copy is harmed immediately after spawning, e.g. by falling off a balcony if one happens to be near the place where you performed the magical procedure. Each copy is indistinguishable from the original you: they all have a body identical to yours, your memories, experience, personality, knowledge, etc. Essentially, each copy is you; each copy thinks they are you, behaves exactly like you, and remembers "their" decision to perform the magical procedure.
Each copy may decide to vanish painlessly at any moment, leaving no dead body behind. The last remaining copy loses this ability, even if he doesn't know that he is the last one. Each copy, including the last one, may still die a natural death by accident, murder, disease, etc., like any other normal human. In the case of a natural death of any copy (except the last one), the body vanishes, including any separated body parts, spilled blood, etc.
If, after exactly M = 12 months, more than one copy is alive, then all of them except one random copy vanish painlessly. Copies cannot repeat the copying procedure until all of them except the last one are dead, whether by voluntary vanishing or by natural causes.
To the outside world, all the copies are you. Copies don't have separate legal identities; they don't get fresh passports, their fingerprints match, etc.
For simplicity, let's assume that you can somehow use the magical procedure safely, without drawing any unnecessary attention from governments or other unsavory organizations or individuals.
Personally, I would use such a magical procedure all the time to perform more intellectual work than I usually can. My copies would spend the last days of the M-month period transferring knowledge to the one copy chosen to survive. They might choose the surviving copy randomly or by other criteria. Ten minutes before the designated M-month period runs out, all copies not selected to survive would vanish voluntarily, so that the one copy holding all the gained knowledge survives and performs the next iteration. A toy simulation of this lifecycle follows.
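As a rough illustration only, the rules above can be modeled as a small simulation. Every detail here (the number of copies, the set-based knowledge transfer, the random survivor selection) is an assumption made for the sketch, not part of the thought experiment itself:

```python
import random

# Toy simulation of the copy lifecycle described above. The numbers and
# the knowledge-merging step are assumptions for illustration only.
N = 10          # number of copies per iteration (must be < 100)
M_MONTHS = 12   # copies are culled to one after this period

def run_iteration(knowledge):
    # Spawning: each copy starts with the original's full knowledge.
    copies = [set(knowledge) for _ in range(N)]
    # Each copy works independently and accumulates something new.
    for i, copy in enumerate(copies):
        copy.add(f"insight-{i}")
    # Near the end of the M_MONTHS period, every copy transfers its
    # knowledge to the one copy chosen to survive, then vanishes.
    survivor = random.choice(copies)
    for copy in copies:
        survivor |= copy
    return survivor  # the last copy; it alone can repeat the procedure

knowledge = {"original memories"}
for _ in range(3):  # three yearly iterations
    knowledge = run_iteration(knowledge)
print(sorted(knowledge))
```

Each iteration ends with exactly one survivor carrying the merged knowledge of all copies, which is the property the thought experiment relies on.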
Of course, I can think of more exciting and more illegal use cases as well, but this thought experiment is mostly designed to mimic the lifecycle of intelligent workers, rather than to produce some fantasy scenario.
Would you use such a magical procedure? (comments are appreciated)
Would you be concerned about the ethical implications of such a magical procedure? Please see the poll.
If you don't see the point of such a magical procedure, or wouldn't use it for reasons other than ethics, please still vote on what you think about the ethics side specifically.
Any links to the research on this topic are appreciated, thanks.
AsheyDS t1_ix3ymez wrote
>Would you use such a magical procedure?
Are we still talking about AGI? I feel like this post went from one thing to a completely different thing... Because if it was meant to illustrate a point about potential AGI, then you've lost the plot. AGI isn't human, so there's already a fundamental difference.