jamesj OP t1_ja04150 wrote

Yes, and these systems are a reflection of humanity as well, carrying with them much of the same potential and biases.

7

anon10122333 t1_ja0dauo wrote

Knowing which biases to include in the AI is going to be difficult.

A purely logical mind could suggest things we're culturally unprepared for.

  • Voluntary euthanasia (but for whom? At all ages?)

  • Acceptable losses in war and also in peace

  • Extinction of some species (or somehow weighing the balance between human lives and the environment)

  • Elimination of some populations where it calculates a "greater good" for humanity. Or for the environment, depending on its values. Or for the next gen of AI, for that matter.

  • Assassinations and rapid deployment of the death penalty

11

jamesj OP t1_ja0hl19 wrote

Yeah, good examples. Another one: if you take a utilitarian point of view, far more people will live in the future than exist in the present, so you might be willing to cause a lot of harm now to prioritize the well-being of future people.

6