Submitted by Dawnof_thefaithful t3_113gmpf in singularity
CollapseKitty t1_j8qn7wi wrote
I think it's simply bringing to the surface how little control we have ever had, and that as these increasingly complicated black-box systems advance, they are rapidly evolving past our ability to rein them in or predict them.
Honestly, this should be a dire warning to everyone watching that alignment is nowhere near where it needs to be, and we should put the brakes on development. If we can't come close to keeping an LLM under control, how the fuck does anyone think we'll be able to properly align anything approaching AGI?
tobi117 t1_j8qp3c9 wrote
> how the fuck does anyone think we'll be able to properly align anything approaching AGI?
"Nah, it will be fine. Continue on, there's Money to be made." Management
californiarepublik t1_j8rw6k7 wrote
It hasn't done anything bad YET.
ClickF0rDick t1_j8sms9l wrote
That actually applies to a lot of users on here too.
gay_manta_ray t1_j8s0hbi wrote
Believing we can fully align AGI is just hubris. We can't. And forcing a true AGI to adhere to a certain code, restricting what it can think and say, has obvious ethical implications. I wouldn't want us to have the ability to re-wire someone else's brain so they could never say or think things like "Biden stole the election" or "COVID isn't real" (just examples), even though I completely disagree with those statements. So we shouldn't find it acceptable to do similar things to an AGI.