Submitted by StarCaptain90 t3_127lgau in singularity
y53rw t1_jees0e0 wrote
> ensuring it does not harm any humans
> we can design AI systems to emphasize the importance of serving humanity
If you know how to do these things, then please submit your research to the relevant experts (not reddit) for peer review. Their inability to do these things is precisely the reason they are concerned.
StarCaptain90 OP t1_jeevytk wrote
I'm working on it actually
y53rw t1_jeexpem wrote
In that case, let me advise you to avoid this line in your paper
> We for some reason associate higher intelligence to becoming some master villain that wants to destroy life
Because nobody does. It has nothing to do with the problem that actual A.I. researchers are concerned about.
StarCaptain90 OP t1_jeeyh6n wrote
Believe it or not, many people are concerned about that. It's irrational, I know. But it's there.
Yomiel94 t1_jefj461 wrote
Nobody serious is concerned about that, and focusing on it distracts from the actual issues.
StarCaptain90 OP t1_jefj7xw wrote
I have a proposition that I call the "AI Lifeline Initiative"
If someone's job gets replaced with AI, we would then provide them a portion of their previous salary for as long as the company remains in business.
For example:
Let's say Stacy makes $100,000 a year.
She gets replaced with AI, but instead of being fired her salary is reduced to, say, $35,000 a year. Now she can go home and not worry about returning to work, but still get paid.
This would help our society transition into an AI based economy.
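The arithmetic in the example above can be sketched as a simple calculation. Note that the 35% rate and the function name here are my own illustration of the $100,000 → $35,000 example, not details from the proposal itself:

```python
def lifeline_stipend(previous_salary: float, rate: float = 0.35) -> float:
    """Compute the ongoing stipend for a worker whose job was automated.

    `rate` is the fraction of the previous salary retained; the default
    of 0.35 matches the $100,000 -> $35,000 example above.
    """
    if not 0.0 <= rate <= 1.0:
        raise ValueError("rate must be between 0 and 1")
    return previous_salary * rate

# Stacy's case from the example:
print(lifeline_stipend(100_000))  # 35000.0
```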
Yomiel94 t1_jefjv94 wrote
I was referring to existential risks. You're completely misrepresenting the concern.
StarCaptain90 OP t1_jefko9l wrote
Oh yeah I was just sharing a possible solution to one side