
y53rw t1_jees0e0 wrote

> ensuring it does not harm any humans

> we can design AI systems to emphasize the importance of serving humanity

If you know how to do these things, then please submit your research to the relevant experts (not Reddit) for peer review. Their inability to do these things is precisely the reason they are concerned.

7

StarCaptain90 OP t1_jeevytk wrote

I'm working on it actually πŸ™‚

2

y53rw t1_jeexpem wrote

In that case, let me advise you to avoid this line in your paper

> We for some reason associate higher intelligence to becoming some master villain that wants to destroy life

Because nobody does. It has nothing to do with the problem that actual A.I. researchers are concerned about.

1

StarCaptain90 OP t1_jeeyh6n wrote

Believe it or not, many people are concerned about that. It's irrational, I know. But it's there.

2

Yomiel94 t1_jefj461 wrote

Nobody serious is concerned about that, and focusing on it distracts from the actual issues.

0

StarCaptain90 OP t1_jefj7xw wrote

I have a proposition that I call the "AI Lifeline Initiative"

If someone's job gets replaced with AI, we would then provide them a portion of their previous salary for as long as the company is alive.

For example:

Let's say Stacy makes $100,000 a year.

She gets replaced with AI, but instead of being fired her salary is reduced to, let's say, $35,000 a year. Now she can go home and not worry about returning to work, but still get paid.

This would help our society transition into an AI-based economy.
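To make the arithmetic concrete, here's a minimal sketch of that payout rule. The `lifeline_payment` name and the flat 35% replacement rate are just illustrations pulled from the $100,000 → $35,000 example above, not a worked-out policy:

```python
def lifeline_payment(previous_salary: float, replacement_rate: float = 0.35) -> float:
    """Annual payment to a worker whose job was automated,
    paid for as long as the employing company stays in business.
    The 0.35 default mirrors the Stacy example; a real scheme
    would need to define this rate (and its duration) properly."""
    if previous_salary < 0:
        raise ValueError("previous_salary must be non-negative")
    return previous_salary * replacement_rate

# Stacy's case: $100,000 a year before automation.
print(lifeline_payment(100_000))  # 35000.0
```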

3

Yomiel94 t1_jefjv94 wrote

I was referring to existential risks. You’re completely misrepresenting the concern.

0

StarCaptain90 OP t1_jefko9l wrote

Oh yeah, I was just sharing a possible solution to one side of the problem.

3