
Leanardoe t1_irbzjds wrote

Look up Google LaMDA; they tested it with crowd-sourced data and it kept turning racist in conversation. Now they only use carefully vetted sources for its database. Same with Cleverbot: when it was in its prime it was very racist.

I found an article discussing the Google engineer's opinion. It's not a source from Google, but they likely buried that. The Cleverbot incidents are widely reported on YouTube. https://www.businessinsider.com/google-engineer-blake-lemoine-ai-ethics-lamda-racist-2022-7

3

[deleted] t1_irc1nus wrote

[deleted]

−1

CptRabbitFace t1_irc64tr wrote

For one example, people have suggested using AI in court sentencing in an attempt to remove judicial bias. However, AI models trained on biased data sets tend to recreate those biases. This sort of problem is what this bill of rights is meant to address.
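The point about biased training data can be sketched in a few lines. This is a hypothetical toy example (the data and the naive majority-vote "model" are invented for illustration, not any real sentencing system): if historical sentences were harsher for one neighborhood for reasons unrelated to the cases themselves, a model fit on that history simply learns the bias back.

```python
from collections import defaultdict

# Hypothetical "historical sentencing" records:
# (neighborhood, prior_record, harsh_sentence). The correlation between
# neighborhood "A" and harsh sentences here reflects biased past decisions,
# not anything about the cases.
history = [
    ("A", True,  1), ("A", False, 1), ("A", True,  1), ("A", False, 1),
    ("B", True,  1), ("B", False, 0), ("B", True,  0), ("B", False, 0),
]

def majority_by(feature_index):
    """Naive stand-in for a learner: predict the majority outcome
    observed for each value of one feature."""
    counts = defaultdict(lambda: [0, 0])  # [count of outcome 0, count of outcome 1]
    for row in history:
        counts[row[feature_index]][row[2]] += 1
    return {k: int(v[1] >= v[0]) for k, v in counts.items()}

# "Train" on neighborhood alone: the model reproduces the historical
# bias verbatim, predicting harsh sentences for everyone from "A".
model = majority_by(0)
print(model)  # {'A': 1, 'B': 0}
```

Any real learner given enough weight on a feature correlated with past biased outcomes will do the same thing, just less transparently.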

4

[deleted] t1_ird5caa wrote

[deleted]

1

Leanardoe t1_irfc9wf wrote

Welcome to the 21st century, where phytoplankton are dying out and microplastics are slowly being absorbed into our bloodstreams.

1

Leanardoe t1_irci520 wrote

It would be nice if it worked that way. What legislation requires and how companies actually implement those requirements tend to differ more than one might expect.

1