Submitted by Gari_305 t3_y0brkr in Futurology
Cheapskate-DM t1_irr0td6 wrote
The discrimination potential here is staggering - but what about diction?
Filtering job applicants by regional slang, academic vocabulary, and level of deference is already possible, but being able to mathematically optimize for the smartest - or dumbest - candidates is a dangerous tool.
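To make that concrete, here's a minimal sketch of the kind of diction filter being described. Every word list, weight, and applicant below is invented for illustration, not taken from any real screening product:

```python
# Hypothetical diction-based applicant scorer. All vocabularies and
# weights are made up; a real system would be far more opaque.

ACADEMIC  = {"heretofore", "paradigm", "methodology", "empirical"}
SLANG     = {"gonna", "y'all", "ain't", "lowkey"}
DEFERENCE = {"sir", "madam", "kindly", "humbly", "grateful"}

def diction_score(text: str) -> float:
    """Score a cover letter on lexical features alone."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    if not words:
        return 0.0
    hits = lambda vocab: sum(w in vocab for w in words)
    # Arbitrary weights: reward "academic" vocabulary and deference,
    # penalize slang. Exactly the kind of knob the comment warns about.
    return (2.0 * hits(ACADEMIC)
            - 1.5 * hits(SLANG)
            + 1.0 * hits(DEFERENCE)) / len(words)

applicants = {
    "A": "I am humbly grateful for the opportunity, sir.",
    "B": "Lowkey I'm gonna crush this job, y'all.",
}
ranked = sorted(applicants, key=lambda k: diction_score(applicants[k]),
                reverse=True)
print(ranked)  # ['A', 'B'] - B is filtered down purely on dialect
```

The point is how little it takes: a handful of word lists and a weighted sum already encode a regional and class bias.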
YareSekiro t1_irsa6y6 wrote
I mean, most of the world already uses some sort of standardized test - one that correlates strongly with IQ and moderately with socio-economic background - to filter who gets into higher education, which in turn shapes their career outcomes. Is that really so different?
Cheapskate-DM t1_irsdz90 wrote
The key difference here is refinement.
For example, let's take policing. There's a well-known problem of departments actively screening out people who are too smart, because they don't want to invest in field/street training for someone who's smart enough to go for a promotion to detective.
Sustaining that currently requires buy-in at a cultural level. With AI tools, however, you may need only one inserted bias to make everyone else go along with a "sorry, your compatibility score says it's a no".
Apply the same logic to other fields - screening for people who sound just smart enough for the job, but not smart enough to unionize or report problems to HR/OSHA.
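That "just smart enough" screen is trivially easy to encode. A hedged sketch, assuming the system emits some estimated-ability score per candidate - the bounds, names, and scores here are all hypothetical:

```python
# Hypothetical band-pass screen: keep only candidates whose estimated
# score falls inside a narrow window. Bounds and scores are invented;
# the point is how little code the policy requires.

LOWER, UPPER = 0.55, 0.75  # "smart enough for the job, not smart enough to leave"

def passes_screen(estimated_ability: float) -> bool:
    return LOWER <= estimated_ability <= UPPER

candidates = {"A": 0.50, "B": 0.62, "C": 0.91}
kept = [name for name, score in candidates.items() if passes_screen(score)]
print(kept)  # ['B'] - A is "too risky", C is "overqualified"
```

One line of policy, invisible to every applicant it rejects.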
NotSoSalty t1_irtk33x wrote
Increased efficiency doesn't suddenly make this system wrong; if anything, it should be more ethical than what we get now.
That one arbitrary bias is already in play - it's just in the hands of an unscrupulous human instead of an unscrupulous but at least consistent AI.
Orc_ t1_irv3l08 wrote
You're not even going to be able to lie ever again. They'll always know - including your every intention - because, within the limited perception of your senses, every atom in your body betrays itself.