Submitted by Gari_305 t3_y0brkr in Futurology
Comments
pbradley179 t1_irsq3oh wrote
Siri, I just want you to call Mom.
whatTheBumfuck t1_iru358s wrote
Apologize and bring her some flowers on Tuesday
yaosio t1_iru10rx wrote
They just have to see if a person has an active Reddit account for that.
Cheapskate-DM t1_irr0td6 wrote
The discrimination potential here is staggering - but what about diction?
Being able to filter job applicants based on regional slang, academic vocabulary and levels of deference is already possible, but being able to mathematically optimize for the smartest - or dumbest - candidates is a dangerous tool.
YareSekiro t1_irsa6y6 wrote
I mean, most of the world uses some sort of standardized test, which correlates strongly with IQ and moderately with socio-economic background, to filter people going into higher education, and thus shapes their career outcomes. Is that really so different?
Cheapskate-DM t1_irsdz90 wrote
The key difference here is refinement.
For example, let's take policing. There's a well-known problem of departments actively screening out people who are too smart, because they don't want to invest in field/street training for someone who's smart enough to go for a promotion to detective.
Sustaining that currently requires buy-in at a cultural level. However, with AI tools, you may need only one inserted bias to make everyone else go along with a "sorry, your compatibility score says it's a no".
Apply the same logic to other fields - screening for people who sound just smart enough for the job, but not smart enough to unionize or report problems to HR/OSHA.
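Not that anyone needs the math spelled out, but here's a toy sketch (every name, weight, and threshold is hypothetical; nothing here is from the article) of how a single inserted weight in an otherwise bland scoring function does all the gatekeeping, while the humans downstream only ever see the final number:

```python
# Hypothetical illustration: one quietly biased weight in a "compatibility"
# scorer is enough to filter out candidates who sound "too smart", and all
# anyone reviewing the output sees is a number.

def compatibility_score(features, weights):
    """Weighted sum of candidate features, clamped to the range 0-100."""
    raw = sum(weights[name] * value for name, value in features.items())
    return max(0.0, min(100.0, raw))

weights = {
    "years_experience": 4.0,
    "interview_rating": 6.0,
    "vocabulary_level": -8.0,  # the one inserted bias: penalize articulate speech
}

candidates = {
    "A": {"years_experience": 5, "interview_rating": 8, "vocabulary_level": 3},
    "B": {"years_experience": 5, "interview_rating": 8, "vocabulary_level": 7},
}

for name, feats in candidates.items():
    score = compatibility_score(feats, weights)
    verdict = "advance" if score >= 40 else "sorry, your compatibility score says it's a no"
    print(f"Candidate {name}: {score:.0f} -> {verdict}")
```

Candidate B interviews identically to A but scores 12 instead of 44, and nobody along the way has to consciously endorse the bias.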
NotSoSalty t1_irtk33x wrote
Increased efficiency doesn't suddenly make this system wrong; if anything, it should be more ethical than what we get now.
That one arbitrary bias is already in play; it's just in the hands of an unscrupulous human instead of an unscrupulous but fair AI.
Orc_ t1_irv3l08 wrote
You're not even gonna be able to lie ever again. They will always know, including your every intention; within the limited perception of your senses, every atom in your body betrays itself.
8to24 t1_irqzvfc wrote
Finally, an app U.S. officials will act quickly to regulate. Apps that promote authoritarian propaganda, anti-vax hysteria, Bitcoin pyramid schemes, etc. are fine. An app that can diagnose illnesses in our aging, rambling leaders and would-be leaders will be a bridge too far.
[deleted] t1_irr1xm3 wrote
I’m just absolutely loving that you put anti-vax propaganda on the same level as authoritarian propaganda. That was really good, thank you for the laugh lol
Tbkssom t1_irs5vi6 wrote
“Yeah doc, my back hurts and I feel tired a lot. Any idea what that is?”
“My diagnosis… Cancer!”
“Dammit… well, what can I do to treat it?”
“My diagnosis… Cancer!”
iNstein t1_irrkm17 wrote
Imagine calling your doctor, and a voice interrupts your conversation with the receptionist to say, "Go to the hospital straight away if you want to live." Perhaps it even notifies the hospital to be ready for you and maybe even dispatches an ambulance to pick you up.
Jabromosdef t1_irt4rpp wrote
In a utopian society sure. I don’t trust things like this that could be abused and misused on a massive scale. I would love to see the accuracy and precision of these tests in comparison to whatever the current gold standard is.
James-VanderGeek t1_irsujem wrote
Tech investment guy here: There is some fascinating research outlining the ability to detect both the presence and progression of Alzheimer’s through voice print alone. The potential for early diagnosis is amazing.
Gari_305 OP t1_irqwlm2 wrote
From the Article
>Someone who speaks low and slowly might have Parkinson's disease. Slurring is a sign of a stroke. Scientists could even diagnose depression or cancer. The team will start by collecting the voices of people with conditions in five areas: neurological disorders, voice disorders, mood disorders, respiratory disorders and pediatric disorders like autism and speech delays.
>
>The project is part of the NIH's Bridge to AI program, which launched over a year ago with more than $100 million in funding from the federal government, with the goal of creating large-scale health care databases for precision medicine.
SatansMoisture t1_irqzz10 wrote
Mom was right: mumbling is going to catch up with me. Sorry, Mom. I should have listened!
Flexi_102 t1_irr5671 wrote
Me to AI: I am Michael J. Fox. AI: You have Parkinson's.
ParkieDude t1_irr83yk wrote
Don't overlook "it may be something else"
SoylentRox t1_irsecq0 wrote
What's sad is that the AI is good enough to detect some of these problems but not smart enough to develop solutions. Knowing you have Parkinson's is worthless, as there is no treatment that slows the disease.
theMonkeyTrap t1_irsl5ak wrote
Where are all these cool AI applications that we hear about once and then never again? Seriously, there should be a headline replayer that republishes a couple of years' old news with today's date and then eventually reveals the actual date.
whatTheBumfuck t1_iru3eue wrote
Probably because there are severe difficulties with actually implementing it at scale.
HexiCore t1_irr3k9f wrote
Joke's on you, Google's algorithm already diagnoses people and then targets them with medical ads to address their needs. You don't need to do anything special. The algorithm just figures it out by monitoring your activities.
You may think this is a joke, but it's not.
Don't believe me? Watch YouTube videos on your phone and pay attention; if you start getting the same or similar medical ads... see your doctor.
chaosgoblyn t1_irruvgr wrote
Hmm, I thought this was just random prescription drug advertising.
Ralphinader t1_irs0mog wrote
I've been diagnosed with "bad grammar." What do I take for that?
Strategerizer t1_irtxems wrote
You speak two languages: English and bad English.
Tyrilean t1_irsrvdv wrote
I act completely differently when I speak to a machine. I tend to fumble my words and forget what I'm trying to say. I'm sure this would be pretty useless for me.
Venefercus t1_irt79eq wrote
I don't have an article about it, but there is a group in Switzerland having some success diagnosing mental deficits in babies based on movement patterns.
SkyKiddo t1_iru5pfr wrote
Thank God! Malpractice is the #1 cause of death in hospitals. Doctors really truly suck at their jobs.
richflys t1_irvbac2 wrote
Interesting article. But what the fuck is on the screen?
whatTheBumfuck t1_irqzm0y wrote
"We've analysed your speech and have determined that you are suffering from... being a total asshole."