Submitted by Draconic_Flame t3_11rfyk2 in Futurology
Zealousideal-Ad-9845 t1_jcbore3 wrote
I'm a software engineer working in automation. I've never built a deep learning model myself, but I know a lot about how they work. Here's my opinion. For the time being, every job is safe unless it's incredibly mundane and repetitive, requires no creativity, and carries no high stakes for failure. Right now, AI and automation are only "taking" jobs by fully or partially automating some of their tasks, which lightens the workload for human workers, increases their productivity, and in doing so reduces the need for a larger workforce. So you can accurately say that AI, automation, and machines have put some cashiers out of work, but that doesn't mean there aren't still human cashiers. There just aren't as many of them.
That said, if "super" AI becomes a thing (I'll define SAI as a model with learning capabilities equal to or exceeding those of a human being), then literally no job is safe. Not a single one. If the model brings as much nuance to its decision-making as I do, then it can write the code, design the systems, review the code, address vulnerabilities and maintenance concerns, and communicate its design process and concerns, and it can do all of those things as well as I can. At that point, it's also safe to say it could take manual jobs too. We can already build robots with leg motors strong enough and finger servos precise enough to operate as well as a human; it's just a matter of writing software that has coordination and dexterity and knows what to do when a trash bin has fallen over in its path. And if AI reaches the level I'm talking about, it could do those things.