QristopherQuixote t1_j8nj781 wrote
Elong Mush doesn’t understand the difference between strong AI, which emulates a human mind and then some, and weak AI, which is task-focused and would never be considered “conscious.” Strong AI doesn’t exist yet. ChatGPT is weak AI on a large model.
Throwaway08080909070 t1_j8nmkng wrote
The list of what Musk doesn't understand is too long for the internet to contain it.
8instuntcock t1_j8nou64 wrote
Dude is not a programmer, just a controversy machine.
QristopherQuixote t1_j8nuhs2 wrote
Yup. His flailing around with engineers at Twitter looked like a Dilbert cartoon with the pointy-haired boss trying to talk about code.
AI seems like magic until you look under the hood. There's an enormous amount of human intelligence and judgment that goes into tweaking AIs to perform well. My first neural network was a class project in grad school to find a nose on a human face. When I got done and had it working, I was happy and also disappointed to learn how they actually worked. It drove home for me the differences between weak and strong AI.
str8grizzlee t1_j8ozla8 wrote
AI doesn’t have to be sentient to cause massive social problems.
QristopherQuixote t1_j8p50g7 wrote
It already causes problems for credit approvals, fraud detection, etc. However, this is very different from a sentient AI trying to become our digital overlord.
gundam1945 t1_j8poopz wrote
This type of social problem is different from the one in Elon's mind.
647843267e t1_j8nnirs wrote
Weak AI is good enough to replace a whole lot of workers. It's not like most workers really think hard at their jobs.
SidewaysFancyPrance t1_j8nvonl wrote
Weak AI is good enough to sorta replace workers, in areas where accuracy is not super important (customer-facing stuff, where people are already used to corporations providing minimal/poor service).
If you train your customers to accept less and less every year, then eventually replacing an underpaid, poorly-trained human with a weak AI is not going to change much except save money. AI is going to end up in places C-levels already didn't care about and were strangling.
QristopherQuixote t1_j8noa2w wrote
Very true. However, weak AI is more about task automation and less about replacing human minds. Large segments of transportation will be replaced by weak AI. I think task AI will replace many jobs and augment many others.
ethereal3xp OP t1_j8njvj4 wrote
Aren't Musk's Tesla self-driving cars considered strong AI?
Throwaway08080909070 t1_j8nmmu7 wrote
It isn't strong, it isn't AI, and it isn't self-driving.
So no.
QristopherQuixote t1_j8nnvqi wrote
No. It is still just task based AI.
ethereal3xp OP t1_j8nokh1 wrote
If it's not task-based AI... what other kind of AI will there be?
Task-based AI caters to human needs, to make daily living easier.
Even made-up droids from movies like Interstellar are mainly task-based. They're not going to override a human command, only make suggestions based on calculations.
QristopherQuixote t1_j8nqhxa wrote
Strong AI implies consciousness and self-awareness. This has been the holy grail of AI since the 1970s.

Neural networks are function emulators: input produces the desired output. They use classified or labeled training data and feedback to self-correct (back propagation) until their functional output is acceptable. Deep learning and layered networks leverage models that were already trained to produce a more complex network. There are several different types of neural networks, like convolutional, feed-forward, etc. By using multi-model and filtering approaches, models can be combined so that more and more complex tasks can be accomplished. For example, driving involves several models working in concert: one that determines the road type, a few more for feature extraction, etc.

Many statistical models such as clustering and regression are now called “machine learning” and AI, even though they weren't when I first learned them. Many of the original AI systems were rules-based and were called “expert systems.” However, how these techniques produce outputs is dramatically different from how a brain does. Mimicking human behavior and capabilities is very different from possessing them the way a creature with a brain does.
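The "function emulator" idea is easy to see in code. Here's a toy sketch (mine, not from the thread; all names and hyperparameters are illustrative): a one-hidden-layer network trained by backpropagation on labeled data until its output is acceptable, learning the XOR function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Labeled training data: inputs X and the desired outputs y (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Random initial weights and biases: 2 inputs -> 8 hidden units -> 1 output.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for _ in range(20000):
    # Forward pass: the network's current guess at the function.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the error back through
    # the layers and nudge the weights toward the desired output.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out)
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h)
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out).ravel())  # after training, approximates XOR
```

Nothing here "thinks": the loop just adjusts numbers until the input-to-output mapping matches the labels. Scale the same recipe up and you get image classifiers and language models, but it remains weak, task-based AI.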
QristopherQuixote t1_j8nw2c1 wrote
You shouldn't rely on science fiction to be your guide on how AI will evolve. In the Terminator films, Skynet is evil. In I, Robot it was insane. In Bicentennial Man, it became fully human. In Star Wars it was benign and essentially slavish. The robots in Interstellar were essentially assistants who did not act independently. In Transcendence a human mind was "uploaded," creating a strong AI. In Chappie, strong AI happened by accident, forming in a robot and through a digital copy of a human mind.
Strong AI doesn't exist... yet.
ethereal3xp OP t1_j8ny7ty wrote
>You shouldn't rely on science fiction to be your guide on how AI will evolve in the future.
Why not?
They are ideas...good or bad
Contagion was a brilliant movie... and many parts of it came true with Covid in real life.
Self-learning, consciousness, complex actions... that's what's considered strong AI. First, why are these things needed for humanity?
Eventually... it will mean "overriding" some human decisions.
Is this notion better for humanity or not? Or does it pose a danger?
And who will make this decision... a group of smart humans, or the so-called strong AI they create?
QristopherQuixote t1_j8ogjgs wrote
Do you need a history of all the things SciFi got wrong? Asimov, Heinlein, etc?
Contagion is based on actual science. Read books by Robin Cook if you want to see an actual scientist write science fiction; his book "Vector" predicted the use of anthrax as a terrorist weapon. However, folks like Michael Crichton have been spectacularly wrong even though he had an MD. Crichton was a science skeptic in some respects who questioned bans on DDT and wrote a book that made a mockery of environmental activism. He also wrote an anti-AI book, "Prey," featuring a swarm intelligence made of nanobots that was beyond silly.
We don't even know if strong AI is possible, and it doesn't appear to be necessary for us to get value from task-based AI. Artificial neural nets are everywhere, including in cruise control in cars, smart thermostats, etc. Some smartphones, like the Pixel, have them. Components of AI are being used more and more.
We cannot confuse complexity with strong AI. Very complex AI systems can still be weak, task-based AI. Consciousness and independent action are not part of AI now; no existing AI system can be considered to be "thinking." This idea that an AI overlord will emerge to override human action is pure science fiction. The human brain has trillions of interconnections between billions of neurons, with an incredible input system. No computer can match it yet.
Jake0024 t1_j8rx7du wrote
> what other kind of AI will there be?
General (not task based) AI
ethereal3xp OP t1_j8t2rub wrote
Ok but what?
Even self-driving cars (automation)... are task-based if you think about it.
Their focus is to drive a car, and they won't drive as well as a human... if you factor in traffic or accident situations.
General AI, or strong AI as some have labelled it... is almost like a human brain.
It can deep-learn, make advanced calculations, and make conscious decisions without human approval. It's a long way away...
Jake0024 t1_j8tlta8 wrote
Correct, a self driving car is not general AI. And yes, it's a long way off. That's why Musk freaking out about ChatGPT is so hilarious.
ethereal3xp OP t1_j8tn6g5 wrote
Because he doesn't trust anybody other than himself.
General AI in theory would mean an easier, more comfortable life for humans.
But humans may end up becoming dumber. And if the AI's inventor were some kind of environmentalist... he could set the AI to act based on "saving the planet."
Meaning it could shut off power, gas, factories (when not needed), etc... even if that meant some human suffering.
And a human couldn't override it.