darkmatter8879 t1_ivzy9h7 wrote
Reply to comment by RobleyTheron in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
I know that AI is not as impressive as they make it out to be, but is it really that far off?
RobleyTheron t1_iw1cvle wrote
I've been at it for 7 years, and I got involved because I was excited and thought we, as a society, were a lot closer to AGI.
The reality is that ALL artificial intelligence today is pattern matching and nothing more. There is no self-reinforcement learning, no unsupervised learning, no neuroplasticity between divergent subjects, and no basic general comprehension (not even that of an infant).
The closest our (human) supercomputers can muster is a few seconds of mimicking the neural connections of a silkworm.
The entire fundamental architecture of modern AI will need to be rethought from the ground up if we ever hope to reach self-aware AI.
JKJ420 t1_iw1wp6c wrote
Hey everybody! This guy has been at it for a whole seven years and he knows more than anybody!
RobleyTheron t1_iw2ryvr wrote
Most people in here don't know anything about actual artificial intelligence. They're caught up in completely unrealistic hope and fear bubbles.
2012 was really the breakthrough, with ImageNet and convolutional neural networks. Self-driving cars, conversational AI, image recognition: it's all based on that architecture.
The only thing that really changed that year was that datasets and servers became big enough to show progress. Most current AI architecture is based on Geoffrey Hinton's work from the 1980s.
Seven years out of those ten isn't nothing.
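If anyone wants to see what that architecture amounts to, here's a minimal convolutional-net sketch (assuming PyTorch; toy layer sizes picked for illustration, not any real model):

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Toy ImageNet-style classifier: stacked conv filters plus pooling."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One forward pass on a fake batch of 32x32 RGB images.
logits = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

Stacked learned filters plus downsampling; that's the core idea the ImageNet-era models scaled up.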
unflappableblatherer t1_iw2442a wrote
Right, but isn't the point that we don't know what the limits of pattern matching are? We keep pushing the envelope and finding that more and more impressive capabilities emerge from pattern-matching systems. What if it's pattern matching all the way to AGI?
As for self-awareness, the goal of AI isn't to precisely replicate the mechanisms that produce human intelligence. The goal is to replicate the functions of intelligence. It's a separate question whether a system with functional parity would be self-aware or not.
RobleyTheron t1_iw2t6ir wrote
Fair, I'll grant that human-level intelligence and consciousness could be separate. My own entirely unscientific opinion is that consciousness arises from the complex interactions of neurons: the more neurons, the more likely you are to be conscious.
I don't think pattern matching will ever get us to AGI. It's way, way too brittle, and it completely lacks understanding. A lot of learning and intelligence comes from transference: I know the traits of object A, I recognize that the traits of object B are similar, therefore B will probably act like A. That jump is not possible with current architecture.
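To make that transference jump concrete, here's a toy Python sketch (made-up traits, illustrating the concept, not how neural nets work):

```python
# Toy trait transference: infer an unknown behavior of B from its
# similarity to a known object. Made-up traits, purely illustrative.
known = {
    "dog":  {"legs": 4, "furry": True, "barks": True},
    "fish": {"legs": 0, "furry": False, "barks": False},
}

def most_similar(traits, candidates):
    """Return the known object whose traits overlap most with `traits`."""
    def overlap(name):
        return sum(traits.get(k) == v for k, v in candidates[name].items())
    return max(candidates, key=overlap)

# We meet object B: we can observe some traits, but not whether it barks.
b_traits = {"legs": 4, "furry": True}
nearest = most_similar(b_traits, known)
print(f"B resembles a {nearest}, so B probably barks: {known[nearest]['barks']}")
```

The point is that nothing inside a trained net carries explicit, reusable traits like this.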
eldenrim t1_iwk5it8 wrote
Your second paragraph just describes pattern matching though?
GreenWeasel11 t1_iw1hgpn wrote
What do you make of people like Ben Goertzel, who are obviously highly intelligent and explicitly working toward AGI, but apparently haven't realized how hard it is, since they still think it's a few decades away at most?
SurroundSwimming3494 t1_iw1j5o8 wrote
Other than Goertzel, who else thinks it's a few decades away at most, and how do you know Goertzel thinks that, if you don't mind me asking?
RobleyTheron t1_iw2sfhp wrote
There's an annual AI conference, and every year they ask the researchers how far away we are from AGI; the answers range from 10 years, to 100, to "it's impossible." There is absolutely zero consensus from the smartest people in the industry on the timeline.
SurroundSwimming3494 t1_iw31bli wrote
Do you know the name of the conference?
GreenWeasel11 t1_iw3yp9u wrote
Perhaps the AGI Conference?
RobleyTheron t1_iwggw55 wrote
That is correct 😀
botfiddler t1_iw6lnoa wrote
Hmm, Ben said 5-30 years a while ago.
SurroundSwimming3494 t1_iw8k79u wrote
Link? And when did he say this?
botfiddler t1_iw8rrvt wrote
Lex Fridman interview, YouTube.
GreenWeasel11 t1_iw3zyht wrote
Here's Goertzel in 2006; in particular, he said "But I think ten years—or something in this order of magnitude—could really be achievable. Ten years to a positive Singularity." I don't think he's become substantially more pessimistic since then, but I may have missed something he's said.
One also sees things like "Why I think strong general AI is coming soon" popping up from time to time (specifically: "I think there is little time left before someone builds AGI (median ~2030). Once upon a time, I didn't think this."). I don't know anything about that author's credentials, but the fact that someone can assess the situation and come to that conclusion demonstrates, at the very least, that if AI is actually as hard as the pessimists believe, that fact has not been substantiated and publicized as well as it should have been by now. Though actually, it's probably more a case of the people who understand how hard AI is not articulating it convincingly when they do publish on the subject; Dreyfus may have had the right idea, but the way he explained it was nontechnical enough that a computer scientist with a religious belief in AI's feasibility could read his book and come away unconvinced.
botfiddler t1_iw5r1kv wrote
>The reality is that ALL artificial intelligence today is pattern matching and nothing more.
This sounds like a construction built to make your point. Reasoners exist; you can write a program that does logic. It's just not where the progress is happening. Something more human-like would need to be constructed out of different parts.
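For example, a toy forward-chaining reasoner fits in a dozen lines (made-up facts and rules, just to illustrate):

```python
# Toy forward-chaining reasoner: apply if-then rules to known facts
# until nothing new can be derived. Made-up rules, purely illustrative.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # both conclusions derived from one starting fact
```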
Orc_ t1_iwas9a6 wrote
> The entire fundamental architecture of modern AI will need to be rethought from the ground up if we ever hope to reach self-aware AI.
Self-aware AI? We don't even know if that's possible. The entire point is to automate things with dumb AGIs; that's a current and credible goal, not trying to bring a machine to life.