nosmelc t1_iw11l7r wrote
Reply to comment by havenyahon in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
We might create greater than human intelligence in some ways without understanding how the human brain works.
havenyahon t1_iw12f8n wrote
Sure, we might. But without understanding the fundamentals of how brains and bodies do what they do, we might also just end up creating a bunch of systems that will do some things impressively, but will always fall short of the breadth and complexity of human intelligence because they're misguided and incomplete at their foundations. That's how it's gone to date, but there's always a chance we'll fluke it?
kaushik_11226 t1_iw18s70 wrote
>havenyahon
When you say AI, do you mean basically a digital version of a human? I don't think AI needs to have consciousness or emotions.
havenyahon t1_iw19z2s wrote
Sure, we already have that. The question of the thread is about AI that can be considered equivalent to human intelligence, though. One of the issues is that, contrary to traditional approaches to understanding intelligence, emotions appear to be fundamental to it. That is, they're not separate from reasoning and thinking; they're necessarily integrated into that activity. The neuroscientist Antonio Damasio has spent his career on work revealing this.
That means that it's likely that, if you want to get anything like human intelligence, you're going to at least need something like emotions. But we have very little understanding of what emotions even are! And that's just the tip of the iceberg.
Like I say, we've thus far been capable of creating intelligent systems that can do specific things very well, even better than a human sometimes, but we still appear miles off creating systems that can do all the things humans can. Part of the reason for that is because we don't understand the fundamentals.
kaushik_11226 t1_iw1cfb3 wrote
>Like I say, we've thus far been capable of creating intelligent systems that can do specific things very well, even better than a human sometimes,
I do think this is enough. What we need is an AI that can rapidly increase our knowledge of physics, biology, and medicine. These things, I think, have objective answers. True human intelligence, basically a human but digital, seems very complicated, and I don't think it's needed to make the world a better place. Do you think this can be achieved without a human-level AI?
havenyahon t1_iw1cn1n wrote
That's just not what I'm talking about, though. I agree we can create intelligent systems that are useful for specific things and do them better than humans. We already have them. We're talking about human-like general intelligence.
MassiveIndependence8 t1_iw19s7z wrote
That's a bit backwards. What makes you think that “bunch of systems” will fall short in terms of breadth and complexity, and not the other way around? After all, without knowing how to play Go, or how the human mind works when playing Go, researchers have created a machine that far exceeds what humans are capable of.

A machine doesn't have to mimic the human mind, it just has to be more capable. We are trying to create an artificial general intelligence: an entity able to instruct itself to achieve any goal within an environment. We only draw the parallel to ourselves because we are the only AGI we know of, but we are not the only kind of AGI possible, not to mention our brains are riddled with artifacts that are meaningless for intelligence in the purest sense, since we were made for survival through evolution. Fear, the sense of insecurity, the need for intimacy, etc. are all unnecessary components for AGI.

We shouldn't expect the machine to be like us; it will be something much more foreign, like an alien. If it can somehow be smart enough, it would look at us the way we look at ants: two inherently different brain structures, yet one capable of understanding the other. It doesn't need to see the world the way we do; it only needs to see how simple we are and pretend to be us.
havenyahon t1_iw1bwsb wrote
>That's a bit backwards. What makes you think that “bunch of systems” will fall short in terms of breadth and complexity, and not the other way around?
You mean apart from the entire history of AI research to date? Do you know how many people since the 50s and 60s have claimed to have the basic system down, saying we now just need to feed it data and it will spring to life? The reason they've failed is that we didn't understand the fundamentals. We still don't. That's the point. It's not backwards; that's where we should begin.
>A machine doesn't have to mimic the human mind, it just has to be more capable. We are trying to create an artificial general intelligence: an entity able to instruct itself to achieve any goal within an environment.
Sure, there may be other ways to achieve intelligence. In fact we know there are, because there are other animals with different physiologies that can navigate their environments. The point, again, is that we don't have an understanding of the fundamentals. We're not even close to creating something like an insect's general intelligence.
>Fear, the sense of insecurity, the need for intimacy, etc. are all unnecessary components for AGI.
I don't mean to be rude when I say this, but this is precisely the kind of naivety that led those researchers to create systems that failed to achieve general intelligence. In fact, as it turns out, emotions appear to be essential for our reasoning processes. There's no reasoning without them! As I said in the other post, you can see the work of the neuroscientist Antonio Damasio to learn a bit about how our understanding of the mind has changed thanks to recent empirical work. It turns out that a lot of those 'artifacts' you're saying we can safely ignore may be fundamental features of intelligence, not incidental to it.
MassiveIndependence8 t1_iw1egj2 wrote
>The reason they've failed is that we didn't understand the fundamentals. We still don't. That's the point. It's not backwards; that's where we should begin.
Nope, they failed because there wasn't enough data and the strategy wasn't computationally viable. They did, however, have the “basic system down”; it's just not very efficient from a practical standpoint. An infinitely wide neural net is mathematically proven to be able to approximate any continuous function; it just does so in a very lengthy way, without giving much certainty about how accurate or close the approximation is (there's a toy sketch of this below). But yes, they did have *a* basic system down; they just haven't found the right one yet. All we have to do now is find a way to cut corners, and once enough corners are cut, the machine will learn to cut them by itself. So no, we do not need to know the structural fundamentals of how a human mind works; we do, however, need to know the fundamentals of how such a mind might be created.
We are finding ways to make the “fetus”, NOT the “human”.
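To make the approximation point concrete, here's a toy sketch (assuming numpy and scikit-learn are installed; the target function, network width, and hyperparameters are arbitrary choices of mine, purely illustrative):

```python
# Toy illustration of the universal-approximation idea: a single hidden
# layer, made wide enough and given enough samples, can fit an arbitrary
# continuous function on a bounded interval -- slowly and inefficiently.
import numpy as np
from sklearn.neural_network import MLPRegressor

def target(x):
    """An arbitrary continuous function to approximate."""
    return np.sin(2 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 1))   # sample the input domain
y = target(X[:, 0])                      # noiseless targets

# One hidden layer, brute-force width: "lengthy", but it converges.
net = MLPRegressor(hidden_layer_sizes=(200,), activation="tanh",
                   max_iter=5000, random_state=0).fit(X, y)

X_test = np.linspace(-3, 3, 200).reshape(-1, 1)
err = np.abs(net.predict(X_test) - target(X_test[:, 0])).max()
print(f"max abs error on the interval: {err:.3f}")
```

The point isn't that this is how a mind works, just that brute width plus data gets you convergence without any structural understanding of the thing being approximated.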
Also, “emotions”, depending on your definition, certainly do come into play in the creation of AI; that's the whole point of reinforcement learning. The difference lies in what the “emotions” are specifically catering to. In humans, emotions serve as a directive for survival. In machines, reward is a device that deters the machine from pathways that result in failure at a task and nudges it towards pathways that are promising. I think we can both agree that we can create a machine that solves complicated abstract math problems without it needing to feel horny first.
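Here's a minimal tabular Q-learning sketch of what I mean (the toy chain world and every name in it are my own illustrative inventions, not from any particular library): the reward term plays the “emotion” role, a small penalty that deters failure pathways and a payoff that nudges the agent toward promising ones.

```python
# Minimal tabular Q-learning: reward as the machine's "emotion".
# Toy chain world: states 0..4, start at 0; reaching state 4 pays +1,
# every other step costs -0.01.
import random

n_states, goal = 5, 4
actions = (-1, +1)                         # step left or step right
Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
alpha, gamma, eps = 0.5, 0.9, 0.1          # learning rate, discount, exploration

random.seed(0)
for episode in range(500):
    s = 0
    while s != goal:
        # epsilon-greedy: mostly follow the current best guess
        a = random.randrange(2) if random.random() < eps \
            else max(range(2), key=lambda i: Q[s][i])
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == goal else -0.01   # the "nudge": penalty until success
        # pull Q toward reward plus discounted best future value
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# Learned policy: action index 1 ("go right") from every non-goal state
print([max(range(2), key=lambda i: Q[s][i]) for s in range(n_states - 1)])
```

Nothing in there is fear or intimacy; a bare scalar signal does all the directive work.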
havenyahon t1_iw1eui3 wrote
>All we have to do now is find a way to cut corners, and once enough corners are cut, the machine will learn to cut them by itself.
Yeah it all sounds pretty familiar! We've heard the same thing for decades. I guess we'll have to continue to wait and see!
MassiveIndependence8 t1_iw1ewxr wrote
Seems to be going pretty well so far; I guess we'll see indeed.
TheLastSamurai t1_iwcznnm wrote
Exactly. There are many phenomena in physics we don't understand, yet we can still advance engineering and the world without knowing why; the same goes for medicine, where examples abound. I think this is overemphasized: we could replicate or surpass human intelligence without knowing exactly why or how we did it.