havenyahon t1_iw1hfpy wrote

Lawrence Shapiro is a good one to start with. Can recommend this, and he also edited a Routledge handbook. The Stanford Encyclopedia entry he wrote is also a good overview of some of the philosophical context, but doesn't go too heavily into the empirical work. For an overview of some of the experimental work, this is worth a look.

1

havenyahon t1_iw1eui3 wrote

>All we have to do now is to find a way to cut corners and once enough corners are being cut, the machine will learn to cut by itself.

Yeah it all sounds pretty familiar! We've heard the same thing for decades. I guess we'll have to continue to wait and see!

1

havenyahon t1_iw1e19i wrote

Honestly, I think that's probably just as likely to fail, because our best and most cutting-edge science is beginning to show that, as far as minds are concerned, it's not just neurons that matter: the whole body is involved in cognition. The research on embodied cognition, in my view, casts doubt on whether brain emulation is going to cut it. That's no reason not to work on it, though! No doubt we'll learn lots of useful things along the way. But I personally believe that understanding the role of the body in cognition will open up new ways of modelling and instantiating AI. We've only just begun that journey, though.

1

havenyahon t1_iw1cn1n wrote

That's just not what I'm talking about, though. I agree we can create intelligent systems that are useful for specific things and do them better than humans. We already have them. We're talking about human-like general intelligence.

1

havenyahon t1_iw1bwsb wrote

>That’s a bit backwards, what makes you think that “bunch of systems” will fall short in terms of breadth and complexity and not the other way around?

You mean apart from the entire history of AI research to date? Do you understand how many people since the 50s and 60s have claimed to have the basic system down, so that "we just need to feed it data and it will spring to life"? The reason they've failed is that we didn't understand the fundamentals. We still don't. That's the point. It's not backwards; that's where we should begin.

>Machine doesn’t have to mimic the human mind, it just has to be more capable . We are trying to create an artificial general intelligence, an entity that is able to self instruct itself to achieve any goals within an environment.

Sure, there may be other ways to achieve intelligence. In fact we know there are, because there are other animals with different physiologies that can navigate their environments. The point, again, is that we don't have an understanding of the fundamentals. We're not even close to creating something like an insect's general intelligence.

>Fear, the sense of insecurity, the need for intimacy, etc… are all unnecessary component for AGI.

I don't mean to be rude when I say this, but this is precisely the kind of naivety that led those researchers to create systems that failed to achieve general intelligence. As it turns out, emotions appear to be essential to our reasoning processes; there's no reasoning without them! As I said in the other post, see the work of the neuroscientist Antonio Damasio to learn a bit about how our understanding of the mind has changed thanks to recent empirical work. It turns out that a lot of those 'artifacts' you say we can safely ignore may be fundamental features of intelligence, not incidental to it.

1

havenyahon t1_iw19z2s wrote

Sure, we already have that. The question in the thread is about AI that can be considered equivalent to human intelligence, though. One of the issues is that, contrary to traditional approaches to understanding intelligence, emotions appear to be fundamental to it. That is, they're not separate from reasoning and thinking; they're necessarily integrated into that activity. The neuroscientist Antonio Damasio has spent his career on work revealing this.

That means that if you want anything like human intelligence, you're likely to need at least something like emotions. But we have very little understanding of what emotions even are! And that's just the tip of the iceberg.

Like I said, we've so far been capable of creating intelligent systems that do specific things very well, sometimes even better than a human, but we still appear miles off creating systems that can do all the things humans can. Part of the reason is that we don't understand the fundamentals.

1

havenyahon t1_iw12f8n wrote

Sure, we might. But without understanding the fundamentals of how brains and bodies do what they do, we might also just end up creating a bunch of systems that do some things impressively but always fall short of the breadth and complexity of human intelligence, because they're misguided and incomplete at their foundations. That's how it's gone to date, but there's always a chance we'll fluke it?

2

havenyahon t1_iw0xs83 wrote

That's a pretty ignorant statement. My field is philosophy of mind and cognitive science. I was never convinced that the Turing Test was an adequate measure of machine intelligence, but I understand the context within which Turing proposed it, and the challenges of measuring that intelligence. It's not 'dumb', even if it's inadequate. The only people who say that type of shit are people ignorant of that context.

9

havenyahon t1_iw0xjhm wrote

I work in cognitive science, and it's so nice to see a reasonable and measured take on AI for once! We are 50-100 Nobel prizes away from understanding what human brains/bodies are doing, let alone from creating machines that do it too.

3