17:48 Tuesday 2nd December 2014
BBC Radio Cambridgeshire
CHRIS MANN: Cambridge professor Stephen Hawking says that efforts to create thinking machines pose a threat to our very existence. Best known of course for his A Brief History of Time, the iconic scientist is almost completely paralysed, due to a motor neurone-type condition. His warning came in response to a question about a revamp of a technology he uses to communicate, which involves a basic form of AI.
(TAPE)
STEPHEN HAWKING: The primitive forms of artificial intelligence we already have have proved very useful. But I think the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own, and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.
(LIVE)
CHRIS MANN: Professor Stephen Hawking there. He was talking to the BBC’s Rory Cellan-Jones at the Department of Mathematics at the University of Cambridge. So let’s get some reaction. Professor John Daugman is from the Artificial Intelligence Group at the University of Cambridge. What does he think of Professor Hawking’s warning?
(TAPE)
JOHN DAUGMAN: Well this is a very interesting and provocative and colourful idea. It’s an old idea. You think about the Frankenstein scenario and the golem. There’s a long tradition of creating something that possibly turns out to be your nemesis. And so it’s not a new idea, but of course it becomes more realistic in some ways day by day, because computing is becoming very powerful and machines are getting very intelligent, at least in specific tasks. Even if humans were to try to remain in control of machines that inevitably become smarter than humans, and to program them always to look after our best interests, which I think was one of the science fiction rules, the machines may develop a rather different concept of what is in humans’ best interests than we have ourselves. And I think I heard one person say that maybe they would come to regard human safety as paramount, and therefore the robots, if they had control, would decide to lock up all humans in concrete bunkers on opium drips.
CHRIS MANN: (LAUGHS) Very good. But we program the computers, so doesn’t that mean that we are in control? Can they interpret the program in a different way, or change things themselves?
JOHN DAUGMAN: Well one vision that’s been around for a long time is soft-programming machines. The problem with machine learning today is that it’s too brittle. It doesn’t generalise. And one solution that has been proposed is that we really need to produce machine learning algorithms or methods which become soft-programming, and so one may have control initially, but it may fade away.
CHRIS MANN: The doomsday scenario during the Cold War for instance was that these early warning systems would somehow spark a nuclear war which got out of control, because the responses are built in, a bit like the start of the First World War really.
JOHN DAUGMAN: Indeed. And just like the scenario of the wonderful film Dr Strangelove. You remember the doomsday scenario. Part of the mystery here is what you could call the paradox of cognitive penetrance. Cognitive penetrance is what we’re good at doing versus what we understand how to do. And those are very different. So for example humans are extremely good at things like face recognition and acquiring a language. All children basically acquire their native language. But we don’t know how to program machines to do those things. Face recognition is very primitive in computer vision today, and so is language acquisition. So the things that all humans take for granted and do with high competence, like riding a bicycle or recognising faces or learning to understand languages, are things that we have difficulty in programming machines to do. And conversely the things that we easily program machines to do, that we understand how to draft algorithms for, are things that we are very bad at, that we perform very poorly. So for example let me ask you a question. What is 51 times 39? Quickly. Come on, come on. 51 times 39. What’s so hard about that? What’s so hard about that, stupid? I mean that’s trivial.
CHRIS MANN: (LAUGHS)
JOHN DAUGMAN: And it would take a nanosecond for a bit of code to do simple arithmetic. You understand the theory of arithmetic, but you’re not very good at it. Don’t take it personally. It’s just ..
CHRIS MANN: No.
JOHN DAUGMAN: All of us.
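[TRANSCRIBER’S NOTE: For readers reaching for a pencil, the answer to Professor Daugman’s question is 1989 (51 times 40 is 2040, minus 51). The one-line Python sketch below is illustrative only and was not part of the broadcast; it simply shows how trivial the machine’s side of this task is.

    # Illustrative only: the mental-arithmetic question posed above.
    # 51 * 39 = 51 * 40 - 51 = 2040 - 51 = 1989
    print(51 * 39)  # prints 1989
]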
CHRIS MANN: Hopefully the code can’t yet produce this programme, but you never know. Of course the robots are already replacing mankind on the battlefield, the drones. And the US Army is going to replace thousands of soldiers with remote-controlled vehicles very soon.
JOHN DAUGMAN: Yes. Well it will be less .. depending on who they’re facing on the other side, it could make war much less bloody, and more like video games. And that might be welcome.
CHRIS MANN: And of course self-driving cars, robotic postal deliveries, a lot of this has already happened. The question is, John, this was all supposed to make life easier for us, but it hasn’t really, has it?
JOHN DAUGMAN: Well it’s partly the distinction between performative competence and meaningful intelligence. It’s like chess-playing programs. Computer algorithms are extremely good at playing chess. They are basically world champions today. But they don’t really understand how they’re doing it, they can’t explain it, and their skills don’t generalise to any other task. And that’s not at all the way it is for humans. And so there are these basic paradoxes between, for example, intelligence and consciousness. And of course human meaning, the meaning that people find in their lives, is not really derived from intellectual competence like being good at mental arithmetic. But it’s somehow derived from the soft world of social life and social interactions.
(LIVE)
CHRIS MANN: Professor John Daugman there from the Artificial Intelligence Group at the University of Cambridge.