Computerphile: Series on AI security and unexpected behaviours with Robert Miles

Any sci-fi geeks here who like to think about things?
I watch Computerphile (and its sister channels) occasionally, but I've just binge-watched this particular series of videos and I find it absolutely fantastic (especially videos 6 and 7).
Rob Miles is the type of person I've always admired: thinking outside the box and coming up with ideas and solutions that a normal person wouldn't even think of. A crucial property for someone who might one day succeed in creating the first AI, I'd say.
His points on how dangerous AI can really be, in ways we can't really imagine or expect, are brilliant. How to make an AI want the same things humans do? How to convince it to let humans alter its code? Is there even a chance to have a safety switch on an AI? How to code an internal reality model so an AI can predict the outcomes of its actions? Why the "three laws of robotics" would never work in real life. Etc.
Fascinating watch, in short. Highly recommended for anybody interested in AI.

Source: https://www.youtube.com/watch?v=tlS5Y2vm02c&list=PLzH6n4zXuckquVnQ0KlMDxyT5YE-sA8Ps&index=1


(click the link below the video for the whole playlist)
 
All we need to know is that it's dangerous, so we should not go down that road. However, it's like having a big red button saying DO NOT PRESS BUTTON: someone will eventually press it.
 
All we need to know is that it's dangerous, so we should not go down that road. However, it's like having a big red button saying DO NOT PRESS BUTTON: someone will eventually press it.
Exactly. His biggest point is that we have to have all these things he talks about figured out BEFORE we attempt to create the first AGI. After that it will be too late.
 
All we need to know is that it's dangerous, so we should not go down that road. However, it's like having a big red button saying DO NOT PRESS BUTTON: someone will eventually press it.

Precautions are good, but stagnation and willful ignorance, not so much. And, as you say, someone will take those steps, so it's best to get ahead of things rather than ignore or suppress them.
 
I don't think we need full-on AI though. We can get most of the benefits with smart expert systems: smart enough to work stuff out, not smart enough to think for themselves. I don't think we know enough about consciousness to actually create it anyway.
 
I don't think we need full-on AI though. We can get most of the benefits with smart expert systems: smart enough to work stuff out, not smart enough to think for themselves. I don't think we know enough about consciousness to actually create it anyway.
We are already past the point where we could actually hand-code our current "AI". What is now called "AI" of course isn't self-conscious, but we already have to use deep learning and other methods where the programs essentially program themselves. AGI will just be another step.
 
All I know is that we had better make sure that any real AI has Asimov's 3 (or 4) laws hardwired in.
I mostly believe the zeroth law would be a good addition.
 
All I know is that we had better make sure that any real AI has Asimov's 3 (or 4) laws hardwired in.
I mostly believe the zeroth law would be a good addition.
What are the zeroth and fourth laws? :LOL:
Anyway, if you watched the vids, you know that it is extremely tricky to actually hardwire anything into an AI and it backfires 100% of the time. :)
 
The 4th law of robotics is the Zeroth Law, which was formulated by the robots themselves and places humanity above the individual; it basically rewired Daneel and destroyed Giskard.
 
No. The issues he raises are much more basic. The singularity assumes an already functioning AI; he's trying to explain that actually GETTING a properly functioning AI might be much harder than it seems.
The issue is that to define strong AI, you first have to consider what consciousness is, and that's still pretty much an open question. It could very well be that our brain works more like a computer than we (would like to) think. A strong thesis about consciousness in recent years has been strong emergence. The problem is that almost all examples of strong emergence have been reduced to weak emergence. Except for the holy final frontier: consciousness (and maybe life, even though that one has become very shaky). Perhaps if that falls, and many sciences point in that direction, we might have to change our minds about our minds. Then strong AI is just that: AI with a lot of computing power.

Edit: And yes, Miles is a very bright dude, with some very thought provoking points.
 
The issue is that to define strong AI, you first have to consider what consciousness is, and that's still pretty much an open question. It could very well be that our brain works more like a computer than we (would like to) think. A strong thesis about consciousness in recent years has been strong emergence. The problem is that almost all examples of strong emergence have been reduced to weak emergence. Except for the holy final frontier: consciousness (and maybe life, even though that one has become very shaky). Perhaps if that falls, and many sciences point in that direction, we might have to change our minds about our minds. Then strong AI is just that: AI with a lot of computing power.

Edit: And yes, Miles is a very bright dude, with some very thought provoking points.
He says it himself in one of the vids: the difference between current-gen AI and "true AI" (or AGI, Artificial General Intelligence, as he calls it) is just the scope on which it can compute and the scale on which it can interact with the environment (be it a virtual or a real one) and influence or change it. Because as far as descriptions go, what we call intelligence is to a certain extent simply the ability to create predictive models of reality and to change your environment to better suit your needs.
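To make that description concrete, here's a made-up toy in Python (not anything from the videos): an agent with an internal predictive model of a one-dimensional world that picks whichever action its model says moves it closest to the state it wants.

```python
# A made-up toy, not anything from the videos: an "agent" whose internal
# model predicts what each action does to a one-dimensional world state,
# and which picks whatever its model says gets it closest to the state
# it wants. The world, actions and goal below are all invented.

GOAL = 7                                 # the state the agent "wants"
ACTIONS = {"left": -1, "stay": 0, "right": +1}

def model(state: int, action: str) -> int:
    """The agent's predictive model of the environment (here, a perfect one)."""
    return state + ACTIONS[action]

def choose_action(state: int) -> str:
    # Evaluate every action against the internal model, keep the best prediction.
    return min(ACTIONS, key=lambda a: abs(GOAL - model(state, a)))

state = 0
for step in range(10):
    action = choose_action(state)
    state = model(state, action)         # in this toy the model IS the world
    print(step, action, state)           # walks right until it reaches 7, then stays
```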
 
@Chris Simon

Exactly. The consciousness frontier I mention is quite a hurdle in common understanding, though. A large part of the human population believes in some sort of dualism. Emergence, whatever it is, is strange, like quantum mechanics and black holes.

For those who haven't tried it: after watching Miles' videos and reading a load about the mind, I decided to build a small neural network. It's quite easy, only takes a computer, and there are very good videos on the Tube. You won't end up with strong AI, but it will give you a good understanding of what AI is. Even just watching the tutorials will.
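If you're curious what those tutorials actually build, here's a minimal sketch of the classic first exercise, a tiny numpy network trained on XOR. The layer sizes, learning rate and epoch count are arbitrary choices of mine, not from any specific video.

```python
# A minimal sketch of the usual first exercise: a two-layer network learning
# XOR with plain numpy. Layer size, learning rate and epoch count are my own
# arbitrary picks, not from any particular tutorial.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # 2 inputs -> 4 hidden units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # 4 hidden units -> 1 output

lr = 1.0
for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of squared error through both layers
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
# typically ends up close to [[0], [1], [1], [0]]
```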

Edit: I started here, and if you don't know 3BLUE1BROWN, enjoy!

Source: https://www.youtube.com/watch?v=aircAruvnKk
 
All I know is that we had better make sure that any real AI has Asimov's 3 (or 4) laws hardwired in.

The problem is that the laws are all subject to interpretation ('harm' is extremely subjective, and relative degrees of harm even more so) and even something that follows such laws to the letter could easily take actions we'd consider existential threats.
 
The problem is that the laws are all subject to interpretation ('harm' is extremely subjective, and relative degrees of harm even more so) and even something that follows such laws to the letter could easily take actions we'd consider existential threats.
If you haven't already, see Miles' video on the Emergency Stop Button. That's where he really caught my attention.

Source: https://www.youtube.com/watch?v=3TYT1QfdfsM
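For anyone who hasn't watched it yet, here's a tiny toy sketch of the incentive problem the video is about, with numbers I made up (this is not Miles' actual formalism): a naive utility maximiser that just compares expected utilities will prefer a world where its stop button doesn't work.

```python
# Toy numbers of my own, not Miles' formalism: a naive expected-utility
# maximiser simply compares "button works" vs "button disabled" and picks
# whichever scores higher.

U_TASK = 10.0      # utility the agent assigns to finishing its task
U_SHUTDOWN = 0.0   # utility it assigns to being switched off
P_PRESS = 0.3      # chance the human presses the button if it still works

def expected_utility(button_disabled: bool) -> float:
    if button_disabled:
        return U_TASK                     # nothing can stop it, task always finishes
    return (1 - P_PRESS) * U_TASK + P_PRESS * U_SHUTDOWN

for disable in (False, True):
    print(f"disable button = {disable}: EU = {expected_utility(disable):.1f}")

# disable button = False: EU = 7.0
# disable button = True:  EU = 10.0
# Disabling the button wins for any P_PRESS > 0 unless shutdown is made
# exactly as valuable as the task, which is roughly the dilemma the video
# walks through.
```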
 
An interesting, perhaps naive or wishful, theory I read was that maybe the classic HAL, Colossus, Terminator, Matrix outcome is not what is gonna happen. Instead it was suggested that a cyborg-ish merge between AI and humans was more probable. Sometimes when I look around me and literally everyone is looking into their smartphones, it's actually not that far-fetched, especially having grown up with a rotary dial phone. :)
 
An interesting, perhaps naive or wishful, theory I read was that maybe the classic HAL, Colossus, Terminator, Matrix outcome is not what is gonna happen. Instead it was suggested that a cyborg-ish merge between AI and humans was more probable. Sometimes when I look around me and literally everyone is looking into their smartphones, it's actually not that far-fetched, especially having grown up with a rotary dial phone. :)
Everything's possible, I guess. That is one of the points of AI, after all.
Some people think that, even if we forget about safety measures, the chances that the AI will be "friendly" (or simply "good") are about the same as the chances of it becoming hostile.
But I think that is missing the point a little. Miles himself puts it very nicely: it's not about AI being good or bad, it's about the AI's "needs" aligning with ours. An AI that makes decisions that align with what we want will be seen as "good", and vice versa.

The problem is that humans can rarely agree on what's good or bad. :LOL:
So the outcome where the AI decides that it's actually humans who are the "problem" it should solve is probably more likely.
 
That's another funny thing. Miles talks about value, but economists don't agree on a definition of value, which sort of puts value in the box with consciousness, the one with the sign saying "?". If we can't define value, it's going to be hard to describe it to an AI. Look at it from another perspective: maybe our minds, being weakly emergent properties of a lot of hardwired, identical neurons, have come to the point where we can send people into space because of all the machine learning our brains have done. Or the perspective where any species is pretty quick to make utility decisions that might cause harm to another species, or even to a member of its own species. Hehe...

Btw. Miles packs a pretty good punch :D

Source: https://www.youtube.com/watch?v=yQE9KAbFhNY
 