Hahah, yeah. That's another funny thing. Miles talks about value, but economists don't agree on a definition of value, which sort of puts value in the box with consciousness. The one with the sign saying "?". If we can't define value, it's going to be hard to describe it to an AI. Look at it from another perspective: maybe our minds, being weakly emergent properties of a lot of hardwired identical neurons, have come to the point where we can send people into space because of all the machine learning our brains have done. Or the perspective where any species is pretty quick to make utility decisions that might cause harm to another species, or even to one of its own members. Hehe...
Btw, Miles packs a pretty good punch.
Source: https://www.youtube.com/watch?v=yQE9KAbFhNY
As for value, you're right of course. Value is a strictly subjective thing. What value could mean to an AI would probably be determined by its reward function. Just like humans: we value things that give us positive feedback, or things that empower us (i.e. enable us to achieve our goals and therefore, again, result in positive feedback).
But how to define value for a machine is the question.
I like his example with making tea. We can't just teach the machine to make tea. If we show it how to make tea, we don't want it to make itself a cup of tea, we want it to make us a cup of tea. Therefore its reward function must be tied to our state, not its own. If we want a successful tea-making AI, it has to value our well-being, which in itself is a concept so abstract that it can't really be coded into a reward function.
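To make that concrete, here's a toy sketch (Python, with names I made up for this comment, not anything from the video) of the difference between rewarding the agent for its own state and rewarding it for ours:

```python
from dataclasses import dataclass

@dataclass
class State:
    """Toy world state: who currently has a cup of tea."""
    agent_has_tea: bool
    human_has_tea: bool

def self_tied_reward(state: State) -> float:
    # Reward tied to the agent's own state: an optimizer of this
    # function learns to make *itself* tea.
    return 1.0 if state.agent_has_tea else 0.0

def human_tied_reward(state: State) -> float:
    # Reward tied to the human's state instead: now "success"
    # means the *human* ends up with tea.
    return 1.0 if state.human_has_tea else 0.0
```

The catch Miles is pointing at is that `human_has_tea` is a cheat: real well-being isn't a boolean field you can read off the world. Any reward function we actually write down is a proxy for it, and a capable optimizer will happily exploit the gap between the proxy and what we meant.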