State of the Game

It is sometimes hard to tell how it makes its decisions, because not all of them look reasonable on their own or at a particular moment. Let's take a very simple case: for any input there is just an output of 1 bit - it could be a decision like "linear motor active" (1 for forward motion) or "angular motor active" (0 for a turn). So you could build a lookup table that holds, for every input, an output: what to do in that case. If there is just 1 table, you have the equivalent of an "instinctive" behavior - it will always act that way when a certain input is given. Now if you take 4 tables like this and choose one "randomly" whenever a decision has to be made, you get 5 different behaviors:
  1. instinctively turning away - 0 in all 4 tables
  2. likely to turn away - 0 in 3 of the tables, 1 in one
  3. undecided - 0 in 2 of the tables, 1 in the other 2
  4. likely to move forward - 0 in one table, 1 in the other 3
  5. instinctively moving forward - 1 in all 4 tables
Now if you assume those tables start out random, some might think "undecided" would be the most likely behavior, but it actually is not. Only 6 out of the 16 possible bit patterns have exactly 2 bits set, so "undecided" has a probability of just 37.5%, whereas the rest tend towards one action or the other. The more tables you use, the less likely undecided and instinctive behavior become, in favor of "desire" and "dislike" (to use those human terms) - and it really does act like that, even though these are just lookup tables. The probabilities are shaped by evolution - simply "survival of the fittest": you take 3 from the pool, test them, discard the weakest and replace it with a cross-over of the 2 "winners" - a simple multi-point cross-over like offspring := (A & R) | (B & ~R); with A and B the winner tables and R a random table.
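
To make the scheme concrete, here is a minimal Python sketch of it (the table size, the pool size and the toy fitness below are my own illustration, not from any particular experiment): each individual carries 4 lookup tables mapping an input state to 1 output bit, a decision picks one of those tables at random, and evolution is the 3-from-the-pool tournament with the (A & R) | (B & ~R) cross-over described above.

    import random

    N_INPUTS = 16    # size of the input space (e.g. a 4-bit sensor state)
    N_TABLES = 4     # tables per individual
    POOL_SIZE = 20

    def random_table():
        # one output bit per possible input
        return [random.randint(0, 1) for _ in range(N_INPUTS)]

    def random_individual():
        return [random_table() for _ in range(N_TABLES)]

    def decide(individual, inp):
        # pick one of the 4 tables "randomly" and use its answer:
        # 1 = linear motor (forward), 0 = angular motor (turn)
        return random.choice(individual)[inp]

    def crossover(a, b):
        # offspring := (A & R) | (B & ~R), bit by bit, with R a random table
        child = []
        for ta, tb in zip(a, b):
            r = random_table()
            child.append([(x & m) | (y & (1 - m)) for x, y, m in zip(ta, tb, r)])
        return child

    def fitness(individual):
        # toy stand-in: in the real setup this would come from running the
        # agent in its environment; here it just rewards forward decisions
        return sum(sum(table) for table in individual)

    pool = [random_individual() for _ in range(POOL_SIZE)]
    for generation in range(1000):
        # take 3 from the pool, test them, discard the weakest and
        # replace it with a cross-over of the 2 winners
        i, j, k = random.sample(range(POOL_SIZE), 3)
        ranked = sorted((i, j, k), key=lambda idx: fitness(pool[idx]))
        pool[ranked[0]] = crossover(pool[ranked[1]], pool[ranked[2]])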

How these tables evolve also quite often depends on how parallel the computation is - because with parallel computation you have chaos in the game as well, and the resulting tables will be more chaos-stable than those computed with little or no parallel execution. Some of the evolved behaviors are quite reasonable and one can find out pretty quickly why it came to that result - but others take the chaos into account, and it is not easy to say why the behavior is the way it is. Evolution can also cater to events of practically unbounded complexity, because the result is purely experience-based: it simply happened to be the best way the survivors found to deal with the problem. An engineer, on the other hand, might have a very hard time finding a good algorithm when some of the problem's parameters are that complex.

Not sure if that is any clearer now - but perhaps you can see where I was going with it.
The main benefit of AI is that it can make mistakes much much faster than humans. ;)

Speeds up the iterative process.
 
The thing is much weirder - you don't need any intelligence to create intelligent behavior. Complex adaptation is perfectly capable of doing that, but it is a logic of survival: it will even favor superstition if being superstitious was beneficial at the time the behavior evolved. One doesn't even need a brain - lookup tables, given long enough to evolve, can do pretty well in the intelligence department, even though nothing in a lookup table is thinking at all.

The results are only as good as the fitness function you give it - if it is crap, the results will be crap and faulty.

Complex adaptation has a tendency to cater to logic for a simple reason: logical things are repeatable, so there is a higher chance of adapting to them, whereas arbitrary things are quite volatile and much harder to adapt to. This creates a pull towards logic, and the resulting evolved behavior is much more "intelligent" than one would expect.
 

You're smart. I'm not. Have a cookie.
 
It is quite interesting what happens if you give them a reverse fitness function, i.e. reward being the worst they can be - then they have to live right at the edge of extinction to be that bad. The truly worst will not survive though; there are not many ways to be that bad and still make it, and that makes them predictable, so other entities will abuse their predictability. In such a system it is never good to be anywhere near perfection - just good enough to make it.
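
A toy sketch of that idea (my own framing, just to make the "worst that still survives" point concrete): a reversed fitness that rewards low scores, but where any run that ends in extinction disqualifies the candidate, so the optimum sits right at the edge of survival.

    import random

    def run_trial(individual):
        # stand-in for actually running the agent in its world;
        # here an "individual" is just a number controlling its average score
        score = random.gauss(individual, 1.0)
        survived = score > 0.0          # anything below zero counts as extinction
        return survived, score

    def reverse_fitness(individual, trials=20):
        # "be the worst you can be" - but the truly worst don't survive,
        # so extinction in any trial disqualifies the candidate outright
        results = [run_trial(individual) for _ in range(trials)]
        if not all(survived for survived, _ in results):
            return float("-inf")
        return -sum(score for _, score in results)   # lower score = "fitter"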
 
It always confused me about the Voight-Kampff test (AI as a speed-optimised construct vs. humans' "trashy" code).

Unless it was purposely restrained while creating them, I see no reason preventing replicants from syncing their measured features to the expected timeline of human reactions.
I mean, they had a kind of neural network; it could learn to trick the VK test just by learning the expected reaction patterns of human subjects.
 
Reminds me of an experiment from when we were still using the Mersenne Twister as the random generator - after a while those critters (it was an experiment with artificial ants) found a way to synchronize with each other in a quite perfect way - too perfect, even. It turned out they had discovered that the Mersenne Twister - their random source - isn't cryptographically safe, and they could predict the sequence to some degree. This taught us to be very observant of external influences which might have an effect on the experiment - since then we use multiply-with-carry.
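
For reference, a multiply-with-carry generator is only a few lines - the sketch below uses George Marsaglia's well-known constants (36969 / 18000), which is just one MWC variant and not necessarily the one used in that experiment:

    class MWC:
        # two 16-bit multiply-with-carry lags concatenated into a 32-bit output
        # (Marsaglia's classic constants; one variant among many)
        def __init__(self, seed_z=362436069, seed_w=521288629):
            self.z = seed_z
            self.w = seed_w

        def next_u32(self):
            # the upper 16 bits of each state word act as the carry
            self.z = 36969 * (self.z & 0xFFFF) + (self.z >> 16)
            self.w = 18000 * (self.w & 0xFFFF) + (self.w >> 16)
            return ((self.z << 16) + (self.w & 0xFFFF)) & 0xFFFFFFFF

    rng = MWC()
    print([rng.next_u32() for _ in range(3)])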
 
 
You mean like those:

Source: https://youtu.be/wwTdojt96uw


or just "virtual"?
 