Here's the thing. The AI currently has to parse all of these rules;
thou shalt:
- not be allowed to run
- not be allowed to leave
- be allowed to run
- be allowed to leave sometimes
- not shoot 24/7
- shoot at some point 24/7
- not be a genuine risk
- be sort of a risk? ¯\_(ツ)_/¯
- sit still all the time
- not sit still all the time
- not be a genuine threat
- be some sort of threat? ¯\_(ツ)_/¯
- not be dazed or confused
- be dazed or confused sometimes
- be dazed or confused always
- not face tank at all
- face tank sometimes so commanders can see the whites of their eyes (mah immersion!)
- not be a stupid rolling potato because lulz (mah immersion!)
- be a stupid rolling potato because lulz (mah credits!)
- ...
This is called being decision challenged, and it's a huge issue in AI programming (much as it is for actual people): there are so many cross-purpose requirements, some or all of them mutually exclusive or equally weighted, that the AI becomes susceptible to the same failure that hits humans.
When there are that many input factors, the code can't converge on a single solution, so the AI literally chooses not to choose. Commanders have demanded so many caveats that the AI has gone from (ruthlessly) effective to confused modules flying in formation. All that probably needed to happen was toning the ruthlessness down.
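To see why, here's a minimal sketch (hypothetical Python, nothing to do with the game's actual code) of a rule-filtering AI: every demand becomes a hard constraint, and once the demands are mutually exclusive, no candidate action survives the filter.

```python
# Minimal sketch (hypothetical, not the game's actual code) of how
# stacking hard behavioural constraints starves an AI of valid actions.

CANDIDATE_ACTIONS = ["attack", "evade", "flee", "hold_position"]

# Each community demand becomes a hard predicate every action must pass.
CONSTRAINTS = [
    lambda a: a != "flee",           # "thou shalt not be allowed to leave"
    lambda a: a != "hold_position",  # "thou shalt not sit still all the time"
    lambda a: a != "attack",        # "thou shalt not face tank at all"
    lambda a: a != "evade",         # "thou shalt not be dazed or confused"
]

def choose_action():
    """Return an action satisfying every constraint, or None if none survive."""
    valid = [action for action in CANDIDATE_ACTIONS
             if all(rule(action) for rule in CONSTRAINTS)]
    # With mutually exclusive demands stacked up, `valid` ends up empty:
    # the AI "chooses not to choose" and just drifts in formation.
    return valid[0] if valid else None

print(choose_action())  # -> None: every demand vetoed something
```

Each rule sounds reasonable on its own; together they veto every option, and the fallback (do nothing, fly in formation) is all that's left. Softening hard vetoes into weighted preferences, i.e. toning the ruthlessness down rather than piling on caveats, would always leave something to pick.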
Instead, bandwagons of people are all making their "immersion" demands and we're back to hopelessly confused AI.
And SJA had done such a fantastic job during beta...
They really did. During beta, the AI was scary, but really, really good. The community decided that wasn't acceptable in any way, shape, or form. Unfortunately, the AI insta-gib weapons glitch (which wasn't in beta) poisoned the well.
So back to pre-2.0 AI behaviours we go.