Some months we find ourselves focused on two or three big features, but August was a month of lots of small tasks: bug fixes, feature improvements, and responses to feedback from designers. They were all important and useful, but they don’t always make for exciting reading for you, the backers. So we’ll spare you the talk of merge conflicts and build configuration bugs, and stick to the cool stuff!
Design
An interesting piece of design work we’ve been doing is a refactoring of the Kythera perception system for characters. Now that designers have spent a reasonable amount of time building behavior trees in Kythera and using our perception system, they’ve found that they want some aspects of the perception system, and the way it interacts with behaviors, to work a bit differently. In particular, they want behavior trees to have more control over how AIs respond to certain events in the world, such as hearing weapon fire or an unexpected noise, and also to control when AIs look for better targets and when they stick with their current one.
Right now the Kythera perception system takes in stimuli from different sources (vision, sound, and tactile events for characters; radar signatures for ships), calculates whether enough stimuli have been received for an AI to have noticed something, and then passes this information through to the target selection system, which looks for the best target at any point in time. There are various parameters that each AI can set to affect how both its perception and target selection work, but from the behavior’s point of view, it is simply told what the current best target is.
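To make that flow concrete, here’s a minimal sketch of such a pipeline: stimuli accumulate per source, decay over time, and only the single best target is handed to the behavior. Every name here is our own illustration, not Kythera’s actual API.

```cpp
#include <algorithm>
#include <unordered_map>

// Hypothetical stimulus types: characters use the first three, ships use radar.
enum class StimulusType { Vision, Sound, Tactile, Radar };

struct Stimulus {
    StimulusType type;
    int          sourceId;  // entity that produced the stimulus
    float        strength;  // pre-scaled by distance, loudness, etc.
};

// Stand-ins for the "various parameters" each AI can set.
struct PerceptionParams {
    float noticeThreshold = 1.0f;  // accumulated strength needed to notice a source
    float decayPerSecond  = 0.5f;  // how quickly accumulated stimuli fade
};

class PerceptionSystem {
public:
    void AddStimulus(const Stimulus& s) { accumulated_[s.sourceId] += s.strength; }

    // Fade old stimuli so a single faint noise doesn't linger forever.
    void Update(float dt, const PerceptionParams& params) {
        for (auto& entry : accumulated_)
            entry.second = std::max(0.0f, entry.second - params.decayPerSecond * dt);
    }

    // Target selection: the behavior never sees raw stimuli, only the winner.
    int SelectBestTarget(const PerceptionParams& params) const {
        int   bestId = -1;
        float best   = params.noticeThreshold;
        for (const auto& [id, strength] : accumulated_)
            if (strength >= best) { best = strength; bestId = id; }
        return bestId;  // -1 means nothing has been noticed yet
    }

private:
    std::unordered_map<int, float> accumulated_;
};
```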
This setup allows for simpler behavior trees, and has worked really nicely for ships, but for characters we’ve found that the behavior trees tend to be set up quite differently. In human behaviors, where the acting element is so complex and vital, more control is needed, so it’s desirable to move some of the logic of the perception system into the behaviors, even though this can make them more complicated. So we’ve been working on a design for an improved perception system that will allow behavior trees to be authored with the control that designers want to have. We’re also looking at whether similar changes could improve ship behaviors, and that’s something that will be ongoing over the next few months.
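As a rough illustration of that direction (again with invented names, not the real Kythera interface), the refactor could expose individual perception events to the tree and let a node decide for itself whether to switch targets or stay committed:

```cpp
// Hypothetical behavior-tree node: reacts to heard weapon fire only if the AI
// is not already committed to a target. None of these names come from Kythera.
enum class BTStatus { Success, Failure, Running };

struct PerceptionEvent {
    enum class Kind { WeaponFire, UnexpectedNoise };
    Kind kind;
    int  sourceId;
};

class ReactToWeaponFire {
public:
    // The tree, not the perception system, now decides whether to switch.
    BTStatus Tick(const PerceptionEvent* event, int& currentTargetId) {
        if (!event || event->kind != PerceptionEvent::Kind::WeaponFire)
            return BTStatus::Failure;        // nothing relevant happened
        if (currentTargetId != -1)
            return BTStatus::Failure;        // stick with the current target
        currentTargetId = event->sourceId;   // switch to the shooter
        return BTStatus::Success;
    }
};
```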
Engineering
We made quite a few good improvements to ships this month, particularly to their behaviors for Pirate Swarm. Some of the highlights are improvements to approach and retreat behaviors that better take into account max weapon range and current shield levels; changes to make AI missile usage less predictable; and some improvements to avoidance so AIs are less likely to crash into other ships.
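To give a flavor of the approach/retreat logic described above, here’s a minimal sketch; the thresholds and names are our own illustration, not the shipped tuning.

```cpp
// Illustrative approach/retreat decision based on weapon range and shields.
enum class CombatMove { Approach, HoldRange, Retreat };

CombatMove ChooseCombatMove(float distanceToTarget,
                            float maxWeaponRange,
                            float shieldFraction /* 0..1 */) {
    const float kRetreatShieldLevel = 0.25f;      // assumed threshold
    if (shieldFraction < kRetreatShieldLevel)
        return CombatMove::Retreat;               // shields low: break off
    if (distanceToTarget > 0.9f * maxWeaponRange)
        return CombatMove::Approach;              // close to effective range
    return CombatMove::HoldRange;                 // already in the engagement band
}
```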
We also fixed an interesting bug where AI would sometimes behave strangely when told to fly along a spline in scripted situations such as the tutorial. We use the reported thrust values from IFCS (Intelligent Flight Control System) to plan out ship movement, but sometimes IFCS wasn’t fully online when we were planning, so we weren’t working with correct values. So we added a way for a ship behavior to make sure IFCS is fully online before doing something that depends on it.
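In behavior-tree terms, the fix could look something like the guard below: a node that keeps reporting Running until the flight controller is ready, so planning never consumes uninitialized thrust values. The IFCS interface shown is a stand-in we made up for the sketch.

```cpp
enum class BTStatus { Success, Failure, Running };

// Stand-in for the flight controller; the real IFCS interface isn't public.
struct IFCS {
    bool online = false;                      // true once thrust values are valid
    bool  IsOnline()  const { return online; }
    float MaxThrust() const { return online ? 100.0f : 0.0f; }  // dummy value
};

// Guard node: hold the tree in Running until IFCS is fully up, so the spline
// planner never reads thrust values that haven't been initialized yet.
BTStatus WaitForIFCS(const IFCS& ifcs) {
    return ifcs.IsOnline() ? BTStatus::Success : BTStatus::Running;
}
```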
One nice change we made to character visual perception is to allow characters to see things across a field of view greater than 180 degrees if desired. Our visual perception system has both a primary and a secondary vision cone, where the primary cone mimics regular vision, while the secondary cone mimics peripheral vision. That means targets in the primary cone will generally be detected quickly by the AI, while detection via the secondary cone takes longer.
The problem is, AI often seem stupid when their peripheral vision covers less than 180 degrees, so the obvious solution is to increase the field of view of the peripheral vision. The issue there, though, is that the primary and secondary vision cones are precisely that: cones. The mathematics of the view cones completely breaks if you try to go above 180 degrees. So to fix this, we needed to change the geometric shape that defines the vision space of an AI from a cone to a more appropriate shape when the field of view is 180 degrees or more: basically a sphere with a cone cut out of the back of it.
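One way to picture the fix: a point is inside that “sphere with a cone cut out of the back” exactly when the angle between the AI’s forward vector and the direction to the target is at most half the field of view, and that comparison reduces to a single dot product that stays valid past 180 degrees (the cosine of the half-angle simply goes negative). The code below is our own sketch of the idea, not the actual Kythera implementation:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

float Dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// True if 'target' lies within the vision volume: a cone for FOV below 180
// degrees, and a sphere with a cone removed from the back at 180 or more.
// The same comparison covers both cases because cos(halfAngle) goes negative
// once the half-angle exceeds 90 degrees.
bool InVisionVolume(const Vec3& eyePos, const Vec3& forward /* unit length */,
                    const Vec3& target, float fovDegrees, float maxRange) {
    const Vec3  toTarget{target.x - eyePos.x, target.y - eyePos.y, target.z - eyePos.z};
    const float dist = std::sqrt(Dot(toTarget, toTarget));
    if (dist <= 0.0f || dist > maxRange) return false;

    const float cosHalfFov = std::cos(fovDegrees * 0.5f * 3.14159265f / 180.0f);
    const float cosAngle   = Dot(forward, toTarget) / dist;  // cosine of angle to target
    return cosAngle >= cosHalfFov;
}
```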
We also continued to make some improvements to the behavior tree editor in DataForge. We added the ability to specify inputs to BT nodes as a dropdown list of predefined values where this makes sense, such as when telling a character to change to a new stance. Behavior trees can now also embed other trees within them, which allows designers to put a common piece of behavior in its own tree and then add it to their trees in multiple places as needed. As the trees get bigger and more complex, this will be invaluable for keeping them readable and avoiding duplication.
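As a purely hypothetical sketch of how those two editor features might look from the node side (nothing here reflects the real DataForge schema):

```cpp
#include <string>
#include <vector>

// How a node might advertise an enumerated input so the editor can render a
// dropdown instead of a free-text field. Purely illustrative.
struct EnumInput {
    std::string name;
    std::vector<std::string> allowedValues;  // what the dropdown shows
};

struct ChangeStanceNode {
    static EnumInput DescribeStanceInput() {
        return {"stance", {"Relaxed", "Alert", "Combat"}};  // assumed stances
    }
};

// An embedded subtree is conceptually just a node that references another tree
// by name, so a common behavior is authored once and reused in many parents.
struct SubtreeNode {
    std::string referencedTreeName;  // e.g. a shared "take cover" tree
};
```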
Finally, we made some great performance improvements to the Kythera Recording Server, which is the system that allows designers and programmers to record AI behavior and then play it back with detailed debug information to help figure out bugs or look for ways to improve behaviors. There is potentially a lot of debugging information that needs to be saved for this system to be useful, so we improved the system to be better at detecting what data has changed since the last update and what hasn’t, which means it can compress the recordings better. This is good both for disk space and for making recordings easier to export and hand to other developers.
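The change-detection idea can be sketched as a simple delta encoding: each frame stores a bitmask of which fields changed and writes out only those values, so an unchanged frame costs almost nothing. This is our own minimal illustration, not the Recording Server’s actual format.

```cpp
#include <cstdint>
#include <vector>

// One recorded snapshot of an AI's debug state (fields invented for the sketch).
struct DebugFrame {
    float position[3];
    float health;
    int   targetId;
};

// Delta record: bit i of changedMask set means field i differs from the
// previous frame, and its new value appears in payload. A real format would
// preserve types; this sketch flattens everything to float for brevity.
struct DeltaRecord {
    uint8_t            changedMask = 0;
    std::vector<float> payload;
};

DeltaRecord Encode(const DebugFrame& prev, const DebugFrame& cur) {
    DeltaRecord rec;
    for (int i = 0; i < 3; ++i) {
        if (cur.position[i] != prev.position[i]) {
            rec.changedMask |= uint8_t(1u << i);
            rec.payload.push_back(cur.position[i]);
        }
    }
    if (cur.health != prev.health) {
        rec.changedMask |= uint8_t(1u << 3);
        rec.payload.push_back(cur.health);
    }
    if (cur.targetId != prev.targetId) {
        rec.changedMask |= uint8_t(1u << 4);
        rec.payload.push_back(static_cast<float>(cur.targetId));
    }
    return rec;  // an unchanged frame encodes to a single zero mask byte
}
```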
Source:
https://robertsspaceindustries.com/comm-link/transmission/14937-Monthly-Studio-Report