Continuing the Discussion: Character Technology
Star Citizen's character technology has undergone an overhaul since the Morrow Tour demonstration, with the team primarily focused on improving the meshes around the eyes (the 'soft skin') and eye movement. The latter is handled by an “eye posing” system that CIG has built into the engine. Eye posing works with the surrounding skin of the face to ensure that the eyes don't move alone – other facial movement corresponds to different expressions and gaze directions, making for a more cohesive face overall.
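CIG hasn't detailed the internals of that eye posing system, but the behaviour described – a gaze direction that also pulls on the surrounding skin – can be sketched roughly as follows. Every struct, weight, and corrective below is a hypothetical stand-in for illustration, not CIG's actual engine code:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Hypothetical sketch: an eye pose expressed as yaw/pitch angles, plus the
// corrective "soft skin" weights that should accompany it so the eyes never
// move in isolation. None of these names come from CIG's engine.
struct EyePose {
    float yawDeg;    // horizontal gaze angle
    float pitchDeg;  // vertical gaze angle
};

struct FaceCorrectives {
    float lidFollow;     // eyelids trail the vertical gaze direction
    float browRaise;     // brows lift slightly when looking up
    float squintInner;   // inner-eye skin compresses on extreme side glances
};

// Derive corrective weights from the gaze direction, so skin around the eyes
// deforms together with the eyeballs (the "eye posing" idea described above).
FaceCorrectives CorrectivesForGaze(const EyePose& eye) {
    FaceCorrectives c{};
    c.lidFollow   = std::clamp(eye.pitchDeg / 30.0f, -1.0f, 1.0f);
    c.browRaise   = std::clamp(eye.pitchDeg / 45.0f,  0.0f, 1.0f);
    c.squintInner = std::clamp(std::abs(eye.yawDeg) / 40.0f, 0.0f, 1.0f);
    return c;
}

int main() {
    EyePose glanceUpLeft{ -35.0f, 20.0f };
    FaceCorrectives c = CorrectivesForGaze(glanceUpLeft);
    std::printf("lidFollow=%.2f browRaise=%.2f squintInner=%.2f\n",
                c.lidFollow, c.browRaise, c.squintInner);
}
```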
Speaking about working with the new head technology, and offering some perspective from other games, Sean Tracy told us:
"So, it's not really new character tech – well, it is a little bit new. Mostly, this is the stuff that we've been working on for years. Last year at CitizenCon we showed something called the Morrow Tour. We had only just started to receive the head rigs, head assets from 3Lateral [for] about a month, and then we went ahead and just put 50 characters in. They looked varying degrees of art progress. A lot of the game – we're happy to show WIP stuff, but the public can react pretty negatively to it. They don't get it. What we really want to show is the progress of that over the years, so all of the tech that we expected is online, polished, working.
“We had the entire year to get about 120 scanned heads in, and we have, even just in the demo that we're gonna show, we have 53 unique characters. Unique face, unique facial rigs. It's a ton of work. Just to contrast it again, [in] Crysis we're talking less than 20 characters the whole game. Ryse, we're talking maybe 30 – and probably 15 of those are barbarian variations. For this, because we've got all these actors, we've paid for the actors, so we're going to have a really nice scan of their head and we want them to look awesome in it, and they do now. All the tech is finally online for the faces to animate correctly, for it to get triggered with the dialogue.”
Tracy goes on to detail the team's new runtime rig logic system, which allows moods and facial expressions to be applied universally across characters. Rather than creating, for instance, an expression structure like /happy/male/happy_male01 for every variant, the team can simply reference “happy01,” apply it globally, and allow the runtime to work out the rest.
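As a rough illustration of that naming change – with every identifier below a hypothetical example rather than CIG's actual asset structure – the shared library stores one entry per expression and resolves the character-specific part at runtime:

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

// Hypothetical sketch of the naming simplification described above: instead of
// one expression asset per character variant, a single generic key ("happy01")
// is stored and the runtime rig logic adapts it to whichever face is loaded.
struct ExpressionLibrary {
    // One shared expression definition per mood, keyed by a generic name.
    std::unordered_map<std::string, std::string> expressions = {
        {"happy01", "shared control-curve data for a smile"},
        {"angry01", "shared control-curve data for a scowl"},
    };

    // The character-specific part is resolved at runtime, not authored per head.
    void Play(const std::string& key, const std::string& characterRig) const {
        auto it = expressions.find(key);
        if (it == expressions.end()) return;
        std::cout << "Applying '" << key << "' (" << it->second
                  << ") to rig '" << characterRig << "'\n";
    }
};

int main() {
    ExpressionLibrary lib;
    lib.Play("happy01", "character_head_A");  // same expression data...
    lib.Play("happy01", "character_head_B");  // ...resolved onto a different rig
}
```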
“Finally, there's one really big piece of tech that we're bringing online now. We call it 'runtime rig logic' for the faces. We have varying facial skeletons, everybody's got a bit of a different face. A lot of games [...] unify the skull shape and unify the neck shape, and everybody's the same. That doesn't work for actors – especially not for really recognizable ones, like Gary Oldman, or Gillian Anderson, or Mark Hamill. You're going to know that that doesn't really look like him. So, what we do is we do still have all these unique rigs, but what we have is a system within the engine that is actually consuming unified animation data and it applies all the offsets to that animation data so we can drive that rig. What it means is that we can share animation across anybody, which is super cool. A smile on Gillian Anderson is actually the same data for a smile on Mark Hamill. We can share all this data. This is a pretty big deal.”
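The actual data format belongs to CIG and 3Lateral, but the idea Tracy describes – one set of unified animation data consumed by every head, with per-character offsets applied at runtime – might look broadly like the sketch below. All structures, control names, and calibration values here are assumptions for illustration, not the studio's implementation:

```cpp
#include <iostream>
#include <string>
#include <unordered_map>

// Illustrative sketch only: unified expression data shared across characters,
// with per-character calibration applied at runtime so each unique facial rig
// is driven correctly.
using ControlPose = std::unordered_map<std::string, float>;  // control name -> value

struct CharacterRig {
    std::string name;
    // Per-character offset/scale for each control, derived from that actor's scan.
    std::unordered_map<std::string, float> controlScale;
};

// Consume the same unified animation data for any rig, remapping it through
// that rig's calibration before it drives joints and blend shapes.
ControlPose ApplyToRig(const ControlPose& unified, const CharacterRig& rig) {
    ControlPose driven;
    for (const auto& [control, value] : unified) {
        auto it = rig.controlScale.find(control);
        float scale = (it != rig.controlScale.end()) ? it->second : 1.0f;
        driven[control] = value * scale;
    }
    return driven;
}

int main() {
    ControlPose smile = {{"mouthCornerUp_L", 0.8f}, {"mouthCornerUp_R", 0.8f}};
    CharacterRig a{"character_A", {{"mouthCornerUp_L", 1.05f}, {"mouthCornerUp_R", 0.95f}}};
    CharacterRig b{"character_B", {{"mouthCornerUp_L", 0.90f}, {"mouthCornerUp_R", 0.92f}}};

    // The same "smile" data drives two different rigs.
    for (const auto& rig : {a, b}) {
        ControlPose p = ApplyToRig(smile, rig);
        std::cout << rig.name << ": L=" << p["mouthCornerUp_L"]
                  << " R=" << p["mouthCornerUp_R"] << "\n";
    }
}
```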
This system allows for procedurally driven character reactions, greatly expanding the range of animations and expressions possible within the game. The rig logic skeleton has 183 inputs that drive the entirety of the face, and that's true for every character – at least, every human character. It's likely true for other humanoids as well, but we didn't ask about other potential species. Those 183 inputs work alongside roughly 220 skin joints, creating a highly detailed rig.
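Those figures suggest a shape roughly like the following: a fixed vector of 183 control inputs feeding a larger set of skin-joint outputs. The simple weighted mapping shown here is a placeholder guess, since the real solver hasn't been made public:

```cpp
#include <array>
#include <cstdio>
#include <vector>

// Rough sketch of the dimensions described above: 183 control inputs drive the
// face, feeding roughly 220 skin joints (plus blend shapes). The linear
// weighting used here is a stand-in, not the actual solver.
constexpr int kNumInputs = 183;
constexpr int kNumSkinJoints = 220;

struct JointDriver {
    int inputIndex;   // which of the 183 controls influences this joint
    float weight;     // how strongly it does so
};

struct FaceRigLogic {
    // Each skin joint is driven by a small set of weighted control inputs.
    std::array<std::vector<JointDriver>, kNumSkinJoints> drivers;

    std::array<float, kNumSkinJoints>
    Evaluate(const std::array<float, kNumInputs>& inputs) const {
        std::array<float, kNumSkinJoints> jointValues{};
        for (int j = 0; j < kNumSkinJoints; ++j)
            for (const JointDriver& d : drivers[j])
                jointValues[j] += inputs[d.inputIndex] * d.weight;
        return jointValues;
    }
};

int main() {
    FaceRigLogic rig;
    rig.drivers[0] = {{5, 0.7f}, {42, 0.3f}};  // example joint driven by two controls

    std::array<float, kNumInputs> inputs{};
    inputs[5] = 1.0f;   // e.g. a fully raised brow control
    inputs[42] = 0.5f;

    auto joints = rig.Evaluate(inputs);
    std::printf("joint 0 value: %.2f\n", joints[0]);  // 0.7*1.0 + 0.3*0.5 = 0.85
}
```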
Tracy expanded:
“The other thing with it is [that] we can procedurally drive things on the character, and have the rig react as if we were exporting animation. A big example of this is eyes. What we have is [a] 'look posing' system – I'll get into that. We'll drive where the eyes are looking. The problem with this is that the eyes are all connected to blend shapes; usually, if any other game would do this, you see the eyes move around but none of the blend shapes would do anything because it just doesn't know that you're moving those eyes around. [...] With Rig Logic online, we can say, 'move those eyes,' and rig logic knows 'OK, I need to move this blend shape here, do this wrinkle here.' So we're getting really awesome performance even from procedurally generated data.
“[...] We apply this to every single head. There's kind of a workflow reason for that, and it's that I don't want to deal with two different pipelines. [...] I'd rather just, 'everybody's rig logic,' perfect. There's an implementation reason, too. Say in Star Marine I want a guy [to look angry] when he fires. If everybody has unique faces, I'd have to have, 'OK, this guy is in his stance, he's got his weapon, and he's firing.' Here's angryface_male01, angryface_male02 – they'd all be different animations for the same thing. Now what I do is I just say 'angryface,' and it'll figure out what face it's playing on and it'll do it. This is our whole mentality of content creation: Let's do it intelligently so we're not stuck here making thousands of things so that it takes us 10 years to make this game.”