Nvidia's offering is interesting. They're really the only company that could pull this off at this stage. It remains to be seen whether it will be taken up, and how.
I'm not optimistic on this one. It would mean a lot of development and running costs just to add mediocre generated dialogue, which a lot of people would skip anyway. Meanwhile, mission/event/etc. generation already works, and an LLM wouldn't be of much (if any) help there, so what's the point? The only benefit I can see is generating lots of small variations in what's being said - but at the same time, the output will always carry the model's characteristics, and given enough of it, people can pick up on those.
Even for pre-trained models, how can you prove that all users whose conversations were used for training gave permission? E.g. if I work for DodgyCompany and use the Reddit/Twitter/some other API to scrape all conversations, and then sell my model on as pre-trained, how would Steam disprove that I had the scraped users' permission?
Well, for "users whose conversations were used for training", it's simple: you get them to sign an EULA before chatting. However, for your second example, Valve would need access to your entire dataset, which would be, ahem, impractical. So it's easier for them to say "generated content is banned unless you can prove you had permission for all the data you used" than to say "generated content is allowed unless we can prove you didn't have permission for some of the data you used". I could see the company not wanting to get involved in any potential lawsuits, which would be filed not just against the game's developer but also its distributors.
Of course, it's even easier and safer to say "generated content is banned, period". It's not like there are any major games using generated content whose absence would hurt Steam in any significant way.