ChatGPT NPC Taunts

ChatGPT is just the beginning anyway, and not always that good. I'm sure they'll sort out most of the issues in the years to come.
Actually, in the video that I posted above, there is a segment where they show a player chatting in real time with an NPC and getting a quest to take down a criminal organization, to resolve the problems the bartender was mentioning. It was generated on the fly, not scripted. It's a first step, but I'm sure they can improve things within the next 5 years to the point where video games will only take one year to develop instead of 25.
 
The problem is that Elite Dangerous started development almost 10 years ago, long before this technology came into play. Had Elite started development today, using this technology to create worlds on the fly...
Actually, no.
Don't forget that the original 1984 Elite was the original demonstration of how procedural generation could be used to generate all the data it needed, and ED still uses it today to generate its worlds.

What you are seeing in the video looks like a directed conversation fed through a Large Language Model, so that the conversational tone and responses can vary (given that the video shows a single interaction, there is no evidence that it is not just a single well-written script, but let's give it the benefit of the doubt). LLMs work by tracking the entirety of your conversation so far, plus what they need to say, and mapping that against their history of previous conversations to give a response. I.e., given those N thousand conversations that look like this one, what was the most common next response?
However, an LLM is no help at all with anything outside of what it knows, i.e. language good, scientifically accurate 3D models bad.
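The "most common next response" idea can be sketched with a toy example. A real LLM predicts the next token using learned neural weights rather than raw counts, but the frequency intuition looks like this (the "training" conversations here are invented for illustration):

```python
from collections import Counter, defaultdict

# Toy "training" corpus of conversation fragments (made up for illustration).
conversations = [
    ["hello", "commander", "o7"],
    ["hello", "commander", "welcome"],
    ["hello", "pilot", "o7"],
    ["hello", "commander", "o7"],
]

# Count which word most often follows each context (here, just the previous word;
# an LLM conditions on the whole conversation so far, not a single word).
next_word = defaultdict(Counter)
for conv in conversations:
    for prev, nxt in zip(conv, conv[1:]):
        next_word[prev][nxt] += 1

def most_common_next(word):
    """Return the most frequent continuation seen after `word` in training."""
    return next_word[word].most_common(1)[0][0]

print(most_common_next("hello"))      # "commander" (seen 3 of 4 times)
print(most_common_next("commander"))  # "o7" (seen 2 of 3 times)
```

Crucially, the model can only answer from patterns it has seen, which is why it's good at language and hopeless at anything outside its training data.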

ChatGPT is off limits for game dev. Steam has banned its use in any game they host in their store due to the cloudy legal nature of scraping copyrighted material for training purposes... and of who owns the information that the AI learns about you through interactions.
You should in theory be able to take an alternative pre-trained, fixed conversational model that does not learn and use that without falling foul of the rules.
All you need is a really large corpus of conversations to train with. Anyone got a few million bucks to use the reddit API? (Too soon?)
 
If these AI are as clever as folks think shouldn't they sort themselves out? 😜

O7
IMO, people are being ignorant/stupid as always. It's not an AGI yet; it's just a neural network that is getting pretty good at putting words together and doing some other stuff.

But that is one aspect of what people are calling the singularity: the moment an AI can start creating an even better and smarter AI.
 
You should in theory be able to take an alternative pre-trained, fixed conversational model that does not learn and use that without falling foul of the rules.
All you need is a really large corpus of conversations to train with. Anyone got a few million bucks to use the reddit API? (Too soon?)
From what I read, that's not going to be enough, as it would unfortunately be very tricky to prove.
 
MINCEMEAT!

Hugging Face AI said:
“Oh no you don’t buster!”

“Gonna fry yer hide into a crisp!”

"You better run for cover, because I'm coming for you."

"You can try to hide, but I always find my prey."

"Don't waste your time trying to escape, because there's nowhere to go."

"I'll make mincemeat out of you."

"Get ready to meet your doom."

"Prepare for oblivion."

"Say goodbye to your hopes and dreams."
 
Yes - I always text someone before I punch them in the face.

TTS has improved so much since the game's release - it is time to drop the text threats and replace them with voice.

Don't like the voice, mute it.
Actually this ^.
Wait a second, let me plug immersion thingie into one of my holes...
So... these are people flying spaceships; they've taken damage, they are under fire, they need to control their ships and weaponry.
But instead of all that, they are opening comms and typing.
That immersion thingie pops back out of the hole.
 
I'm surprised it isn't offered as a mod. I think Skyrim might have it.
The Skyrim one is an unholy union of ChatGPT and a voice generation thing (the name of which I can't recall), and it shows exactly why it's not ready for prime time. Don't get me wrong, it's an impressive effort from an amateur and a good showcase of what's possible, but it's clunky as hell. It takes minutes to respond, the lines are terrible, and while it does produce unique lines, there is zero personality (they all have ChatGPT's dry and analytical response to everything).

nVidia's offering is interesting. Really the only company that could do that at this stage. It remains to be seen if it will be taken up and how.
 
a voice generation thing (the name I can't recall)
xVASynth? It seems like it'd be pretty good for modding if it's just adding a few new lines for existing NPCs that already have character, built on actual decent human voice acting.

But again, it's mostly background noise; you're not going to get deep and meaningful dialogue delivered in a superb way like that. Do people really care that little about quality? There are very few games where good (or so bad it's memorable) dialogue, voiced or not, has improved an otherwise boring game.
 
From what I read that's not going to be enough as it would be very tricky to prove unfortunately.
Interesting...
At work, we embedded a pre-trained model into our client UI. It could only be updated via the equivalent of a new release version on Steam.
Later we moved the model to the server, so the UIs didn't need to hold it in RAM, and the responses were just an HTTP API call away. This also meant we could run a retraining script over last week's data and update the model each weekend.
I'm guessing Steam's argument is that the server-style model is outside of their release control, so they would not be able to determine whether the model was being updated, so they ban it unless you can "prove" otherwise.
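The client/server split described above can be sketched like this. The endpoint path and response shape are invented for illustration; the real server would load the trained model once and run inference, while here it just returns a canned reply so the round trip can be demonstrated end to end:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Stand-in "model server": in the real setup this process loads the model
# into RAM once and can be retrained/redeployed each weekend without
# shipping a new client release.
class ModelHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        reply = {"response": f"Acknowledged: {body['prompt']}"}  # canned inference
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), ModelHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# The client UI no longer holds the model; a response is just an HTTP call away.
def ask_model(prompt):
    req = Request(
        f"http://127.0.0.1:{port}/infer",  # hypothetical endpoint name
        data=json.dumps({"prompt": prompt}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())["response"]

answer = ask_model("hello")
print(answer)  # Acknowledged: hello
server.shutdown()
```

The catch, as the post notes, is exactly this flexibility: nothing in the client release tells a distributor whether the weights behind that endpoint changed yesterday.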

Oh, duh, just realised: even for pre-trained models, how can you prove that all the users whose conversations were used for training gave permission? I.e., if I work for DodgyCompany and use, e.g., the reddit/twitter/another API to scrape all conversations, and then sell my model on as pre-trained, how will Steam prove whether I had the scraped users' permission or not?
 
nVidia's offering is interesting. Really the only company that could do that at this stage. It remains to be seen if it will be taken up and how.
I'm not optimistic about this one. It would take a lot of development and running costs to add mediocre generated dialogue, which a lot of people would skip anyway. Meanwhile, mission/event/etc. generation works already, and an LLM wouldn't be of much (if any) help there, so what's the point? The only benefit I can see here is generating a lot of small variations on what's being said - but at the same time, it'll always have the model's characteristics, and given enough output, people can pick up on those.

Even for pre-trained models, how can you prove that all the users whose conversations were used for training gave permission? I.e., if I work for DodgyCompany and use, e.g., the reddit/twitter/another API to scrape all conversations, and then sell my model on as pre-trained, how will Steam prove whether I had the scraped users' permission or not?
Well, for "users whose conversations were used for training", it's simple: you get them to sign an EULA before chatting. However, for your second example, Valve would need access to your entire dataset, which would be, ahem, impractical. So it's easier for them to say "generated content is banned unless you can prove that you had permission for all the data you used" than to say "generated content is allowed unless we can prove that you didn't have permission for some of the data you used". I could also see the company not wanting to get involved in any potential lawsuits, which would be filed not just against the game's developer but also their distributors.

Of course, it's even easier and safer to say "generated content is banned, period". It's not like there are any major games out there where leaving them out would hurt Steam in any significant way.
 
Oh, duh, just realised: even for pre-trained models, how can you prove that all the users whose conversations were used for training gave permission? I.e., if I work for DodgyCompany and use, e.g., the reddit/twitter/another API to scrape all conversations, and then sell my model on as pre-trained, how will Steam prove whether I had the scraped users' permission or not?
Bingo. And Steam are keeping a lid on that can of worms until the law, which could still go either way on who's responsible, eventually gets settled - at least for the foreseeable future, which I think is fair enough.
 
The Skyrim one is an unholy union of ChatGPT and a voice generation thing (the name of which I can't recall), and it shows exactly why it's not ready for prime time. Don't get me wrong, it's an impressive effort from an amateur and a good showcase of what's possible, but it's clunky as hell. It takes minutes to respond, the lines are terrible, and while it does produce unique lines, there is zero personality (they all have ChatGPT's dry and analytical response to everything).
These are still very early days. ChatGPT is to future AI NPCs what Daggerfall is to Skyrim. We'll get there! People don't realize just how explosive the exponential growth of AI has been in just the last few years. Ten years ago, even "clunky as hell" ChatGPT was considered by many to be the realm of science fiction rather than science fact. Ten years from now...
 