How could we help ED with the new AI tools?

And a final revelation.

(screenshot attached)


🤔

O7,
🙃
 
Those bots cannot differentiate between fact and tinfoil-hattery. They also don't shy away from pulling "facts" out of their behinds when they have no clue, and, in the case of ChatGPT, the training data is two years old.

I remember ChatGPT being asked for the founding date of a soccer club; it just made one up, and when corrected, it said "sorry" and made up another one. I would be very careful about trusting those AIs with anything.

In fact, I find the naive embrace of AI technology by the general internet public very concerning. And that is not some old-man technophobe waffling: I said the same over 20 years ago about social media and Facebook and was laughed at, and look how all the concerns about privacy, and companies roflstomping over it, came true.

I say: keep those AI constructs away from the real world as long as possible. They are fascinating from a scientific point of view, but that's about it for now.
 
Yes. We did extensive tests over weeks, and it's impressive what the AI can do, for sure. You just can't trust the results: at some point they inevitably get creative and 'invent' things based on heuristics. Asked specific questions on a scientific topic, ChatGPT eventually came up with non-existent publications and credited random authors for them. The new Bing will at least include references you can check, but that behaviour is integral to how the Transformer algorithm works, even with the reinforcement learning they applied. All in all, the AI will come up with a story that may or may not be true.

We also tried coding, and it failed miserably, even when given hints. For example, we asked it to write a method calculating Easter Sunday (the reference date for many holiday calculations), for which several proven algorithms exist (Gauss's is the most prominent; we told it to use Butcher-Meeus instead). It came up with a creative solution that compiled fine but didn't implement the algorithm properly, and it stubbornly insisted that Easter is always in April (it can fall in March, though, like March 23rd in 2008 or March 31st next year). On other occasions, it believed that 3+5 yields 6 instead of 8. Good luck with that in the real world.
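For reference, here's a minimal Python sketch of the standard Meeus/Jones/Butcher computation (the usual published form of what "Butcher-Meeus" refers to); the function name is just for illustration, but the arithmetic is the well-known one, and it produces the March cases directly:

```python
def easter_sunday(year: int) -> tuple[int, int]:
    """Gregorian Easter Sunday via the Meeus/Jones/Butcher algorithm.

    Returns (month, day). Pure integer arithmetic, no table lookups.
    """
    a = year % 19                        # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)             # century and year within century
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30   # epact-like term
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

# The two March examples mentioned above:
print(easter_sunday(2008))  # (3, 23) -> March 23rd
print(easter_sunday(2024))  # (3, 31) -> March 31st
```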

O7,
🙃
 
Jeez, are people really asking chatbots these questions? It's not an oracle; it just parrots whatever it digs up on the 'net. 🤦‍♂️ It's literally an echo chamber.
This has been my concern from the beginning. People believe that these things are objective sources of original knowledge. They aren't. They aren't even really AI. They are glorified net crawlers that only say what they are programmed to be allowed to say. Unfortunately, I already see people trusting them as knowledge sources, and some go so far as to not allow them to be questioned.
 
It's not AI. It's just a marketing label for something that produces complex results from complex inputs. Hence the alternate label "true AI" for the thing that is AI.

How can we help ED with the new AI tools, like GPT, DALL·E 2, AudioGen, etc., to create new content or external applications that improve our experience in the game?
These tools are good at taking large data sets and finding patterns (i.e. "training" is bending a curve/equation until it fits your example data; see the toy sketch below). To produce something useful, you will face the challenges of defining inputs and separating cause from effect.
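To illustrate that "bending a curve" point, here's a toy least-squares line fit in Python; the numbers are made up purely for the example:

```python
import numpy as np

# Made-up example data: inputs x and observed outputs y.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# "Training": choose the parameters a, b of the curve y = a*x + b
# that best fit the example data (least squares).
a, b = np.polyfit(x, y, deg=1)
print(f"fitted curve: y = {a:.2f}*x + {b:.2f}")

# "Inference": evaluate the fitted curve at an input it has never seen.
print("prediction at x = 6:", round(a * 6 + b, 2))
```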

You could analyse how the Thargoids are spreading to reach their hidden objectives so that you can determine what those objectives are, assuming you can compensate for what the players did to stop them.
You could come up with something to analyse player activity against the Thargoids and advise the best strategy for defeating them, but then all the players would do that and it would become a less optimal strategy.
 
Yes. We did extensive tests over weeks, and it's impressive what the AI can do, for sure. You just can't trust the results: at some point they inevitably get creative and 'invent' things based on heuristics [...]

O7,
🙃
Just like a lot of people in forums or marketing departments :D
So, in the end, there's not much difference.
 
Absolutely. The AI "art" ones are even worse in my opinion: people think an AI has miraculously drawn 'Tom Hanks dressed as Santa in The Hobbit', but everything it is using is literally ripped off from someone else, scraped from copyrighted images. Stolen.
Artists call it "inspiration" instead. 😜
 
It's not AI. It's just a marketing label for something that produces complex results from complex inputs. Hence the alternate label "true AI" for the thing that is AI.
Though since "true AI" doesn't exist in the sense you presumably mean ... and since "complex results from complex inputs" is a pretty decent summary of "thinking" anyway, I'm not sure that's really solved the labelling problem.

The labelling isn't so much of a concern to me (far simpler things have been described as "AI" for decades without much controversy, under the general heading of "get a computer to do something a human would have needed to do before") as the manufacturers' attempts to say "because we don't know how this computer program works, we can't be responsible for the results". I doubt they'll succeed in the long term, but they'll probably kill a few more people first.
 
I've been playing with it a lot and it's kind of scary to me when it's wrong, because I only ever know it's wrong when I ask it something that I already know the answer to.

So, if I don't know the answer, then how can I know I should trust it?

(screenshot: example 1)

(screenshot: example 2)
 