I asked the AI about Atmospheric Worlds and it answered...

LLMs are fascinating tech, but they need another development before they can be trusted as factual. As they are now, they're good for fiction only, and even then you need to curate their output.

Curiously, when I tried out DeepSeek (the 14B-parameter model, I think) I had a conversation about Peter Watts's Blindsight and the nature of consciousness, intelligence and AI. It distilled and presented some good points various people have made about these concepts (because that's what LLMs do). Then I asked it to recommend other such works in literature, film and theatre, and its list was pretty solid, except for one theatrical play. The playwright is real, and the theatre where it was supposedly first performed is real. When I asked DeepSeek more about the play, called "A machine that generates desire", the characters and themes were all compelling, and the concept of desiring-production is real. The only problem: as much as I searched, a play with that title does not exist. But I would actually like to go and see that play 🙂
 
Really, are we doing this kind of thing now?
It would appear that some folk have not understood the base definition of "Artificial" which is summed up in the single word "Fake"

As picked up on the internet, which everyone knows is never wrong...

Does artificial mean fake or real?

not produced by natural forces; artificial or fake. fake, false, faux, imitation, simulated. not genuine or real; being an imitation of the genuine article. man-made, semisynthetic, synthetic. not of natural origin; prepared or made artificially.
 

I think AI is fine to use if one keeps in mind its limitations. But it seems like a lot of people aren't aware of the limits of AI, especially LLMs. It doesn't help that the tech companies behind this sort of stuff have a vested financial interest in hyping up AI services, in order to keep the investment money flowing in.

I've found ChatGPT to be genuinely helpful for providing pointers for expanding upon my own ideas for fictional stuff. And I've also found it useful to help me learn about some minor syntax issues I've been experiencing while learning about HTML and CSS.

But some people treat LLMs as if they were some kind of search engine or even an oracle, functions that are way outside their actual capabilities.
 
This reminds me of the book Blindsight, which was where I first heard about the Chinese Room idea. I couldn't be bothered - or perhaps lack the capability - to explain, so I found this on Reddit.

Source: https://www.reddit.com/r/scifi/comments/1c5ds15/the_chinese_room_of_blindsight_generative_ai_and/
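For anyone who hasn't run into the thought experiment: the Chinese Room imagines someone producing fluent replies purely by following a rulebook, without understanding a word. It can be caricatured in a few lines of code — a pure symbol-to-symbol lookup with no comprehension anywhere in the loop. This is just an illustrative toy; the rulebook entries are invented for the example:

```python
# A toy "Chinese Room": the operator follows a rulebook that maps
# input symbols to output symbols. Nothing here "knows" Chinese.
RULEBOOK = {
    "你好": "你好！",        # a greeting gets a greeting back...
    "你好吗？": "我很好。",  # ..."how are you?" gets "I'm fine."
}

def room_operator(symbols: str) -> str:
    """Return whatever the rulebook dictates; shrug at anything else."""
    return RULEBOOK.get(symbols, "？")

print(room_operator("你好吗？"))  # a convincing reply, zero comprehension
```

The point Searle was making (and Watts riffs on) is that convincing output is no evidence of understanding — which is exactly the worry people raise about LLMs.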

The supposed AI is giving you what you wanted to hear, not factual information.
How do we know that isn't how our own consciousness "works"?
 
Next step: please just get the AI to implement the entire game and report back in 3 weeks. We don't need answers, we need progress.
Are any of the FDev (or any game company) developers actually AI programs? If not, why not? You'd think AI, if it's as amazing as we're told, could help humans develop games in lockstep.
 
Don't get me started about AI...

A year or so ago a flat-earther insisted there were scientific studies confirming something he was claiming (I can't remember what the claim was). He was challenged on it and replied with a list he said he had personally found, read and verified. He lied. The list looked convincing: correctly formatted links, journal names, issue numbers and article titles attributed to known researchers, even citations of specific pages in those real journals. But on those pages were different articles, by different authors, on different topics. This is what happens when you ask ChatGPT for scientific papers on a given topic. It gives you what you want. It makes it up.

Amusingly the current US administration seemingly did the same thing recently. Blamed it on "formatting".

In this case, "AI" has summarised the same nonsense we argue about.
I suspect the issue wasn't with the AI but with the user. AI is already used to great effect to data-mine scientific journals in my field; indeed, I fear the scientific curator's days could well be numbered.
 

The problem is that the outputs of this recent wave of AIs are probabilistic, not deterministic. A human knows that the answer to a question doesn't change just because it's asked again, and won't make up sources unless they're intentionally trying to pull the wool over your eyes. But an LLM, asked anything that requires significantly more detail than a basic query like "what's the closest planet to the Sun?", has a non-zero chance of confabulating an answer and backing it up with a source it hallucinated from whole cloth.
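That probabilistic behaviour comes from how these models pick each next token: instead of always taking the single most likely continuation, they sample from a probability distribution, usually scaled by a "temperature" setting. A minimal sketch of temperature sampling, using made-up toy logits rather than a real model:

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=random):
    """Sample one token from softmax(logits / temperature)."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # numerical edge case: return the last token

# The same "prompt" can yield different continuations on each call:
toy_logits = {"Mercury": 2.0, "Venus": 1.5, "a made-up source": 0.5}
answers = {sample_token(toy_logits) for _ in range(50)}
```

As temperature approaches zero the distribution collapses onto the top token and the output becomes effectively deterministic; higher temperatures flatten it, raising the odds that a low-probability (possibly confabulated) continuation gets picked.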

It's happened multiple times when lawyers have attempted to use ChatGPT in their work. I've no reason to believe that there would be significantly different results when attempting to use ChatGPT in a field of science.

I think scientific curation by flesh and blood humans is here to stay, even if the role has to evolve in order to deal with AI-assisted BSing.
 