I asked the AI about Atmospheric Worlds and it answered...



Well, that's it, I asked the AI, "Is there any possibility that Frontier developers will incorporate atmospheric worlds into Elite Dangerous?", and this was the answer I received...



In short, atmospheric worlds have been added, but only with very thin atmospheres (≈0.1 atm), like those introduced with the Odyssey expansion. These atmospheres produce visual effects (blue or tinted skies), but do not affect flight physics or include phenomena such as rain, storms, or real-world aerodynamics.

The community is deeply divided on whether Frontier could implement dense atmospheres in the current engine. Some comment:

“Dense atmospheres would require entire new flight models… a massive upgrade… So no. Not till the next expansion.”
“Landing on a planet with an actual atmosphere would require them to rework large parts of the game engine. That feature probably won't happen unless we get ED2.”

According to official sources and the Frontier forum, these densely atmospheric worlds are still on the roadmap, but in the distant future, with no defined timeframe. Since 2017, David Braben and other developers have admitted to working on it, although they warn that it's a complex task and won't be coming "anytime soon."

Future Outlook
Current Engine: Will remain with thin atmospheres and gradual improvements. Substantial advancements (meteorology, atmospheric flight, life, aerodynamics) require a major overhaul of the engine.

Next major expansion or Elite 2: Much more likely to bring dense atmospheres, realistic atmospheric flight, and rich environments, along with new physics systems and adapted gameplay content.

In conclusion
Possible? Yes, it's technically feasible. Stellar Forge already supports atmospheres.

Likely in the current Elite Dangerous? Very unlikely. Not expected in the next few years.

Where could it come from? Most logically, a new engine, a major expansion, or Elite 2.


Given the answer, let everyone judge for themselves.


 
So, let's see what we would need to make atmospheres matter mechanically, because purely cosmetic features aren't worth the dev time.
  • wind and aerodynamics
  • precipitation and persistent weather patterns, synchronized across multiplayer and between sessions
  • water systems, pressure mechanics, and pressure damage (gas giants go here)
  • dense and varied life in all kinds of shapes, with all kinds of threat levels and all kinds of value
  • persistent, procedurally generated urbanization that is the same for everybody, at any time
  • complex, interactive, and reactive planet-wide and galaxy-wide societies

You see, that's kind of an exponential curve. How far can you climb before Icarus greets you?
 
It's not AI 🤷‍♂️

O7

This reminds me of the book Blindsight, which is where I first heard about the Chinese Room idea. I couldn't be bothered - or perhaps lack the capability - to explain it myself, so I found this on Reddit.

Source: https://www.reddit.com/r/scifi/comments/1c5ds15/the_chinese_room_of_blindsight_generative_ai_and/

It took me a while to piece this together, but the book Blindsight by Peter Watts has changed the way I see many things, especially the current AI craze. For those who want to read the book (I highly recommend it), this post contains minor spoilers on events that occur towards the start of the book. One of the concepts explored in the novel is the nature of consciousness.

The sequence that sticks out to me is when the crew of the human ship approaches the alien ship and starts communicating with it. At first, it seems like the alien ship is conscious, or sentient on some level, as it offers very reasonable responses to the queries and communications of the crew. Then, the linguist on the crew has an epiphany and blurts a string of insults and profanities at the alien ship, to the shock of the rest of the crew. The linguist then informs the crew that the alien ship has no real grasp of the exchange going on, and tells them it behaves like a Chinese room. The concept of the Chinese room is not new and was not created by Watts, but it is essential to understanding the capabilities and limits of the new tech we are seeing today.

The Chinese room is a thought experiment in which a person who doesn't know any Mandarin works in a room. A Mandarin speaker can write on a piece of paper and slide it under the door of the room. Inside the room, the person has at their disposal a complex set of instructions telling them what characters to draw in reply. The reply is then slipped back under the door. The Mandarin speaker then believes the person in the room is fluent in Mandarin, when in reality they simply followed an algorithm.
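The rule-following setup can be sketched in a few lines of Python. This is only a toy illustration of the thought experiment: the "rule book" below is a tiny made-up lookup table, whereas the real argument assumes one vast enough to cover any message; the point is that the operator produces convincing replies with zero comprehension.

```python
# A toy "Chinese room": the operator follows a rule book (a lookup table)
# without understanding a single character. The entries are invented for
# illustration; the thought experiment assumes an astronomically large book.

RULE_BOOK = {
    "你好": "你好！",            # "hello" -> "hello!"
    "你会说中文吗": "会。",      # "do you speak Mandarin?" -> "yes."
}

def room_operator(note: str) -> str:
    """Match the incoming characters against the rules; no comprehension involved."""
    # Fallback rule: "please say that again."
    return RULE_BOOK.get(note, "请再说一遍。")

print(room_operator("你会说中文吗"))  # prints 会。
```

From outside the room, the replies look fluent; inside, it's pure symbol matching.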

This old thought experiment has been used to analyze the implications of programs like Alexa and Siri, but it becomes even more relevant today when there is such a buzz about what people refer to as "artificial intelligence." All the tools people today call AI, or generative AI, all the Midjourneys and ChatGPTs, are built using transformers. Essentially, ChatGPT and other generative AI apps are just overgrown text predictors (that's how they started). They got elaborate enough to "look" forwards and back to parse context, and exceeded their original text-prediction application to hold full-on conversations. These conversations seem natural, but at their heart, they just use context to scour a semantic vector space and spit out the reply that is the most likely within the semantic region the prompt mapped to.
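The "overgrown text predictor" idea can be shown with the simplest possible ancestor of these models: a bigram counter that always emits the statistically most likely next word. This is a deliberately crude sketch (real transformers use learned embeddings and attention over long contexts, not raw counts), and the corpus is invented, but the principle is the same: frequency, not understanding.

```python
# A minimal next-token predictor built from bigram counts: the crude
# ancestor of transformer-based text generation. It picks the most
# frequent follower of the previous word; there is no comprehension,
# only statistics over the training text.
from collections import Counter, defaultdict

corpus = "the ship is silent the ship is dark the ship answers".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word that followed `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<unk>"

print(predict_next("ship"))  # prints "is" ("ship is" occurs twice, "ship answers" once)
```

Scale the counts up to billions of parameters and the context from one word to thousands of tokens, and you get something that chats convincingly for exactly the same reason this predicts "is".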

Not to put too fine a point on it, but none of these programs have anything in them we could remotely call consciousness.

The supposed AI is giving you what you wanted to hear, not factual information.
 
Don't get me started about AI...

A year or so ago, a flat earther insisted there were scientific studies confirming something he was claiming (I can't remember what the claim was). He was challenged on his claim and replied with a list he said he had personally found, read, and verified. He lied. The list looked convincing: it had correctly formatted links; scientific journal numbers, names, and titles by known researchers; and it even cited specific pages in those real journals. But on those pages were different articles, by different authors, on different topics. This is what happens when you ask ChatGPT for scientific papers on a topic. It gives you what you want. It makes it up.

Amusingly, the current US administration seemingly did the same thing recently. Blamed it on "formatting".

In this case, "AI" has summarised the same nonsense we argue about.
 
Without a doubt, AI, whatever it is, gets confused with or without a spoon, but as a curiosity, I've shared the data it gave me. Which doesn't make it true, as some Cmdr rightly points out in the forum.
 
Well, as the AI rightly said, there is a difference of opinion; the apocalyptic Skynet take already hints at it. Ultimately, the only ones who really know are FDEV.

The issue is that the outputs of Large Language Models like ChatGPT can provide no more insight than just reading Reddit and forum posts, because that's what's in its training data. That's before taking into account the fact that LLMs will hallucinate and confidently state things which are not actually true. This is because LLMs don't fact-check themselves, and wouldn't know how to do that even if you told them to.

LLM outputs might be worth considering as a starting point for further research, on topics not properly covered by the likes of Wikipedia. But personally, I would want to double-check every "factual" statement and at that point, one might as well write one's own post from scratch and/or do one's own googling.
 
Hell, the last time I tested ChatGPT, asking it to translate fewer than 100 lines of code between two languages, it invented some variables but didn't define them, and tried to pass too many parameters into a function. Good job.
 
Unless the AI has access to internal FD plans and documentation, or even internal discussions on atmospheric planets, all it's doing is regurgitating player guesses about why FD will or will not implement said feature.

And because the AI states it in very confident terms it comes across as factual instead of the patchwork of guesses it actually is.
 