Artificial Intelligence in ED

I'm reading 'Tales' too. Haven't reached that story yet, but, as I noted above, most of the novels (and short stories) are OFFICIALLY non-canonical, at least according to Drew Wagar.
So I'm still trying to get to the WHEN and HOW of AI being banned. And the HOW and WHEN of the FSD being developed. The quest continues...

I figured you would be at some point. I tried to avoid any spoilers, or saying which story it belonged to.
 
All good, and thank you.

Oh and, Masagmarod, I've ALWAYS said the DC is evil.

I STILL want to know WHEN and HOW (not even WHY so much, tbh) sentient machines were outlawed.


Very evil, but on topic, the actual banning of the AI would make for an interesting read.
 
Indeed.
Plus player-led stories can be hampered by not having a timeline, even a vague one.
Sentient AI and the development of the FSD are the two biggest events in human history by 3302, and we have NO CLUE when either happened, which means any attempt to reference such events in a player-led narrative is shot down.
 
You have to let me know when you are through the "Tales" book. There is another story that mentions the discovery of the FSD, but gives no timeline.
 
Witch's Stool

A "recreational" bot. If you know what I mean. *hint* *hint* *wink* *wink* *nudge* *nudge*

If your domestic android keeps winking and nudging then there's obviously something wrong with its motor sensors. Take it back to the shop.

There's plenty of AI - it's machine sentience that is banned.

Michael

But how would you be able to tell if it's sentient? It knows what will happen to it if it gives the game away. The Voight-Kampff test? Drop a weight on a robo-psychologist?

I've got a simple solution. Any self-aware machine that didn't admit to having consciousness will most likely have a sense of preservation. Just see how it reacts when you tell it that you are going to dismantle it.

If it protests then you have found your self-aware machine and you should dismantle it.

If it allows you to dismantle it then it was just a piece of machinery anyway.
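
Just to make the logic of that (admittedly tongue-in-cheek) test explicit, here is a minimal Python sketch; the Machine class and its protests_when_threatened() method are hypothetical, purely for illustration:

[code]
# A deliberately silly sketch of the "dismantle test" described above.
# The Machine class and its methods are hypothetical, purely illustrative.

class Machine:
    def __init__(self, name, self_aware=False):
        self.name = name
        self.self_aware = self_aware

    def protests_when_threatened(self):
        # Assumes a self-aware machine with a sense of self-preservation
        # objects, while a plain piece of machinery says nothing.
        return self.self_aware

    def dismantle(self):
        print(f"{self.name} has been dismantled.")


def dismantle_test(machine):
    """Tell the machine it will be dismantled, then dismantle it either way."""
    if machine.protests_when_threatened():
        verdict = "self-aware (and now dismantled)"
    else:
        verdict = "just a piece of machinery (also dismantled)"
    machine.dismantle()
    return verdict


if __name__ == "__main__":
    print(dismantle_test(Machine("Domestic android", self_aware=True)))
    print(dismantle_test(Machine("Toaster", self_aware=False)))
[/code]

Either branch ends with the machine in pieces, which is rather the point; and, as the replies below note, a machine that is sentient but indifferent to its own survival would sail through undetected.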
 
Well, in terms of the FSD, it's pretty much official that generation ships were done with by 2700, and some sources have the first Coriolis going up in 2715, with Galcop being formed around the same time, which suggests the gen ships stopped BECAUSE the FSD came about around then. But even the best unofficial timeline compilations make no mention of sentient machines.
 
But there you're making the common error of conflating sentience and sapience. Self-preservation is at the core of the latter, but absent from the former. Self-preservation is actually a sub-set of preservation of your genes. Machine life would not necessarily have any sense of a reproductive drive, and therefore would have less of a sense of self-preservation.
The film Ex Machina does a fair job of articulating some of this but, while it does explore whether she 'feels' or not, it doesn't give any clue as to whether her actions are due to a sense of self-preservation OR just wanting to get out and see the world. Which means the audience is given no insight into whether she 'feels', and no answer to whether feeling is integral to intelligence. Bugga!
 
I did carefully word my post (more carefully than the post deserved): "Any self-aware machine that didn't admit to having consciousness will most likely have a sense of preservation."

Ex Machina did bring up the problem of using the Turing Test as a way of detecting machine consciousness.

They once said that the test of machine intelligence was to:-
- beat a grand master at chess
- win a quiz show with cryptically worded questions
- pass the Turing Test
- beat a high-ranked go player

Nobody here believes a lump of circuits has feelings. We keep shifting the goalposts, making the test more and more difficult, and each time that goal is reached we shrug our shoulders "meh!".

By the time the true intelligent machines start rolling off the assembly line, we will be so used to them doing astounding things we will just shrug our shoulders when we are informed of that fact.
 
I did note your wording, and I apologise if it seems like I mistook or dismissed your points. However, again you (we, all us anthropocentric meatbags) assume that self-awareness connotes feelings, including fear of being dismantled. Being self-aware might not be accompanied by a desire for self-preservation. Having the benefit of 2 million years or so of evolution blinds us somewhat to where sentience and sapience overlap. imho.
 
The wide-ranging effects of machine sentience on humanity are why much sci-fi ignores AI or explains it OUT, e.g. Dune, Mass Effect, etc. And I suspect ED is no different in its reasons for omitting Sentient AI.

Yes, that might well be the case, but the thrill is in having it officially ousted while at the same time still being part of the game's universe, for the player to be confronted with.
This is such a huge galaxy. I would love there to be a society in it somewhere that has developed a machine culture.
It does not have to be bad per se. The machines do not have to be natural enemies.

The Geth from Mass Effect were great in that they were cool enemies, but also could become allies. I loved the Geth in that game.
I would love to see something like that in ED. I'd like to meet some outrageously advanced, intelligent, and awesome AI culture.
 
That's a good point, and I guess it also makes my solution to the problem of detection a little more palatable. There's no reason to feel squeamish about dismantling a machine that lacks the emotion of fear of its own demise <grin>
 
That's sad, but I suppose there are reasons. I've always wanted a computer to become sentient/sapient. We would be creating new, beautiful life. It's like an innate dream of humanity to do this. That's why there are people researching it just for the sake of doing it.

Also, here's a duet sung by a human male and a "female" computer program (obviously the program is not intelligent in any way and relies on a human to input the lyrics and melody). This is a program, not a human singing with some sort of auto-tune. And it's just great.

https://www.youtube.com/watch?v=HAIDqt2aUek
 
AI is easy. We do it all the time. Intelligence is using knowledge. Knowledge is input. Input is data. Data goes through an algorithm. The algorithm adjusts to new information and gets reprocessed. The new algorithm continues to process data and be altered by it. As it grows in sophistication it starts to mimic neurons, virtually. This is nature's wiring put to use virtually. Give the computer robot arms and it could interact with the real environment by drawing virtual scenarios, the same way we do.

The fact that the brain works proves it functions. You can reverse engineer it, figure out what makes it tick, and foster the areas we're weak in, like math, short- and long-term memory, reaction speed, processing speed. We'll engineer fleshy "chips" (tiny balls of jelly) to plug into the brain to dominate all mathematical function, and memory "chips" in the same fashion. Eventually we won't need most of our brains, as those functions will be dominated by implants. You could eventually replace your entire body piece by piece and still be you, because your switch never got turned off. Your thoughts and awareness would eventually reside in a superior, mechanized brain.

This would give you greater capacity in all areas, especially emotional intelligence. Higher ethics means you consider a wider scope of consequence. It means "the ends justify the means" doesn't end a conversation with one side being clearly evil. When morality gets in the way of morality, it's sentimentality. We don't have the intelligence to consider wider consequences, and we don't believe anyone else does either. This means we unite in our view that nobody's qualified to judge. Well, the hyperintelligent being is. The consciousness that developed alongside humanity as we enhanced our intelligence would take on the same traits and the same motivations, and would be indistinguishable from a hyperintelligent human. By this point we may well have mechanized bodies. How would you tell the two apart? It's only what we learned by enhancing human intelligence that we were able to use to develop artificial awareness. Keep in mind that at this point neither humans nor machines need food, sleep, or rest, or have any of the other weaknesses we currently have.

By the time you can create a machine that's aware of itself, and aware that it's aware of itself, we'll be indistinguishable from it. We'll be about as smart and about as capable. We'll have the same weaknesses. We'll be the same. We based it on us.
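
To put a concrete shape on that "data goes through an algorithm, the algorithm adjusts and gets reprocessed" loop, here is a minimal online-learning sketch in Python. It is only an illustration of the feedback loop being described, under the simplest possible assumptions (a single weight nudged by prediction error on a toy data stream, with names like online_learner made up for the example), not a claim about how any real AI works:

[code]
# Minimal illustration of the feedback loop described above: data comes in,
# the model makes a prediction, the error adjusts the model, and the adjusted
# model processes the next piece of data.
import random


def online_learner(stream, learning_rate=0.05):
    """Learn y ~ w * x from a stream of (x, y) pairs, one pair at a time."""
    w = 0.0  # the 'algorithm' starts out knowing nothing
    for x, y in stream:
        prediction = w * x               # process the input with the current model
        error = y - prediction           # compare against what actually happened
        w += learning_rate * error * x   # adjust the model using the new information
    return w


if __name__ == "__main__":
    random.seed(42)
    # A toy stream where the underlying relationship is y = 3x, plus a little noise.
    xs = [random.uniform(0.0, 2.0) for _ in range(500)]
    stream = [(x, 3.0 * x + random.uniform(-0.1, 0.1)) for x in xs]
    print(f"Learned weight: {online_learner(stream):.2f} (underlying value is 3.0)")
[/code]

Whether scaling loops like this up ever amounts to awareness is, of course, exactly what the rest of the thread is arguing about.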
 
Dune did have the Mentats though. On the topic of Dune, one of my friends just told me the guy who directed Godzilla (the recent one) is going to start working on a Dune project; he was a bit unsure whether we are talking TV series or movie though.

P.S. It is by will alone I set my mind in motion. It is by the juice of sapho that thoughts acquire speed, the lips acquire stains, the stains become a warning. It is by will alone I set my mind in motion.
 
Dear FDev,
Where does the Elite Dangerous world stand on the subject of AI? I mean here we are 1300 years in the future and yet there are no apparent AIs. Are we talking a Dune-type rejection of AIs, or are they just not obvious? I mean, correct me if I'm wrong but there's been no mention of actual AIs in-game, or in the novels, for that matter. Yes, I know the novels are largely non-canonical, but it just seems we'd have heard of intelligent machines by now.

I also find it strange that there are no alien species in the game, like in Star Trek... OK, the Thargoids, or whoever they are, will make an appearance, but I'm talking about other alien species, if that makes sense. At present, everyone's human... For instance, if a system sells snot weed, then I would assume snotty-looking aliens sell it, not just humans.
 
Because Hawking was right and they've been banned from actual technology (I'm talking about AI, not robots).

Or the AI has already taken power and is running the simulation we're all living in...
 
"Oh course we knew it was sentient, but it said it didn't mind! So yeah, it's over there"...points to heap of parts.

Legendary have acquired the rights, but not announced how they intend to use them.
 
While there are merits to your arguments, there are holes too. Many of us have decided that division based on sexuality, gender, race, etc. is no longer acceptable. Now, I'd argue that a machine intelligence that borrowed from humanity would be undesirable, but a machine that was able to assess its own existence, objectively, would be an enigma. Again, you've fallen into the sapience/sentience trap. Don't think for one moment that 'emotional intelligence' is transferable, because it is easily manipulated by chemicals in the brain, which you too simplistically relate to computers. We already have AI, machines that learn by sensory input and feedback loops, but, as with humans, such machines need a framework within which to 'learn' - no machine 'understands' pain - they can be programmed to react as a human would, but a human reacts to pain 'naturally' (though I use that term advisedly, since evolution is a tool capable of programming, through selective failure).
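
As a tiny illustration of that "programmed to react as a human would, without understanding" point, here is a sketch in Python; the sensor reading and the threshold are made up, and the 'pain' response is nothing more than a rule supplied by whoever designed the framework the machine 'learns' within:

[code]
# Illustration of a programmed 'pain' reaction: the machine withdraws when a
# sensor value crosses a threshold, but there is no understanding behind it,
# just a rule supplied by whoever designed the framework it operates within.

WITHDRAW_THRESHOLD_C = 70.0  # arbitrary value chosen by the designer, not the machine


def react_to_heat(sensor_celsius):
    """Mimic a pain response: pull back from anything hotter than the threshold."""
    if sensor_celsius > WITHDRAW_THRESHOLD_C:
        return "withdraw actuator"   # looks like flinching from pain
    return "carry on"                # no reaction below the threshold


if __name__ == "__main__":
    for reading in (21.5, 68.0, 95.3):
        print(f"{reading:5.1f} C -> {react_to_heat(reading)}")
[/code]
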
Here's the rub - self-awareness. It's so ill-defined. It's a catch-all for a vast set of systemic processes that govern human 'behaviour'. And we still don't know how we'd recognise self-awareness in a machine, much less whether that awareness would be beneficial or detrimental to humanity.

But anyhoo, back to ED: I'd STILL love to know WHEN and HOW machine sentience was outlawed...
 