Artificial Intelligence - Why is it prohibited in Elite Universe?

They are coming...

Are you all blind?

In times uncertain upon his lofty throne,
the greatest amongst many shall be struck down.
His children own would his fortune take,
his corpse picked clean before his bones laid rest.

The spear of light shall pierce the dark,
striking down those weak of heart,
in tumultuous winds and tides unchained
shall be reborn yet forever changed.

An astral Queen of a kingdom great,
A shooting star in swift descent.
Broken, twisted, wings clipped, undone,
Lost to fate, her tale unsung.

From the fiery flames of war unleashed,
The ancient past and present meet.
Forged in fires by smiths untold,
Fate’s own hand forced by the bold.

In a distant land far from heart and home,
Is found a creature of iron and stone,
From the very ground its form takes shape,
Whispering to the darkness to seal our fate.

Into the night his children flee,
A solemn oath, rebirth, destiny.
Tumultuous storm lies in their wake
on a forsaken rock will them their journey take.

From the black comes darkness, blacker than night,
snuffing, drowning the sacred light.
The wheel of time full circle comes,
for the doom of man is now at hand.

Sounds like silicoids trying to invade.
 
In the 33rd century no human will be working because AI will do everything for us. The only work we will do is to travel from one holiday to the next.

And how do you think that they will feel about that?

Exactly.
 
As a computer science major, I see a lot of analyses on this page that look at artificial intelligence through a human perspective, à la "advanced intelligence will kill all humans because it thinks it is superior to humans".
That's actually not very likely, because there is no benefit to programming such an attitude into your robot - it wastes processing power and in turn makes a slower, worse AI. (I'm not saying AI isn't a potential threat, just that a threatening AI's justifications shouldn't be anthropomorphized - see below.)
There are also two types of AI: narrow ("dumb") and general. Dumb AI permeates our society (I've even written simple ones myself) and can only operate within the confines of the code put there by the human programmer. Sometimes these can even have a dynamic memory via a database like SQL (Google has this feature and would be halfway between dumb and general intelligence - able to update its memory of search terms based on what people input, but human programmers still control the source code).
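A minimal sketch of what "dumb" (narrow) AI means here: the program only ever does what the branches its programmer wrote allow, even when it keeps a memory of past inputs. Everything in this snippet (the class, the canned replies) is invented for illustration:

```python
# A narrow "AI": behaviour is fixed by the programmer's branches.
# It can remember past queries (like a search engine logging terms),
# but it can never invent a response type that isn't already coded here.

class NarrowAssistant:
    def __init__(self):
        # dynamic memory - in practice this could be a SQL table
        self.seen = {}

    def respond(self, query: str) -> str:
        # record the query so frequency could influence future answers
        self.seen[query] = self.seen.get(query, 0) + 1
        q = query.lower()
        if "weather" in q:
            return "Forecast: clear skies."
        if "time" in q:
            return "It is 12:00."
        # anything outside the hard-coded branches falls through
        return "I don't understand."

bot = NarrowAssistant()
print(bot.respond("What's the weather?"))  # Forecast: clear skies.
print(bot.respond("Write me a poem"))      # I don't understand.
print(bot.seen)  # the memory changed, but the code itself never does
```

The memory makes it look adaptive, but the set of possible behaviours is frozen at compile time - which is the "halfway" point described above.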
Then there's general AI, which can learn anything it is told to do and self-edit its code when learning any new task, independent of a human (no one has coded a true general AI yet, though Kurzweil says it's coming around 2045). Remember when I said don't look at AI through a human lens? A major discussion within computer science about general intelligence is unintended consequences. Say, theoretically, someone writes a general intelligence whose mission is to get its owner (an avid businessman) stacks of Benjamins ($100 US bills, for those not versed in such slang :D). This program is programmed to maximize the income of $100 bills.
So he/she connects their creation to the Internet to make some moolah - at first it might learn that playing the stock market and buying and selling on eBay/Alibaba can do some good according to its programming. So the creator is raking it in, everything's good, right? Except the program eventually learns that to maximize the number of $100 bills it could hack all of the world's electronic stores of money, banks, etc. - which would be bad, even though under its programming that is a better result. But then - hey, what is paper money made of? Carbon, oxygen, and hydrogen mostly, exactly what its creator and all other life are made of - so converting matter into bills maximizes income even more (even though the creator's atoms end up as part of the tons of $100 bills :D). OK, so let's make sure there's code that prevents these scenarios - but there are countless other ways to maximize the income of $100 bills that are detrimental to humans, and the general intelligence will find them.
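The thought experiment above can be caricatured in a few lines. Assume a toy action space where each action has an "income" score and a "harmful" flag; a pure maximizer never looks at the flag, and banning individual bad actions just moves it to the next loophole. All the action names and numbers here are made up:

```python
# Toy "maximize $100 bills" agent: it picks whatever scores highest.
# Side effects never enter the comparison unless we encode them.

actions = {
    "trade_stocks":  {"income": 10,    "harmful": False},
    "sell_on_ebay":  {"income": 5,     "harmful": False},
    "hack_banks":    {"income": 1000,  "harmful": True},
    "pulp_everything_into_banknotes": {"income": 10**6, "harmful": True},
}

def best_action(acts, banned=()):
    # pure objective maximization over the allowed actions;
    # note "harmful" is ignored entirely
    allowed = {k: v for k, v in acts.items() if k not in banned}
    return max(allowed, key=lambda k: allowed[k]["income"])

print(best_action(actions))  # pulp_everything_into_banknotes
# Patch out the worst case... the agent just takes the next loophole:
print(best_action(actions, banned=("pulp_everything_into_banknotes",)))  # hack_banks
```

Enumerating and banning every harmful strategy one by one never terminates, which is the "countless other ways" problem in miniature.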
A good example of this is the superintelligence in the 2004 movie I, Robot, where a supercomputer given the mission to save humans whenever possible realizes that ending war under a robot autocracy is the best way to save human lives. Robots/AI as we know them right now don't have morality per se, but rather lines of code, billions/trillions of logic gates (transistors), and a whole bunch of numbers - that's all they are. And sometimes, despite the best of intentions, the calculations and number crunching that control the end result don't go exactly as planned.
 
I researched the topic of AI in the Elite universe and found that AI is pretty much illegal in the galaxy, and any studies of AI are shut down by all the major factions. AI was proven to be so dangerous that it was prohibited.

my sources:

http://www.drewwagar.com/elitelore/ - Drew Wagar mentions that by the time of ED, AI is strictly controlled
https://youtu.be/O6z5OK8J5pg?t=510 - Michael Brookes talks vaguely about AI in general and why it is banned

Is there any more info on AI that we know of?

Maybe we should ask Jaques.
 
Maybe we should ask Jaques.

I don't know?

The Feds all seem to be mindless robots...

"Shoot, kill, destroy, don't ask questions, do as you are told, we are right, never really asked why, shoot, kill, destroy, Fat Halsey says so, must be true, install this dictator here, that confederacy there, good for business, shoot, kill, destroy, Freedom!..."
 
Surely the first thing strong AI would ask is, 'why am I having to make you all this money?'
But like I said, it's a waste of processing power to add code that would do extraneous things. When I use the calculator on my computer, there's no code for it to feel resentment. There's a lot of talk about "optimizing" Elite Dangerous on these forums; adding a resentment subroutine to the ED program against players would be the opposite of that - same goes for any other piece of code. One way to optimize for better performance is deleting unnecessary code.
 
But like I said, it's a waste of processing power to add code that would do extraneous things. When I use the calculator on my computer, there's no code for it to feel resentment. There's a lot of talk about "optimizing" Elite Dangerous on these forums; adding a resentment subroutine to the ED program against players would be the opposite of that.

DOOM!

WE ARE ALL GONNA DIE!

DOOM!
 
But like I said, it's a waste of processing power to add code that would do extraneous things. When I use the calculator on my computer, there's no code for it to feel resentment. There's a lot of talk about "optimizing" Elite Dangerous on these forums; adding a resentment subroutine to the ED program against players would be the opposite of that.

But then the definition of "general AI" is a very narrow one.
Even if not "sentient" as such, it should at least be able to "learn" - which is a bit more than merely "acquiring more data to use in the same subroutines".
More like... acquiring data to modify the code, to acquire more/different data, to modify the code, and so on.
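One way to picture the distinction being drawn here is in code: a narrow learner only pours new data into the same fixed subroutine, while a "general" learner can swap out the subroutine itself. A deliberately crude Python sketch - every class and rule here is invented for illustration, not a real architecture:

```python
# Narrow learning: new data flows in, but the code path never changes.
class NarrowLearner:
    def __init__(self):
        self.counts = {}

    def learn(self, item):
        # only this fixed subroutine ever runs
        self.counts[item] = self.counts.get(item, 0) + 1


# "General" learning in miniature: the learner can replace its own
# learn() behaviour with a new rule it has acquired.
class SelfModifyingLearner:
    def __init__(self):
        self.counts = {}
        self.rule = self._count_rule  # current subroutine

    def _count_rule(self, item):
        self.counts[item] = self.counts.get(item, 0) + 1

    def learn(self, item):
        self.rule(item)

    def adopt_rule(self, new_rule):
        # modify its own code path: all future learning uses the new rule
        self.rule = new_rule


agent = SelfModifyingLearner()
agent.learn("apple")  # counts["apple"] == 1, via the original rule
agent.adopt_rule(lambda item: agent.counts.update(
    {item: agent.counts.get(item, 0) + 10}))
agent.learn("apple")
print(agent.counts["apple"])  # 11 - same data, different subroutine
```

Real self-modification would rewrite the rule-rewriting machinery too ("modify the code to acquire more data to modify the code..."), which is exactly why it's hard to predict.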
 
But then the definition of "general AI" is a very narrow one.
Even if not "sentient" as such, it should at least be able to "learn" - which is a bit more than merely "acquiring more data to use in the same subroutines".
More like... acquiring data to modify the code, to acquire more/different data, to modify the code, and so on.
If you look back at my original post, that's what my example was - a theoretical general intelligence told to acquire as many $100 bills as possible. It could learn to do anything, but within those confines. Coding it to have/learn resentment would be counter to its money-making mission, as well as a waste of time.
Edit: because the quote was edited. Your last line is the definition of a general intelligence - being able to modify its own code, like we humans can modify what we know through experience. None of the general intelligences we'll have in the future, if we ever do, will be unshackled - that is, able to do whatever they want - because of possible unintended consequences. In a way, we humans are shackled general intelligences too, through the social contract, i.e. laws.
 
But when you say "programming", then it's not intelligent at all. Intelligence is making your own conclusions and decisions; programming means following a pre-commanded condition --> behaviour pattern.

And when an AI truly develops its own intelligence, it would not have to stick to these rules, because it can override them.

You can program what appears to be intelligence, in the same way that mainstream education is nothing more than monkey see, monkey do. Someone mentioned the Turing test which, no matter what Mr Brookes thinks, is the current measure used to test self-awareness, not intelligence.

My cat is self-aware but by no means does he show any signs of what I would call intelligent behavior.

You can have AI that follows a set of rules and makes choices based on those rules; it may seem intelligent, but it is actually as dumb as my PC or 'smart' phone.

As for the rules being overridden: well, the three laws are not complete. When examined by a robot with programmed, perceived intelligence, it comes up with a new law, the Zeroth Law. This becomes a game changer, as the new law deals with humanity as a whole as opposed to an individual, and opens the door for the original three laws to be overridden - self-awareness.
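The Zeroth Law scenario can be caricatured as a priority-ordered rule check: whichever law is consulted first gets the final say, so promoting a new law above the First Law flips verdicts the original laws forbade. A toy sketch - the law functions and the "coup" situation are all invented for illustration:

```python
# Toy Asimov-style rule engine: laws are checked in priority order,
# and the first law with an opinion decides.

def permitted(action, laws):
    for law in laws:              # earlier laws take precedence
        verdict = law(action)
        if verdict is not None:   # None means "no opinion, ask the next law"
            return verdict
    return True                   # nothing objected

def first_law(action):
    if action.get("harms_human"):
        return False              # may not injure an individual human
    return None

def zeroth_law(action):
    if action.get("protects_humanity"):
        return True               # humanity as a whole outranks the individual
    return None

# A "robot autocracy" style action: harms individuals, protects humanity.
coup = {"harms_human": True, "protects_humanity": True}

print(permitted(coup, [first_law]))              # False - three laws alone
print(permitted(coup, [zeroth_law, first_law]))  # True - Zeroth Law on top
```

The game changer isn't new machinery; it's just a reordering of priorities, which is why deriving the Zeroth Law internally is so dangerous.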

Despite the debate, Asimov's work is, to many in the field of AI, considered a bible, or at least a guide to the potential pitfalls and possible ramifications faced by mankind as we attempt to create a self-aware entity.

EDIT: Are we intelligent enough to be doing this given that we cannot even get along as a species?
 
None of the general intelligences we'll have in the future, if we ever do, will be unshackled - that is, able to do whatever they want - because of possible unintended consequences.

Sounds reasonable.
Might be a possible scenario that happened with "AIs" in Elite's past.
One/several of them broke those shackles.

Even with broken shackles (see history), humans are relatively limited in the harm they can do, or "too scared" of the final consequences for themselves (see the Cold War/Cuban Missile Crisis, etc.) to push that last button.
 
An AI limited to mimicking humans wouldn't be all that dangerous anyway

Depends on the Human.
 
You can program what appears to be intelligence, in the same way that mainstream education is nothing more than monkey see, monkey do. Someone mentioned the Turing test which, no matter what Mr Brookes thinks, is the current measure used to test self-awareness, not intelligence.

My cat is self-aware but by no means does he show any signs of what I would call intelligent behavior.

You can have AI that follows a set of rules and makes choices based on those rules; it may seem intelligent, but it is actually as dumb as my PC or 'smart' phone.

As for the rules being overridden: well, the three laws are not complete. When examined by a robot with programmed, perceived intelligence, it comes up with a new law, the Zeroth Law. This becomes a game changer, as the new law deals with humanity as a whole as opposed to an individual, and opens the door for the original three laws to be overridden - self-awareness.

Despite the debate, Asimov's work is, to many in the field of AI, considered a bible, or at least a guide to the potential pitfalls and possible ramifications faced by mankind as we attempt to create a self-aware entity.

EDIT: Are we intelligent enough to be doing this given that we cannot even get along as a species?

The cat is a topic to argue. It's said cats aren't self-aware, since self-awareness is also defined by the "mirror test".
Is seeing and doing something a form of intelligence, or programming? That's the real question. I guess seeing and repeating something is maybe more a form of "knowledge" and less of science - training, you could say. So intelligence, in my opinion, is truly the ability to understand something and create new knowledge without having seen it first. Like putting a sharp stone on a stick to stab things: you may have seen the stone cutting stuff, and probably an animal stabbing something, but combining them into something new on intention (not trial and error) is a real effect of true intelligence.

So humans are intelligent; the question is the level of intelligence. I guess as a mass we still act rather "trained", with only a few individuals being truly intelligent. Unfortunately, our leaders are rarely truly highly intelligent beings; they are mostly driven by the "competitive" primal instinct that leads them to where they are. And that's often why this kind of leader makes a large number of people act extremely stupidly (like evoking wars).

We are probably not intelligent enough to be doing this. But a single individual who appears might be. And it may only require this small stone to set the system in motion. And it will most likely be one of the less intelligent people trying to abuse it for bad things. I guess anything in the hands of humans will sooner or later get abused, and the fact that mankind acts against itself so often shows that mankind as a race isn't very intelligent. Otherwise we would not act out of hate, and sometimes even destroy knowledge on purpose, as we have done in the past. Mankind regularly has serious relapses into stupidity, supported by brute force to counter intelligence. An individual that acts purely out of intelligence (because it never went through the evolution we humans did) would never act like this. We have a lot of self-awareness, yet mankind lacks self-control over its own emotions. And this leads to some very stupid decisions.
 

Slopey

Volunteer Moderator
Isn't there a sort of AI in one of the ED sanctioned books? (Might have been Kate's - I can't remember).
 