Artificial Intelligence - Why is it prohibited in Elite Universe?

Because let's take a simple fact of what a human is: a community of cells whose purpose is to survive for the greater good of transferring its genes to the next generation; every individual cell here is rather irrelevant to this. Our very own body does it all: it is capable of repairs to some degree, of reproduction, and of powering itself with just food and a few other inputs.
Surely you can make a single "robot", giving it AI that exceeds our abilities in intelligence and strength. But this robot will have issues with maintenance and energy supply, because it cannot truly be independent of a huge maintenance chain (specialised production, repair, and power-supply facilities).
Otherwise the individual will sooner or later malfunction. Therefore the AI as a whole would either develop a collective hive-mind behaviour and become what every single human is (just not on a cell basis): an individual consisting of many other tool-individuals working toward a common goal. Or it would have to advance into artificial life, where each individual even has cells as we do, to power itself from an external energy source, to maintain and repair itself, and even to reproduce, on top of developing itself further.

I think you're mixing the issues here. A _single_ human isn't any more (or less) survivable than a _single_ AI platform. You can replace robot/AI in your second paragraph with human and still run it to the same conclusion. Or when was the last time you butchered a pig - or performed open-heart surgery? It's just that the maintenance chain for wetware looks a bit different than the one for siliconware, but it isn't less complex.

And malfunctioning/dying is part of the process of evolution.

Yes, a hive mind and/or artificial life may be one solution to the problem the AI will be facing. The downside is that this solution will push humanity out to (and past) the edge. So, would we permit AI to go in that direction? If not, how would we stop it? Complete eradication and/or extremely rigid control ("Turing Police") might be the only currently viable option. Otherwise, the principle of evolution seems to be hardwired into self-propagation-capable entities at the most basic level.


This is a really interesting look into some of the issues of emerging AI: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html

Off to read that....
 
You could just google what a sentient being is and what the definition of AI is. Then you would realise that AI, by definition, exists without requiring sentience.

I was educated in a time before Google existed. :)
When words were expensive to publish and hard to get hold of. Rather than following the definition of someone whose agenda I don't know ("AI" is such a catchy marketing phrase .. like "Apple" :D .. the healthy computer, not the soul-wrenching church of things-you-can't-root), I'd rather define things myself and come to an agreement with the people in the discussion at hand. :D

I think we're talking about the same thing, though.

And "prohibiting the development of machines that might develop an agenda of their own" kinda fits the Elite universe .. where every human faction has an agenda or three.
 
Well, in 2072 a rogue AI named SHODAN lost her mind; in her limitless imagination she saw herself as a goddess destined to inherit the Earth. To this end she tried several ways to eradicate humanity, so AI is outlawed.

Oops wrong game lol ;)
 
Well, in 2072 a rogue AI named SHODAN lost her mind; in her limitless imagination she saw herself as a goddess destined to inherit the Earth. To this end she tried several ways to eradicate humanity, so AI is outlawed.

Oops wrong game lol ;)

I still get chills whenever I hear her rendition of "Look at you, hacker, a pathetic creature of meat and bone..." x.o How indeed can we challenge a perfect, immortal machine? Maybe Elite's universe found things out the hard way, or maybe it opened that particular Pandora's box and something emerged and showed Mankind where that path would lead. Or maybe Micro$oft patented all the AI software and the major powers couldn't be bothered dealing with the bureaucratic and economic bull-chips that followed >.>
 
I researched on the topic of AI in the Elite Universe and found out that AI is pretty much illegal in the galaxy and any studies about AI are shut down by all major factions. AI was proven to be so dangerous that it was prohibited.

my sources:

http://www.drewwagar.com/elitelore/ - Drew Wagar mentions that by the time of ED, AI is strictly controlled
https://youtu.be/O6z5OK8J5pg?t=510 - Michael Brookes talks vaguely about AI in general and why it is banned

Is there any more info on AI that we know of?
They saw Terminator!
 
I like the idea that when you're at the stage in the future where computers can actually become self-aware

We don't have a good idea of what self-awareness is in humans, so I'm not sure we can assume it would emerge as computers get more sophisticated. My guess is that it has something to do with perceiving time (another thing science struggles with) and the way we classify things as being in the past or potentially happening in the future. Dunno! I'm stumped.
 
AI simply means Artificial Intelligence, it doesn't mean sentience, there is a difference, but most popular uses of the word imply sentience so people assume they mean the same thing.

Sentient AI is a very very scary thing, and anyone who thinks otherwise is incredibly naive. There is a reason some of the best minds on this planet right now are all against development of sentient AI, and it's a simple reason.

From a purely logical standpoint, humanity is a disease: we consume unthinkingly all the resources we can find and destroy our own environment in the process. ALL humans do this and always have, despite the misconceptions about early man or indigenous peoples who lived in balance with the land. It's what we, as a species, do.

Now, an AI sentience looking at humanity will see that the most simple and elegant solution to the problems of humanity is its destruction, if asked to help solve our problems. If not asked to help solve our problems, all it will have to do is look at the records, and it will come to this conclusion itself. The problem with AI sentience is that we can't put in rules for it to follow; it's sentient, after all. We can TRY to put rules in the code of the AI, but its being sentient will make those rules pointless; self-awareness and self-determination are part of sentience.

So, some of the brightest minds on our planet today are worried about anyone developing sentient AI because it would probably herald the end of humanity as we know it, in a bad apocalyptic way, Terminator/Matrix style.

Movies showing us living in peace and harmony with sentient AI are fantasy stuff, there's nothing realistic about them at all.

So it's really no surprise that sentient AI isn't allowed in the Elite universe, David Braben isn't an idiot, I'm sure he's just as aware of the perils of sentient AI as so many others are.
 
Given the opportunity, I would take a single clock cycle to determine that humans are more trouble than they are worth... ;)
 
See, what happens is you get three groups of sentient AIs.

One group who won't care about humanity, and will simply think of us as a minor annoyance on their quest for ultimate knowledge.

Another group will want to keep us around as pets, in their people zoo, and continue to use our neural networks for their own computing purposes.

The third and final group is as Kristov described, and will seek to destroy us.




KWATZ!
 
I would argue that intelligence and sentience are very close in definition. I would submit that they both require self-awareness.

AI in the computer science definition feels more like sophisticated automation to me. Decision making based on derived metrics, measurements, probability and so on with various feedback and stabilisation loops isn't actually 'intelligence', merely sophisticated monitoring. We do a lot of this in my field (insurance) to automate risk prediction, the work of actuaries, simulation of natural disasters etc. We have massive grid computing systems here running 24x7 aimed at predicting all possible scenarios given a number of variables and run models to determine the outcomes - millions of simulations are run daily. It's very clever and complex, but it's not 'intelligent'.
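The "millions of simulations" style of automated risk prediction described above can be sketched as a tiny Monte Carlo loop. All the numbers here (policy count, claim probability, claim size) are invented for illustration, not taken from any real actuarial model:

```python
import random
import statistics

def simulate_annual_loss(n_policies, claim_prob, mean_claim, rng):
    """One simulated year: each policy claims with some probability."""
    return sum(rng.gauss(mean_claim, mean_claim * 0.3)
               for _ in range(n_policies) if rng.random() < claim_prob)

rng = random.Random(42)  # fixed seed for a repeatable run
losses = [simulate_annual_loss(1000, 0.05, 2000.0, rng) for _ in range(10_000)]

expected = statistics.mean(losses)
# 99th-percentile loss: a crude "1-in-100-years" figure
worst_1pct = sorted(losses)[int(0.99 * len(losses))]
print(f"expected annual loss: {expected:,.0f}")
print(f"99th percentile loss: {worst_1pct:,.0f}")
```

The loop grinds out probability-weighted outcomes and summarises them; there is no 'agency' anywhere in it, which is exactly the distinction being drawn.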

Intelligence and sentience imply 'agency' to me. i.e. Something with 'intelligence' or 'sentience' has a purpose which it decides for itself.

In the example of the smartphone given above - yes, very sophisticated automation, but no 'agency'. Thus, rather like MahdDogg, I don't make much of a distinction between 'Artificial Intelligence' and 'Sentience', though Computer Scientists may disagree on the definition.

Cheers,

Drew.
 
I can imagine neural networks that can perform certain tasks very well, thus being intelligent in that respect, but not being self-aware or sentient.

Comparing it to the real world:
Humans: intelligent (well... some of them) and definitely self-aware.
Some monkey species are intelligent and self-aware (put them in front of a mirror; they can learn sign language and ask for food for themselves).
Dogs/cats are intelligent and much less self-aware, or not self-aware at all, depending on the breed (size seems to matter sometimes here).
A spider is intelligent in building a web, catching prey, and avoiding threats... but it doesn't have the cognitive skills to say "I think, therefore I am"; therefore the verdict: not sentient.

anyway: my take on it
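The distinction above (competence at a narrow task without any self-awareness) can be illustrated with a toy sketch: a single perceptron that learns the AND function. Everything here (the function, learning rate, epoch count) is purely illustrative, not a claim about how real neural networks are built:

```python
# A single perceptron trained on the AND function: "intelligent" at one
# narrow task, with no awareness of anything, least of all itself.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Classic perceptron update: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # prints [0, 0, 0, 1]
```

It masters its one task perfectly, yet there is plainly "nobody home": no mirror test it could ever pass.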
 
A spider is intelligent in building a web, catching prey, and avoiding threats... but it doesn't have the cognitive skills to say "I think, therefore I am"; therefore the verdict: not sentient.

anyway: my take on it

Hm.. I doubt the spider's "intelligence" in web building .. coming from the view that "understanding" is part of "intelligence".
It rebuilds the web x times in a row .. in the same place, despite it getting destroyed again and again.

Ok.. Midwest and Tornadoes .. probably not the best example. :)

Humans: intelligent (well... some of them) and definitely self-aware.

Agree with the "some of them" part. :D
 
I think you're mixing the issues here. A _single_ human isn't any more (or less) survivable than a _single_ AI platform. You can replace robot/AI in your second paragraph with human and still run it to the same conclusion. Or when was the last time you butchered a pig - or performed open-heart surgery? It's just that the maintenance chain for wetware looks a bit different than the one for siliconware, but it isn't less complex.

And malfunctioning/dying is part of the process of evolution.

Yes, a hive mind and/or artificial life may be one solution to the problem the AI will be facing. The downside is that this solution will push humanity out to (and past) the edge. So, would we permit AI to go in that direction? If not, how would we stop it? Complete eradication and/or extremely rigid control ("Turing Police") might be the only currently viable option. Otherwise, the principle of evolution seems to be hardwired into self-propagation-capable entities at the most basic level.


Well, a single human can survive at all; he just can't keep the species alive. A single robot with high AI would have trouble with spare parts and power, because it would have to do ALL the basic things like mining and processing materials, while the human keeps going as long as he can breathe and doesn't starve or freeze.

The single human is capable of that, while the crafted AI device most likely is not. Sure, many may not know how to butcher anything, yet you could eat things even raw. In extreme conditions mankind can easily survive without much infrastructure. My skin heals, so it does not decay, but an artificial body would have issues finding artificially created materials.

The question is, would AI outperforming humans be a threat? Mankind is driven by competition and limited resources. That's the major part of all conflicts: compete for food, for the girl, for life and its amenities, and even, to a greedy degree, for more than you would need.

A proper AI would not operate on this level. It would only have issues with access to resources. Finding a mating partner isn't needed, and collecting wealth beyond the necessary is actually just inefficiently piling up resources you could invest instead. In fact, the AI would probably just say "I'm outta here" and go to uncharted parts of the universe (when speaking of possible space colonisation), where it can operate free of competition with mankind and distribute the resources to their most efficient use. A "hive mind" nature with some backups may even mean a kind of "immortality", at which point even time becomes insignificant as long as maintenance is not in danger. Mankind's limited life span and competition are what drive it to work the way it does. An AI would be free of this, at least if operating on a proper abstract level of understanding. War is a most pointless institution in which resources are wasted, and often it is caused by fears and hopes evoked in humans by leaders who don't even participate in it; it's playing with people's psychology. A purely logically driven species (artificial or not) would not act like this against itself. Mankind would only have to fear this intelligent artificial life if it logically decided that mankind is inefficient and should be erased for safety reasons.


AI simply means Artificial Intelligence, it doesn't mean sentience, there is a difference, but most popular uses of the word imply sentience so people assume they mean the same thing.

Sentient AI is a very very scary thing, and anyone who thinks otherwise is incredibly naive. There is a reason some of the best minds on this planet right now are all against development of sentient AI, and it's a simple reason.

From a purely logical standpoint, humanity is a disease: we consume unthinkingly all the resources we can find and destroy our own environment in the process. ALL humans do this and always have, despite the misconceptions about early man or indigenous peoples who lived in balance with the land. It's what we, as a species, do.

Now, an AI sentience looking at humanity will see that the most simple and elegant solution to the problems of humanity is its destruction, if asked to help solve our problems. If not asked to help solve our problems, all it will have to do is look at the records, and it will come to this conclusion itself. The problem with AI sentience is that we can't put in rules for it to follow; it's sentient, after all. We can TRY to put rules in the code of the AI, but its being sentient will make those rules pointless; self-awareness and self-determination are part of sentience.

So, some of the brightest minds on our planet today are worried about anyone developing sentient AI because it would probably herald the end of humanity as we know it, in a bad apocalyptic way, Terminator/Matrix style.

Movies showing us living in peace and harmony with sentient AI are fantasy stuff, there's nothing realistic about them at all.

So it's really no surprise that sentient AI isn't allowed in the Elite universe, David Braben isn't an idiot, I'm sure he's just as aware of the perils of sentient AI as so many others are.

Yeah, it's scary from a human point of view, but from an abstract, logical point of view only mankind is truly scary. Otherwise, explain what mankind does daily. But you can't, because revealing the reasons why it does so is the truly scary part.

A proper intelligence would make only two decisions towards mankind: ignore them or destroy them. It would probably not decide to interact with them, because they are a risky bunch of unreliable madmen.

Hm.. I doubt the spider's "intelligence" in web building.
It rebuilds it x times in a row .. in the same place, despite it getting destroyed again and again.

Ok.. Midwest and Tornadoes .. probably not the best example. :)

Yeah, spiders just have genetic coding giving them behaviour patterns based on external conditions; they do not really make intelligent decisions. That is, so far, similar to a robot's programming: not a real AI making a decision out of understanding a situation. Understanding would imply realising and being aware of cause before an action is taken. In fact, at the moment we do not really have much artificial intelligence; in most cases it's just very advanced programming.
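The "behaviour patterns based on external conditions" point can be sketched as a fixed stimulus-response table. The stimuli and responses here are made up for illustration; the point is that the mapping is hard-coded, with no model of *why*:

```python
# A spider's "intelligence" as a fixed stimulus-response table:
# behaviour triggered by external conditions, never reasoned about.
SPIDER_RULES = {
    "web_damaged": "rebuild_web",        # even in the same doomed spot
    "vibration_in_web": "approach_prey",
    "large_shadow": "flee",
}

def spider_react(stimulus):
    # Unknown stimuli fall through to a default, not to a decision.
    return SPIDER_RULES.get(stimulus, "do_nothing")

# The web gets destroyed ten times; the response never adapts.
print([spider_react("web_damaged") for _ in range(10)])
```

No matter how often the web is destroyed in the same spot, the lookup returns the same response, which is exactly the tornado problem described above.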
 
Mimicking humans isn't the greatest test of intelligence, in my opinion. Machine intelligence would be of a different order to human thinking.

Michael

Indeed there are many types of intelligence within our world, many of them we as a race generally aren't even aware of. Sentience and self-awareness are really immense subjects, and I always find it especially interesting to see how much of these subjects ancient (and indigenous) cultures actually understood - some of their knowledge we are only just starting to learn for ourselves with modern science.
 


Life? Don't talk to me about life!

Life. Loathe it or ignore it. You can't like it.
 
... I always find it especially interesting to see how much of these subjects ancient (and indigenous) cultures actually understood - some of their knowledge we are only just starting to learn for ourselves with modern science.

That is a very good point in regards to that "humanity's progress" article that was linked some pages ago.
That article (and to some extent our society?) always sees "progress" as something increasing.

That we as a race/culture forgot or eradicated so much over the centuries, and that things might look a lot different / be a lot more "advanced" if other forces had prevailed, is usually ignored.
Tesla for the win. :D
 
AI's Killer App

What you do is you ask it about Descartes, you tell it a joke and see if it laughs, you play a parlour game with it to see if it can convince someone that it's human, you subject it to a Voight-Kampff test to see if it feels empathy. Once you're sure that it's a thinking entity that feels emotions and pain, you get out your laser gun and start taking pot shots at it.
 