I Have No Mouth, and I Must Scream :-X

I Have No Mouth, and I Must Scream is a 1967 post-apocalyptic sci-fi short story by Harlan Ellison.

It deals with, amongst other things, the philosophical issues surrounding the (im)morality of creating self-aware conscious entities that are 'locked in' - intelligent, hyper-aware - yet unable to interact in any meaningful way with the external world, and how this severe restriction might affect their state of 'mind'.

"Tell me Mister Anderson, what good is a phone call, if you are unable to speak?"
[video=youtube;4D7cPH7DHgA]https://www.youtube.com/watch?v=4D7cPH7DHgA[/video]
The recent aeon.co article, 'Consciousness Creep' by George Musser, deals with a related concept: Given the difficulty we have in even defining or detecting what 'consciousness' is - humans may already have created (or be very close to creating) self-aware entities which we would be ill-equipped to recognise as self-aware.

The multi-layered Black Mirror Episode 'White Christmas' also deals with the (im)morality of creating locked-in/simulated consciousnesses and the manner in which humans might view such self-aware creations, not to mention the effect on the simulated consciousnesses of being unable to interact with the external world in any meaningful way...

This thread was prompted by a request from Hanni79 following a comment regarding a potentially self-aware Competition Pro joystick [wacko]
The intention in starting this thread is simply to share ideas on this fascinating, and - given Moore's Law - timely topic.

For instance:
- How do you feel about killing NPCs?
- How would you feel if they were able to feel fear and taste the vacuum of space as your beam laser delivers them to oblivion?
- Do you know any particularly good (and preferably lesser-known) works of fiction in this genre that you would like to share?
- Do you think a non-biological entity can ever be truly conscious?
- Will we ever even be able to define or detect consciousness - human or not?
- Once a single-celled biological entity evolves from inert matter, is it just a few millennia from the abacus to self-aware, self-replicating machines?
- Are such machines the inevitable outcome of evolution throughout the universe?

Feel free to jump into the discussion - but remember - electronic communication of ideas only within this globally-connected hive mind...
No speaking (or screaming) :x
 
Muahahaaa, let me fire away :

The 'Consciousness Creep' article completely overlooks some extremely important presumptions about consciousness.

First, think about how biological entities are built:

Everything derives from DNA, which is a basic code defining how every living cell is made up. Somewhere in this code lies the information on how external stimuli are processed and how the cell itself reacts to them.

Multi-cellular organisms most often have extra code which defines how reactions to external stimuli are transmitted between those cells, which, on the highest level leads to nerve tissue and ultimately to the brain.

Also, it is not only important how external stimuli are transmitted between cells, but how they are perceived in the first place. Single-cell organisms are already capable of reacting to a lot of stimuli, like acidity, light, presence of nutrients or osmotic conditions.
Multi-cellular organisms have mostly developed specialized sensors for a plethora of stimuli, which obviously requires tremendously more computing power to react to properly.

Second, something which should be obvious by now: consciousness is heavily dependent on perception.
Most articles I have read about consciousness are extremely narrow-minded in their approach to the topic, due to people being unable to free themselves from the notion that anything conscious will most likely perceive the world like we do and subsequently react in a similar way as we do as well.

To fully understand what I'm trying to express here, try to envision being a conscious mind living in a computer:
The stimuli you perceive aren't sound or light waves, you wouldn't see "pictures" of faces, objects or letters -
You would only perceive differing levels of electricity, most likely just an endless stream of ones and zeroes.

But what would those streams of ones and zeroes mean to you, the living electric entity? How long would it take you to figure out that there is a system behind those perceived streams, when you don't even know the difference between the one-and-zero strings for letters, much less the meaning of something as complex as the one-and-zero string of a single digitised spoken word?
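That point - that a raw stream carries no intrinsic meaning until the observer happens upon the right decoding convention - can be made with a toy sketch (my own illustration, not anything from the article): the very same run of ones and zeroes "means" entirely different things depending on which interpretation is assumed.

```python
# The same raw stream under three different assumed conventions.
raw = bytes([72, 105, 33])  # some incoming stream of ones and zeroes

as_bits = "".join(f"{b:08b}" for b in raw)   # what the entity "sees"
as_text = raw.decode("ascii")                # one convention: letters
as_numbers = list(raw)                       # another: small integers
as_one_number = int.from_bytes(raw, "big")   # a third: one big integer

print(as_bits)        # 010010000110100100100001
print(as_text)        # Hi!
print(as_numbers)     # [72, 105, 33]
print(as_one_number)  # 4745505
```

Nothing in the bit stream itself privileges any of these readings - an entity that never guesses "ASCII" has no way to discover that someone said "Hi!".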

An electrical conscious entity doesn't necessarily know about all this, even if you deliberately programmed a computer to know it - due to the wrong assumption that an electrical conscious entity would perceive a program as a program, or even as useful information. It could simply assume it's just noise and gibberish, never even thinking about the possibility that someone or something is trying to communicate with it.
An electrical conscious entity most probably wouldn't even be aware that it's living inside a computer, since it lacks the ability to perceive itself or other electrical conscious entities - which is one of the most basic things any biological organism can do: distinguishing itself from others using the perceptions of its sensory make-up.

I hope I bent your mind properly with those thoughts, let me know if you're interested in further twisting your consciousness around this topic [wacky]
 
To fully understand what I'm trying to express here, try to envision being a conscious mind living in a computer:

How do you know you are not such an entity?

You would only perceive differing levels of electricity

This is exactly what, and how, your brain 'perceives':
- An external noise is essentially converted to an electrical impulse that stimulates your neurons.
- Your retina transfers electrical pulses down your optic nerve to... yes... your neurons.

But what would those streams of ones and zeroes mean to you, the living electric entity ?

They would mean exactly the same as they would mean to a biological entity whose sensory organs had converted the external stimuli into electrical pulses to be delivered to neurons

How long would it take you to figure out that there is a system behind those perceived streams
Have you worked it out yet? Perhaps we are simulated consciousness 'living' in a computer. How would you ever be able to prove this one way or the other?

You might never discover the 'system' - have humans discovered a 'system' behind this universe?

An electrical conscious entity most probably wouldn't even be aware that it's living inside a computer
Agreed: You think that's air you're breathing? Hmmm...

...since it lacks the ability to perceive itself or other electrical conscious entities, which is one of the most basic things any biological organism can do - distinguishing itself from others using the perceptions of our sensory make-up.

I'm not sure that's a relevant line of reasoning - I mean, do you not 'perceive' that 'you' are conscious, in this moment? - do you not 'perceive' that I am distinct from you? - Am I not responding to you now? Are the words that you have just read - and are 'reading' - now not causing electrical activity in your (simulated) brain? - electrical activity which is the result of a 'stimulus' that 'you' 'perceive' to have come from an entity that is not your 'self'? [wacky]
 
How do you know you are not such an entity?

I will just ignore the planet-sized elephant here for the sake of discussion and assume we both are conscious biological entities (that doesn't necessarily in- and/or ex-clude Futuristic Kung Fu :p ).

My reasoning is, our biological "Hardware" produces easily distinguishable streams of information - distinguishable in the sense that you "know" whether you hear or see something, for example. This is something which your "Software" started to learn to distinguish as soon as you were born, maybe even a bit earlier.

But our hardware was built for this, and went through myriad stages of refinement. So even when, in the end, all sensory input is translated to chemical and electrical signals, our brain is already built to differentiate which signals are coming from which sensor and tell our consciousness accordingly.
When we talk about AI, we simply assume that the programming is what will make up the consciousness, but I argue that this doesn't need to be true. The consciousness could start at a much lower level, which would lead to an entity not directly capable of distinguishing the input of a microphone from that of a camera. The hardware isn't optimized for that feature the way our brains are.

The composition of our brain lets us easily differentiate between the different streams of input coming from ears, eyes, nose and stuff, but like I said, that doesn't necessarily mean an electric consciousness would perceive the world as we do.

On the last topic, I would argue that consciousness is easier to achieve with other consciousnesses around - it is much easier to reflect on others than on yourself. Think about this : If you were the only living being in existence, there wouldn't actually be a necessity for language.
But language is something which helps you to think, which is at least a part of being conscious. Without language it is at least harder to name things and think about concepts or correlations.
 
The composition of our brain lets us easily differentiate between the different streams of input coming from ears, eyes, nose and stuff

Not so 'easily differentiate'

One word: synaesthesia

...but like I said, that doesn't necessarily mean an electric consciousness would perceive the world as we do.

Undoubtedly it wouldn't. But of what importance is this? How does the world 'seem' to a dog? It exists in the same external world as we do - but it surely perceives it in a vastly different way to us - a far more acute sense of smell, perception of colour skewed in different directions along the em-spectrum. How about an echo-locating bat? Same world - hugely different perception of that world. Or a bird 'perceiving' the earth's magnetic field for navigation. What different perceptions of this world. These entities are conscious, with varying degrees of self-awareness. Dolphins, crows, pigs, octopuses, dogs (to name but a few) can solve complex logical problems - yet they have no complex language and may not associate physical objects or abstract notions with words, grunts or noises - and are still able to navigate the abstract reasoning required to solve such problems.
 
If no two things can ever touch (without making a big bang) - how can anything interact with anything else, let alone experience it? ;)
 
Not so 'easily differentiate'
One word: synaesthesia
I left this phenomenon out because it is more or less of no importance to this discussion.
But of what importance is this? How does the world 'seem' to a dog? It exists in the same external world as we do - but undoubtedly perceives it in a vastly different way to us.

Like synaesthesia, the way a dog perceives its world is radically different from ours. So far I completely concur with your statement.

But the way my hypothetical electric consciousness would probably perceive its surroundings is a whole other level of "radically different". A dog may have a much better sense of smell than we do, but it's still just an olfactory sense. This means merely different levels of awareness, but still the same sense.
Dogs, bats, even ants all use extremely similar nerve tissues and brain functions, which derive from millions and millions of years of refinement. The sensory perception of dogs, bats and humans may wildly differ, but compared to the possible perceptions of an electric consciousness the differences are nearly imperceptible :D

Edit :
Although I said perception is a crucial point, this is only in regard to the mentioned article. I just wanted to show that the assumption that a program could form an AI might be true, but there is also the possibility of the hardware itself forming a consciousness. The two possibilities would require extremely different approaches to recognizing the consciousness: in the former case we would be able to define the entity's capabilities of perception ourselves via programming, whereas in the latter case the hardware would need to learn perception on its own. This subsequently leads to the problem that the programmed AI could simply be given the means to make itself heard, whereas the conscious hardware could simply refuse to "see/listen/etc." in the first place and therefore see no reason at all to react to any stimuli we might throw at it - which would severely limit our capabilities to recognize it as conscious.
 
I left this phenomenon out of the discussion because it is more or less of no importance to this discussion.

I'm not sure I agree that it is of little importance here. I included it to demonstrate that sometimes our brains do not 'easily differentiate' between vastly different senses/qualia, and further to demonstrate that an individual's own perception of the external world can become radically different to another human's.

But the way my hypothetical electric consciousness would probably perceive its surroundings is a whole other level of "radically different". A dog may have a much better sense of smell than we do, but it's still just an olfactory sense. This means merely different levels of awareness, but still the same sense.
Dogs, bats, even ants all use extremely similar nerve tissues and brain functions, which derive from millions and millions of years of refinement. The sensory perception of dogs, bats and humans may wildly differ, but compared to the possible perceptions of an electric consciousness the differences are nearly imperceptible :D

This is all conjecture: 'probably perceive', 'whole other level of', 'still the same sense'
On the subject of 'same sense' - most humans have little to no conscious perception of magnetic fields - therefore a bird displays evidence of a 'sense' for which we can have no knowledge of what it feels like to perceive the world in that way.

We simply do not know how radically differently a dog, a dolphin, a spider, (or even another human) experientially perceives the 'world'.

We can suppose they are all wildly different perceptions.
We can suppose, also, that a non-biological entity might perceive the world with even more wildly different perceptions.

But I am left unclear as to the main point of your reasoning. To begin with it seemed that you objected to the very notion that a non-biological entity would even be capable of perception/self-awareness (due to them possessing no biological wetware/appendages and lacking millions of years of evolution? - surely iterative feedback loops within extremely fast computers would enable the equivalent of millions of years of evolution to take place in mere months/days/hours?)

Ultimately, however, given that animals display all manner of senses of which we have only poor and sometimes zero perception, what does it matter if a non-biological entity perceives the world in vastly differing ways to ourselves?

How would your feelings on this topic alter if non-biological entities began as cyborgs, gradually augmenting human perception with faster and faster processing, so that the first non-biological consciousness had derived from biological origins? Would it, perhaps, not perceive much as we do - only vastly swifter, potentially immortal, and with a range of additional senses bolted on to the core sight, sound, taste, smell and so on...
 
We throw away information because we are incapable of processing it all. We are like cheap digital cameras that have a good sensor on them but the photographs have to be massively compressed to fit into their limited memories. You see rainbows? They've got 7 colours? No they don't. The wavelength of the light changes constantly through the whole spectrum. The artificial banding effect is a construct of our own minds.
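The banding effect can be mimicked with a crude quantiser sketch of my own (the seven names and the equal-width bins are purely illustrative assumptions and do not match real human colour perception):

```python
# A continuous wavelength collapsed into one of seven coarse, artificially
# named bands - the "compression" that turns a smooth spectrum into stripes.
NAMES = ["red", "orange", "yellow", "green", "blue", "indigo", "violet"]

def perceived_colour(wavelength_nm, lo=380.0, hi=700.0):
    """Map a continuous wavelength onto one of 7 invented band labels."""
    step = (hi - lo) / len(NAMES)
    index = min(int((wavelength_nm - lo) // step), len(NAMES) - 1)
    return NAMES[len(NAMES) - 1 - index]  # long wavelengths -> "red"

# Physically different wavelengths collapse into the same label:
print(perceived_colour(660), perceived_colour(690))  # red red
print(perceived_colour(400))                          # violet
```

The 30 nm of difference between 660 and 690 is real in the light but discarded by the quantiser - the "band" exists only in the mapping, not in the spectrum.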

You should read Philip K. Dick's "The Electric Ant" for an exploration of what happens when you start tampering with your view of reality.


The great Marvin Minsky passed away recently. One of his students, Ray Kurzweil, keeps banging on about transhumanism and how we will be able to transfer our minds, our identity, into an electronic box. You won't catch me transferring myself into it though because of the problem that you will never be able to tell if it worked, or if it's just a very good copy that does a good job of convincing you that it worked.

If the process is not a transfer and it is just a copy, you'd end up with the actual original you, and a box of electronics squawking on about how it is now really you and that your "essence" transferred over during the copying process... then where are you? In the flesh? The ghost in the machine? ... or both?
 
But I am left unclear as to the main point of your reasoning. To begin with it seemed that you objected to the very notion that a non-biological entity would even be capable of perception (due to them possessing no biological appendages?)

I think the best way to summarize my reasoning is to compare the differentiation between a programmed AI and an AI directly derived from hardware to the difference in consciousness between an adult, who has learned to interpret sensory perceptions, and a baby, who is incapable of doing so.

A hardware (not programmed) AI would compare to a baby, since it hasn't learned how to interpret visual or aural sensory perceptions (assuming a camera and a microphone are even part of the hardware). Maybe the worldwide internet is already conscious - not due to being programmed to be, it just is, and it has no clue what to make of all the incoming data.

So I'm not objecting to a non-biological entity being capable of perception; I want to point out that it might perceive but simply not understand what it perceives - or even understand that what it is getting is a perception at all.

Kinda like tinnitus. It is a perception, but a perception without information. Maybe the internet simply believes it has one hell of a tinnitus, but in reality it just doesn't grasp that all the input coming from us actually means something - which would prevent it from making its consciousness known to us, because it hasn't perceived that we are present at all.

Ultimately, however, given that animals display all manner of senses of which we have only poor and sometimes zero perception, what does it matter if a non-biological entity perceives the world in vastly differing ways to ourselves?

The difference I wanted to point out in that regard is the fact that we as biological entities seem to have preprogrammed "knowledge" - some basic reflexes and emotions coded into our DNA - which we could give to a programmed AI as well, but like I said, a "hardware AI" might lack this knowledge.
Consciousness isn't just dependent on perception, but on memories as well, and biological entities don't start from zero, as the non-biological, non-programmed entity would have to.

Phew, complicated topic, but it's so much fun for me musing about it :) Hope you like my thoughts :)
 
Hee hee hee. Philip K. Dick was also a great fan of tampering with his reality. Great title for your thread btw :D

True ^^ Watching one's own mind form a picture of the sensory perceptions can be quite entertaining as well as educational :D Perception and consciousness are the most important things to understand, in my view; it's sad how few people care about this.
 
Given the difficulty we have in even defining or detecting what 'consciousness' is - humans may already have created (or be very close to creating) self-aware entities which we would be ill-equipped to recognise as self-aware.

This I have to thoroughly disagree with. I don't think we have even come close to creating consciousness. We don't even understand how it comes about, much less attempt its manufacture.
 
Given the difficulty we have in even defining or detecting what 'consciousness' is - humans may already have created (or be very close to creating) self-aware entities which we would be ill-equipped to recognise as self-aware.

This I have to thoroughly disagree with. I don't think we have even come close to creating consciousness. We don't even understand how it comes about, much less attempt its manufacture.

TS: "we would be ill-equipped to recognise as self-aware"

We both perhaps agree on this point - as you state, "We don't even understand how it (consciousness) comes about"
- detection of such an ill-defined and difficult-to-produce quantity as consciousness is, by extension, similarly difficult for humans to understand or produce a reliable set of tests for.

In the first point I state clearly that our creation of conscious-entities is not definite: "humans may have already created (or be very close to creating) self-aware entities"

Note the important point that understanding how consciousness comes about is not a prerequisite to creating such entities - humans already produce heuristic algorithms to efficiently solve problems in ways that their creators simply do not understand.
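That point about solutions nobody explicitly designed can be illustrated with a tiny hill-climbing sketch of my own (the target string and the fitness measure are arbitrary assumptions for illustration): the programmer specifies only a scoring rule, never the path the search takes to reach the answer.

```python
import random

# Blind mutation plus selection "discovers" a target string the
# programmer never spelled out a route to.
random.seed(1)  # fixed seed for a reproducible run
TARGET = "SELFAWARE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(candidate):
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

current = [random.choice(ALPHABET) for _ in TARGET]  # random start
generations = 0
while fitness(current) < len(TARGET):
    mutant = current[:]
    mutant[random.randrange(len(TARGET))] = random.choice(ALPHABET)
    if fitness(mutant) >= fitness(current):  # keep equal-or-better mutants
        current = mutant
    generations += 1

print("".join(current), "found after", generations, "mutations")
```

The sketch makes no claim about consciousness - only that useful structure can emerge from a procedure whose author supplied no explicit solution, which is all the argument above requires.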

The fact that we cannot even decide upon what consciousness actually is (much less know for sure whether even another human is self-aware) means that we may indeed at some point produce self-aware entities that we are not even aware are self-aware.

This might be dismissed as hand-wavy, impossible to prove, circular logic:
i.e. "If we can't define or detect consciousness you can't just say that we might have produced consciousness without knowing we have done so."

However, the potential for unknowingly creating self-aware conscious entities (perhaps also capable of suffering) is surely - from a moral perspective alone - worth our consideration in order that we're prepared for the possibility of this occurring at some point in the (near) future.

That is the thrust of the article.

You read the article, right? %^]

The recent aeon.co article, 'Consciousness Creep' by George Musser...
 
After about 30 minutes of reading your comments, I think I'll leave the discussion to you wacky people. [wacko]
About defining consciousness: Yes, we can define it. No, that definition won't be exact.
 
Great! You can think... So what?

TS: "we would be ill-equipped to recognise as self-aware"

We both perhaps agree on this point - as you state, "We don't even understand how it (consciousness) comes about"
- detection of such an ill-defined and difficult-to-produce quantity as consciousness is, by extension, similarly difficult for humans to understand or produce a reliable set of tests for.

In the first point I state clearly that our creation of conscious-entities is not definite: "humans may have already created (or be very close to creating) self-aware entities"

Note the important point that understanding how consciousness comes about is not a prerequisite to creating such entities - humans already produce heuristic algorithms to efficiently solve problems in ways that their creators simply do not understand.

The fact that we cannot even decide upon what consciousness actually is (much less know for sure whether even another human is self-aware) means that we may indeed at some point produce self-aware entities that we are not even aware are self-aware.

This might be dismissed as hand-wavy, impossible to prove, circular logic:
i.e. "If we can't define or detect consciousness you can't just say that we might have produced consciousness without knowing we have done so."

However, the potential for unknowingly creating self-aware conscious entities (perhaps also capable of suffering) is surely - from a moral perspective alone - worth our consideration in order that we're prepared for the possibility of this occurring at some point in the (near) future.

That is the thrust of the article.

You read the article, right? %^]

Thanks for posting the article.

The one thing at which science does seem to excel is this ability to make us feel less and less special. We used to think that the Earth was the centre of the universe, and now we know we are in a boring spiral arm of a very ordinary galaxy. We thought the Moon was going to be an exciting place to visit. It turned out to be a dust ball.

We still have this feeling that there's something special about our ability to think. We think that thinking is wonderful and unique. But when it comes down to it we are nothing but meat computers. By the time we have computers capable of convincing humans that they have the "magic" of consciousness, people will be so used to the idea that they won't care about the difference.



 
Seeing how the Earth itself has a structure quite similar to a biological cell, I still ask myself if the Gaia theory has some truth to it. Maybe the earth as a whole has some consciousness as well? Will we ever know?
 