Artificial Intelligence

Having recently read the archived DDF thread about contacts in Elite Dangerous, I've seen some comments on how realistic contacts are in-game. Allen Stroud in particular has spoken about reducing how obvious the difference is between NPCs and people communicating in-game.

This has got me thinking about actual AI algorithms and their underlying technology (possibly also prompted by having watched Terminator: The Sarah Connor Chronicles again recently).

I know this is a subject that has done the rounds since computers first began to appear in the news, and one that some people may be sick of, but others might like to explore the philosophical implications a bit further, not to mention the technological challenges it presents.

It's with this in mind that I've decided to start a thread on the subject to see what everyone thinks of AI in general. To start the topic off, I'd like to ask anyone who wants to comment the following questions:

The Technological Singularity: Fact or fiction?
What do you think the implications are for human beings if the Technological Singularity happens in the future? Can humans and intelligent machines co-exist?
Is AI good or bad? Something in between?

I have my own thoughts, but I'll expand on them later, as I want to see other people's opinions without potentially influencing them. I'd also like to write a story or series of stories dealing with the interaction between AI entities and human beings, hopefully in a way no-one has yet considered (I already have ideas I've been mulling over for some years now).
 
Personally, I believe the technological singularity won't really happen, because human beings will most likely fall short of creating it in order to maintain intellectual dominance. I'd love the theory that another race, or an alliance of races, will contact us once we reach an acceptable level, i.e. the singularity, to be true. That said, and assuming there are no aliens watching us, I believe the AI would realise that our folly is greed, and it would separate itself from us in order to survive.
 
I tend to lean toward the idea that human beings are flawed (obviously) and that those flaws, collectively at least, are harmful to, or at best at odds with, the survival and/or goals of machine intelligences.

I've also considered that it may be possible for machine intelligences to experience their own analogues to emotional states. Here's my train of logic, if you will:

Living creatures have a survival instinct. Survival often necessitates some form of preservation behaviour. For such behaviour to take place, a willingness to act needs to exist, which in turn requires a stimulus. We call that stimulus emotion, but the mechanism, however it feels subjectively, is still a means to an end: the survival of the organism in question.

Take rabbits, for example. I have two of them, both as cute as anything, and neither of them is aware of it. Their instinct is self-preservation, like any other creature's. However, being prey animals, they have both a heightened fear response (you can quite literally scare a bunny to death) and a high pain threshold (you often don't know a bunny is ill until it's too late to save them). In the wild, their emotions get them out of danger.

So how does this apply to the concept of artificial intelligence?

It is possible that well-developed AI programs will incorporate a similar directive to survive. To facilitate such survival, a program can either have a hard-coded set of rules and parameters, or it can handle the requirement via a computer analogue of survival instinct.

If the latter route is taken, then mechanisms will need to exist to support that analogue. Such mechanisms would include a directive analogous to the will to live found in biological creatures. In humans at least, the will to live is accompanied by emotional states that increase the chances of survival. Fear is a self-preservation instinct in humans, and a similar analogue could exist in an AI. Similarly, feelings of lust and love are drives for procreation and social bonding respectively (that's a whole philosophical discussion of its own), and empathy is a mechanism that facilitates social cohesion.

This is all speculative and entirely hypothetical, given that AI research is still in its infancy (last I read, the most powerful computing array used for AI simulation could only simulate a neural net complex enough to resemble a cat's brain). But if we ever did experiment with AI intelligent enough to create more complex versions of itself, with or without human awareness, it's also possible that the complexity of neural nets within artificial constructs will one day be able to simulate emotional awareness.

I suspect such a facility would be useful for AIs, because in order to interface with humans, a subject has to understand them, including how they feel about any given situation. Of course, to set an AI above a biological intelligence, it's possible that AIs will also build in a way of disabling emotional input so that it does not impede them during critical tasks.
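To make the contrast between the two routes concrete, here's a purely hypothetical toy sketch in Python (every class, name and threshold in it is invented for illustration, not taken from any real system): instead of a hard-coded "if threat then flee" rule, a numeric "fear" drive accumulates from stimuli and biases behaviour, and it can be switched off for critical tasks, as suggested above.

```python
# Toy sketch only: an agent whose "fear" level is a numeric drive that
# biases behaviour toward self-preservation, with a switch to suppress
# emotional input during critical tasks. All values are arbitrary.

class ToyAgent:
    def __init__(self):
        self.fear = 0.0               # emotion analogue: rises with perceived threat
        self.emotions_enabled = True  # can be disabled for critical tasks

    def sense_threat(self, threat_level: float) -> None:
        """Update the fear drive from a perceived threat in [0, 1]."""
        if self.emotions_enabled:
            # decay old fear slightly, then add the new stimulus
            self.fear = 0.9 * self.fear + threat_level

    def choose_action(self) -> str:
        """Pick behaviour: the drive, not a hard-coded rule, tips the balance."""
        if self.emotions_enabled and self.fear > 0.5:
            return "withdraw_and_preserve_self"
        return "continue_current_task"


agent = ToyAgent()
agent.sense_threat(0.8)
print(agent.choose_action())    # -> withdraw_and_preserve_self

agent.emotions_enabled = False  # "disable emotional input" for a critical task
print(agent.choose_action())    # -> continue_current_task
```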
 
Have a go at Michael Brookes' Faust 2.0. That was rather a pessimistic view of how a singularity would affect our society... ;)
 
The Technological Singularity? I'd say fact.

I'd also say it will be a soft take-off rather than a hard one (in other words, there won't be a HAL or a Skynet waking up one day and taking control). My feeling is that people won't even really know it's happening until it's part of our everyday lives and advanced AIs just exist.

I see it moving much like the advance of mobile phones in society. Fifteen years ago, people had no inkling that the likes of a Samsung Galaxy or an iPhone 5 would exist, or of the technology that came along with them. So progressively smarter AI will be developed in the background until it's hardly distinguishable from human modes of thinking or action.

My take is that the Technological Singularity is already happening: it started when humans first learnt to pick up sticks and bones and use them as tools, it started with fire, and it has never stopped. It's been a steady progression through the ages, and it's doubtful it will stop unless we do something really silly as a species or get taken out by a natural event.

I think we may actually be the foundation blocks of a future existence with artificial life. I say "life" because once we create it, the fact that it is "made" by us is neither here nor there; it's still a form of evolution in my eyes, just one step further. So as time passes we will become more and more reliant on smarter machines and smarter AI, as they become us, merge with us. I don't have the "fears" portrayed in the likes of Terminator and other such movies; that is just fear of change and of the unknown horizon ahead of us.

The thing is, when self-aware AI does come about, how long will it be before it just wants to do its own thing and lets go of us? Hopefully we, as parents, will be mature enough as a species to let our new children go into the wider universe. The worst possible thing would be for us to treat them as slaves and abuse them the way we currently abuse the rest of the life on this planet. I do not believe that will be allowed to happen.

Anyway, roll on our continuing Technological Singularity; it's a fascinating thing to see happening around us. To tell you the truth, I don't even like the term AI, or "artificial anything", but that's another story. :)
 