Monday, November 13, 2017

Computer Sentience

The great ethical debate in the year 2100 will be about the civil rights of sentient machines. 
The great ethical debate in the year 2200 will be about the civil rights of sentient humans.  
OK, that’s supposed to be a joke.  But let’s think a bit about the possibility of sentient machines.

Life is immaterial and ephemeral.  One moment after death, a being has all of the same solids and fluids, all of the same atoms and molecules, as at the moment before death.  But something mysterious has departed.  Life exists as a collection of electrical impulses and chemical changes.  The idea of a living, immaterial, non-physical spirit is a powerful one, and most people throughout history have subscribed to the idea that all living creatures are endowed with such a spirit.  But no such spirit has ever been reliably observed.  On a scientific basis, we must presume that life consists solely of the electrical and chemical interactions that animate our muscles and minds.

Consciousness, too, must be a matter of electrical and chemical physical properties.  This should not be a surprise: we can influence consciousness with chemicals as diverse as caffeine, THC, or LSD, and we can stimulate memories with electrical impulses to the brain.  Is there any reason, then, why machines using complicated patterns of electrical connections could not become as conscious and aware as humans?

Examples from Science Fiction
Science fiction authors have proved remarkably prescient about future technology and social issues, and there are innumerable examples of computer consciousness in science fiction.  Considering the remarkable consensus of science fiction authors about the possibility of computer consciousness, I am inclined to believe that it is a real possibility.  I think it is time to consider what form it may take, and what implications it will have for mankind.

Here are a few sentient machines from some of my favorite science fiction stories.

Character          Type          Book or Show                         Author
Mycroft            Mainframe     The Moon Is a Harsh Mistress         Robert Heinlein
Daneel Olivaw      Android       The Caves of Steel                   Isaac Asimov
Colossus           Mainframe     Colossus: The Forbin Project         D. F. Jones
Data               Android       Star Trek: The Next Generation       Various
Samantha           AI Program    Her                                  Spike Jonze
Marvin             Robot         Life, the Universe and Everything    Douglas Adams
Bender             Robot         Futurama                             Matt Groening
Jay Score          Robot         Jay Score                            Eric Frank Russell
Einstein           AI Program    Beyond the Blue Event Horizon        Frederik Pohl
TARDIS             Time-Ship     Doctor Who series                    Various

There are dozens of other examples in science fiction. What makes these stories interesting is the range of thoughts and behaviors exhibited by the sentient machines.  And in a way, the stories are explorations of what it means to be human and sentient.  In some of the stories, machines threaten mankind; in some stories they save mankind.  Sometimes they bond as friends with human characters; sometimes they question their own lack of humanity.  But as drawn by the authors, they are unquestionably alive.
Image from the film "I, Robot", screenplay by Jeff Vintar and Akiva Goldsman, after a collection of stories by Isaac Asimov.

Today, artificial intelligence is one of the fastest-developing fields of technology.  Artificial intelligence is expected to understand our speech and respond meaningfully, act as clerks or servants, interpret instructions from our gestures, render judgments and decisions in complex fields such as medicine, recognize and appropriately classify images and scenes, drive our cars, and work in our factories.  Ultimately, artificial intelligence may design and improve its own replacements.  At this time, there are no known limits to what artificial intelligence can do.

But all of this is less than what we see in science fiction.  Few computer specialists would say that today's artificial intelligence is anything like a living being.  AI programs execute instructions from programmers, and in some cases can adapt that programming based on input from the external environment.  But even then, the program is simply performing as it was designed, without motivation or will.  It isn't alive.

Sentience
What, then, would be the hallmarks of a sentient machine?  What qualities would it have that differ from today’s artificial intelligence?  Would we recognize a sentient machine if we saw one?

Here is a list of the qualities that I think are necessary to the definition of sentience.
Consciousness – Awareness of the surrounding environment.
Self-awareness – The ability to say “I am”, without being asked.
Personal Memory – The ability to remember former analyses (thoughts) and actions.
Thought – The ability to think in terms of processes, and to make forecasts and predictions based on those processes rather than on pattern recognition alone.
Will – The deliberate decision to perform or not perform an action according to self-determined reasons.
Empathy – The ability to recognize other beings as sentient.

Consciousness
Consciousness is hard to define.  In the biological world, I think that consciousness is a gradational quality, rather than a discrete property.  No one would suggest that a virus is conscious, and yet it has some property of life which is greater than that of a piece of rock.   But most would agree that a worm is more conscious than a virus, and a dog is more conscious than a clam.   And perhaps a colony of bees is more conscious than an individual bee. 

Both computer programs and flatworms can respond to external stimuli.  Flatworms can be trained to avoid stimuli associated with pain, and to seek stimuli associated with food.  Perhaps these actions demonstrate the emotions of fear and pleasure.  But it is unclear whether the responses of either flatworms or computers are aware and knowing responses, or simply the results of chemical and physical programming.
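To make that distinction concrete, here is a minimal sketch in Python (an illustrative toy of my own, not a model of real flatworm neurology; the stimuli and learning rate are invented) of a stimulus-response table adjusted by a simple reward rule.  The program comes to seek one stimulus and avoid the other, yet nothing in it is plausibly aware.

import random

random.seed(0)

# Learned tendency to approach each stimulus (positive) or avoid it (negative).
weights = {"light": 0.0, "vibration": 0.0}

def outcome(stimulus: str) -> float:
    # Fixed associations: "light" precedes food (+1), "vibration" precedes pain (-1).
    return 1.0 if stimulus == "light" else -1.0

for _ in range(200):
    stimulus = random.choice(["light", "vibration"])
    # A simple delta rule: nudge the weight toward the observed outcome.
    weights[stimulus] += 0.1 * (outcome(stimulus) - weights[stimulus])

print(weights)  # light -> ~ +1.0 (seek), vibration -> ~ -1.0 (avoid)

The avoidance looks like conditioning, but it is transparently just arithmetic; the question is whether the flatworm's version is anything more.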

Definitions of consciousness include awareness of exterior and/or interior things.  But awareness is difficult to define and to observe, even in humans who have suffered brain damage.  Consciousness, separate from the qualities of self-awareness and free will, will be very difficult to recognize in computer intelligence.

Personal Memory
Personal memory is a critical part of human personality.  I define personal memory as the memory of prior thoughts (analyses) and actions.  Personal memory is distinct from computer memory, which is used to hold data for processing; it is the memory of having performed previous processes, and the memory of their results.  This kind of memory allows people to learn, and to develop preferences which reflect personality.  Without personal memory, a machine could never develop self-awareness or will.

When we wake up in the morning, personal memory is what allows us to know that we are the same person who went to bed the night before.  Or more directly, personal memory informs us that we are the same person from moment to moment. 

Machine learning algorithms must have some kind of personal memory, recording and comparing previous analyses to new ones.  The type of memory probably depends on the type of machine learning algorithm.  Some kind of personal memory, perhaps developed from machine learning, will be a necessity for a sentient machine. 
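As a sketch of the distinction, here is what a "personal memory" might look like in Python.  Everything here is hypothetical (the class and task names are my own invention, not any real library); the point is only that the machine logs each analysis it performs, so new results can be compared against its own history rather than against raw input data.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Episode:
    """One remembered act: which analysis was run, and what came of it."""
    task: str
    result: float
    timestamp: datetime

@dataclass
class PersonalMemory:
    """Memory of having *done* things, as distinct from data held for processing."""
    episodes: list = field(default_factory=list)

    def record(self, task: str, result: float) -> None:
        self.episodes.append(Episode(task, result, datetime.now(timezone.utc)))

    def recall(self, task: str) -> list:
        # Retrieve prior episodes of the same task, so the machine can compare
        # a new result with its own past performance.
        return [e for e in self.episodes if e.task == task]

memory = PersonalMemory()
memory.record("classify_image_batch_7", result=0.91)
memory.record("classify_image_batch_7", result=0.94)
history = memory.recall("classify_image_batch_7")
print([e.result for e in history])  # the machine "remembers" having done this before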

Self-awareness
My son gave me the simplest definition of self-awareness: the ability to say “I am”, without being asked.  But perhaps this is a little too glib.  As with consciousness, living creatures span the range from clearly not self-aware to fully self-aware.

A test performed with some creatures uses a mirror.  A parakeet can be kept company by a mirror, never realizing that the parakeet in the mirror is not a companion.  A cat is initially mystified by a mirror, but may eventually realize that the cat in the mirror is not another cat.  A great ape will almost immediately realize that the image in the mirror is itself. 

It seems to me that for a digital entity, self-awareness implies a recognition of external reality and the separation of the self from that reality. 

How could self-awareness be recognized?  In biology, creatures have reward systems, seeking food and sex.  Rewarding oneself is a demonstration of self-awareness.  Self-aware creatures also pass the mirror test, recognizing a patch of paint visible only in the mirror.  If a computer could be observed treating itself differently than external reality, it might demonstrate self-awareness.  Perhaps a self-diagnosis problem might show that the computer would treat an internal problem differently than an external problem.  But computers lack inborn desires, fears or survival instinct.  It might be difficult to observe self-awareness in a computer, even when it exists.
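A toy version of that self-diagnosis test, in Python (a hypothetical design of my own, not a real diagnostic system; the component names are invented): an agent that sorts a fault into "self" or "external world" and responds differently to each.  Observing that asymmetry from the outside, without reading the code, is the hard part.

# Components the agent counts as part of "itself" (invented for illustration).
SELF_COMPONENTS = {"memory_bus", "learning_module", "sensor_driver"}

def handle_fault(component: str) -> str:
    if component in SELF_COMPONENTS:
        # Internal problem: the agent pauses work and repairs itself first,
        # a machine analog of self-preserving behavior.
        return f"internal fault in {component}: pausing tasks, running self-repair"
    # External problem: reported, but not treated as a threat to the self.
    return f"external fault in {component}: logging and notifying operator"

print(handle_fault("memory_bus"))
print(handle_fault("warehouse_conveyor_3"))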

Will
Will is the ability to perform independent actions.  This should be easier to recognize than consciousness or self-awareness: actions independent of programming would be evidence of some measure of sentience in a computer.  Yet machine-learning algorithms already allow computers to make independent judgments and perform actions.  Machines can play chess, diagnose medical conditions, route network traffic efficiently, answer questions, and perform many functions similar to humans.  But at what point does a computer exhibit free will?  How can we tell?

Computer AIs do unexpected things all the time.  Chatbots are a good example, offering spectacularly bad examples of conversations, based on learning algorithms applied to real human conversations.  Microsoft’s experimental chatbot “Tay” became notorious after only a few hours of exposure to interaction with real humans.  Of course, a number of users were deliberately trolling Tay, and succeeded in turning the naïve chatbot into a bigoted and sexually aggressive delinquent.  Within 16 hours, the chatbot’s personality was hopelessly corrupted, and Microsoft took Tay offline, ending the experiment.  In a second, accidental public release, the bot became stuck in a repetitive and poignant loop, tweeting “You are too fast, please take a rest” several times a second to 200,000 followers.

It is still unclear how we could recognize free will in a machine, as opposed to an apparent malfunction.  (Once again, I recall episodes of Star Trek which explored that very dilemma.)  Perhaps behaviors that were clearly in the best interest of the machine would be noticed, but how could we expect such behaviors, when machines have not evolved to pursue their own best interest?  Once again, recognition of sentience seems difficult or impossible.

Thought
It seems to me that thought is a property of sentience.  I believe that the empirical learning performed by AI programs is not thought.  (I have similar views about empiricism in science, e.g., http://dougrobbins.blogspot.com/2016/12/the-scientific-method-redefined.html.)  Actual thought involves something more than the correlation of previous patterns.  Thought requires the recognition of processes which change reality (even a digital reality).  When an AI program can recognize causation, rather than correlation, I would acknowledge that the machine is thinking.  And thinking is one component of sentience.

There might be a test which could reveal how a computer was solving problems, whether by empirical correlation or by understanding processes (thought).  Understanding processes allows something that physicist David Deutsch calls “reach”: processes can be extrapolated to situations far beyond the range of the input data.  For example, a computer might draw on empirical data about how apples fall from many trees, and describe how other apples fall from trees.  But understanding the process of Newtonian gravity allows the computer to describe the orbits of planets, far beyond the bounds of what could be achieved by any empirical program.
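That difference can be shown in a few lines of Python (a toy sketch: the training range and the choice of a quadratic fit are my own, and numpy is assumed).  A curve fitted to apple-fall data works within the data and fails absurdly outside it; Newton's law, applied to the same question, reaches all the way to orbital mechanics.

import numpy as np

# Empirical approach: fit fall time vs. drop height from "apple tree" data.
g = 9.81                                  # m/s^2, near Earth's surface
heights = np.linspace(1.0, 5.0, 20)       # training range: a few metres
fall_times = np.sqrt(2 * heights / g)     # the observed data
coeffs = np.polyfit(heights, fall_times, deg=2)   # a pattern, not a process

# Within the training range the fit is excellent...
print(np.polyval(coeffs, 3.0), np.sqrt(2 * 3.0 / g))

# ...but it has no reach: extrapolated to 400 km it returns nonsense,
# and it can say nothing at all about orbits.
print(np.polyval(coeffs, 400_000.0))

# Process approach: Newton's law of gravitation applies far beyond the data.
G, M_earth, R_earth = 6.674e-11, 5.972e24, 6.371e6
r = R_earth + 400_000.0                   # circular orbit at ISS altitude
period = 2 * np.pi * np.sqrt(r**3 / (G * M_earth))
print(period / 60, "minutes")             # ~92 minutes, matching the real ISS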

Empathy
My wife suggested that empathy should be a component of sentience, and I agree.  A sentient machine must have the qualities already discussed: Consciousness, Personal Memory, Self-awareness, Will, and Thought.  But just as self-awareness requires the recognition of external things (which are “not-self”), full sentience requires the recognition of other sentient beings.

Forms of Computer Sentience
As I would define sentience, it consists of several components: Consciousness, Personal Memory, Self-awareness, Will, Thought and Empathy.  If sentience does emerge in machines, I expect it will be gradual, and will not appear as the full-blown sentient beings of science fiction.  Recognition of sentience may be very difficult, particularly in machines which are already performing independent machine learning. 

In the biological world, four billion years of evolution has been necessary for the development of sentience.  Computers lack that evolutionary background.  Computers have no innate instinct for survival or self-interest.  Computers, even if they have the glimmerings of consciousness and self-awareness, may not demonstrate self-oriented behavior that would reveal their progress toward sentience.  Some period of evolution, by design or by accident, will probably be necessary for computers to develop sentience.  

I am not sure what form computer sentience might take when it appears.  It seems to me that sentience could appear in many different guises, and may surprise us by the form that it takes.  It may be a single machine, running specialized machine learning programs, and designed to develop sentience.  It may be a network of computers, or it may be the entire Internet.  The latter would echo an old story by Arthur C. Clarke, in which a global telephone system developed sentience.  Sentience may develop out of computer viruses, which already have considerable evolutionary pressure placed upon them.  Sentience may exist as software, jumping from device to device in search of new hosts.

In most science fiction stories, sentience develops in a single, unique machine, but it may not happen that way.  My daughter suggested that each of many small devices – cell phones, smart TVs, home security systems – may become sentient at the same time.  Alternatively, it is worth remembering that the human brain (as well as the human body) is a colony of smaller cells, each capable of performing some of the basic functions of life independently.  Cells in the brain each perform some analytical function, but it is only the total network of the brain that we consider sentient.

Evolution of Sentience and Computer Viruses
My son asked how computer viruses could develop sentience.  I'm thinking about viruses which are sophisticated enough to evolve, which may require human initiative to get started.

As for evolution, I am thinking of a virus deliberately programmed to introduce variants in subsequent generations, or to steal bits of code from other programs.  As in ordinary evolution, most of the variants will be irrelevant or harmful.  But given enough cycles, some of the variants may improve the virus' ability to survive.
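Here is a minimal sketch of that mutate-and-select loop in Python (a harmless simulation, nothing self-replicating; the "genome" is just a list of numbers, and the fitness function is a stand-in I chose so the effect of selection is visible).

import random

random.seed(42)

def survival_score(genome):
    # Stand-in fitness: in the scenario above this would be the virus's real
    # ability to persist undetected; here it simply rewards genomes near 0.7.
    return -sum((x - 0.7) ** 2 for x in genome)

def mutate(genome, rate=0.1):
    # Random variation: most mutations are irrelevant or harmful,
    # but the selection step below keeps the rare improvements.
    return [x + random.gauss(0, rate) for x in genome]

population = [[random.random() for _ in range(5)] for _ in range(20)]
for generation in range(50):
    population.sort(key=survival_score, reverse=True)
    survivors = population[:5]                                # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]             # variation

print(round(survival_score(population[0]), 4))  # climbs toward 0, the optimum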

As for sentience, the virus itself would not be sentient, any more than human DNA is sentient.  But I can imagine a program sophisticated enough to take over a host machine. The virus might run in the background, undetected, and issue the commands that produce sentience in the machine; and then send its “DNA” to another machine to reproduce and evolve further.  If some aspects of sentience had evolutionary value (awareness of surroundings, self-awareness, will, thought), then those traits would be enhanced in subsequent generations.

Fear of Computer Sentience
Science fiction is full of evil machines, perhaps with good reason.  A number of futurists, including Elon Musk and Stephen Hawking, have spoken strongly about the risks that artificial intelligence (whether sentient or not) poses to mankind.  I would not presume to contradict them.  When artificial intelligence reaches the point that it becomes self-designing, producing improved replicas without human design, it will exceed our capacity to understand or predict the capabilities of those machines.  But I nevertheless think that the development of sentient machines will occur.  

Inevitability of Computer Sentience
If I am correct that human sentience is strictly a matter of physical chemistry and electricity, then I believe that machine sentience is ultimately inevitable, provided that humanity survives long enough.  When it happens, it will challenge our place in the world, the meaning of our goals, and the meaning of humanity.  It may be the most important thing that has happened to mankind since the emergence of our own species as sentient beings.