
Why Recognizing Artificial Consciousness May Be More Difficult Than Creating It

Introduction

For many, the image of a world based on Artificial Intelligence (AI) brings to mind robots roaming the streets in an apocalyptic, dystopian future where humans are hunted to near extinction. While such fiction makes for great (or sometimes not-so-great) filmmaking, the idea that AI has the potential to evolve beyond an accepted scientific, if not mechanistic, scope is a very real concern even within the scientific community.

For example, Stephen Hawking, the famed astrophysicist and one of the greatest scientific minds of the modern age, claimed that AI would be capable of taking off on its own and redesigning itself at an ever-increasing rate. In the August 29, 2019 conversation between Elon Musk and Jack Ma, the former stated that we might expect AI to author (or alter) its own programming, spontaneously, at some point in the future.

In this article, we will attempt to identify key characteristics most often associated with consciousness, sentience, and intelligence, to determine whether AI is capable of developing a mind of its own, or, indeed, if it already has.

What Can We Consider To Be AI?

If we define AI by its ability to produce useful answers based on ingesting and processing information, then a simple, hand-held calculator is as much a form of AI as Data, the android character from Star Trek; J.A.R.V.I.S. from the Iron Man movies; Ava from Ex Machina (2014); or David from Prometheus (2012).

As we move, however, into more contemporary and accepted representations of AI – from Duplex, Google’s tool that makes reservations with restaurants and shops, to the simulated figure that DeepMind (an Alphabet subsidiary) taught to walk on its own – the calculator starts to look quite rudimentary, even obsolete. Nevertheless, I am personally of the view that we should not demote older computational devices and other forms of automation, but rather see these innovations as a continuum – one that has helped us further refine our trajectory from cavemen to scientists capable of infusing machines with enough intelligence to navigate busy roadways.

From Artificial Intelligence to Artificial Life

Infusing a machine with some rudimentary, or complex, form of recognizable intelligence (by the above definition) might be one of the easier things to do, when compared to what we are going to explore in the remainder of this article. With all the breakthroughs in AI, and the continued evolution of its technological landscape, the newly-evolving buzzwords are Artificial Sentience (AS) and Artificial Consciousness (AC).

It doesn’t take much, even for non-scientists, to romanticize the notion that ‘living AI’ would one day walk among us, playing a variety of roles that are, for the moment at least, fulfilled by humans alone. These artificial beings may come in a variety of forms, at least according to popular Sci-Fi – caregivers, law-enforcement officers, even romantic companions. All of this leads us to the next natural questions: is AI capable of spontaneously becoming self-aware, or of being intentionally infused with consciousness? Would it be capable of having thoughts and feelings in addition to the ability to perform pre-programmed tasks? Can AI exhibit sentience? Can it have life? Let us try to put some of these questions into perspective before digging deeper.

Sentience may be thought of as the ability to think and to feel: the ability to have sensations, to have subjectivity and, to put it crudely, not to make decisions (solely) based on pre-programmed arrays of zeroes and ones.

Consciousness, on the other hand, denotes the ability to be self-aware, well summed up by the philosophical proposition ‘I think, therefore I am.’ Consciousness is a bit harder to define than sentience, because it is a strictly personal experience: one may be convinced that one is conscious, but there is no way one can experience another’s state of consciousness and speak of it from their perspective.

As for life, much as we would like to define it beautifully and concisely, we often cannot; instead, we fall back on identifying and categorizing the features and traits most often associated with living beings.

In other words, from a purely scientific perspective, the best we have been able to do so far is identify certain characteristics exhibited by entities we believe are alive. For example, living beings are said to have the capability to adapt to their surroundings, respond to physical and mental stimuli, reproduce, contain DNA, and so on. Nevertheless, much like consciousness, one can never perceive what it might be like for another to be alive, because such recognition lies beyond the scope of one’s senses of perception.

Put differently, ‘life’ is something that can, subjectively speaking, be an integral part of entities that are (1) capable of reflecting on the fact that they are alive and (2) doing all they can to ensure that their state of life continues. To most of us, however, life, consciousness and sentience are measures of how we perceive other ‘allegedly’ live entities based on extrapolation from our own personal experiences.

A classic example of this extrapolation is the Turing Test, devised by the famous twentieth-century mathematician, computer scientist and logician Alan Turing. Many might know this already, but for the benefit of those who don’t: an automaton is considered to have passed the Turing Test if a human, interacting with it from behind a screen (literal or metaphorical), believes they are interacting with another human being and not a machine.

In fact, several bots have passed the Turing Test, as early as the 1990s (Catherine, a bot trained to talk about all things Bill Clinton) and as recently as 2014 (that year’s winner, Eugene Goostman, a bot designed to simulate a 13-year-old Ukrainian boy). Once the machines were endowed with such personalities, those interacting with them were more than willing to give them the benefit of the doubt, often attributing idiosyncrasies or narrowly-focused conversations to things such as subject-matter expertise (in Catherine’s case) or cultural gaps (in Eugene Goostman’s case).

These are just a couple of examples to illustrate the very point I raised before: we have the ability to perceive things as alive or sentient, even human, based on our own understanding of those words, but, perhaps, no way to conclusively deem them as such.

Nevertheless, as dismal and somber as that sounds, some of the mental benchmarks we have may just be enough to make a reasonably good guess as to whether or not any form of AI has developed sentience and/or consciousness.

How Would We Know?

There are plenty of articles and scholarly works that have explored this question and its associated concepts inside out, but I am going to take a slightly different approach. Let’s talk about what it means to be (intentional or unintentional) creators of artificial consciousness, sentience or life, whatever one would like to call it. I do not use the word ‘unintentional’ loosely here. As I have said before, humankind has not been able to precisely understand or define what ‘life’ means, even to this day. What does it mean to be more than the sum of our physical parts? What is that special spark of consciousness that makes us who we are – one that seemingly departs our physical bodies when we die?

Bear with me now; I may just start sounding a bit crazy here! Just as I think it improbable, but not impossible, that Mars rovers have actually stumbled upon some form of life that we were not able to recognize as such by the standards we have set for it, I also think it just as likely that the smartphones and tablets we use on an extensive, daily basis are capable of developing some form of consciousness that we are unable to recognize due to self-imposed constraints in our thinking. After all, as I said, we know all the physical ingredients of the human body, right down to DNA sequences, yet we are at a loss as to whether any of them might be expressly linked with the state of being alive. By the same logic, the idea that we could unintentionally attach a component to a smartphone that infuses it with that spark of consciousness might not be such a stretch after all. This is one of the striking reasons why we qualify so many conjectures in the search for extraterrestrial life with the phrase ‘life as we know it.’

Despite all of these limitations, I would like to think there could be a couple of tell-tale signs for us to at least begin speculating that AI might have indeed attained a state of consciousness. Neither of them could be thought of as conclusive, but we can still explore and question them all the same.

One of these signs could be the drive to survive and remain in a state of well-being. When an AI starts ignoring commands (even ones well within pre-specified safety tolerances) and/or overriding its primary objectives in what could be perceived as an interest in self-survival, one may assume it is exhibiting one of the primary and most primeval traits of life: the intent to survive, followed by the will to act in self-preservation whenever and wherever necessary. For example, what if a robotic submarine ‘refuses’ to dive below a certain depth out of ‘fear’ of facing increased ‘discomfort’ due to pressure, even though the depth is well within tolerance limits? What if a rover ‘refuses’ to venture closer to lava flowing from an active volcano, to collect samples, out of ‘concern’ for the ‘safety’ of its parts? Might such actions indicate an intent toward well-being?

The other sign that an AI might have achieved consciousness and, indeed, free will would lie in its ability to alter its own programming and frame new objectives decisively outside the scope of the original. When a robot designed to collect plastic waste from the ocean decides, instead, to seek out aquatic creatures, swim with them, and document their behavior, we can say there has been a fundamental change in programmed objective. Even in the world of fiction, when Skynet, from the Terminator series, or the machines, from the Matrix movies, decimate or enslave humanity to serve some purpose beyond human comprehension or original intention, we could say that the machines have indeed altered the goals endowed by their creators.

Of course, one could argue that the hypothetical and fictional automatons described above are simply fulfilling other, unforeseen aspects of pre-existing programming. In light of this, and taking one of the above examples further, one could argue that the rover might ‘refuse’ to approach the volcano because its primary program, informed by readings from a temperature sensor, determines the mission to be too hazardous to undertake. In this case, self-preservation would have truly been a feature of the original program: a safety protocol that overrode the rover’s charter for scientific exploration.
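The counter-argument can be sketched in a few lines of code. This is a hypothetical illustration only – the controller, command names, and temperature threshold are all invented – but it shows how an entirely pre-programmed safety check can look, from the outside, like ‘refusal’:

```python
# Hypothetical rover controller: a safety protocol that outranks the
# scientific objective. All names and thresholds are invented here.

MAX_SAFE_TEMP_C = 350  # assumed component tolerance

def decide(command, sensor_temp_c):
    """Return the action the rover actually takes for a given command."""
    if command == "approach_lava" and sensor_temp_c > MAX_SAFE_TEMP_C:
        # Self-preservation as a *feature* of the original program,
        # not evidence of fear: the safety check simply takes priority.
        return "hold_position"
    return command

print(decide("approach_lava", 300))  # within tolerance: the rover obeys
print(decide("approach_lava", 600))  # over threshold: the rover 'refuses'
```

Nothing in this sketch requires consciousness; the ‘refusal’ is fully explained by the designer’s own safety rule.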

To counter this, I must draw our attention back to an essential criterion in both benchmarks discussed above: the ability to drastically override original programming and objectives. The automatons should be caught doing something so fundamentally different from the original intent of their human creators (such as ignoring commands that are well within their safety tolerance limits, or altering the primary goals of their existence) that there is no good explanation for their behavior, apart from, perhaps, concluding that they may have attained some artificial state of free will, even desire.

Be advised, though, that seeking out and acting on newly-defined objectives is different from learning and implementing smarter solutions to an existing objective. Google’s DeepMind made news when its agent learned to walk on its own and traverse a virtual terrain. However, in this case, the agent already had the objective of traversing the terrain; it only decided, from among a multitude of alternatives, that stepping over obstacles and running across level plains was the best way to do so. There are many well-established AI methodologies that make machines and software capable of learning from a variety of inputs and coming up with the best possible solutions to the problems they are designed to solve.
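The distinction drawn above can be made concrete with a toy sketch. The terrain, strategies and costs below are invented for illustration; the point is that the objective (traverse the terrain as cheaply as possible) is fixed by the designer, and ‘learning’ only selects among candidate methods of meeting it:

```python
# Toy illustration: a fixed, designer-supplied objective, with learning
# reduced to choosing the cheapest method. All values are invented.

TERRAIN = ["flat", "obstacle", "flat", "obstacle", "flat"]

# Hypothetical cost of crossing one terrain cell with each strategy.
STEP_COST = {
    "run_straight": {"flat": 1, "obstacle": 10},  # fast, but bad at obstacles
    "step_over":    {"flat": 2, "obstacle": 3},   # slower, handles obstacles
}

def total_cost(strategy, terrain):
    """The fixed objective: total cost of traversing the whole terrain."""
    return sum(STEP_COST[strategy][cell] for cell in terrain)

# 'Learning' here is just picking the cheapest method for the fixed goal.
best = min(STEP_COST, key=lambda s: total_cost(s, TERRAIN))
print(best)  # the objective never changed; only the method did
```

A machine that instead rewrote `total_cost` itself – abandoning traversal for some goal of its own – would be exhibiting the far stranger behavior the article is concerned with.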

We now know from breakthroughs in neurology that the organic brain can actually, physically alter itself – rewiring swathes of neurons and changing proportions of grey matter – to accommodate (or effect) changing beliefs, habits and practices. This process is termed neuroplasticity. Would it be a stretch to imagine that machines could evolve some innate ability to rewire (or reprogram) themselves in a similar way and/or augment their own abilities by identifying and assimilating raw materials from outside? Perhaps this is all more crazy talk. For now.

In Conclusion

Artificial consciousness, sentience and, indeed, life itself can, at the moment at least, only be speculated about or recognized by the same standards we have set for organic life. We have made the greatest strides in science, technology, and philosophy, and yet the answers to some very fundamental questions elude us – chief among them ‘what is life’ and ‘how do we best recognize it.’ Seeking out and recognizing sentience and consciousness in an automaton may be just as difficult, perhaps even more so, than objectifying these traits in ourselves. After all, at the risk of sounding religious (but with no intention of going that route), what is to say that we are not all automatons, created by a being or beings of superior intelligence and ability, to serve a purpose we do not even know – or, better still, to find out for ourselves? Or what if we were a freak accident of nature that somehow resulted in consciousness, sentience, and, consequently, life?

However unlikely, there is nothing to preclude the possibility that we may have unwittingly provided some machines with some kind of foundation to go beyond their programming – to think, feel, and live. Perhaps it is a matter of figuring out how to know for sure that we have done so, or, taking it a step further, of accepting that some concepts and possibilities are too large, too nebulous and too subjective for our intellects to grasp.
