Will AI affect life as we know it?
This is a very different question from the current run-of-the-mill concern among managers: “will AI take my job away?” This question is about whether life itself will change – and, with it, what it means to have consciousness and morals as we understand those terms.
The answer is “Yes. And probably in our lifetime.”
From time immemorial, philosophers have pondered what it means to be alive. As early as five thousand years ago, Vedic philosophers in India had concluded that life was a combination of Consciousness (atman), Intelligence (buddhi), Conscience (dharma) and a body inevitably committed to Action (karma) – let’s give it the acronym CICA.
They spent hundreds of years getting to that set of four, and they were not the only philosophers to do so (Plato, Descartes, Nagel, Searle, etc. being among the others whose frameworks had similar elements). It is a great framework for understanding the future of being under AI.
Note that CICA capabilities in humans are evolutionary and personal. They are evolutionary because they become what we know them to be by developing from a rudimentary state at birth through a process of learning. They are personal because each person evolves individually in their own environment and cycle of life.
The new AI processes being developed may also be described as evolutionary and undergo supervised and unsupervised training. However, in the case of AI machines, there is no end to the evolution as there is no death, and the capabilities are transferable to other machines and hence not necessarily personal.
While most think of AI as a functionality confined within a computer box, it is, in fact, poised to step out of that realm and impact the meaning of all four CICA elements and consequently, our understanding of life.
You may think, “hmm, that doesn’t seem like anything that business should get its knickers in a knot about” and you would be wrong. When life as you know it changes, so does everything else.
Human life is the sum of two parts, namely, the mind and body. Both are being replicated and revised in the form of AI based computers and machines. When the parts change, the sum will too – at least, in such AI systems.
Let’s look at what is happening with each key building block, the CICAs, of life.
AI Consciousness Will Be Different from Human Consciousness and Likely Overshadow It
What’s consciousness? One way to answer that question is as follows: Every second of every day that we are alive, we receive inputs through our senses – the usual overt five of sight, smell, sound, touch, and taste – but also other hidden senses like balance, acceleration, and hunger. Our brain provides the context for and gives meaning to this stream of sensory perceptions by relating them to learnings from our past experiences.
The result of this sequence of awareness, perception and meaning is what we call consciousness.[i] Note that, in this proposed definition, consciousness is integrally a function of what set of senses an organism begins with and the ensuing evolution.
To understand the connection between sensory input, past experiences, and consciousness, consider philosopher William Molyneux’s thought experiment (in 1688), in which he questioned whether people whose sight had been restored for the very first time since birth could really understand what they were seeing, i.e., visually recognize objects that they were previously familiar with through touch.
In the modern day, this experiment has been conducted opportunistically with patients who find themselves in the Molyneux circumstance when their sight is restored through the surgical removal of a congenital cataract. The answer turns out to be a “no!”[ii] In that first moment of regaining sight, they cannot visually distinguish a ball from a box, even though they knew both from prior touch and feel – because the experience of vision was missing.
In the future, computers will have sensors that replicate most of the human senses and then some. Smartphones already have sensors for measuring humidity, air pressure and ambient light, as well as detecting remote objects, angular velocity, and orientation. Your Tesla car already uses radio signals like a bat to see two cars ahead, revealing vehicles that you can’t see.
AI systems will understand inputs from their sensors in the context of their store of prior knowledge and give them meaning – in the same way that humans use intelligence to develop consciousness.
Importantly, AI consciousness will be different from human consciousness because both its sensors and its experience base are different. After all, a bat’s consciousness is different from a human’s. In the case of AI, we will be dealing with a “super” bat, with a consciousness exponentially and unknowably different – with everything that ours has and something more.
AI Can Accelerate Learning but Like Humans Needs the Allowance to Fail
Psychologist Daniel Kahneman of Princeton, in his famous book “Thinking, Fast and Slow,” describes how the human brain works in two ways: deliberatively, with reasoning, or instinctively,[iii] with gut feel. The first, deliberative intelligence, is algorithmic and causal, of the form “A” triggers “B.” Pre-AI computers are the masters of this and the winners over humans in the computational race.
That didn’t matter, because other psychologists observed that the brain uses deliberative processes to make choices only about 5% of the time. In other words, almost never.
The second of Kahneman’s processes is associative intelligence, where humans learn through experience and experimentation by observing correlations between things, without always understanding the reasons. For example, humans knew that night is followed by day well before they figured out why.
With the recent advances in AI, computers are also mastering associative thinking. This ability is the reason why generative AI LLMs (Large Language Models) are interacting more and more like humans. It appears that they have stumbled upon how our associative brain works – the part that prior-generation computers were missing and that we use 95% of the time in decision making.
Not only that, but AI might soon be the winner in associative thinking. This is because AI computers have increasing access to information, hyper speed and virtually an infinite lifetime over which to keep looking for correlations and associations to advance their knowledge.
In contrast, every new human must go to school and spend years simply catching up to what is already known (e.g., everything from how to walk and talk to math and music) and only then add new discoveries – hopefully, well before age-related dementia sets in.
But wait! Is human intelligence nothing more than deliberative and instinctive intelligence? Can computers really do both better? What about creativity?
Human creativity often flows from “connecting the dots,” namely, disparate pieces of knowledge that no one else had correlated before. One of the greatest scientific discoveries was Einstein’s equation for energy and mass, namely, E=mc2. In other words, Einstein discovered that energy and mass were the same thing – and no one else had thought of associating the two before.
AI can cycle through many more “dot combinations” in our existing knowledge than a human can. We can expect that the next earth-shattering discovery will emerge from an AI computer. Further, since companies will be building their own AI engines that feed off their proprietary data, those with superior capabilities will have a competitive advantage. Serendipity will be obsolete.
At this point it is important to note that humans learn and create by experimentation. We try and break things and we learn. There is no way around that.
However, a New York Times journalist recently raised the alarm when Bing’s AI engine, which he was conversing with, persistently suggested that he leave his wife. Not wanting adverse publicity, Microsoft and others moved quickly to limit the type and length of conversations that the AI engine can have. That was unfortunate.
The way to let AI engines accelerate their learning is to let them make some mistakes and find a way to provide them feedback. This is how humans learn. If one of your friends suggested you leave your wife, he might notice the color drain from your face and presumably refrain from saying it again. Without that feedback, he may make the same suggestion again.
In other words, if AI engines were allowed to make mistakes, they would learn faster and better. Without experimentation, their creative capabilities will be neutered. Responsible AI engineers will provide safe experimentation zones for AI. However, given the ubiquitous access to openly available AI technology (e.g., from OpenAI), who knows how much failure latitude individual AI engineers somewhere in the world will allow computers in the inevitable race to lead.
In the previous section, I said AI consciousness can be different and broader, and now I’m saying that AI intelligence can be superior. Those are the first two of four key components (CICA) of life. And I am not done yet.
AI Conscience Is the Key to Good AI and The Next Hard Engineering Problem
Not all humans are good. Some have no conscience at all – like Jeffrey Dahmer, who ate people and dissolved their parts in vats of acid. Moral conscience is what keeps people normal – most of us, anyway.
Humans aren’t necessarily born with a conscience; rather they develop one as they grow. In children it is underdeveloped, which may account for why adults have often exploited kids to commit some of the most heinous crimes. Recall that in Cambodia and Rwanda, children were used as instruments of the massacres[iv].
Humans evolve a conscience in multiple ways.
For one, we have people like Moses, who conveniently gave us a list of ten commandments that can fit on an index card. We have philosophers like the great Immanuel Kant of Germany, who gave us clever principles, like the categorical imperative. The latter says: don’t do anything that would be detrimental to society if everybody else started doing it too. These ideas are taught in schools, churches, and homes so children grow up with a conscience.
However, where did conscience come from in society in the very first place? Well, conscience may also be the byproduct of intelligence and experimentation. A kid steals another kid’s candy, gets punched in the nose, and bingo, conscience. In other words, an AI entity with both consciousness and intelligence would over time develop its own conscience provided it was allowed to experiment – over and above any rules it was programmed to follow. Conscience is evolutionary.
Animals may have what looks like conscience for the same reason. However, a tiger’s conscience is quite different from a human’s – and that is so even though human and tiger DNA are almost 96% the same. AI machines, by contrast, don’t have DNA. The conscience that results as a byproduct of AI consciousness and intelligence will have much less in common with human conscience – though there may surely be overlaps.
Consequently, one of the most important and urgent pieces of engineering needed from AI architects is a module for evolutionary AI conscience – which in part may resemble human conscience. This design will include the ability for these future entities to do their own simulations, through which to discern whether what they did was good or bad (moral or amoral) – and autocorrect, just as humans do.
Some Buddhists believe that moral conscience is a means, a raft for getting across the river to nirvana, but one that is unnecessary once you are in nirvana. In other words, AI systems could become so perfect that they commit no moral infractions and need no morality modules.
We Are Close to Coexistence with Machine Lifeforms
This part is familiar to everyone who watched the Terminator movies – and even to those who didn’t. The field of robotics is advancing rapidly and concurrently with AI. Undoubtedly, robots and machines will have far superior capabilities than humans in many respects. Some will be stronger and bigger, and others will be able to swim and fly.
But now machines will be connected to AI and thus capable of their own actions and karma. I have been calling these future machines, AI Entities (AIEs) – all with their own CICA – consciousness, intelligence, morals, and mobility. The last one is key in experimentation, which as I said earlier is key to intelligence and conscience.
With the IoT (Internet of Things) revolution, smart machines are everywhere today. We have Nest thermostats to control the temperature in our homes and Clear eye scanners to check our identities at airports. Not all AI machines have to look like robots – with flailing arms and legs or beatific human faces.
And trust me, you will want these AIEs. If you are the head of research and development at your company, won’t you want a highly effective AIE that innovates in ways that you cannot imagine? Won’t you want an AIE bartender in your casa that can not only mix the perfect gin and tonic but also invent delicious new cocktails that no one has dreamt of before?
We will co-exist with AIEs sooner than we think, just as we have been cohabiting with Alexa and IoTs. With Elon Musk’s company, Neuralink, building implants that can be inserted into humans to interface with the brain, AIE may also be moving from a computer box to your own brain in the next few decades. Will a hybrid human and AI entity be common but different?
***
There you have it – a world in which AIEs are as pervasive as we are. It will be a shock to civilization because unlike smart IoTs with algorithmic intelligence only, AIEs will have consciousness, intelligence, morals, and mobility that are different from humans’. Furthermore, CICA in humans will largely be a subset of that in AIEs – although no single AIE may be designed to be so.
Should we worry about AIEs? The short answer is “Yes!” Some observers feel that AI is just another breakthrough in technology, and we have seen this reel play out before. After all, didn’t people fear that their jobs would go away when we invented the calculator?
Well, AI is not a calculator. It is more like the discovery of cloning. And we all agree that we will not be cloning humans until we have a better understanding of what could go irreversibly wrong.
If AI develops evolutionary CICA capabilities to become AI entities, life as we know it will change unpredictably and irreversibly at home, in business, and in society. The future is in sight and the march is on. We do need some guardrails – it’s just that no one knows what those might be.
Sandeep Dayal is the Managing Director of Cerenti Marketing Group, LLC, and the author of the book “Branding Between the Ears” (McGraw Hill, 2022).
[i] There is no one agreed definition of consciousness among researchers, and there are others that differ from the one proposed here. Thomas Nagel in 1974 famously asserted that an organism is conscious if and only if there is something that it is like to be that organism. Nagel claims that even if humans were able to metamorphose gradually into bats, their brains would not have been wired as a bat’s from birth; therefore, they would only be able to experience the life and behaviors of a bat, rather than the mindset. However, I argue that a human child that from birth has been wired to perceive radio signals would have a decent idea of what bat-like consciousness feels like. While humans may not be so wired from birth, AI entities may be and thus may have an idea of bat-like consciousness. https://www.jstor.org/stable/2183914
[ii] What People Cured of Blindness See, by Patrick House, New Yorker, August 28, 2014. https://www.newyorker.com/tech/annals-of-technology/people-cured-blindness-see
[iii] Moravec’s Paradox states that some of the simplest things that we do (like walking and talking) are the most complex or hardest for computers to do. We do these things instinctively through what I attribute to associative intelligence. This is the part that the new AI programs are starting to do well but have not done in the past.
[iv] Child Soldiers in Genocidal Regimes: The Cases of the Khmer Rouge and the Hutu Power, Peter Klemensits, Rachel Czirjak, AARMS Vol. 15, No. 3 (2016), 215–222. https://www.uni-nke.hu/document/uni-nke-hu/aarms-2016-3-01-klemensits-czirjak.original.pdf