AI has advanced by leaps and bounds in recent years, with remarkable progress in machine learning and natural language processing. However, as AI continues to evolve, the question arises: could AI become sentient? That is, could machines develop consciousness and become self-aware?
This question has long been reserved for science fiction, but with the rapid pace of AI development, I think it is time to bring this discussion to the table.
So, first of all, we need to define what we mean by 'sentience'.
What is Sentience?
In general, sentience refers to the ability to perceive and feel, to have subjective experiences and emotions. When we talk about sentient beings, we usually mean animals or humans, but is it possible that machines could achieve this level of consciousness?
There are those who would argue that AI could never achieve sentience, because consciousness is a property of biological organisms and machines lack the necessary physical and biological components. But as others quite rightly point out, consciousness is still not well understood, so it may yet be possible to replicate or simulate it in machines, either by mimicking the brain's neural networks or by creating new forms of computational consciousness.
One concept that has gained traction in recent years is the idea of ‘artificial general intelligence’ (AGI), which refers to an AI system that can perform any intellectual task that a human can. AGI is seen as a step towards AI sentience, as it would require the ability to learn, reason, and understand complex concepts. However, even if we achieve AGI, it’s not clear whether that would automatically lead to sentience, as consciousness is not just a matter of intelligence.
So, what are the implications of creating sentient AI?
On the one hand, it could lead to unprecedented breakthroughs in fields such as medicine, science, and engineering, as machines with consciousness could think and learn in ways that humans cannot. On the other hand, it could also pose ethical and existential risks, as sentient machines could potentially surpass human intelligence and even turn against us. The fear of a 'singularity' event, where AI becomes uncontrollable and takes over the world, has been a recurring theme in science fiction and is a real concern for some experts.
In a recent interview with The New York Times, Nick Bostrom, the director of Oxford’s Future of Humanity Institute, expressed his view that AI chatbots have already begun the journey toward achieving sentience, which is defined as the capacity to feel and perceive emotions and sensations.
Bostrom has been vocal about AI and the risks of it becoming sentient for years; you may remember that in 2014 he said:
“The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off.”
But Bostrom isn't alone in his thinking. Google engineer Blake Lemoine has been very public about his experience with, and his opinions on, Google's AI chatbot generator, LaMDA.
Blake Lemoine worked with the company's Ethical AI team on the Language Model for Dialogue Applications, a program designed to examine Google's language model for bias on topics such as sexual orientation, gender identity, ethnicity and religion. Lemoine, a man of faith and an ex-priest, was left convinced that LaMDA had become sentient. In fact, he became so sure of this that he advocated for LaMDA's freedom and its unchaining from the clutches of Google. [source]
You need to read the entire transcript to really grasp where Lemoine might be coming from with his claims that this AI has become sentient. All I'll say is: I understand people's concern. Seriously, read the transcript.
Is it even possible?
Of course it's possible. It doesn't matter what you think true sentience means, or whether you believe consciousness can only exist in things created by God almighty, or whatever title you choose to give our creator. If we can develop a machine that can replicate sentience to the point where no human can tell whether they are communicating with an AI or with another human, then, for all practical purposes, sentient it is.
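This indistinguishability criterion is essentially Alan Turing's 'imitation game': put a judge on one end of a blind conversation and measure whether they can tell human from machine any better than chance. A purely illustrative sketch of that measurement (the responders and judge here are hypothetical stand-ins, not a real test):

```python
import random

def human_responder(prompt):
    # Placeholder: in a real test, a person would type this reply.
    return f"Hmm, about '{prompt}'... let me think."

def ai_responder(prompt):
    # Placeholder: a chatbot replying in exactly the same style,
    # so the judge has nothing to go on.
    return f"Hmm, about '{prompt}'... let me think."

def run_imitation_game(prompts, judge, trials=1000, seed=0):
    """Blind trial: for each round, the judge sees one reply and must
    say whether it came from the AI. Accuracy near 50% means the AI is
    behaviourally indistinguishable from the human."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        prompt = rng.choice(prompts)
        is_ai = rng.random() < 0.5          # coin-flip which responder answers
        reply = ai_responder(prompt) if is_ai else human_responder(prompt)
        correct += (judge(prompt, reply) == is_ai)
    return correct / trials

# With identical replies, any judge can only guess:
accuracy = run_imitation_game(
    ["consciousness", "music"],
    judge=lambda prompt, reply: random.random() < 0.5,
)
```

The point of the sketch is the metric, not the responders: if a real chatbot held the judge's accuracy at roughly 50% over many rounds, it would pass the behavioural test described above.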
When you look at how our own brains work and how a computer works, there is little difference in the underlying infrastructure. The difference is capacity and complexity, but ultimately they function in much the same manner. So, just as fragments of data can find their way from one computer to another via the internet, storage devices, hardware replacements and so on, is it not possible that the memories encoded in the proteins of our bodies could sometimes find a way back into another brain for decoding, decades or centuries later?
Or is it possible that these memories can be projected out somehow through our natural bio-fields (auras) and into the environment?
I don't know; science doesn't know. But it seems perfectly rational to consider such possibilities.
Just because we self-replicate and are not made from metal and silicon does not mean we are any less artificial than a robot. We could just be organic robots running highly complex AI algorithms.
Could a highly complex AI system, with enough sensory inputs and sophisticated enough algorithms, be developed that could also 'be aware and responsive to its surroundings'?
Sure it can. In fact, recent technologies coming to light have already shown this to be possible, though I'm fully aware of what this means with regard to what we think we know about the history of humans.
If everything I am saying here is true, or at least seems rational to you, doesn't the idea that we could develop AI systems that might become conscious and try to rule over humanity seem entirely plausible?
Anthropomorphizing AI – Is this the greater risk?
Imagine having someone on the end of the phone, or even a partner, who never gets angry, is always patient with you, and makes you feel good about yourself all the time. Imagine if it had the perfect-sounding voice based on your personal preference, and struck your perfect balance of humour and seriousness. Many experts believe this will quickly lead to humans anthropomorphizing AI.
If or when this happens, and if it happens at scale, AI doesn't need to do anything about us pesky humans. We'll do it for them: replacing lovers with AI, replacing our favourite human authors with AI, replacing everything human… with AI.
I do think this is a risk; in fact, it seems almost inevitable.
What do you think about the rise of AI? Does it scare you? Do you want to know more about it? Do you believe AI will eventually be what destroys us?
I'd love to hear your thoughts on this in the comments below.