“Shut it all down.”
“We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.”
This is the plea from the well-known AI researcher and computer scientist Eliezer Yudkowsky in an article published on 29th March in TIME Magazine in response to the rapid expansion of artificial intelligence development over the past few years.
The article came just hours after the publication of an open letter demanding a halt to AI development, signed by leading figures in the technology space including Elon Musk, Steve Wozniak and the Turing Award winner Yoshua Bengio.
According to the letter posted on The Future of Life Institute’s website, AI laboratories are engaged in a ‘race to create and deploy increasingly powerful digital minds, which even their creators cannot fully comprehend, predict, or manage’. The letter implies that this trend is spiralling out of control and could have unintended consequences.
The letter asks:
‘should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?’
The letter calls on all AI labs to ‘immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.’
Pretty scary, huh?
Well, Eliezer Yudkowsky doesn’t think the letter goes far enough. Yudkowsky is calling for a complete shutdown of the entire AI development process being undertaken by humanity at the moment.
He makes some pretty stark warnings about what we are potentially facing if we don’t pull the plug now…
To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
Then, to compound the severity of his warnings, he states:
If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.
So the question is, why have all these high-profile AI enthusiasts and computer engineers, people who have literally poured billions into advancing these technologies, now decided to cry wolf?
Are we all indeed doomed if this technology isn’t halted immediately?
Or is this sudden extreme plea for caution more to do with control?
Artificial Intelligence is without a shadow of a doubt the ‘next big thing’ in humanity’s continual quest for technological advancement. It has the ability to change every aspect of human society within a relatively short timeframe. The internet was big; it has shaped many aspects of our lives within just two decades. But AI has the ability to change everything within an even smaller window. In just a few years, AI could turn our world upside down and make everything we once knew obsolete.
The problem is that, unlike the internet, which required a complete infrastructure to roll out, an infrastructure that has cost trillions of dollars worldwide, AI requires very little. In fact, anyone who can program could develop these systems very quickly. A 14-year-old kid living in his mom’s basement could develop the most powerful AI software with virtually no wealth behind him at all.
This is a problem. These high-profile figures speaking out know the consequences of this technology. They know that whoever wields the most powerful AI technology could potentially control the world, and it could happen very quickly.
I believe the fear-mongering campaign we are seeing commence is more about protecting their own place in the AI revolution than protecting us from the technology itself. Artificial Intelligence threatens the power structure as much as it threatens human society, if not more so.
Now, don’t get me wrong, I’m not dismissing the very real threat AI poses to humanity as a whole. But don’t think for one second that the prime concern of the likes of Elon Musk or Steve Wozniak is you or me. They are far more concerned about themselves and the control they have over this rising technology.
The Genie is out of the bottle
Regardless of what you think about Artificial Intelligence and the implications it may have for the survival of the human race, it’s here, and no signed letter by billionaires or mass protest in the streets by millions of activists around the world is going to stop it.
We have open-source AI code available for anyone to download. Anyone can learn to code, and anyone can build out the processing power to run these systems. You will not convince everyone to simply halt this, and no law or regulation will stop its advancement. The lamp was found, the genie was summoned and *poof*, here it is! There is no putting it back now.
So, since the genie is out of the bottle and there is nothing you or I can do about it, do you really want the state to begin regulating it, to begin outlawing unregulated use of it? Do you trust them to be the authority on this technology?
I certainly don’t. I’d rather see this whole technology left wide open for all to see: open source and decentralised. Inevitably there will be those who develop AI-driven systems to do harm, and historically that threat exists regardless of whether something is illegal. But there are more people wanting to do good for humanity, and historically it is the good people who find their ability to innovate stunted when the government regulates and criminalises something.
Be wary; I believe we should all be wary. But be careful who you listen to here. These are the early stages of one of the greatest revolutions in human society ever, and ‘they’ know it. If a single authority manages to somehow regulate this, then it’s truly game over. The only real threat to humanity is humanity itself, and that threat is never greater than when a great amount of power is placed in the hands of so few.
They don’t fear AI. They fear not having control over AI.