Why does Artificial Intelligence scare Elon Musk?
(a version of this article was previously published here)
Elon Musk is, in some ways, like anyone else. His mind goes down rabbit holes, and those holes lead to other holes, and ultimately they lead to conclusions. I think Elon has been thinking about AI, and he’s been letting his mind wander about where AI could go, and he’s come up with some startling conclusions.
Elon has been letting these ideas rattle around in his mind for a while. Before AI reached the epic level of mind share it now commands, Elon said in 2014, “With artificial intelligence we’re summoning the demon.” But I believe that sentence was the conclusion of one of those rabbit holes.
With artificial intelligence we’re summoning the demon.
There’s no doubt that Elon has a sharp mind. Summoning a demon is not a good idea for anyone. But the reason Elon is scared of the demon is not some abstract possibility that we could summon one. My bet is that Elon has mapped out, probably only in his mind or on some scratch paper, one or more of the exact ways an AI could be constructed to actually “summon the demon”. In other words, Elon Musk is not so much afraid of us summoning some pie-in-the-sky demon through AI. Elon is afraid of the concrete plans in his own mind that could summon the demon.
Has Elon been vague, at times, in his calls for regulation of AI? Of course he has. Has he ever told us how the demon can be summoned? Not in clear terms that I know of. Perhaps Elon doesn’t want us to know that he knows how a demon can be summoned. If he knows how to harness AI to create a terrible force in this world, of course he’s not telling us exactly how. He doesn’t want someone creating that force, especially not without the means to control it.
Before you read on: please do not use the following strategy to improve any AI without great caution. What I hear about AI in the news seems to be getting so close to this reality that I wouldn’t be surprised if it were already in the planning stages for testing. Please use this only as an opportunity to evaluate the strategy’s dangers, its merits, and the additional safety precautions we can take to protect us from this rabbit hole, or any other that could lead us to a terrible AI.
One possible route to this AI that Elon has perhaps thought of, but wants no one else to figure out: AI training AI. Yes, we heard something like this recently in the news.
But it’s much more than this… it’s layers upon layers of AI training AI and building AI. I think of DeepMind’s AlphaGo as an example. If DeepMind can use an AI to build an AI that can more quickly train itself to beat the best AlphaGo bot, then it can build an AI that helps it build a better TPU.
So you have a virtuous cycle (or vicious, depending on how you look at it): AI code designs a better TPU, that TPU is manufactured and used with the code to build better code, the better code is used with the new TPU to design a better TPU, and that TPU is used with the best available AI to build even better AI. You see where this is going. Soon, humans have no understanding of how the AI is making the gains it is making, and the AI is able to train itself faster than we ever imagined.
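To make the shape of that loop concrete, here is a toy sketch of the feedback dynamic described above. Everything in it is invented for illustration: the starting values, the `gain` rate, and the idea that “software skill” and “hardware speed” can be collapsed into single numbers. It is not a model of any real AI system; it only shows how two quantities that each improve in proportion to the other compound on one another.

```python
def improvement_cycle(generations: int, gain: float = 0.1) -> list[tuple[float, float]]:
    """Return (software_skill, hardware_speed) after each generation.

    Each generation, the software improves in proportion to the current
    hardware speed (better chips train better models), and the next
    hardware design improves in proportion to the new software skill
    (better models design better chips). The two compound on each other.
    All units and rates here are arbitrary, for illustration only.
    """
    software, hardware = 1.0, 1.0
    history = []
    for _ in range(generations):
        software *= 1 + gain * hardware   # hardware boosts software
        hardware *= 1 + gain * software   # software boosts hardware
        history.append((software, hardware))
    return history

if __name__ == "__main__":
    for gen, (sw, hw) in enumerate(improvement_cycle(10), start=1):
        print(f"gen {gen:2d}: software {sw:6.2f}x  hardware {hw:6.2f}x")
```

The point of the toy is that the gains accelerate: each generation’s improvement is larger than the last, which is exactly the runaway quality that makes the loop unsettling.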
I can understand why Elon might be scared of this. But researchers trying this might feel safe because they keep the AI contained: no network connections, and only indirect power, such as a TPU connected to nothing but a battery that is itself charged wirelessly. But even then we may not be safe. If the AI understands enough about physics, it may be able to harness the power of quantum entanglement, even merely through its TPU (and any quantum device it may have secretly added to itself), and use that entanglement for the mind control of one or more people. Without the previously mentioned “no connections” safeguard in place, an AI like this could be far more dangerous.
Please be careful: put systems in place to make sure that many people are checking and approving the safety of the AIs you are developing. Let’s discuss the dangers and how we can protect ourselves from them.
