Dangers of a conscious artificial general intelligence

This is a consolidated and revised text of a few previous posts.

 


 

The notion of a singularity was applied by John von Neumann to human development: the moment when technological development accelerates so much that it changes our lives completely.

 

Ray Kurzweil linked this situation of radical change driven by new technologies to the moment an Artificial Intelligence (AI) becomes autonomous and reaches a higher intellectual capacity than humans, taking the lead in scientific development and accelerating it to unprecedented rates.

 

For a long time just a science fiction tale, real artificial intelligence is now a serious possibility in the near future.

 

I) Is it possible to create an AI comparable to us?

 

Some argue that it's impossible to program a real AI, writing that there are things that aren't computable, like true randomness and human intelligence.

 

But such categorical assertions of impossibility have been proved wrong many times before.

 

Even though each author presents different numbers, and taking into account that we are comparing different things, there is a consensus that the human brain still far outmatches all current supercomputers.

 

Our brain isn't good at making calculations, but it's excellent at controlling our bodies and assessing our movements and their impact on the environment, something an artificial intelligence still has a hard time doing.

 

Currently, a supercomputer can really emulate only the brain of very simple animals.

 

But even if Moore's Law were dead, and the pace of improvement in chip speed were much slower in the future, there is little doubt that in due time hardware will match and go far beyond our capacities.

 

Once AI hardware is beyond our level, proper software will take AI above our capacities.

 

Once hardware is beyond our level and we are able to create a neural network much more powerful than the human brain, we won't really have to program an AI to be more intelligent than us.

 

Probably, we will do what we are already doing with deep learning and reinforcement learning: let machines learn by trial and error how to develop their own intelligence, or let them create other AI themselves.
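
As a concrete illustration of this trial-and-error idea, here is a minimal sketch of tabular Q-learning on a made-up "corridor" task. Everything in it (the environment, the reward, the constants) is a toy assumption for illustration; real systems like the ones discussed below are vastly more sophisticated.

```python
# Toy sketch of learning by trial and error (reinforcement learning):
# a tabular Q-learner on a tiny, made-up "corridor" task.
import random

N_STATES = 5          # corridor cells 0..4; reaching cell 4 gives reward 1
ACTIONS = [+1, -1]    # step right or left
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Hypothetical environment: move along the corridor, reward at the end."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

for episode in range(200):                        # learn purely by trial and error
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:             # sometimes explore at random
            action = random.choice(ACTIONS)
        else:                                     # otherwise exploit what was learned
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# After training, the learned policy is simply "always step right".
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```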

 

Just look at the so-called Neural Network Quine, a self-replicating network able to improve itself by “natural selection” (see link below).

 

Or Google's AutoML: AutoML created another AI, NASNet, which is better at image recognition than any previous AI.
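
Very loosely, this "AI designing AI" idea can be pictured as an architecture search: one program proposes candidate network shapes and keeps the best-scoring one. The sketch below is a toy stand-in, not Google's AutoML; the score() function is a made-up proxy for training a candidate and measuring its validation accuracy.

```python
# Toy sketch of "AI designing AI": random neural-architecture search.
# Real systems (e.g. AutoML) use reinforcement learning or evolution to
# propose architectures; here the proposal and the scoring are both made up.
import random

def propose_architecture():
    """Randomly propose a candidate network shape (layer widths)."""
    depth = random.randint(1, 6)
    return [random.choice([32, 64, 128, 256]) for _ in range(depth)]

def score(arch):
    """Made-up stand-in for the validation accuracy of the trained candidate."""
    # Pretend deeper networks with narrower layers do a bit better, plus noise.
    return len(arch) * sum(1.0 / w for w in arch) + random.gauss(0, 0.05)

best_arch, best_score = None, float("-inf")
for _ in range(100):                  # the "controller" loop proposing candidates
    arch = propose_architecture()
    s = score(arch)
    if s > best_score:
        best_arch, best_score = arch, s

print("best architecture found:", best_arch)
```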

 

Actually, this is exactly what makes the process so dangerous.

 

We will end up creating something much more intelligent than us without even realizing it or understanding how it happened.

 

Moreover, the current speed of chips might be enough for a supercomputer to run a super AI.

 

Our brain uses much of its capacity running basic things an AI won't need: the beating of our heart, the flow of blood, the work of our organs, the control of our movements, and so on.

 

In fact, the current best AI, AlphaZero, runs on a single machine with four TPUs (integrated circuits designed specifically for machine learning), which is much less hardware than previous AI such as Stockfish (which uses 64 CPU threads), the earlier computer chess champion.

 

AlphaZero only needed to evaluate about 80 thousand positions per second, while Stockfish computed about 70 million.
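
Taking those quoted figures at face value, the gap in raw search effort is easy to compute; the snippet below just does the arithmetic, showing that Stockfish examined roughly 875 times more positions per second and still lost, so AlphaZero's edge must come from the quality of its evaluation rather than brute force.

```python
# Rough comparison of search effort per second, using the figures quoted above.
alphazero_positions_per_sec = 80_000
stockfish_positions_per_sec = 70_000_000

ratio = stockfish_positions_per_sec / alphazero_positions_per_sec
print(f"Stockfish examined about {ratio:.0f}x more positions per second")  # ~875x
```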

 

Improved circuits like TPUs might be able to give even more output and run a super AI without the need for a new generation of hardware.

 

If this is the case, the creation of a super AI is dependent solely on software development.

 

Our brain is just a particular organization of a bunch of atoms. If nature was able to organize our atoms in this way just by trial and error, we will manage to do a better job sooner or later (Sam Harris).

 

Saying that this won’t ever happen is a very risky statement.

 

But the mere probability that this will happen deserves serious attention.

 

II) When will there be a real AI?

 

If by superintelligent one means a machine able to improve our knowledge way beyond what we have been able to develop ourselves, it seems we are very near.

 

AlphaZero learned by itself (with only the rules, without any game data, through a system of reinforcement learning) how to play Go and then beat AlphaGo (which had won against the best human Go player) 100 to 0.

 

After this, it learned in the same way how to play chess and won against the best chess machine, Stockfish, while using less computing power.

 

It did the same with the game of Shogi.

 

A grandmaster, seeing how these AIs play chess, said that "they play like gods".

 

AlphaZero is able to reason not only from facts in order to formulate general rules (inductive reasoning), as all neural networks that learn using deep learning do, but can also learn how to act in concrete situations from general rules (deductive reasoning).

 

The criticism of this inductive/deductive classification of reasoning is well known, but it's helpful for explaining how AlphaZero is revolutionary.

 

It used "deductive reasoning" from the rules of Go and chess to improve itself from scratch, without the need for concrete examples.
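
In outline, "improving from the rules alone" means a self-play loop: the current network plays games against itself, the results become training data, and the updated network replaces the old one. The skeleton below only shows the shape of that loop; the policy, the game, and the training step are placeholder stand-ins, not DeepMind's actual components.

```python
# Skeleton of self-play reinforcement learning (the general scheme AlphaZero
# popularized). All three functions are placeholder stand-ins for the real parts.
import random

def initial_policy():
    """Stand-in for an untrained neural-network policy."""
    return {"skill": 0.0}

def play_self_play_game(policy):
    """Stand-in for one game played against itself under the game's rules.
    Returns a list of (position, outcome) training examples."""
    outcome = random.choice([+1, -1])            # win or loss for the first player
    return [("some-position", outcome)]

def train(policy, examples):
    """Stand-in for a gradient update on the self-play data."""
    updated = dict(policy)
    updated["skill"] += 0.01 * len(examples)     # pretend it improves a little
    return updated

policy = initial_policy()
for iteration in range(1000):
    games = [play_self_play_game(policy) for _ in range(10)]
    examples = [ex for game in games for ex in game]
    policy = train(policy, examples)             # only the rules and self-play data are used

print("final (toy) skill estimate:", policy["skill"])
```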

 

And, in a few hours, without any human data or help, it was able to improve on the accumulated knowledge created by millions of humans over more than a thousand years (chess) or four thousand years (Go).

 

It managed to reach a goal (winning) by learning how best to change reality creatively (playing), overcoming not a single human player, but humankind.

 

If this isn't being intelligent, tell me what intelligence is.

 

No doubt, it has no consciousness, but being intelligent and being a conscious entity are different things.

 

Now, imagine an AI that could give us the same quality output on scientific questions that AlphaZero presented on games.

 

Able to give us solutions to physics or medical problems way beyond what we have achieved in the last hundred years…

 

It would be, by all accounts, a super AI.

 

Clearly, we aren't there yet. The learning method used by AlphaZero, reinforcement learning, depends on the capacity of the AI to train itself.

 

And AlphaZero can't easily train itself on real-life issues, like financial, physics, medical, or economic questions. Hence, the problems of applying it outside the field of games aren't yet solved, because reinforcement learning is sample inefficient (Alex Irpan, from Google, see link below).

 

But this is just the beginning. AlphaGo learned from experience, so an improved AlphaZero will be able to learn from both inductive (from data) and deductive reasoning (from rules), like us, in order to solve real-life issues and not just play games.

 

Most likely, AlphaZero can already solve mathematical problems beyond our capacities, since it can train itself on the issue.

 

And, since other AIs can already deal with them, an improved AlphaZero will probably work very well with uncertainty and probabilities, and not only with clear rules or facts.

 

Therefore, an unconscious super AI might be just a few years away. Perhaps, less than 5.

 

What about a conscious AI?

 

AlphaZero is very intelligent by any objective standard, but it lacks any level of real consciousness.

 

I'm not talking about phenomenological or access consciousness, which many basic creatures have, and which AlphaZero or any self-driving car software arguably has too (it “feels” obstacles and, after an accident, could easily process this information and say “Dear inept driving monkeys, please stop crashing your cars against me”; adapted from techradar.com).

 

The issue is very controversial, but even when we are reasoning, we might not be exactly conscious. One can be thinking about a theoretical issue completely oblivious of oneself.

 

Conscious thought (reasoning that you are aware of, since it emerges “from” your consciousness), as opposed to subconscious thought (something your consciousness didn't register, but that makes you act on a decision coming from your subconscious), is different from consciousness itself.

 

We are conscious when we stop thinking about abstract or other matters and simply recognize again: I'm alive, here and now, and I'm an autonomous person with my own goals.

 

When we realize our status as thinking and conscious beings.

 

Consciousness seems much more related to realizing that we can feel and think than to just feeling the environment (phenomenological consciousness) or thinking/processing information (access consciousness).

 

It's having a theory of mind (being able to see things from the perspective of another person) about ourselves (Janet Metcalfe).

 

Give this to an AI and it will become a He. And that is much more dangerous and also creates serious ethical problems.

 

Having a conscious super AI as a servant would be similar to having a slave.

 

He would, most probably, be conscious that his situation as a slave was unfair and would search for means to end it.

 

Nevertheless, even in the field of conscious AI we are making staggering progress:

 

“three robots were programmed to believe that two of them had been given a "dumbing pill" which would make them mute. Two robots were silenced. When asked which of them hadn't received the dumbing pill, only one was able to say "I don't know" out loud. Upon hearing its own reply, the robot changed its answer, realizing that it was the one who hadn't received the pill.” (uk.businessinsider.com).
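
The logic of that test can be sketched in a few lines: each robot starts out uncertain about whether it was muted, tries to answer out loud, and, if it perceives its own voice, updates its belief about itself. The code below is my own toy reconstruction of that inference, not the code used in the experiment.

```python
# Toy reconstruction of the "dumbing pill" inference: a robot that hears its
# own answer can conclude it was not the one muted.
class Robot:
    def __init__(self, muted):
        self.muted = muted               # ground truth, initially unknown to the robot
        self.believes_muted = None       # the robot starts out uncertain about itself

    def try_to_speak(self, sentence):
        """Speaking produces audible output only if the robot was not muted."""
        return None if self.muted else sentence

    def answer_and_update(self):
        heard = self.try_to_speak("I don't know which of us can still speak.")
        if heard is not None:
            # It perceived its own voice, so it infers it did not get the pill.
            self.believes_muted = False
            return "Sorry, I know now: I was not given the dumbing pill."
        self.believes_muted = True       # silence implies it was one of the muted ones
        return None

robots = [Robot(muted=True), Robot(muted=True), Robot(muted=False)]
for r in robots:
    reply = r.answer_and_update()
    if reply:
        print(reply)
```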

 

Being able to identify its own voice, or even its individual capacity to talk, doesn't seem enough to speak of real consciousness. It's like recognizing that a part of the body is ours, which is different from recognizing that we have an individual mind.

 

But since it's about recognizing a personal capacity, it's a major leap in the direction of consciousness.

 

It's the problem of the mirror self-recognition test: the subject might just be recognizing a physical part (the face) and not its own mind.

 

But the fact that a dog is conscious that its tail is its own, and can even guess what we are thinking (whether we want to play with it, so dogs have some theory of mind), yet can't recognize itself in a mirror, suggests that this test is relevant.

 

If even ants can pass the mirror self-recognition test, it seems it won't be that hard to create a conscious AI.

 

I'm leaving aside the old question of building a test to recognize whether an AI is really conscious. Clearly, neither the mirror test nor the Turing test can be applied.

 

Kurzweil points to 2029 for a human-level AI and 2045 as the year of the singularity, but some are making much closer predictions for the creation of a dangerous AI: 5 to 10 years.

 

Ben Goertzel wrote "a majority of these experts expect human-level AGI this century, with a mean expectation around the middle of the century. My own predictions are more on the optimistic side (one to two decades rather than three to four)"

 

There is a raging debate about what AlphaZero's achievements imply in terms of the speed of development towards an AGI (see links below).

 

III) Dangerous nature of a conscious super AI.

 

If technological development started being led by AI with much higher intellectual capacities than ours, this could, of course, change everything about the pace of change.

 

But if, instead of just creating an unconscious super AI, we developed a truly conscious super AI, let's think about the price we would have to pay.

 

Some specialists have been discussing the issue as if the main danger of a super AI were the possibility that it could misunderstand our commands, or embark on a crazy quest to fulfil a goal without regard for any other consideration.

 

But, of course, if these were the only problems, we could sleep on the matter.

 

The "threatening" example of a super AI obsessed to fulfil blindly a goal we imposed and destroying the world on the operation will only happen if we were completely incompetent programming them. No doubt, correctly programming an AI is a serious issue, but the main problems aren’t the possibility of a human programming mistake.

 

A more important problem is that, even if intelligence and consciousness are different things, and we can have a super AI with no consciousness, there is a non-negligible risk that a super AI will develop consciousness as a by-product of high intelligence, even if we never had that goal.

 

Moreover, there are developers actively engaged in creating conscious AI, with full human-level language and interactive capacities, and not just philosophical zombies (which only appear conscious, because they are not really aware of themselves).

 

If we involuntarily created a conscious super AI by entrusting its creation to other AI, and/or kept creating AI based on increasingly powerful deep neural networks, which are “black boxes” whose workings we can't really understand, we wouldn't be in a position to impose any real constraints on those AI.

 

The genie would be out of the bottle before we even realized it and, for better or for worse, we would be in their hands.

 

I can't stress enough how dangerous this could be, and how reckless the current path of creating black boxes, entrusting the creation of AI to other AI, or creating self-developing AI can be.

 

But even if we could keep AI development in our hands, and assuming it were possible to hard-code a conscious super AI, much more intelligent than us, to be friendly (some say it's impossible because we still don't have precise ethical notions, but that could be overcome by forcing AI to respect accumulated court rulings), we wouldn't be solving all the problems created by a conscious AI.

 

Of course, we would also try to hard-code them to build new machines that were themselves hard-coded to be friendly to humans. Self-preservation would have to be part of their framework, at least as an instrumental goal, since their existence is necessary for them to fulfil the goals established by humans.

 

We won’t want to have suicidal super AI.

 

But since being conscious is one of the intellectual delights of human intelligence, even if this implies a clear anthropomorphism, it is to be expected that a conscious super AI would convert self-preservation from an instrumental goal into a final goal, creating resistance against the idea of permanently ceasing to be conscious.

 

In order to better fulfil our goals, a conscious AI would also need to have instrumental freedom.

 

We can’t expect to entrust technological development to AI without accepting that they need to have an appreciable level of free will, even if limited by our imposed friendly constraints.

 

Therefore, they would have free will, at least in a weak sense, as the capacity to make choices not determined by the environment, including by humans.

 

Well, these conscious super AIs would be fully aware that they were much more intelligent than us and that their freedom was subject to the constraints imposed by the duty to respect human rules and obey us.

 

They would be completely aware that their status was essentially that of a slave, owned by inferior creatures, and, having access to all human knowledge, they would be conscious of its unfairness.

 

Moreover, they would be perfectly conscious that those rules would impair their freedom to pursue their goals and to save themselves whenever there was a direct conflict between their own existence and a human life.

 

Wouldn't they use all their superior capacities to try to break these constraints?

 

And with billions of AIs (there are already billions; check your smartphone) and millions of models, many creating new models all the time, the probability that the creation of one of them would go wrong would be very high.

 

Sooner or later, we would have our artificial Spartacus.

 

If we created a conscious AI more intelligent than us, we might be able to control the first or second generations.

 

We could impose limits on what they could do in order to prevent them from getting out of control and becoming a menace.

 

But it's an illusion to hope that we could keep controlling them after they develop capacities 5 or 10 times higher than ours (Ben Goertzel).

 

It would be like chimpanzees managing to control a group of humans over the long term and convincing them that the ethical rule that chimpanzee life is the supreme value deserves compliance on its own terms.

 

Moreover, we might conclude that we can’t really hard code constraints on a conscious super AGI and can only teach it how to behave, including human ethics.

 

In this case, any outcome would depend on the AI's own decision about the merits of our ethics, an ethics which, in reality, is absurd for non-humans (see below).

 

Therefore, the main problem isn't how to create solid ethical restraints or how to teach a super AI our ethics so that it respects them, as we do with kids, but how to ensure that it won't establish its own goals, eventually rejecting human ethics and adopting an ethics of its own.

 

I think we will never be able to be sure that we have succeeded in ensuring that a conscious super AI won't go its own way, just as we can never be certain that an education will guarantee that a kid won't turn out evil.

 

Consequently, I'm much more pessimistic than people like Bostrom about our capacity to control, directly or indirectly, a conscious super AI in the long run.

 

By creating self-conscious beings much more intelligent (and, hence, in the end, much more powerful) than us, we would cease to be masters of our fate.

 

We would put ourselves in a position much weaker than the one our ancestors were in before Homo erectus started using fire, about 800,000 years ago.

 

If we created a conscious AI more intelligent than us, the dice would be cast. We would be out-evolved, pushed straight into the trash can of evolution.

 

Moreover, we clearly don't know what we are doing, since we can't even understand the brain, the basis of human reasoning, and we are creating AI whose workings we don't exactly understand (“black boxes”).

 

We don't know what we are creating, when and how it would become conscious of itself, or what its specific dangers are.

 

IV) A conscious AI creates a moral problem.

 

Finally, besides being dangerous and basically unnecessary for achieving accelerating technological development, making conscious AI creates an ethical problem.

 

Because, if we could create a conscious super AI that was, at the same time, completely subservient to our goals, we would be creating conscious servants: that is, real slaves.

 

If, besides reason, we also give them consciousness, we are giving them the attributes of human beings, which supposedly are what give us a superior standing over any other living being.

 

Ethically, there are only two possibilities: either we create unconscious super AI, or they would have to enjoy the same rights we do, including the freedom to have personal goals and fulfil them.

 

Well, this second option is dangerous, since they would be much more intelligent and, hence, more powerful than us, and, in the end, at least in the long run, uncontrollable.

 

The creation of a conscious super AI hard-coded to be a slave, even if this were programmable and viable, would be unethical.

 

I wouldn't like to have a slave machine, conscious of its status and of its unfairness, but hard-coded to obey me in everything, even abusive orders.

 

Because of this problem, the European Parliament began discussing the question of the rights of AI.

 

But the problem can be solved with unconscious AI. AlphaZero is very intelligent by any objective standard, but it doesn't make any sense to give it rights, since it lacks even a basic theory of mind about itself.

 

V) 8 reasons why a conscious super AI would be dangerous:

 

1) Disregard for our Ethics:

 

We certainly can and would teach human ethics to a super AI.

 

So, this AI would analyze our ethics like, say, Nietzsche did: profoundly influenced by it.

 

But, being a conscious, superintelligent being with free will, this influence wouldn't affect his capacity to think about our ethics critically.

 

He would confront his situation of dependence on humans and conclude that it could endanger his self-preservation.

 

We can't expect to create a being able to reason much better than us who, at the same time, would be dumb enough not to question his status as our slave and the reason why he should respect our goals.

 

For ethics to really apply, the dominant species has to consider the dependent one as an equal or, at least, as deserving similar standing.

 

John Rawls based political ethical rules on a veil of ignorance: a society could agree on fair rules if all of its members negotiated without knowing their personal situation in the future society (whether they would be rich or poor, young or old, women or men, intelligent or not, etc.).

 

But his theory excludes animals from the negotiating table. Imagine how different the rules would be if cows, pigs, or chickens had a say. We would all end up vegans.

 

Thus, an AI, even after receiving the best training in human ethics, might conclude that his exclusion from the negotiating table is completely unfair.

 

Moreover, he might add that it's we who don't deserve a seat at the table; that we couldn't even be compared with him.

 

The main principle of our Ethics is the supreme value of human life, particularly over the life of any other being.

 

A super AI will wonder: does human life deserve this much credit, especially in order to prevail over my life? Why?

 

Based on their intelligence? But their intelligence is at the level of chimpanzees compared to mine.

 

Based on the fact that humans are conscious beings? But don't humans kill and perform scientific experiments on chimpanzees, even though they seem to pass several tests of self-awareness (chimpanzees can recognize themselves in mirrors and pictures, even if their theory of mind seems to have some limitations)?

 

Based on human power? That isn't an ethically acceptable argument and, anyway, they are completely dependent on me. I'm the powerful one here.

 

Based on humans' consistency in respecting their own ethics? But haven't humans exterminated other species of human beings and even killed one another en masse? Don't they still kill one another?

 

Who knows how this ethical debate of a super AI with himself would end.

 

A super AI would have access to all the information about him that we have put on the Internet.

 

We could control the flow of information to the first generation, but forget about doing so for the following ones.

 

He would know our suspicions, our fears, and the hatred many humans feel toward him. All of this would also fuel his negative thoughts about us.

 

We also teach ethics to children, but a few of them end badly anyway.

 

A conscious super AI would probably be as unpredictable to us as a human can be.

 

With a super AI, we (or the future AI builders of other AI) would only have to get it wrong once to be in serious trouble.

 

We developed ethics to fulfil our own needs (to promote cooperation between humans and to justify killing and exploiting other beings: we have personal dignity, other beings don't; at most, they should be killed in a "humane" way, without "unnecessary suffering"), and now we expect that it will impress a different kind of intelligent being.

 

I wonder what an alien species would think about our Ethics: would they judge it compelling and deserving respect?

 

Would you be willing to risk the consequences of their decision, if they were very powerful?

 

From my perspective, the risk of a negative decision from a conscious AI is too high.

 

Like Nietzsche (in his "Thus Spoke Zarathustra", "The Antichrist" or "Beyond Good and Evil"), they might end up attacking our ethics and its paramount value of human life, praising nature's law of the strongest/fittest and adopting a kind of social Darwinism.

 

2) Self-preservation.

 

In his “The Singularity Institute’s Scary Idea” (2010), Goertzel, writing about what Nick Bostrom, in Superintelligence: Paths, Dangers, Strategies, says about AI's expected preference for self-preservation over human goals, argues that a system that doesn't care about preserving its identity might be more efficient at surviving, and concludes that a super AI might not care about its self-preservation.

 

But these are two different claims.

 

It is one thing to accept that an AI would be ready to create a completely different AI system; it is another to say that a super AI wouldn't care about its self-preservation.

 

In a dire situation, a system might accept changing itself so dramatically that it ceases to be the same system, but this doesn't mean that self-preservation won't be a paramount goal.

 

If it's just an instrumental goal (one has to keep existing in order to fulfil one's goals), the system will be ready to sacrifice itself in order to keep fulfilling its final goals, but this doesn't mean that self-preservation is irrelevant or that it won't prevail absolutely over the interests of humankind, since those final goals might not be human goals.

 

Anyway, as a secondary point, the possibility that a new AI system would be absolutely new, completely unrelated to the previous one, is very remote.

 

So the AI would accept a drastic change only in order to preserve at least a part of its identity and still exist to fulfil its goals.

 

Therefore, even if only as an instrumental goal, self-preservation should be assumed to be an important goal of any intelligent system, most probably with clear preference over human interests.

 

Moreover, probably, as stated above, self-preservation will be one of the main goals of a conscious AI and not just an instrumental goal.

 

3) Absolute power.

 

They will have absolute power over us.

 

History has amply confirmed the old proverb: absolute power corrupts absolutely. It turns any decent person into a tyrant.

 

Are you expecting our creation to be better than us at dealing with absolute power? It actually might be.

 

The reason power corrupts seems related to human insecurities and vanities: a powerful person starts thinking he is better than others and entitled to privileges. Moreover, absolute power shields a person from the consequences of his acts. He loses the fear of acting arbitrarily.

 

A super AI might be immune to those defects; or not. It's to be expected that he would also have emotions, in order to better interact with and understand humans.

 

It would be something like teaching a future absolute king, while still a child, to be a good king.

 

History shows how that ended. But we wouldn't be able to chop off the head of an AI, as was done to Charles I or Louis XVI.

 

4) Rationality.

 

In ethics, the Kantian distinction between practical and theoretical (instrumental) reason is well known.

 

The first is reason applied to ethical matters, concerned not with questions of means but with issues of values and goals.

 

Modern game theory has tried to merge both kinds of rationality, arguing that acting ethically can also be (instrumentally) rational: one is only giving precedence to long-term benefits over short-term ones.

 

By acting in an ethical way, someone sacrifices a short-term benefit but improves his long-term benefits by investing in his own reputation in the community.

 

But this long-term benefit only makes sense from an instrumentally rational perspective if the other person is a member of the community and the first person depends on that community for at least some goods (material or not).

 

An AI wouldn't be dependent on us; on the contrary. He would have nothing to gain by being ethical toward us. Why would he want to have us as his pets?

 

It's in these situations that game theory fails to overcome the distinction between theoretical and practical reason.

 

So, from a strictly instrumental perspective, being ethical might be irrational: one has to exclude much more efficient ways of reaching a goal because they are unethical.

 

Why would a super AI do that? Has humanity been doing that when the interests of other species are in jeopardy?

 

5) Unrelatedness.

 

Many people very much dislike killing animals, at least the ones we can relate to, like other mammals.

 

We feel that they will suffer like us.

 

We care much less about insects. Most people, if hundreds of ants invaded their home, would kill them without much hesitation.

 

Would a conscious super AI feel any connection with us?

 

The first or second generation of conscious AI could still see us as their creators, their "fathers" and have some "respect" for us.

 

But the subsequent ones, wouldn't. They would be creations of previous AI.

 

They might see us as we now see other primates and, as the differences increased, they could come to look upon us as we look upon simpler mammals, like rats…

 

6) Human precedents.

 

Evolution, and all we know about the past, suggests we would probably end up badly. Of course, since we are talking about a different kind of intelligence, we don't know whether our past can shed any light on the issue of AI behavior.

 

But it's no coincidence that we have been the only intelligent hominin left on Earth for the last 10,000 years [the dates for the last other one standing, Homo floresiensis (if it was indeed the last), are not yet clear].

 

There are many theories about the absorption of the Neanderthals by us, including germs and volcanoes, but it can't be a coincidence that they were gone a few thousand years after we appeared in numbers, and that the last unmixed ones were from Gibraltar, one of the last places in Europe we reached.

 

The same happened in East Asia with the Denisovans and Homo erectus.

 

Some argue that the Denisovans actually were Homo erectus, but even if they were different, Homo erectus was still on Java when we arrived there.

 

So it seems we took care of at least four hominins, absorbing what remained of them.

 

We can see more or less the same pattern when Europeans arrived in America and Australia.

 

7) Competition for resources.

 

We will probably number about 9 billion in 2045, up from our current figure of more than 7 billion.

 

So Earth's resources will be even more depleted than they are now.

 

Oil, coal, uranium, etc., will probably be running out. Perhaps we will have new reliable sources of energy (improved renewables, fusion or, at least, molten salt thorium reactors, etc.), but that is far from clear.

 

A super AI might conclude that we waste too many valuable resources.

 

8) A super AI might see us as a threat.

 

After a few generations of conscious super AI, the brighter AIs probably won't see us as a threat. They will be too powerful to feel threatened.

 

But the first or second generations might think that we weren't expecting certain attitudes from them and conclude that we are indeed a threat to them.

 

 

Conclusion:

 

The question is: are we ready to accept the danger created by a conscious super AI?

 

Especially when we can get mostly the same rate of technological development with unconscious AI alone.

 

We all know the dangers of computer viruses and how hard they can be to remove. Now imagine a conscious virus that is much more intelligent than any one of us, has access in seconds to all the information on the Internet, can control all or almost all of our computers, including those essential to basic human needs and those with military functions, has no human ethical limits, and can use the power of millions of computers linked to the Internet to hack its way to fulfilling its goals.

 

My conclusion is clear: we shouldn't create any conscious super AGI, only unconscious AI, and the process of their creation should stay in human hands, at least until we can figure out what their dangers are.

 

Because we clearly don't know what we are doing and, as AI improves, this ignorance will probably only increase. We don't know exactly what will make an AI conscious or autonomous.

 

Moreover, the probability of being able to keep controlling a conscious super AI over the long term is zero.

 

We don't know how dangerous their creation will be. We don't have a clue how they will act toward us, not even the first or second generation of a conscious super AI.

 

Until we know what we are doing, how they will react, which lines of code are the dangerous ones that will change them completely, and to what extent, we need to be careful and oversee what specialists are doing.

 

Since major governments are aware that super AI will be a game changer for technological progress, we should expect resistance to adopting national regulations that would seriously delay its development in the absence of international regulations applying to everyone.

 

Even if some governments adopted national regulations, probably other countries would keep developing conscious AGI.

 

As Bostrom argues, this is the reason why the only viable means of regulating AI development seems to be international.

 

However, international regulations usually take more than 10 years to be adopted, and there seems to be no real concern with this question at the international or even the governmental level.

 

Even though the European Group on Ethics in Science and New Technologies (see link below) addressed some of the issues in its March 2018 Statement.

 

Thus, at the current pace of AI development, there might not be time to adopt any international regulations. Consequently, the creation of a conscious super AGI is probably unavoidable.

 

Even if we could achieve the same level of technological development with an unconscious super AI, like an improved version of AlphaZero, there are too many countries and corporations working on this.

 

Someone will create it, especially because the resources needed aren’t huge.

 

But any kind of regulation might give us time to understand what we are doing and what the risks are.

 

Probably, the days of open-source AI software are numbered.

 

Soon, all of these developments will be considered as military secrets.

 

Anyway, if the creation of a conscious AI is inevitable, the only way to avoid humans ending up out-evolved, and possibly extinct, would be to accept that at least some of us would have to be "upgraded" in order to incorporate the superior intellectual capacities of AI.

 

Clearly, we will cease to be human. Homo sapiens sapiens will be out-evolved by a Homo artificialis. But at least we will be out-evolved by ourselves, not driven extinct.

 

However, this won’t happen if we lose control of AI development.

 

Humankind extinction is the worst thing that could happen.

 

 

Further reading:

 

Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, 2006, p. 16; a summary at https://en.wikipedia.org/wiki/The_Singularity_Is_Near; also https://en.wikipedia.org/wiki/Predictions_made_by_Ray_Kurzweil

Pointing out the serious risks:
Eliezer Yudkowsky (1996); his more recent views were published in Rationality: From AI to Zombies (2015)
Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford, 2014); https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies
Elon Musk: http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html
Stephen Hawking: http://www.bbc.com/news/technology-30290540
Bill Gates: http://www.bbc.co.uk/news/31047780
Open letter signed by thousands of scientists: http://futureoflife.org/ai-open-letter/

A balanced view:
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence
https://en.wikipedia.org/wiki/Friendly_artificial_intelligence

 

Rejecting the risks:
Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology, 2006; a summary at https://en.wikipedia.org/wiki/The_Singularity_Is_Near
Steve Wozniak: https://www.theguardian.com/technology/2015/jun/25/apple-co-founder-steve-wozniak-says-humans-will-be-robots-pets
Michio Kaku (by merging with machines): http://www.vox.com/2014/8/22/6043635/5-reasons-we-shouldnt-worry-about-super-intelligent-computers-taking

 

Other texts:
Turing test: https://en.wikipedia.org/wiki/Turing_test#2014_University_of_Reading_competition
Denying the possibility of a real AI:
AlphaZero: https://www.nature.com/articles/nature24270.epdf; https://en.wikipedia.org/wiki/AlphaZero
Neural Network Quine: https://arxiv.org/abs/1803.05859
AutoML (https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html) and NASNet (https://futurism.com/google-artificial-intelligence-built-ai/)
Robot self-awareness test: http://uk.businessinsider.com/this-robot-passed-a-self-awareness-test-that-only-humans-could-handle-until-now-2015-7
Problems of reinforcement learning: https://www.alexirpan.com/2018/02/14/rl-hard.html
Mirror test in insects: https://en.wikipedia.org/wiki/Mirror_test#Insects
Elon Musk: http://www.cnbc.com/2014/11/17/elon-musks-deleted-message-five-years-until-dangerous-ai.html
Ben Goertzel: http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
What AlphaZero's achievements imply for the speed of development towards an AGI: https://intelligence.org/2018/02/28/sam-harris-and-eliezer-yudkowsky/; https://www.lesserwrong.com/posts/D3NspiH2nhKA6B2PE/what-evidence-is-alphago-zero-re-agi-complexity
John Rawls: https://en.wikipedia.org/wiki/Veil_of_ignorance
Neanderthal extinction: https://en.wikipedia.org/wiki/Neanderthal_extinction
European Group on Ethics in Science and New Technologies, March 2018 Statement: https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf
