10. Risks of general AI
Is general AI a risk to humanity? If we create a general AI and give it control of decisions, will it then outsmart us? If we tease apart this scenario, we see that it misframes the nature of risk, the nature of AI, and even the question itself, and the framing matters. The first problem is that complexity is not something AI does well; people do it much better. If we compete with AI in a complex environment, we will win, and this is something you can extrapolate very far into the future, farther than any of the other problems we will run into. Computers will win in a deep-logic context, say playing Go. But the ability to handle deep logic does not translate to a complex domain. The technical origins of our ability to deal with complexity lie in non-universal, parallel, and random architectures, so there is a fundamental reason why we are good at this and computers are not. To the extent that computers are, and will remain, what they are today, this is not a competition. People can do logic, but logic is a superficial aspect of human thinking, not a fundamental part of how we think; logic requires assumptions. Human intelligence is not “one thing”: the key is to apply the right strategies to the right domains. So for the question of whether we can create a general-purpose AI, the answer is no. As soon as you make it closer to what humans are capable of doing, it becomes special purpose. Even for the pattern recognition tasks we give it today, it is special purpose.
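To make the “deep logic” point concrete, here is a minimal sketch of exhaustive game-tree search (plain minimax) over a toy game. The game and every name in it are hypothetical illustrations chosen for this sketch, and real Go programs use far more elaborate methods, but the character of the computation is the same: the machine wins by searching a closed, fully specified rule system deeply, not by coping with an open-ended, complex environment.

```python
# Minimal sketch of "deep logic": exhaustive game-tree search (plain minimax)
# over a toy game. This is an illustration only, not any real Go engine.

def minimax(total: int, maximizing: bool) -> int:
    """Return +1 if the maximizing (first) player can force a win from here, else -1."""
    if total >= 10:
        # The player who just moved pushed the total to 10 or more and wins.
        return -1 if maximizing else 1
    # Search every legal continuation; the rules are closed and fully known.
    scores = [minimax(total + step, not maximizing) for step in (1, 2)]
    return max(scores) if maximizing else min(scores)

# Toy game ("race to 10"): players alternately add 1 or 2; reaching 10 wins.
print(minimax(0, True))  # prints 1: the first player has a forced win
```

Everything the search needs is inside the rules; nothing in a complex environment is that fully specified, which is the asymmetry the paragraph above describes.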
Will silicon be as good as biology, a system with millions of years of evolved structures? First you would have to embed that structure in silicon. And if you embed all of that structure, what is it you are doing? You are replicating the structure, so can you then replicate one human being? Regarding the Turing test, I believe the harder challenge is not fooling a human but the extent to which you can replicate a particular human being, which is a very different question. You can pass the Turing test of fooling a human trivially. You cannot make general AI before you have the ability to actually replicate an individual human being. (A: “the Bar-Yam test.”)
We are becoming a global collective with higher complexity capabilities. With respect to risks, the right question is not whether AI can replicate general intelligence at the level of individual human beings, but whether it can replicate intelligence at the level of collectives, of society. Our current society is stupid in many ways but smart in many others: look at the products we produce and at the growth of our economy. As for risks, we can kill ourselves off with lots of things; it doesn’t have to be with AI. Anything with global consequences arising from the connectivity of the world will do. It would be dumb to create a program with the capability to perform actions that can destroy humanity (for example, a program in control of the release of biological materials). Systems crash. There is an example in the markets: Knight Capital, whose trading software contained a bug that lost the firm $440M in 45 minutes in August 2012. The vulnerability of the system is the issue. We need to protect ourselves from the risks of our creations, regardless of their form; focusing only on AI misses the point. We need to continually ask what risks we are taking and where the global system is vulnerable, and address them both as individuals and as collectives. The potential failure modes of AI (e.g. as illustrated by the paperclip maximizer thought experiment) are relevant and important, but they are not the only risks that need attention.
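To make the Knight Capital point concrete, here is a hypothetical sketch, not Knight’s actual code (their failure reportedly involved a stale feature flag activating dead test logic), of how an automated system without hard limits can compound a small bug into a catastrophic loss faster than any human can react. All names, prices, and thresholds here are invented for the illustration.

```python
# Hypothetical sketch of a runaway automated trader with and without a
# hard risk limit ("kill switch"). Illustration only, not real trading code.

MAX_LOSS = 1_000_000  # hypothetical hard risk limit in dollars

def buggy_signal(price: float) -> str:
    # The bug: a sign error makes the strategy buy high and sell low.
    return "buy" if price > 100 else "sell"

def run_trading_loop(prices: list[float], kill_switch: bool) -> float:
    position, cash = 0, 0.0
    for price in prices:
        if buggy_signal(price) == "buy":
            position += 1_000          # buy 1,000 shares
            cash -= 1_000 * price
        else:
            position -= 1_000          # sell 1,000 shares
            cash += 1_000 * price
        mark_to_market = cash + position * price
        if kill_switch and mark_to_market < -MAX_LOSS:
            # Bounded failure: halt trading the moment losses cross the limit.
            return mark_to_market
    return mark_to_market

prices = [100.0 + (i % 2) for i in range(10_000)]   # oscillating market
print(run_trading_loop(prices, kill_switch=False))  # loss grows without bound
print(run_trading_loop(prices, kill_switch=True))   # loss capped near MAX_LOSS
```

The point is the one made above: the danger lies less in the intelligence of the program than in the vulnerability of the system it is plugged into, and bounding that vulnerability (here, with a hard limit) is a design decision we make, or fail to make, ourselves.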
