What is artificial intelligence, and is it safe for mankind?

From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like characteristics, AI can encompass anything from Google’s search algorithms to IBM’s Watson to autonomous weapons.

Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a single, narrow task (e.g., only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

Existential risk from artificial intelligence is the hypothesis that substantial progress in artificial intelligence (AI) could someday result in human extinction or some other unrecoverable global catastrophe.

For instance, the human species currently dominates other species because the human brain has some distinctive capabilities that other animals lack. If AI surpasses humanity in intelligence and becomes “superintelligent”, it could become powerful and difficult to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.
