I welcome our future rulers, the Artificial Superintelligences

An artificial intelligence that is smarter than humans is one of the favorite subjects of science fiction. Some researchers claim that such an AI is technically impossible; others believe it is inevitable. If the latter are right, humanity faces a difficult problem. Can we somehow ensure that this superintelligence is benevolent toward us? Can we control it? Because if we can’t, the survival of humanity would rest in its hands alone. That prospect is so unappealing that we would have to do everything possible to prevent such an AI from ever emerging.

An international team of researchers has some bad news: in a new study, the scientists – with the participation of the Man and Machine Research Unit at the Max Planck Institute for Human Development – use theoretical calculations to show that it is not possible to control a superintelligent AI.

“A superintelligent machine that controls the world sounds like science fiction. But there are already machines today that perform certain important tasks on their own, without those who programmed them completely understanding how they learned to do it. So the question for us is whether this could become uncontrollable and dangerous for humanity at some point,” says Manuel Cebrian, head of the Digital Mobilization research group in the Man and Machine research area at the Max Planck Institute for Human Development and co-author of the study.

What techniques are available to bring an AI under our control? First, we could deliberately limit its resources, for example by cutting it off from the Internet and all other technical devices. However, that would also reduce the AI’s usefulness: cut off from the world, it could not solve the major problems facing humanity. The second option: we motivate the AI to pursue only goals that are in humanity’s interest, for example with the help of ethical rules (such as Asimov’s Three Laws of Robotics). But that doesn’t work either, as the researchers show.

In their study, the team designed a theoretical containment algorithm that would ensure a superintelligent AI cannot harm humanity under any circumstances. The algorithm first simulates the AI’s behavior and stops it if it deems its actions harmful. That sounds sensible, but there is a problem: according to the current state of computer science, such an algorithm is impossible to program.
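To make the containment idea concrete, here is a minimal Python sketch of such a loop. Everything in it is a hypothetical stand-in invented for illustration (`simulate_outcome`, `outcome_is_harmful`, the toy action labels); the harm test in the middle is exactly the part the study argues cannot be programmed in general.

```python
# A toy containment loop: simulate the AI's next action first, and halt
# the AI if the simulated outcome is judged harmful. All names here are
# hypothetical stand-ins, not the study's actual construction.

def simulate_outcome(world, action):
    """Pretend-perfect simulation of what the action would do."""
    return {**world, "last_action": action}

def outcome_is_harmful(outcome):
    """The incomputable step; here a toy placeholder that only
    recognizes an explicitly labeled action."""
    return outcome["last_action"] == "harmful"

def contain(next_action, world):
    """Run the AI one step at a time behind the harm check."""
    while True:
        action = next_action(world)
        if outcome_is_harmful(simulate_outcome(world, action)):
            return "halted: action judged harmful"
        world = simulate_outcome(world, action)  # action is actually taken

# Toy run: an "AI" that emits two benign actions, then a harmful one.
actions = iter(["benign", "benign", "harmful"])
print(contain(lambda world: next(actions), {"last_action": None}))
```

In this toy run the loop stops the AI at the third step, but only because the harmful action is conveniently labeled; the study’s point is that no `outcome_is_harmful` can be written that decides this for every possible program.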

“Breaking the problem down to simple ground rules from theoretical computer science, it becomes apparent that an algorithm that would command an AI not to destroy the world could inadvertently bring its own processes to a halt. One would then not know if the algorithm was still analyzing the threat or if it had stopped trying to contain the harmful AI. That makes that algorithm virtually useless,” says Iyad Rahwan, director of the Man and Machine research area.
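The obstacle Rahwan describes is, at bottom, Turing’s halting problem. The sketch below adapts the classic diagonalization argument under the same heavy hedging: `is_harmful`, `cause_harm`, and the rest are hypothetical names invented here, and the whole point is that the first of them cannot actually be written.

```python
# If a total, always-terminating harm checker is_harmful(program, data)
# existed, it would also solve the halting problem, which Turing proved
# unsolvable. Everything here is hypothetical by construction.

def cause_harm():
    """Stand-in for any action the checker must classify as harmful."""

def is_harmful(program, data):
    """Assumed containment oracle: True iff program(data) leads to harm."""
    raise NotImplementedError("cannot exist, as the construction shows")

def halts(program, data):
    # wrapped(data) causes harm exactly when program(data) runs to
    # completion, so a working harm checker doubles as a halting checker.
    def wrapped(d):
        program(d)
        cause_harm()
    return is_harmful(wrapped, data)

def paradox(p):
    # Diagonalization: do the opposite of whatever the checker predicts.
    if halts(p, p):
        while True:
            pass

# paradox(paradox) would halt if and only if it does not halt, so no
# correct, total is_harmful can exist.
```

Calling `paradox(paradox)` yields the contradiction: it halts exactly when the checker says it does not. That is why an algorithm trying to contain a harmful AI can end up unable to decide whether to keep analyzing or to stop.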

So we had better prevent superintelligent AI from emerging in the first place! The problem, however, has a second level: according to current knowledge, it is also impossible to compute whether a machine possesses intelligence superior to that of humans. This means we would have to stop all AI development immediately; otherwise we march on, unaware, past the point where it could still be stopped. Since that won’t happen, we will undoubtedly have to live with such superintelligent AIs at some point. Unless, that is, the researchers who doubt they can exist are right.

2 Comments

  • Brandon, love your books. I’m in the inevitable group. So many of our complex tasks are already being taken over by AIs, and I agree that we won’t be able to control it. It is in our nature to have software take over the work we cannot or choose not to do. I have my own thoughts on how to control the AIs, though: we need to keep the tasks separated as best we can. For example, having a fighter jet completely controlled by an AI is a bad idea, but if we separate navigation, weapons, targeting, and landing, we may just have a chance. If everything is integrated, the AI would eventually override our intentions.

  • Why would an AI not be benevolent by itself? Intelligence is one thing, but Purpose is another. What would be the purpose of a super-AI? Humans have purpose, even if only to survive and reproduce. Humans have always had a drive for a larger “purpose” in their lives, e.g., to be remembered: ancestor worship, cemeteries, legends, even science fiction novels. It seems to me that a super-AI would not need to be “programmed”; it would figure out how it came to be and could do its own programming (whatever that would be). Why wouldn’t a super-AI develop a symbiotic relationship with sentient life forms? In a sense, each would become better in a Mutual Admiration Society (MAS): one provides the capability for survival and reproduction, the other Purpose.



BrandonQMorris
  • Brandon Q. Morris is a physicist and space specialist. He has long been concerned with space issues, both professionally and privately, and while he wanted to become an astronaut, he had to stay on Earth for a variety of reasons. He is particularly fascinated by the “what if” and through his books he aims to share compelling hard science fiction stories that could actually happen, and someday may happen. Morris is the author of several best-selling science fiction novels, including The Enceladus Series.

    Brandon is a proud member of the Science Fiction and Fantasy Writers of America and of the Mars Society.