Hybrid Superintelligence War Machines: The Unpredictable Risks of an Emergent Goal

Lubomir Todorov PhD
Universal Future Foundation
5 min read · Jan 25, 2020


You cannot always survive long enough to enjoy the wisdom of hindsight

[Image: "Homage to Picasso's Guernica" by Piero Sabatini]

"We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events. Overconfidence is fed by the illusory certainty of hindsight." — Daniel Kahneman

The advance of technology and human knowledge changes our tools, and every new kind of tool transforms the way we use it. From the Stone Age through metalworking, we used our tools following the rules of mechanics. Around 1600, when William Gilbert laid the foundations of the study of electricity, we had to open a new field of knowledge to learn how it functions and to find ways of making good use of it, by obeying the rules that electricity imposes. Nuclear power was much harder to tackle, given that we do not even have anatomical receptors for radioactivity, but the inquisitive human mind found a way to formulate the laws of nuclear physics and, by following those rules, to achieve disruptive technologies in energy generation, medicine, and weaponry. No matter how dramatic the road to success was in each of these areas, once the rules were experimentally confirmed, we humans acquired a feeling of safety. We know for certain, for example, that fissile material below critical mass cannot start a nuclear chain reaction. We know the rules, and we feel in control, because there are rules.

And then came Artificial Intelligence, transforming our rock-solid castle of perceived technological security into a house of cards. Because

in the case of AI, we face a technology that comes with no rules: in its advanced versions, AI comes with an agency that generates its own rules.

There are already symptoms that press us to fundamentally redefine our approach to studying technological security: no rules means no reliable prediction of outcomes, and unreliable predictions undermine our decision-making processes. And that already means we are about to lose control.

Some years ago, former US Secretary of Defense Donald Rumsfeld said: “There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones.”

One illustration of such symptoms is described by Nick Bostrom: “… search process, tasked with creating an oscillator, was deprived of a seemingly even more indispensable component, the capacitor. When the algorithm presented its successful solution, the researchers examined it and at first concluded that it ‘should not work.’ Upon more careful examination, they discovered that the algorithm had reconfigured its sensor-less motherboard into a makeshift radio receiver, using the printed circuit board tracks as an aerial to pick up signals generated by personal computers that happened to be situated nearby in the laboratory. The circuit amplified this signal to produce the desired oscillating output.” (Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, OUP Oxford, p. 154; citing Bird and Layzell 2002, Thompson 1997, and Yaeger 1994, pp. 13–14.)

Another symptom is related to the rapidly growing number of hybrid systems composed of three types of components: humans, artificial intelligence, and the physical infrastructure of actuators. It is important to note that in this type of hybrid system, two of the components are in fact agencies whose decision-making processes interfere with each other, and we like to believe that it is we, the humans, who are calling the shots. But are we?

W. Daniel Hillis, an inventor, entrepreneur, and computer scientist, offers a well-grounded view of what is actually going on: “Although we do not always perceive it, hybrid superintelligences such as nation-states and corporations have their own emergent goals. Although they are built by and for humans, they often act like independent intelligent entities, and their actions are not always aligned with the interests of the people who created them. The state is not always for the citizen, nor the company for the shareholder. … These organizations act as intelligences that perceive, decide, and act. Like the goals of individual humans, the goals of organizations are complex and often self-contradictory, but they are true goals in the sense that they direct action.” (W. Daniel Hillis, in Possible Minds, Penguin Publishing Group, pp. 172–173.)

There are sufficient grounds to

categorize modern armies as hybrid superintelligence war machines.

Throughout human history, every new invention has been tested for use in warfare as a matter of priority, and AI technologies in the 21st century are no exception.

If existential risk can be represented by any process that can evolve into changes with the potential to transform the immediate physical circumstances of Homo sapiens into a biologically unviable environment, then the very purpose of warfare is to materialize such changes purposefully and powerfully. At this point in time, war machines have by far the highest destructive potential of all existential risks. Their formidable firepower is under the command of human tribes prepared to use it against other human tribes. So far, given that a new world war could annihilate life on planet Earth, geopolitical superpowers have been able to rely on a rational assessment of the consequences of a global conflict: the prospect of a totally unlivable planet.

But with the actual existence of military organizations functioning as hybrid superintelligences, human rationality no longer matters: these new organizations come with the capacity to generate and follow their own emergent goals. Goals about which humans know nothing.

In the 21st century, for the first time in human history, we have reached an absurd state of affairs: the projection of cutting-edge technologies, in the form of hybrid superintelligence war machines, onto the persistent, primitive tribal structure of humankind, still functioning in the mode of nations fighting each other, has resulted in the real and imminent danger of self-generated global wars that could start without any explicit intention or decision made by a human being.


Lubomir Todorov PhD is a researcher and lecturer in future studies, anthropology, artificial intelligence, and geopolitics, and founder of the Universal Future civilizational strategy.