Are we still in control of our Brave New Artificial World? (Decomplexation I)

All countries that have the means to do so regard the digitization of information and its transmission as one of the most important technical tasks. In this way, ever larger volumes of data can be put to use in ever shorter intervals. Nuclear power plants, ballistic missiles, drones, driverless cars, and surgical procedures can be controlled remotely. State surveillance of entire populations has become just as possible as influencing the voting behavior of thoroughly profiled citizens.

It has, of course, been a trivial truth for thousands of years that knives can be used to cut open pumpkins or to murder people. It should therefore come as no surprise that Google may help us gain insight into thousands of facts while at the same time subjecting us to constant observation. In other words, I do not want to criticize digitization merely because, like every other technological breakthrough, it can be used and misused at the same time. Instead, I would like to focus on a completely different aspect – one hardly ever taken into account: the increasing complexity of the new artificial world we have ourselves created.

Such complexity means first of all

that an overwhelming majority of our contemporaries no longer understand the things they routinely use every day. While a car still belongs to the analog world, so that most of us can explain how and why it moves at all, more than ninety-nine out of a hundred people have no idea what happens inside an everyday gadget like a cell phone. At first glance, this fact need not cause concern. Our body and brain provide us with the most amazing services every day, yet even the greatest luminaries of medicine and neurology have unraveled only some of the processes taking place within them at any given moment.

In other words, the natural world has always been a mystery to man, but this lack of understanding did not prevent even Stone Age people from bending it to their needs. Indeed, the complexity of the natural world, stretching from atoms to cosmic galaxies, never threatened human survival. But what about the artificial world of computers, robots, nuclear-powered intercontinental rockets and the like, which we created ourselves? Is its growing complexity just as insignificant for the individual and social existence of humans? Apparently not. The artificial world confronts us with existential problems that never existed in the past.

Here we come across a first Basic Law

The number of those who, owing to their mental abilities and training, are able to develop, maintain and monitor the hardware and software of this artificial world decreases to the same extent as the latter's complexity increases.

This is an inevitable consequence of the fact that the Gaussian normal distribution of technical intelligence does not depend on our needs but is a constant (in every population, only a certain percentage of people have a technical IQ above a given value). From the outset, therefore, only a fraction of the population can serve as pioneers and maintenance personnel for this task. Even if this potential is far from exhausted in countries with large populations such as India or China, the first Basic Law nevertheless indicates that it is bound to shrink steadily, because increasing complexity will steeply raise the demands on technical intelligence. Not only will today's 99 percent of people fail to understand the cell phones of the n-th generation; the remaining one percent will itself melt down to a residual value, as the toy calculation below illustrates.
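A back-of-the-envelope sketch of this shrinkage, assuming – purely for illustration – that technical intelligence follows the same normal distribution as IQ (mean 100, standard deviation 15), with arbitrarily chosen thresholds:

```python
# Illustrative only: share of a population above a rising "technical IQ"
# threshold, assuming a normal distribution with mean 100 and sd 15.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

for threshold in (115, 130, 145, 160):
    share = 1 - iq.cdf(threshold)  # upper-tail probability
    print(f"technical IQ > {threshold}: {share:.4%} of the population")
```

Each 15-point rise in the demanded threshold cuts the qualified share by roughly an order of magnitude or more – from about 16 percent above 115 down to a few thousandths of a percent above 160: the melting-down the Basic Law describes.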

Complexity will be increased in two ways

In the analog age, no special technical skills were required to run a private institution such as a bank. This situation has changed fundamentally. Every financial institution today must expect to become inoperative from one moment to the next unless highly paid specialists set up, maintain and update the programs that electronically manage and control the flow of money around the clock. And since money flows have long since crossed national borders, international networking increases complexity still further.

And this is only part of the story. Outside attackers – brilliant amateurs on the one hand, equally highly paid experts from competing countries on the other – do their utmost to gain unauthorized access to these systems. Their ongoing attacks are another driving force behind the spiral of complexity in existing systems. Not only banks are subject to this compulsion but all manufacturing companies as well; indeed, this is one reason they keep growing larger, for otherwise they could not afford the required number of such specialists.

This results in a second Basic Law

The compulsion for size thus also results from the costs of increasing complexity. The consequences for society are already beginning to emerge, and they are anything but harmless. I can still remember the fun I had as a child using the square beer coasters in a pub to build a tower that could grow up to five stories high but usually collapsed after the third. What will our future look like when the artificial world around us grows more complex with each passing year? The danger of a system collapse increases with every floor we add to the tower. To prevent this from happening, the demands on maintenance and monitoring must rise at least to the same extent, as the toy model below makes plain.
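The intuition behind the coaster tower can be stated as a toy probability model. Assume, purely for illustration, that each added floor (each component of a system) holds independently with probability p; then a tower of n floors survives with probability p^n, which falls geometrically with height:

```python
# Toy model: if each floor independently holds with probability p,
# a tower of n floors survives with probability p**n.
# p = 0.95 is an arbitrary illustrative value, not an empirical figure.
p = 0.95

for floors in (1, 3, 5, 10, 20, 50):
    survival = p ** floors
    print(f"{floors:2d} floors: survives with probability {survival:.1%}")
```

With p = 0.95, a three-story tower still stands with about 86 percent probability, a ten-story one with 60 percent, and a fifty-story one with less than 8 percent. To keep the overall risk constant while adding floors, the reliability of every single floor – the maintenance and the monitoring – must rise along with the height.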

At this point a third Basic Law comes into effect

namely, the compulsion to massively expand technical education, especially in computer science, so that the potential of technical intelligence available in a given population is exploited as fully as possible. From elementary school (perhaps even kindergarten) to the universities, technical education will take up an ever greater share of the curriculum, pushing the traditional subjects – first and foremost, of course, the humanities – more and more into the background, a process we are already witnessing all over the world. However, this tendency, forced on us by the growing complexity of systems, stands in strange contrast to the intentions to which it owes its origin. We once believed that technology would simplify life and relieve people of tiresome everyday material worries in order to free their minds for higher purposes.

These expectations came true in many respects. For a mother in Vienna, it is undoubtedly a tremendous relief to be able to call her son in New York at any time or transfer money electronically. At least in its initial phase, technical progress was really what it was meant to be: a breathtaking advance into a fantastic world previously imagined only by storytellers.

In the meantime, this fairytale time lies behind us. Not only revolutions but complexity too devours its children. We know, for example, that fast breeder reactors may significantly stretch uranium reserves. That is why China, in particular, is sticking with this technology. Other countries such as Germany have turned away from it because the extraordinarily high complexity of such plants greatly increases the risk of wholesale nuclear contamination.

Fourth Basic Law

Extreme risks lead to just as extreme measures of control and thus further the more or less open transition to the surveillance state – a trend to be observed not only in China. Even among sociologists, it is common practice to interpret such state surveillance primarily in political terms, as if it were based on evil intentions and lust for power. Undoubtedly, this is often enough the case, but an ever larger part of central supervision is due to modern technology itself, that is, to the growing complexity of our artificial world. As the consequences of sabotage become more devastating and costly, states strive to prevent it from happening in the first place by means of complete surveillance, which of course increasingly restricts human freedom. The fourth Basic Law says:

Not only sabotage but technical progress as such is to blame

For example, consider the quantum computer, a product of outstanding technical intelligence. The moment it becomes marketable, so that any private individual can buy one, it will pose just as elementary a threat to society as the many nuclear arsenals that meanwhile even small countries can afford to develop and own. From one day to the next, banks will lose their protection against hackers, because the new technology will be able to crack all existing codes in a matter of seconds. All the money will then lie on a plate, so to speak, for all the world to take away.
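A sketch may make the threat concrete. Much of today's online banking rests on public-key schemes such as RSA, whose security depends on the difficulty of factoring a large number n = p·q; Shor's algorithm on a sufficiently large quantum computer would factor n in polynomial time, and whoever knows the factors can reconstruct the private key. The toy key below uses absurdly small textbook primes purely for illustration:

```python
# Toy RSA with tiny textbook primes, purely to show why factoring n breaks it.
# Real keys use primes of hundreds of digits; Shor's algorithm on a large
# quantum computer could factor such an n in polynomial time.
p, q = 61, 53                        # secret primes (absurdly small here)
n = p * q                            # public modulus: 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

secret = 42
cipher = pow(secret, e, n)           # anyone may encrypt with (n, e)

# An attacker who factors n = 61 * 53 recovers the private exponent too:
d_cracked = pow(e, -1, (p - 1) * (q - 1))
print(pow(cipher, d_cracked, n))     # prints 42 - the secret is exposed
```

Classically, factoring a 2048-bit modulus is hopeless; with Shor's algorithm that single step becomes fast, and the rest of the attack is the two lines at the end.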

In the end, technicians will, of course, develop counter-strategies. The largest banks are already looking for them in the field of quantum encryption. But the necessary consequence will be a further increase in complexity and much higher costs. In other words, we are rapidly approaching the point where the tower collapses, because constant increases in complexity will no longer be either manageable or affordable.

In the arms industry this point has already been reached

Our "Brave New Artificial World" has now reached a point where, with every passing day, there is a growing likelihood that something might "happen" through mere chance or human failure. This is the inevitable result of nuclear missiles becoming ever faster, so that the advance warning time before impact becomes ever shorter. In the case of a first strike by the opponent, neither Russians nor Americans will any longer have the roughly half hour after its detection that they could still count on a couple of decades ago. Now that, a few days ago, Russia successfully demonstrated to the world the test flight of "Zircon," a missile flying at nine times the speed of sound, this already minimal period has shrunk to a few minutes (depending on where the missiles are launched from).
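The shrinking window is simple arithmetic: warning time = distance / speed. Taking "nine times supersonic" as roughly nine times the sea-level speed of sound (about 343 m/s, hence roughly 3 km/s), and with purely illustrative distances rather than published flight data:

```python
# Back-of-the-envelope warning times: time = distance / speed.
# Mach 9 is approximated as 9 * 343 m/s (sea-level speed of sound);
# the distances are illustrative assumptions, not published figures.
speed = 9 * 343                      # about 3,087 m/s

for distance_km in (500, 1000, 2000):
    seconds = distance_km * 1000 / speed
    print(f"{distance_km:4d} km: ~{seconds / 60:.1f} minutes of warning")
```

A launch from 1,000 km away leaves barely five minutes; from 500 km, under three – the "few minutes" of the paragraph above.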

Fortunately, the danger of a deliberate first strike by a superpower is so small that an optimist may neglect it entirely. No president is so powerful that he would not have to consult his military beforehand – and the military knows the consequences quite well. The situation is quite different with the second strike, which may be triggered by sheer misinformation. That is exactly what happened in the Soviet Union in 1983, when the apocalyptic counter-strike was prevented at the very last moment by Lieutenant Colonel Stanislav Petrov. As for the US, ever since Kennedy and the Cuban Missile Crisis an aide (at present a woman) has had to follow the American president wherever he goes with a special black briefcase, so that he can give the final order for a nuclear second strike once an enemy first strike has been detected. Since a first strike only makes sense if it destroys the enemy's entire nuclear arsenal, the second strike must likewise be of maximum strength. Given a time window that has meanwhile shrunk to about five minutes, serious consultation with the military has become all but impossible. The president of a superpower must rely either on computers or on his gut to decide whether or not he will reduce the globe to rubble.

Whether we like it or not, we must acknowledge a fifth Basic Law

The growing complexity of the artificial world we have ourselves created has increased our freedom only in specific cases, while radically restricting it as a whole, since the self-extinction of the human species – the maximum loss of freedom – hovers over our heads for the first time in history as a real danger and prospect. Even if, for reasons of mental health, we suppress this sinister possibility from our consciousness, we cannot overlook the fact that growing complexity is pushing humanity towards systemic collapse and therefore towards a total negation of freedom.

In the field of armament, where each superpower forces the other to respond to growing speed and deadliness with ever faster and more lethal systems, the state of unstable complexity has already been reached. The banking system will reach that point as soon as all codes can be deciphered effortlessly. Technical progress in genetics is likewise heading toward a complexity that threatens to elude the control even of experts, since we will probably never know for certain what long-term effects selective interventions in the genetic material will have on the organism as a whole.

But the now classic example of potentially fatal effects

of growing complexity is the fossil-industrial revolution itself, whose main feature is an ever-increasing hunger for resources on the one hand and their transformation into waste products, largely consisting of non-biodegradable toxins, on the other. We know that the removal of CO2 from the air, of plastics from the seas, and of electronic, industrial and radioactive nuclear waste from the ground represent the great unresolved problems of our time. While initially only raw materials such as coal were mined, thousands of other substances, up to and including the rare earth elements, have since been added, and the waste materials and potential toxins already number in the hundreds of thousands. We have thus exponentially increased the complexity of our interventions in nature – with consequences that can no longer be ignored. Climate change is only the most visible sign that the artificial tower could very well collapse.

Failure of ethical control

Technology is a subsystem within the social realm, just as technical intelligence constitutes a subsystem within the mental abilities of human beings. As long as technology serves man, that is, human society as a whole, we have reason to call its achievements "progress." But as soon as the technical subsystem becomes independent and – owing to its growing complexity – turns into a danger for the social system as a whole (and for nature), we are forced to speak of its achievements as "technical regression." With the large-scale destruction of the natural foundations of life, the fossil-industrial epoch has reached a stage where this regression is visible to everyone and (at least in the field of armaments) even calls man's very survival into question.

With regard to the human body, we speak of cancer when a subsystem gets out of control; then we say that the immune system, the body's defenses, is failing. If, on the other hand, technology gets out of control, it is the immune system of society that is damaged: its ethical controls no longer work – those controls which examine and evaluate all human activities according to whether they benefit or harm the common good.

The ethical control of the whole

over its parts – its diverse subsystems – should be a matter of course. In the case of technology, it has failed because a taboo stands in the way, one that has by now hardened into dogma. The dogma runs roughly like this: every new discovery in the field of natural science represents an expansion of our knowledge – which is undoubtedly correct – and is therefore a blessing for mankind – which is undoubtedly incorrect.

This dogma is rooted historically in the fact that the beneficial effects of technical progress were long felt to be so overwhelming that doubts about them could be dismissed as mere backwardness and stupidity. This explains why, every time a Nobel Prize is awarded to the luminaries of science, humanity falls into a kind of euphoria, even though it is precisely this newly acquired knowledge that tends to increase the complexity of our artificial world and heighten its instability. We are well on the way to hopelessly damaging the natural world with its artificial counterpart, yet it is still considered the worst heresy to doubt technology itself, even though it is technology that set this process in motion in the first place.

Decomplexation – the new Basic Law

If we do not want to fail because of the self-created complexity of the new artificial world, only decomplexation, i.e. the conscious reduction of complexity, can save us. Of course, this does not mean a revolt against technology, as if we had to regress to the early Stone Age, when only a few thousand people roamed Europe in small hordes. Technical intelligence has long been our destiny, and the artificial world is a subsystem we can no longer do without. But this system needs strict control if it is not to become completely uncontrollable.

Once society regains control over its technical subsystem, it will not only prohibit further research on nuclear, biological and chemical weapons but also ensure that no more money is made available for research that promotes the development of a surveillance state and thus the suppression of freedom. Knowledge in itself – for example, knowledge of how to kill people en masse – has no value at all; only knowledge that promotes life and freedom does. Society therefore has not only the right but the obligation to distinguish between ethically valuable and ethically dangerous knowledge – to promote the one and to bring research on the other under its control. For knowledge and truth are by no means neutral from an ethical perspective. We owe to a philanthropic science that service to truth which, in the 17th and 18th centuries, the age of Enlightenment, successfully eliminated so many dogmatic lies. But knowledge and truth that serve the development of weapons of mass destruction, or that increase complexity to the point of uncontrollability, retrospectively call into question all the achievements that science and technology have conferred on man.