Singularity


Willard McCarty on Humanist pointed me to a quite silly article in the Economist entitled “March of the Machines”. It can almost be called a genre piece. The author heavily downplays the possible negative effects of artificial intelligence and then argues that society should find an ‘intelligent response’ to AI—as opposed, I assume, to uninformed dystopian stories.

But I do hope that the intelligent response society seeks to AI will be less intellectually lazy than that of the author of said contribution. To be honest, I think someone needed to crank out a 1,000-word piece quickly and resorted to sad stopgap rhetoric.

In this type of article there’s invariably a variation on this sentence: “Each time, in fact, technology ultimately created more jobs than it destroyed”. As if—not denying here a job’s power to be meaningful and fulfilling for many people—a job were the sole quality of existence.

Worse is that such multi-purpose filler arguments ignore the unintended side effects of technological development. Mechanisation brought us mass production; we know it also brought mass destruction. It is always sensible to consider both the possible dystopian and utopian scenarios. Whatever Andrew Ng (quoted in the article) as an AI researcher is obviously bound to say, it is actually very sensible to consider the overpopulation of Mars before you colonise it. Before conditions there are improved for human life—at whatever expense—even a few persons will effectively constitute such an overpopulation. Ng’s argument is a non sequitur anyway. If the premise of the article is correct, we are not decades away from the ubiquitous application of AI. Quite the opposite: the conditions on Earth for AI have been very favourable for more than a decade already. We can hardly wait to try out all our new toys.

No doubt AI will bring some good, and no doubt it will also bring a lot of bad. This is not inherent in the technology, but in the people who wield it. Thus it is useful to keep critically examining all applications of all technologies while we develop them, instead of downplaying their unintended side effects without evidence.

If we do not, we may create our own foolish utopian illusions. For instance, when we start using arguments such as “AI may itself help, by personalising computer-based learning and by identifying workers’ skills gaps and opportunities for retraining.” Which effectively means asking the machines what the machines think the non-machines should do. Well, if you ask a machine, chances are you’ll get a machine-like answer and eventually a machine-like society. Which might be fine for all I know, but I’d like that to be a very well-informed choice.

I am not a believer in The Singularity. Claims that machines and AI will aggressively push out humankind are in all likelihood gross exaggerations. But a realistic possibility is the covert permeation of human society by AI. We change society through our use of technology, and the technology changes us too. This has always been the case and always will be, and it is far from a moral or ethical wrong. But we should be conscious and informed of these changes, so that the choice remains with us and not with the machine. If a dialogue between man and (semi-)intelligent machine were started as naively as the author of the Economist piece suggests, then humankind might indeed very naively be set to become machine-like.

Machines and AI are, certainly until now, extensions and models of human behaviour. They are models and simulations of such behaviour; they are never humans. This can improve human existence manifold. But having the heater on is something quite different from asking a model of yourself: “What gives my life meaning? How should I come to a fulfilling existence?” Asking that of a machine, even a very intelligent one, is still asking a machine what it is to be human. It is not at all excluded that a machine may one day find a reasonable or valuable answer to that. But I would certainly wait beyond the first few iterations of this technology before buying into any of the answers we might get.

It is deceptively easy to be unaware of such influences. In 1995 most people found cell phones marginally useful and far too expensive. A mere 20 years later, almost no one wants to be parted from his or her smartphone. This has changed how we communicate, when we communicate, how we live, who we are. AI will have similar consequences. Those might be good, those might be bad. They should not, however, be covert.

Thus I am not saying at all that a machine should never enter a dialogue with humans on human existence. But when we enter that dialogue, we considerably change the character of the interaction we have had with technologies for as long as we can remember. Humans have always defined technology, and our use of it has in part defined us. By changing technology we change ourselves. This plays out on the individual level—I am a different person now, due to using programming languages, than I was when I did not—and on the scale of society, where we are part of socio-technical ecosystems comprising technologies, communities, and individuals.

But these interactions have always been a monologue on the intellectual level. As soon as this becomes a dialogue, because the technology can now literally speak to us, we need to be aware that it is not a human speaking to us, but a model of a human.

I for one would be excited to learn what that means and what riches it may bring. But I would always enter such a conversation well aware that I am talking not to another human but to a machine, and I would weigh that fact into my valuation and evaluation of the conversation. To assume that AI will answer questions on what course of action would lead me to improving my skills and my being may be buying too heavily into the ability of AI models to understand human life.

Sure, AI can help. Even more so if we are aware that its helpful qualities are by definition limited to the realm of what the machine can understand.
