Driverless cars: AI will not solve philosophical conundrums

1. The hallucinations of a technocrat

The question ‘Who should an AI-driven car kill?’, in the event it had to kill at all, has been posed by AI pundits and the media as one of the biggest conundrums facing self-driving cars. The ideas and research findings, as reported everywhere, are disheartening.

Politico, in their illustration of a senior and a child crossing the road at an undesignated spot, close to an oncoming AI-driven car carrying what appears to be a middle-aged male passenger, asks whether the car should kill the kid or the retiree. But why not the middle-aged passenger? A study entitled Moral Machine, conducted by MIT, presented respondents with various such scenarios and reported that, depending on their cultures, some would favor killing the old over the young, while others would favor killing the young over the old.

While the primary goal of the question – who should be killed by AI-driven cars – is to find a solution in which less road carnage takes place, the reports suggest that there is an ethical or moral angle to this. Politico audaciously wrote that “Ethics are built into the way humans behave.” Nothing could be further from the truth. Even the definition of ethics itself may be found unethical. The responses given to the Moral Machine study were termed ‘moral preferences.’ Again, this has nothing to do with morality or ethics. That an individual may prefer that an AI-driven car kill one person and save another, or a group of others, cannot be taken as an indicator of morality. Among other things, we must also recognize that the choice of the majority does not qualify as an indicator of morality. These, however, are but digressions!

The elephant in the room goes all but unnoticed.

Why should a car – human or AI driven – be expected to kill a human? Does such an object not better befit a weapon?

What is moral or ethical in such a design? Should such a technology be used in civil applications? We need to take a critical look at how these technologies are designed and the affordances appropriated to them!

Fundamentally, why should a car – irrespective of who is at the wheel – be expected, i.e. licensed, to kill anyone?

The complacency of the public and, first and foremost, the media in accepting without question car technology with in-built affordances to harm and even end human life is despicable. We have not even asked whether cars’ in-built affordance to pollute the environment is ethical or moral, yet this pollution is harming human health, the environment and, if you like, the climate. Is it time we posed the question of who should get asthma and other respiratory conditions from car pollution? Or whose climate should be affected?

Our focus, to the benefit of technocrats, has been narrowed, and we have been and continue to be misled. This peculiar framing of the discussion has blinded us to the totality of our human and social reality. We are no longer concerned with how technology may safeguard human life but rather, absurdly, with which life it should end, from a myriad of choices. Consequently, the technocrats have turned us into the plaything of their hallucinatory and deliberate ideological illusions.¹

Postman precisely captures our predicament in the title of his book, Technopoly: The Surrender of Culture to Technology. Therein, Postman posits technocrats as experts who, by trade and temperament, are solely concerned with the efficient operation of their institutions and are indifferent to the impact their creations have on the totality of the human condition and purpose.¹ To drive his point home, Postman presents the sobering case of Adolf Eichmann, who organized the transport of people under the Nazi regime and who, on later being charged with crimes against humanity, “argued that he had no part in the formulation of Nazi political or sociological theory; he dealt only with technical problems of moving vast numbers of people from one place to another. Why they were being moved and, especially, what would happen to them when they arrived at their destination were not relevant to his job.”

Such is the argument and defense that vehicle technocrats might make were they to be charged with crimes against human health, life, climate and nature, that:

They have no part in the formulation of any theory of optimal human experience or life, political or otherwise; they deal only with the technical problems of building cars for moving people and cargo. Why human life may be affected and, especially, why it may be ended by cars is not relevant to their jobs.

Disturbingly enough, this is not far-fetched. We have recently witnessed one technocrat – as if quoting Eichmann – make this very argument in his defense: when his platform (for connecting everyone) was used as a catalyst for disinformation and violence, crimes against humanity, he responded that he runs a technology company, not a media one. As if such a distinction were enough to absolve one of the platform’s inherent affordances. This is not ignorance. It is anomie.²

The open road (to hell) has been paved with good intentions

The question of who should be killed by self-driven vehicles arises from the mere fact that the road is an open season: cars can veer in whichever direction, while people and animals can venture into traffic with ease, anywhere, without indication or forewarning. Designing the road as an open field is another mark of the technocrats’ hallucination: how can roads be expected to be safe when all points are potential scenes of carnage? Yet we have played along and begot ourselves over a million deaths per year, not to mention disabilities, damage and other ‘externalities.’

In such an open season, AI will inevitably not be free of these problems, as Tesla and Uber have shown with their technologies behind the wheel. While much of the debate about AI has centered on the notion that it can be, or is, better than humans at driving, the debate should not enter the realm of arguing that AI can and should decide whom to kill. Driving is one thing; killing is another. But to confound the challenge of safe transport technology, the two have been conflated and made to appear part and parcel.

Gunkel, in The Machine Question: Critical Perspectives on AI, Robots, and Ethics, points out such incoherence in various ideas and perspectives on AI. One example is the equation of AI with humans on the basis that if the two – in action – cannot be told apart, then they are the same. This notion has been extended further to conclude that if they are the same, then they should have the same responsibilities and rights. This marks a watershed moment: by this logic, AI-driven cars can and should have full driving rights. In the muddled case where AI is assigned not only a technical task but a philosophical one, who in the world is going to accept the idea that AI chose to kill their innocent loved one because it was the best option? With humans, the idea that the event was an accident could be insinuated – not without responsibility, of course – but with AI, this befits premeditated murder! Both cases are, of course, categorically unacceptable. And it is at the technical level that such events can be minimized, if not entirely eliminated; the road must cease, in its design, to be an open season.

Premeditated murder by AI is neither justifiable nor a consolation for the bereaved. To say the least, this ability endows machines with a godly power. Besides degrading human life, it poses a real threat to it. And besides not solving the problem of ‘choosing’ the ‘right’ human to kill, it introduces further philosophical conundrums.

The god particle in the machine

Postman goes on to show that the subtitle of his book – the surrender of culture to technology – is not far-fetched. The godly ability of AI-driven cars to determine life – whose, when and why – is for him the pinnacle at which machines supersede humans and attain godly power, and society in turn becomes subservient to them. Consequently, as humans of past ages surrendered to the power of their gods and spirits, humans in the age of technocracy will surrender their moral, ethical and other ideals to AI. It might begin subtly, for instance with the nuanced labeling of a robot as a Moral Machine, as in the MIT study, with the intention of endowing such a machine with the power to determine life.

“Scientism is the desperate hope and wish and ultimately the illusory belief that some standardized set of procedures called ‘science’ can provide us with an unimpeachable source of moral authority, a suprahuman basis for answers like ‘What is life, and when, and why?’”

For Postman, the intention of scientism is to create an AI that, in a given instance, will know all the forces by which nature is animated and the positions of all the bodies […] and include all these data in its analysis, in which case nothing will be uncertain for it, as the future and the past will be bare before its eyes. In other words, scientism hopes to create a god (of one sort or another): an unimpeachable moral authority, a suprahuman, an unlimited, all-knowing mind to which the past, present and future are known. We can fill in the gaps as to what such a god may inherently require of us. Cheney-Lippold, in We Are Data: Algorithms and the Making of Our Digital Selves, alerts us to the dangers lurking as we toy with AI: in the case of AI-driven cars, bethink yourselves profiled, owned, controlled, commodified, and marked to be killed. Without informed consent. It is easy to see how this can pan out.

In reference to the opening question, an AI-driven car might get to a point where it simply starts running over seniors upon ‘realizing’ that they are the least ‘worthy’ of living, or the most ‘disruptive’ of traffic. It could earmark the ‘disabled’, ‘men’ or ‘women’, the ‘homeless’, those ‘under the influence’, or the ‘foreigner’ for death. It could end up running over ‘young’ people based on their ‘appearance’ of being a ‘menace’ to the system, or even merely to AI-driven cars. Nor should we forget that such digital profiles will not exist merely on the fly: they will reside on corporate or state servers, where they will be enriched with data from other points of interaction – what your destination was, how long you were there, who else was in that location, what you bought, whom you called or talked to, whether you obstructed traffic, and so on.

Cheney-Lippold illustrates at length that such AI profiling, besides being proprietary, will always be in fluctuation and never complete. Even so, he suggests, these profiles – our digital selves, as created by AI-driven cars – may become more important than who we really are or who we may choose to be. Inescapably, AI, though appearing at the outset to be a mere means of transport, may easily end up being endowed with dangerous powers: playing god, without being answerable to us, thinking about and interacting with us in ways unknown and undesirable to us, with no avenue available for redress.

An ounce of common sense will show that the challenge is one of creating car technology that upholds human life and poses no risk to its health or tenure. Our age, however, is peculiar. It defies common sense. It creates solutions to non-existent problems, or solutions which do not in any way solve their target problems. It is not because we were in need of a god that we should beget one in AI. We must, however, not overlook the economic and political gains to be made by the technocrat and the state: the ability to know more and more of what we do, and thus the ability to control our futures and their outcomes. It is at such a point that wealth and power inequalities may attain exponential proportions. Regrettably, what is presented as a quest for morality and ethics begets us the contrary.

2. The nature of the beast

Leaving the hallucinations of the technocrat behind, it is worth considering the nature of the beast, i.e. the transport infrastructure, to determine an alternative course of action. Postman sheds some light on the problem by expounding the domain, suggesting that there are three levels to it, each with its own unique characteristics and demanding its own unique solutions:

  1. A technical level, where a technical solution suffices, e.g. in the construction of a sewer system.
  2. A socio-technical level, where a technical solution may conflict with surrounding human purposes e.g. in medicine or architecture.
  3. A human purpose – philosophical – level, which cannot be solved by technology and where efficiency is usually irrelevant, such as in family life, spirituality, friendship, and problems of maladjustment.

AI-driven cars span all three levels. On the technical level, AI needs to drive safely from point A to point B. On the socio-technical level, there is a need to rethink the road infrastructure, even city architecture, to achieve optimal outcomes – that is, to lessen, if not eradicate, the meeting of car and human traffic. Things get murkier on the human-purpose level. It is at this level that philosophical conundrums come alive. And it is here that technocrats, AI pundits and cheerleaders think that we can relegate life – its meaning, its when and its whose – to the spontaneity of a machine.

Cars and other technologies geared towards enhancing human life should not be designed with the ability to harm or end human life.

And the sooner we acknowledge that there is no just, agreeable, humane, or even democratic way to choose who gets killed by a car, the better off we will be, for we can then focus on where the biggest impact can be made. From the outset, we must take a bold stand against technologies designed with affordances to harm or end human life.

TBC

3. Towards safe car technology

4. Why safe car technology may remain utopia

Further reading:

  1. The human experience: design for optimal engagement
  2. Anomie: the absurdity of work
  3. Bethink yourselves. Owned, controlled and commodified
  4. The god particle: in man and machine