What are Our Duties and Moral Responsibilities Toward Humans when Constructing AI?

Much of what we fear about artificial intelligence comes down to our underlying values and perceptions of life itself, as well as the place of the human within that life. The New Yorker cover last week was a telling example of the kind of dystopic society we claim we wish to avoid.

I say “claim” not accidentally, for in some respects the nascent stages of such a society already exist; perhaps they have existed for longer than we realize or care to admit. Regimes of power, what Michel Foucault called biopolitics, are embedded in our social institutions and in the mechanisms, technologies, and strategies by which human life is managed in the modern world. This arrangement could be positive, neutral, or nefarious—for it all depends on whether these institutions are used to subjugate (e.g. racism) or to liberate (e.g. rights) the human being; whether they infringe upon the sovereignty of the individual or uphold the sovereignty of the state and the rule of law. In short, biopower is the impact of political power on all domains of human life. This is all the more pronounced today, as technological advances have enabled biopower to stretch beyond the political into almost all facets of daily life.

Thomas the Tank Engine is, for all intents and purposes, meant to be a role model for young children. The lessons of patience, resilience, and industriousness are not lost on the conscientious parent who seeks to instil positive values in his or her child. While I would by no means disparage these values, I have always found something unsettling about the way in which the anthropomorphized steam engine is praised for being a “Really Useful Engine.” Not “good,” not “moral,” not a vessel of one of the cardinal virtues: courage, prudence, temperance, justice—rather, Thomas seems to be praised first and foremost for his utility, for doing his bit to make the island of Sodor a better place. That is not intrinsically problematic, but it reveals to us something about how values are reflected and shaped in society, often unconsciously. One senses here something of Bentham’s shadow, the peculiar vestiges of a particular British version of utilitarianism.

Society seeks a utopia. This is no less true in real life than it is on the fictional island of Sodor. But society’s vision of a “better place” is necessarily shaped by the citizens of that community, each of whom has his or her own view of an ideal life. Hence, competing or conflicting claims to what might constitute the best of all possible worlds invariably lead to a less-than-perfect social order. Supercomputers, robotics, and AI could potentially alleviate much suffering and make daily life both easier and safer; but these technologies could just as easily develop in such a way that inequalities are exacerbated and a few companies, engineers, or thinkers dictate the trajectory of life for the many.

Apart from the religious implications of the advances in artificial intelligence, some of which can be found here—and about which I will write more at a later date—there is another big question for the here-and-now. What are our duties and moral responsibilities toward humans and humanity when constructing AI?

The Berggruen Institute is one establishment tackling this question. Its global media platform, The WorldPost, published in partnership with The Washington Post, examines pressing issues from a global perspective, looking across cultural and political boundaries. A recent short but compelling piece citing experts in the field argued that whoever dominates AI will put their stamp on the social order. I would argue, furthermore, that any AI program must, as a matter of necessity, take into account its impact on human lives at both the individual and the societal level. This discussion must take place not in the sphere of the market but according to the fundamental principles of “beneficence” and “non-maleficence.” It might include—though is not limited to—discussion derived from concepts in ethics, religion, and culture, or around notions of the inviolability of the human, the dignity of personhood, or inalienable human rights.

Advancements in AI algorithms and the programs that drive them should be framed in properly human terms, not in the rubrics of the economic market. Otherwise we risk creating a society in which humans are in the service of machines rather than the other way around. Indeed, if we consider what I alluded to above, perhaps we are not all that far from such a society today. Despite our having more technology and resources than ever before, the wealth gap is wider than it has ever been, while people continue to work longer hours, enjoy less leisure, and retire later. What purpose is our technology serving?

I would like to recall for a moment, perhaps nostalgically, Bertrand Russell’s famous 1932 essay, “In Praise of Idleness.” Far from glorifying laziness, the mathematician-philosopher makes a strong case that if labour were apportioned equally amongst everyone, working days would shorten, unemployment would fall, and human happiness would rise with the added leisure, which in turn would foster greater involvement in the arts and sciences. We might apply the same argument today, only more forcefully, since labour could be shared out not amongst other people but delegated to computers, machines, robots. As AI becomes capable of doing more and more, we ought to be freed from the constraints of the 60+ hour work week, that we might devote the better part of our time to contemplation, the creation of art, and the pursuit and production of knowledge, wisdom, meaning.
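
By way of a toy illustration (the numbers and the little function below are my own, not anything from Russell’s essay), the arithmetic of his argument looks something like this: a fixed stock of necessary labour, shared evenly, shrinks further for each person as machines absorb a share of it.

```python
# Toy model of Russell's labour-sharing argument (illustrative numbers only).

def hours_per_person(total_weekly_hours_needed, workers, automated_share=0.0):
    """Weekly hours each person works if the necessary labour is shared
    equally, after machines absorb a given fraction of it."""
    remaining = total_weekly_hours_needed * (1.0 - automated_share)
    return remaining / workers

# Suppose a community needs 4,000 hours of labour per week and has 100 workers.
print(hours_per_person(4000, 100))                       # 40.0 -- the familiar work week
print(hours_per_person(4000, 100, automated_share=0.5))  # 20.0 -- half delegated to machines
print(hours_per_person(4000, 100, automated_share=0.75)) # 10.0 -- leisure for art and thought
```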

But this can only come about if we think seriously about how we want to order our society and what value we put on human life over and above the notions of “efficiency” or “utility” or whatever else might drive Thomas the Tank Engine to do his work. The fear, then, is not that we make robots or AI too human—rather, the fear is that we make ourselves too machinelike.
