On October 25, 2019, the Institute for Human Rights hosted Mathias Risse, Lucius N. Littauer Professor of Philosophy and Public Administration and director of the Carr Center for Human Rights Policy at Harvard University, and Sushma Raman, the executive director of the Carr Center. During the lecture and discussion, Risse asked the audience to consider the present and future moral and philosophical implications of ever-growing developments in artificial intelligence (AI) technology.
One of the best-known ethical dilemmas Risse addressed is the Trolley Problem, a thought experiment that seemed irrelevant to real life at the time of its conception but has massive implications in today's world. Imagine that you are standing by as a runaway trolley is headed toward five people who are tied to the tracks. You can either refuse to intervene and allow those five people to die, or you can divert the trolley onto a sidetrack where a single person is tied. Which option is more ethical? As AI technology develops and products such as self-driving cars become more common, we cannot ignore the ethical questions that will emerge and their attendant consequences.
Risse also discussed rising concerns about the relationship between social inequalities and AI technology. One concern is that, as technology develops, “unskilled” labor will be outsourced to AI, leaving behind the low-income communities that typically work those jobs. Not only does that leave people struggling to find work to support themselves and their families, but it also takes away their voice and political power by pushing them out of the job market and the economic system. There is also a concern that technology will become less accessible to low-income communities as it develops, and that underprivileged groups will be left behind. This has led many to worry that AI will “drive a widening technological wedge into society.”
After the lecture, Risse and Raman answered questions from the audience. One person asked which of the problems involving AI and human rights is the most concerning. In response, Risse pointed out that it depends on whom you ask. From policymakers to tech developers to “unskilled” laborers, each group would have a different perspective on which part of the issue is most urgent, because each party has a unique relationship with technology.
In closing his lecture, Risse noted that he wished he could end on a more cheerful note, but he found it nearly impossible given the long list of concerns the philosophical community holds about the future of humanity and artificial intelligence. Throughout his lecture and the Q&A session, Risse emphasized that there needs to be far more interaction between the AI community and the human rights community. While technological advancements can be wonderful and even lifesaving, it is vital that we evaluate the potential risks that come with them. Just because something is possible does not mean it should be done, and multiple perspectives are necessary to effectively evaluate any given possibility.