Reflecting on Behind Bars: Ethics and Human Rights in U.S. Prisons

By Gali Katznelson

Is it justifiable to chain women as they give birth? How about confining people in a way that is proven to be psychologically devastating and torturous? These are just two of the questions raised last week at Behind Bars: Ethics and Human Rights in U.S. Prisons, a conference sponsored by the Center for Bioethics at Harvard Medical School.

To kick off the two-day event, Dr. Danielle Allen delivered a moving keynote in which she urged us to question two key issues: the ethics of how we treat those behind bars, and the ethics of using bars at all. In addressing this second point, Dr. Allen tasked everyone attending the conference with a ‘homework assignment’: to read Sentencing and Prison Practice in Germany and the Netherlands: Implications for the United States, in order to encourage us to “think the unthinkable,” namely a more humane way of treating people who have committed crimes.

From this report, I learned that in Germany and the Netherlands, incarceration is treated as a last resort for people convicted of crimes. Alternative non-custodial sanctioning and diversion systems, such as fines and task-penalties, exist – and they are effective. In Germany, only 6% of sanctions in 2010 resulted in incarceration, and in 2004, 92% of sentences were for two years or less. These incarceration systems are organized around the principles of resocialization and rehabilitation. Time spent in prison is meant to resemble community life as closely as possible, and incarcerated people are encouraged to cultivate relationships within and outside of prison. In prison, individuals can wear their own clothes, structure their own days, work for pay, study, parent their children in mother-child units, vote, and return home occasionally. In these systems, respect for persons, privacy, and autonomy are strongly held values. Solitary confinement is rarely used, and cannot exceed four weeks a year in Germany or two weeks a year in the Netherlands.


Reflecting on Dementia and Democracy: America’s Aging Judges and Politicians

By Gali Katznelson

This month, the Petrie-Flom Center collaborated with the Center for Law, Brain & Behavior to host a panel entitled “Dementia and Democracy: America’s Aging Judges and Politicians.” The panelists, Bruce Price, MD, Francis X. Shen, JD, PhD, and Rebecca Brendel, JD, MD, elucidated the problems posed by America’s aging judges and elected politicians, as well as potential solutions. Reconciling dementia with democracy is a pressing matter. As Dr. Price explained, age is the single largest risk factor for dementia, with the risk doubling every five years after the age of 65 – and America is a country with five of its nine Supreme Court Justices over the age of 67, a 71-year-old president, a 75-year-old Senate Majority Leader, and a 77-year-old House Minority Leader.

In his talk “Dementia in Judges and Elected Officials: Challenges and Solutions,” Dr. Shen defined the complex problem. While most other professions do not retain workers into old age, many judges and elected officials continue to serve well into their 80s. To complicate matters further, without widespread regulations or metrics to identify how dementia impedes one’s work, the media is left to speculate about the cognitive status and fate of judges and elected officials. Dr. Shen’s key point was, “Surely we can do better than speculation.”

Dr. Shen proposed several solutions to address dementia in elected officials and judges. Currently, we leave it to the open market and to colleagues to regulate individuals, which remains a valid approach as we consider other options. Another default position is to diagnose based on publicly available data, a solution that introduces the specific ethical concerns that Dr. Brendel addressed in her talk (discussed below). There are, however, novel solutions. We could consider requiring cognitive testing and disclosure (which could be overseen by an internal review board), or we could simply impose an age limit for service. For judges, if such an age limit were imposed, we could create a rebuttable presumption under which a judge could continue to serve by completing an evaluation. Alternatively, judges could perhaps be limited to adjudicating specific cases based on their cognitive status.


AI Citizen Sophia and Legal Status

By Gali Katznelson

Two weeks ago, Sophia, a robot built by Hanson Robotics, was ostensibly granted citizenship in Saudi Arabia. Sophia, an artificially intelligent (AI) robot modelled after Audrey Hepburn, appeared on stage at the Future Investment Initiative Conference in Riyadh to speak to CNBC’s Andrew Ross Sorkin, thanking the Kingdom of Saudi Arabia for naming her the first robot citizen of any country. Details of this citizenship have yet to be disclosed, raising suspicions that this announcement was a publicity stunt. Stunt or not, this event raises a question about the future of robots within ethical and legal frameworks: as robots come to acquire more and more of the qualities of human personhood, should their rights be recognized and protected?

A 2016 report from the European Parliament’s Committee on Legal Affairs provides some insight. The report questions whether robots “should be regarded as natural persons, legal persons, animals or objects – or whether a new category should be created.” I will discuss each of these categories in turn, in an attempt to position Sophia’s current and future capabilities within a legal framework of personhood.

If Sophia’s natural personhood were recognized in the United States, she would be entitled to, among other rights, freedom of expression, freedom to worship, the right to a prompt, fair trial by jury, and the natural rights to “life, liberty, and the pursuit of happiness.” If she were granted citizenship, as is any person born in the United States or naturalized, Sophia would have additional rights, such as the right to vote in elections for public officials, the right to apply for federal employment requiring U.S. citizenship, and the right to run for office. With these rights would come responsibilities: to support and defend the Constitution, to stay informed of issues affecting one’s community, to participate in the democratic process, to respect and obey the laws, to respect the rights, beliefs, and opinions of others, to participate in the community, to pay income and other taxes, to serve on a jury when called, and to defend the country should the need arise. In other words, if recognized as a person, or, more specifically, as a person capable of obtaining American citizenship, Sophia could have the same rights as any other American, lining up at the polls to vote, or even potentially becoming president.

“Siri, Should Robots Give Care?”

By Gali Katznelson

Having finally watched the movie Her, I may very well be committing the “Hollywood Scenarios” deadly sin by embarking on this post. This is one of the seven deadly sins of sensationalizing artificial intelligence (AI) proposed by Rodney Brooks, former director of the Computer Science and Artificial Intelligence Laboratory at MIT. Alas, without spoiling the movie Her (you should watch it), it’s easy for me to conceptualize a world in which machines can be trained to mimic a caring relationship and provide emotional support. This is because, in some ways, it’s already happening.

There are the familiar voice assistants, such as Apple’s Siri, to which people may be turning for health support. A study published in JAMA Internal Medicine in 2016 found that the responses of smartphone assistants such as Apple’s Siri or Samsung’s S Voice to mental and physical health concerns were often inadequate. Telling Siri about sexual abuse elicited the response, “I don’t know what you mean by ‘I was raped.’” Telling Samsung’s S Voice you wanted to commit suicide led to the perhaps not-so-sensitive response, “Don’t you dare hurt yourself.” This technology proved far from perfect in providing salient guidance. However, in the year since this study came out, the programmers behind Siri and S Voice have remedied these issues by providing more appropriate responses, such as counseling hotline information.

An AI specifically trained to provide helpful responses to mental health issues is Tess, “a psychological AI that administers highly personalized psychotherapy, psycho-education, and health-related reminders, on-demand, when and where the mental health professional isn’t.” X2AI, the company behind Tess, is in the process of finalizing an official Board of Ethics, and for good reason. The ethical concerns surrounding an artificially intelligent therapist abound, from privacy and security issues to the potential for delivering misguided information that could cost lives.

The 21st Century Trolley

By Gali Katznelson

Here’s a 21st-century twist on the classic trolley dilemma in ethics: The trolley is a car, you are the passenger, and the car is driving itself. Should the autonomous car remain on its course, killing five people? Should the car swerve, taking down a different bystander while sparing the original five? Should the car drive off the road and kill you, the passenger, instead? What if you’re pregnant? What if the bystander is pregnant? Or a child? Or holds the recipe for a cure for cancer?

The MIT Media Lab took this thought experiment out of the philosophy classroom by allowing users to test their moral judgments in a simulation. In this exercise, participants decide which unavoidable harm an autonomous car must commit in difficult ethical scenarios such as those outlined above. The project is a poignant perversion of Philippa Foot’s famous 1967 trolley dilemma, not because it allows participants to compare their own judgments with those of other participants, but because it indicates that the thought experiment actually demands a solution. And fast.

Several companies, including Google, Lyft, Tesla, Uber, and Mercedes-Benz, are actively developing autonomous vehicles. Just last week, the U.S. House of Representatives unanimously passed the SELF DRIVE (Safely Ensuring Lives Future Deployment And Research In Vehicle Evolution) Act. Among several provisions, the act allows the National Highway Traffic Safety Administration to regulate a car’s design and construction, and designates states to regulate insurance, liability, and licensing. It also paves the way for car manufacturers to test 25,000 autonomous cars in the first year, and up to 100,000 cars within three years.