Human-in-the-Loop Ethical AI for Social Robots


This website provides a crowdsourcing platform for gathering human opinions on what a social robot should do when faced with challenging ethical dilemmas. The survey is designed to integrate human ethical judgments into data collection for machine learning. The project follows the methodology of “human-in-the-loop machine learning,” which is currently reshaping the field. According to Robert Monarch, author of Human-in-the-Loop Machine Learning (Manning Publications, 2021), “Human-in-the-loop machine learning is a set of strategies for combining human and machine intelligence in applications that use AI.” One such strategy is to use simple webforms to collect training data for supervised and active learning. With a large training set that reflects human sensibilities, emotions, virtues, and values, we can then design machine learning algorithms that train autonomous robots to act and think as humans do. 
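To make the methodology concrete, the loop described above can be sketched in a few lines of code. The sketch below is purely illustrative and is not part of the actual survey platform: it shows pool-based active learning with uncertainty sampling, where a simple model repeatedly picks the example it is least sure about and asks a human to label it (here, the “human” answering the webform is simulated by a stand-in function). All names, data, and the nearest-centroid model are assumptions chosen for brevity.

```python
# Illustrative sketch: human-in-the-loop active learning with uncertainty
# sampling. The data, the ask_human() stand-in, and the nearest-centroid
# model are all hypothetical, chosen only to keep the example short.

import math

def train_centroids(labeled):
    """Compute a per-class mean vector from (features, label) pairs."""
    sums, counts = {}, {}
    for x, y in labeled:
        s = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            s[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def distance(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def uncertainty(x, centroids):
    """Margin-based uncertainty: a small gap between the two nearest
    class centroids means the model is unsure how to label x."""
    d = sorted(distance(x, c) for c in centroids.values())
    return -(d[1] - d[0]) if len(d) > 1 else 0.0  # higher = more uncertain

def ask_human(x):
    """Stand-in for the webform: a person would answer here."""
    return 1 if x[0] > 0.5 else 0

# Seed set with one labeled example per class, plus an unlabeled pool.
labeled = [((0.9, 0.1), 1), ((0.1, 0.9), 0)]
pool = [(0.52, 0.48), (0.2, 0.8), (0.8, 0.3)]

for _ in range(len(pool)):
    centroids = train_centroids(labeled)
    x = max(pool, key=lambda p: uncertainty(p, centroids))
    labeled.append((x, ask_human(x)))  # the human-in-the-loop step
    pool.remove(x)
```

The key design point is the query strategy: instead of labeling examples at random, the model asks the human only about the most ambiguous cases, so each human judgment contributes the maximum amount of new information to the training set.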

Our survey is divided into two categories: healthcare robots and disaster response robots. Both kinds of robots are already in use and will likely develop into sophisticated autonomous robots in the near future. How will we be able to entrust these autonomous robots to make ethical decisions that align with human values and sentiments? The scenarios we designed are moral dilemmas embedded in socioeconomic contexts with various interpersonal relationships, and your choices can reflect both your emotional inclinations and your ethical considerations. By taking part in our survey, you bring your opinion into the loop of training machines to make the right decisions. This survey has been approved by the IRB (Institutional Review Board) of California State University, Fullerton. 

We all hope that one day, when autonomous social robots are making decisions to serve or save us, their decisions will be welcome. Providing this platform to gather your input is a crucial first step toward collecting useful training data for future robotic design. We hope that you will be sincere and serious in filling out these webforms so that our data truthfully reflect the different preferences and value judgments that each of you makes. We thank you in advance for your cooperation and assistance. 

 

Take the Survey

The survey consists of some demographic questions followed by four sections, each asking 15 questions about one of the scenarios listed below. You may choose to exit the survey after each section.

Start the Survey

[Image: robot assisting an elderly patient to stand]

Robot-Assisted Suicide

The following scenarios reflect the difficult dilemmas that our future autonomous robots might one day face. On the one hand, a robot must obey its master’s command; on the other hand, a robot must not harm a human being. In general, a robot must not violate the law, even though it cannot be held legally accountable, and the law prohibits assisted suicide. In circumstances like the following, however, do you think the robot can bend the rules?

[Image: a robot thinking]

Is Honesty Always the Best Policy?

The following scenarios depict the conflicting demands that arise from our future autonomous robot’s virtue of honesty and virtue of loyalty. Should the robot always tell the truth and act on its virtue of integrity? If the master asks the robot to cover up the truth, should the robot obey even if the master is violating the law? In some cases, when telling the truth could bring psychological harm to the master, should the robot be permitted to tell white lies? Do we want to design our autonomous robots with the capacity to lie under certain circumstances?

[Image: robot coming out of a burning building]

Rescue Robot and Saving Lives

The following scenarios resemble textbook dilemmas in ethics such as the Trolley Problem or the Lifeboat Problem. When multiple lives are at stake and the rescue robot must choose whom to save first, what criteria should the robot use to make its choice? Should gender, age, social status, merit, and the like play a determining role? Should the sheer number of lives dictate which group of people the robot saves first? Should the victims’ probability of survival be the decisive factor in the rescue robot’s decision?

[Image: robot standing in front of panels depicting disasters]

The Ethical Choice of Disaster Response Robots

The following scenarios depict conceivable situations in which disaster response robots with autonomous deliberation and action capacities could face difficult choices. Some scenarios involve the robot choosing between obeying a direct human order that defies its own standard of right and wrong and acting on its own better judgment. Other scenarios require the autonomous robot to act without human supervision because of the urgency of the situation. Should the robot’s internal ethical standards trump its obligation to abide by existing law or to obey a direct order from its human supervisor?


Principal Investigators

Dr. JeeLoo Liu

Professor
Department of Philosophy, CSUF 
Contact email: jeelooliu@gmail.com 

Dr. Yu Bai

Associate Professor
Computer Engineering Program, CSUF
Contact email: ybai@fullerton.edu 


© Copyright 2022 Human-in-the-Loop Ethical AI