Autonomous intelligent systems are often designed with complex, interacting objectives, or exhibit learned behavior that is difficult to specify. This obscures the social and ethical rules followed by systems like self-driving vehicles and hospital nursing robots. Though their missions (e.g., travel from point A to point B) are known, we still need to determine their ethical obligations (e.g., should a vehicle protect its passengers even at risk to pedestrians?). Knowledge of a system's obligations, and a method for communicating those obligations, are necessary for the development of trustworthy robotic systems. However, few tools exist for verifying, discovering, or instilling ethics in such systems. This talk will introduce recent work on tools for reasoning about the ethics of autonomous systems. Formal methods will be discussed for verifying the norms of systems whose obligations are expressed in deontic logic. I will describe processes for discovering the obligations of a system, and discuss the future of ethical obligations in autonomous systems.
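For readers unfamiliar with deontic logic, the following is a minimal sketch of its standard notation; the operators are textbook definitions, and the example obligation is an illustration of our own, not a formula from the talk:

```latex
% Standard deontic operators (illustrative):
%   O \varphi : "it is obligatory that \varphi"
%   P \varphi : "it is permitted that \varphi"
%   F \varphi : "it is forbidden that \varphi"
% Permission and prohibition are definable from obligation:
\[
  P\varphi \;\equiv\; \lnot O \lnot \varphi,
  \qquad
  F\varphi \;\equiv\; O \lnot \varphi
\]
% A hypothetical norm for a self-driving vehicle might read:
\[
  O(\mathit{avoid\_pedestrian\_harm})
  \;\wedge\;
  P(\mathit{protect\_passengers})
\]
```

In this style, verification asks whether a system's observed or specified behavior satisfies such obligation formulas.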
Colin Shea-Blymyer is a fourth-year PhD student in computer science and artificial intelligence. His research interests include ethics and robustness in artificial intelligence, automated discovery, and logic. He previously received his Master's degree in Computer Science and Applications from Virginia Tech, and conducted research on adversarial machine learning at MITRE.
Galois was pleased to host this tech talk via live-stream for the public. A recording of the presentation can be found above.