
Harrison Receives NSF Grant for Autonomous Systems Research

June 03, 2019

The three-year grant is for approximately $600,000 and will involve collaboration with the Georgia Institute of Technology.

Brent Harrison, assistant professor in the Department of Computer Science, has received a three-year award from the National Science Foundation (NSF) to enhance the safety of autonomous systems. The grant is for approximately $600,000 and involves collaboration with the Georgia Institute of Technology.

The abstract for the project is below.

“In the near future we are likely to see increasingly capable autonomous systems operating in proximity to humans and immersed in society. As these systems become more sophisticated, they will interact increasingly with humans. With this increased human-agent interaction comes an increased obligation to ensure that autonomous systems do not cause even unintentional harm to a human. Creating systems that cannot intentionally or unintentionally harm humans is not an easy task. This is because there are infinitely many undesirable outcomes that can be achieved in an open world, making it impossible to instruct these systems to avoid each one. If the desired behavior cannot be directly specified, then it must be learned. Past approaches to learn these types of behaviors have focused on learning from human examples, but these methods are unlikely to scale. This research uses natural language explanations of behavior as a scalable alternative for training autonomous agents for safe operation. Naturalistic descriptions contain vast amounts of information about sociocultural norms, which make them rich sources for such training. Enabling systems to better understand and learn from such descriptions will enable human operators to more naturally specify goals or tasks for the agent to complete.

This research explores the concept of learning via natural language descriptions of desired behavior. This technique uses procedural knowledge contained in natural language explanations to help train autonomous agents. Concretely, this approach learns utility functions that can be used to guide autonomous agents towards behaviors that are aligned with the description used for training. To accomplish this, researchers will create computational models capable of extracting both knowledge about sociocultural norms as well as procedural knowledge from naturally occurring corpora. These models will then be used to create behavior policies that are both aligned with sociocultural norms and procedurally plausible. To further ensure that these models can be practically deployed, researchers will enable their models to incorporate a "human in the loop" to provide online feedback about the quality of these learned behavior policies in terms of their social acceptability and appropriateness. Safeguards will also be investigated to protect the learned behavior policies against the effects of adversarial or malicious training examples.”
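As a rough illustration of the idea described in the abstract (not the project's actual models or code), the sketch below shows how a natural-language description of desired behavior might be turned into a simple utility function that scores candidate actions, with an operator's online feedback adjusting those scores. The function names, the keyword-matching "model," and the toy action set are all hypothetical placeholders for the learned models the researchers describe.

```python
# Minimal, hypothetical sketch of learning a utility function from a
# natural-language behavior description, with human-in-the-loop correction.

from typing import Dict

def utility_from_description(description: str) -> Dict[str, float]:
    """Toy stand-in for a learned model: map phrases in the description
    to per-action utility scores."""
    utilities = {"wait": 0.0, "pass_item": 0.0, "push_past": 0.0}
    text = description.lower()
    if "polite" in text or "wait" in text:
        utilities["wait"] += 1.0
    if "help" in text or "hand" in text:
        utilities["pass_item"] += 1.0
    if "do not push" in text or "never push" in text:
        utilities["push_past"] -= 2.0  # penalize explicitly forbidden behavior
    return utilities

def choose_action(utilities: Dict[str, float]) -> str:
    """Greedy policy: pick the action with the highest utility."""
    return max(utilities, key=utilities.get)

def human_feedback(utilities: Dict[str, float], action: str,
                   approved: bool, step: float = 0.5) -> None:
    """Online human-in-the-loop correction: nudge an action's utility
    up or down based on an operator's judgment."""
    utilities[action] += step if approved else -step

if __name__ == "__main__":
    description = ("Be polite: wait for people to pass, help by handing "
                   "items over, and do not push past anyone.")
    utilities = utility_from_description(description)
    action = choose_action(utilities)
    print("chosen action:", action)

    # The operator flags the chosen behavior as inappropriate,
    # and the learned policy is adjusted accordingly.
    human_feedback(utilities, action, approved=False)
    print("after feedback:", choose_action(utilities))
```

In the actual research, the keyword matching above would be replaced by models trained on naturally occurring text corpora, and the feedback step corresponds to the "human in the loop" evaluation of social acceptability described in the abstract.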