NAV-VIPER: Explainable Reinforcement Learning for Navigation


Overview

Many powerful and highly successful machine learning models are “black boxes”: their inner logic is inscrutable, or at least requires significant analysis to penetrate. This has led to growing interest in Explainable Artificial Intelligence (XAI) and an increasing body of work on explainability and interpretability in deep learning and reinforcement learning.

NAV-VIPER is a method for solving a challenging real-world problem, robot navigation, in an explainable manner: a reinforcement learning pipeline that produces a decision-tree policy. The task is a robotics navigation challenge in which a mobile robot must avoid static and dynamic obstacles and reach a goal location. This also requires adapting the algorithm to handle a reinforcement learning problem that requires curriculum learning to solve.
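
To give a concrete flavor of the underlying idea, the sketch below shows a VIPER-style extraction loop that imitates a trained expert with a decision tree, relabeling visited states with the expert's actions. It is a minimal illustration under assumed interfaces (an expert with act(obs) and a Gym-style env), not the NAV-VIPER implementation itself.

    # Minimal VIPER-style extraction sketch (illustrative, not the actual
    # NAV-VIPER code). Assumes a Gym-style env and an expert with .act(obs).
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def extract_tree_policy(env, expert, iterations=10, episodes=5, max_depth=8):
        states, labels = [], []
        tree = None
        for _ in range(iterations):
            for _ in range(episodes):
                obs, done = env.reset(), False
                while not done:
                    # Always label the visited state with the expert's action
                    # (DAgger-style), even when the student tree is driving.
                    states.append(obs)
                    labels.append(expert.act(obs))
                    action = labels[-1] if tree is None else tree.predict([obs])[0]
                    obs, _, done, _ = env.step(action)
            tree = DecisionTreeClassifier(max_depth=max_depth)
            tree.fit(np.array(states), np.array(labels))
        return tree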

With a tree-format policy and semantic features, one can not only use existing verification techniques to make deterministic statements about the robot's behavior, but also answer the question of why the robot chooses a given behavior.
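
As an example of the kind of “why” answer a tree policy affords, one can trace the root-to-leaf decision path for a given observation and read off the predicate at each split. The sketch below does this with scikit-learn's tree internals; the semantic feature names are hypothetical stand-ins for the robot's actual features.

    # Sketch: explain a tree policy's action by listing the split predicates
    # along its decision path. Feature names are hypothetical examples.
    FEATURES = ["dist_to_goal", "angle_to_goal", "min_obstacle_dist", "obstacle_bearing"]

    def explain_action(tree, obs):
        path = tree.decision_path([obs]).indices  # node ids, root to leaf
        t = tree.tree_
        predicates = []
        for node in path:
            if t.children_left[node] == t.children_right[node]:
                continue  # leaf node: no split to report
            f, thr = t.feature[node], t.threshold[node]
            op = "<=" if obs[f] <= thr else ">"
            predicates.append(f"{FEATURES[f]} {op} {thr:.2f}")
        return " AND ".join(predicates)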

We also developed XAI-N, a procedure for improving navigation by taking advantage of the interpretability of the decision tree. The three-stage process is shown above.

By use of this procedure, one can improve the extracted policy beyond the expert policy on certain navigation metrics. Below, find a video that demonstrates using XAI-N to improve a robot's navigation policy with regard to freezing and oscillation.
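
To illustrate why interpretability enables this kind of improvement, a tree policy can be rendered as readable rules and patched by hand. Everything in the sketch below, the rules, the thresholds, and the anti-oscillation patch, is a hypothetical example rather than the policy learned in our experiments.

    # Hypothetical tree-style policy written as rules, with a hand-written
    # patch that suppresses left/right oscillation. All values are made up.
    def patched_policy(obs, prev_action):
        dist_to_goal, angle_to_goal, min_obstacle_dist = obs
        if min_obstacle_dist < 0.5:
            action = "TURN_LEFT"       # veer away from a nearby obstacle
        elif angle_to_goal > 0.2:
            action = "TURN_RIGHT"
        elif angle_to_goal < -0.2:
            action = "TURN_LEFT"
        else:
            action = "FORWARD"
        # Patch: if this action would immediately reverse the previous turn,
        # move forward instead, breaking the left-right oscillation cycle.
        if {action, prev_action} == {"TURN_LEFT", "TURN_RIGHT"}:
            action = "FORWARD"
        return action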

If you want to use the JackalSimulator yourself, you can find it at https://github.com/AMR-/JackalCrowdEnv
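
A typical way to drive such a simulator is a standard Gym-style loop. The module name, environment ID, and API below are guesses for illustration only; consult the JackalCrowdEnv README for the real interface.

    # Hypothetical usage sketch; the module name and environment ID are
    # assumptions, not taken from the JackalCrowdEnv repository.
    import gym
    import jackal_crowd_env  # assumed to register the environment on import

    env = gym.make("JackalCrowd-v0")  # hypothetical environment ID
    obs, done = env.reset(), False
    while not done:
        action = env.action_space.sample()  # replace with a trained policy
        obs, reward, done, info = env.step(action)
    env.close()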

As part of this project, additional techniques for XAI will also be investigated.

This project aims to improve the state of the art in Explainable AI by enabling interpretable, decision-tree-based methods to be used in challenging, real-world scenarios. It brings closer a future in which these interpretable methods are applied not simply to toy scenarios or simulations but to complex problems facing society.

For more information, contact Aaron M. Roth at