Abstract
We present a novel method for reliable robot navigation in uneven outdoor terrain. Our approach employs a fully-trained Deep Reinforcement Learning (DRL) network that uses elevation maps of the environment, the robot's pose, and the goal as inputs to compute an attention mask of the environment. The attention mask, computed using channel and spatial attention modules and a novel reward function, identifies regions of reduced stability in the elevation map. We continuously compute and update a navigation cost-map that encodes the elevation information, i.e., the level of flatness of the terrain, using the attention mask. We then generate locally least-cost waypoints on the cost-map and compute the final dynamically feasible trajectory using another DRL-based method. Our approach guarantees safe, locally least-cost paths and dynamically feasible robot velocities in uneven terrain. We observe a 35.18% increase in success rate and a 26.14% decrease in the cumulative elevation gradient of the robot's trajectory compared to prior navigation methods in high-elevation regions. We evaluate our method on a Husky robot in real-world uneven terrain (~4 m of elevation gain) and demonstrate its benefits.
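To make the cost-map and waypoint-selection steps concrete, here is a minimal Python sketch. It is not the paper's implementation: the attention mask is taken as a given array (standing in for the DRL network's output), the cost is a simple elevation-gradient magnitude weighted by that mask, and `least_cost_waypoint` greedily picks the cheapest nearby cell that reduces the straight-line distance to the goal. All function names and parameters are illustrative assumptions.

```python
import numpy as np

def navigation_costmap(elevation, attention, grad_weight=1.0):
    """Build a toy navigation cost-map from an elevation map.

    elevation : 2-D array of terrain heights (m).
    attention : 2-D array in [0, 1] highlighting reduced-stability
                regions (stand-in for the DRL attention mask).
    The cost combines the local elevation-gradient magnitude (a proxy
    for level of flatness) with the attention weight, so steep and
    low-stability cells cost more.
    """
    gy, gx = np.gradient(elevation)
    grad_mag = np.hypot(gx, gy)
    return grad_weight * grad_mag * (1.0 + attention)

def least_cost_waypoint(costmap, robot, goal, radius=3):
    """Pick the cheapest cell within `radius` of the robot that also
    reduces the straight-line distance to the goal (a greedy local
    search, not the paper's waypoint generator)."""
    r0, c0 = robot
    d_goal = np.hypot(goal[0] - r0, goal[1] - c0)
    best, best_cell = None, robot
    for r in range(max(0, r0 - radius), min(costmap.shape[0], r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(costmap.shape[1], c0 + radius + 1)):
            # Only consider cells that make progress toward the goal.
            if np.hypot(goal[0] - r, goal[1] - c) >= d_goal:
                continue
            if best is None or costmap[r, c] < best:
                best, best_cell = costmap[r, c], (r, c)
    return best_cell
```

For example, on a flat grid with a steep ridge along one column, the selected waypoint moves the robot toward the goal through the flat (zero-cost) cells rather than over the ridge.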
Paper
TERP: Reliable Planning in Uneven Outdoor Environments using Deep Reinforcement Learning, ICRA 2022.
Kasun Weerakoon, Adarsh Jagan Sathyamoorthy, Utsav Patel, Dinesh Manocha
Video
Please cite our work if you find it useful:
@inproceedings{weerakoon2022terp,
  title={{TERP}: Reliable planning in uneven outdoor environments using deep reinforcement learning},
  author={Weerakoon, Kasun and Sathyamoorthy, Adarsh Jagan and Patel, Utsav and Manocha, Dinesh},
  booktitle={2022 International Conference on Robotics and Automation (ICRA)},
  pages={9447--9453},
  year={2022},
  organization={IEEE}
}