Despite significant advances, collision-free navigation for autonomous driving remains challenging. On one hand, the perception module must interpret multimodal unstructured data and produce a structured representation of the environment. On the other hand, the navigation module must balance machine learning and motion planning to achieve efficient and effective control of the vehicle. We propose a novel framework combining context-aware multi-sensor perception with inverse reinforcement learning using hybrid weight tuning (IRL-HWT) for autonomous maneuvering. IRL-HWT incorporates several attributes: a non-uniform prior over features, hybrid weight tuning based on trust-region optimization, parameter reuse for continuous training, and learning from accidents. Together, these attributes reduce the vehicle's collisions by up to 41%, improve training efficiency by 2.5x, and yield test scores up to two orders of magnitude higher. Overall, our method enables the vehicle to drive 10x further than competing methods while avoiding both static and dynamic obstacles.
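The full hybrid weight tuning procedure is described in the paper; a minimal sketch of a trust-region-constrained IRL reward-weight update is shown below. The function name, the hand-crafted features, and the radius `delta` are illustrative assumptions for this sketch, not the authors' implementation:

```python
import numpy as np

def irl_weight_update(w, expert_feats, policy_feats, delta=0.1):
    """One IRL reward-weight update with a trust-region cap on the step.

    The update direction is the classic feature-matching gradient (expert
    feature expectations minus current-policy feature expectations); the
    step is then shrunk so its Euclidean norm never exceeds the
    trust-region radius `delta`, keeping each update conservative.
    """
    step = expert_feats - policy_feats        # feature-matching gradient
    step_norm = np.linalg.norm(step)
    if step_norm > delta:                     # project step into trust region
        step = step * (delta / step_norm)
    return w + step

# Toy example with 3 hypothetical features (e.g. speed, lane offset, clearance)
w = np.zeros(3)
expert = np.array([0.8, 0.1, 0.9])            # expert feature expectations
policy = np.array([0.2, 0.5, 0.3])            # current policy's expectations
w_new = irl_weight_update(w, expert, policy, delta=0.1)
```

Capping the step norm plays the same stabilizing role as the trust-region constraint named in the abstract: it prevents a single noisy feature-expectation estimate from moving the reward weights too far in one iteration.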
A demo video can be found here.
Slides can be found here (TBD).
The GitHub repository is located [here]() (link to be posted after paper acceptance).