Despite significant advancements, collision-free navigation in autonomous driving remains challenging. On one hand, the perception module must interpret multimodal unstructured data and produce a structured representation of the environment. On the other hand, the navigation module must balance machine learning and motion planning to achieve efficient and effective control of the vehicle. We propose a novel framework combining context-aware multi-sensor perception and enhanced inverse reinforcement learning (EIRL) for autonomous driving. Our perception module not only achieves the highest mean Average Precision (mAP) scores across all test cases in the KITTI dataset, but also suppresses up to 15% more false positives than other recent methods. The EIRL module implements several attributes, including a non-uniform prior over features, reuse of model parameters for continual training, and learning from accidents. These attributes reduce the vehicle's collisions by up to 41%, improve training efficiency by 2.5x, and raise test scores by up to two orders of magnitude. Overall, our method enables the vehicle to drive 10x farther than other methods while avoiding both static and dynamic obstacles.
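To make the "non-uniform prior for features" attribute concrete, one common way to realize it is per-feature Gaussian regularization in a linear, MaxEnt-style IRL weight update. The sketch below is only an illustration under that assumption: the function name `eirl_step`, the feature names, and the specific prior values are all invented here, not taken from the paper's implementation.

```python
import numpy as np

def eirl_step(w, expert_feat, policy_feat, prior_mean, prior_precision, lr=0.1):
    """One gradient step of a linear, MaxEnt-style IRL reward-weight update
    with a non-uniform Gaussian prior on the feature weights (illustrative
    sketch; not the paper's actual algorithm).

    The likelihood gradient is the gap between expert and current-policy
    feature expectations; the prior term pulls each weight toward its prior
    mean with a per-feature strength -- the "non-uniform prior".
    """
    grad = (expert_feat - policy_feat) - prior_precision * (w - prior_mean)
    return w + lr * grad

# Toy example with 3 hypothetical features: [progress, jerk, collision-proximity].
expert_feat = np.array([1.0, 0.2, 0.0])   # feature expectations from demonstrations
policy_feat = np.array([0.5, 0.5, 0.5])   # feature expectations of the current policy
prior_mean = np.zeros(3)
# Non-uniform precisions: a weak prior on the progress weight, a strong prior
# tightly regularizing the collision-proximity weight.
prior_precision = np.array([0.1, 1.0, 5.0])

w = np.zeros(3)
for _ in range(50):
    w = eirl_step(w, expert_feat, policy_feat, prior_mean, prior_precision)
# After convergence, w rewards progress (w[0] > 0) and penalizes the
# over-expressed jerk and collision-proximity features (w[1], w[2] < 0).
```

The per-feature precision vector is what distinguishes this from a uniform L2 prior: safety-critical feature weights can be regularized far more strongly than comfort or progress weights.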