@nvidia
Precise environmental perception is critical for #autonomousvehicle (AV) safety, especially when handling unseen conditions. In this episode of DRIVE Labs, we discuss a Vision Transformer model called SegFormer, which generates robust semantic segmentation while maintaining high efficiency. This video introduces the mechanism behind SegFormer that enables its robustness and efficiency.

00:00:00 - Robust Perception with SegFormer
00:00:05 - Why accuracy and robustness are important for developing autonomous vehicles
00:00:15 - What is SegFormer?
00:00:28 - The difference between CNN and Transformer models
00:01:23 - Testing semantic segmentation results on MB's Cityscapes dataset
00:02:09 - The impact of JPEG compression on SegFormer
00:02:27 - How SegFormer understands unseen conditions
00:02:41 - Learn more about segmentation for autonomous vehicle use cases

GitHub: github.com/NVlabs/SegFormer
Read more: arxiv.org/abs/2105.15203
Watch the full series here: n...
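
If you want to try SegFormer-style semantic segmentation yourself, below is a minimal inference sketch. It assumes the Hugging Face transformers port of SegFormer rather than the NVlabs repo linked above, and the checkpoint id, the input image path, and the resolution handling are illustrative assumptions, not the exact setup shown in the video.

```python
# Minimal SegFormer inference sketch (assumes the Hugging Face `transformers`
# port of SegFormer; the checkpoint id and image path below are assumptions).
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

checkpoint = "nvidia/segformer-b0-finetuned-cityscapes-1024-1024"  # assumed checkpoint id
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("street_scene.jpg").convert("RGB")  # any driving-scene image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_classes, H/4, W/4)

# Upsample logits to the input resolution and take the per-pixel class prediction.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # (H, W) map of class indices
print(segmentation.shape)
```

The class-index map can then be colorized or overlaid on the input frame to visualize the segmentation, similar to the results shown in the video.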

Credits
    Tags, Events, and Projects
    • autonomousvehicle
    • nvidiadrive