DiffE2E: Rethinking End-to-End Driving with a Hybrid Action Diffusion and Supervised Policy

1College of Automotive Engineering, Jilin University 2National Key Laboratory of Automotive Chassis Integration and Bionics, Jilin University

Visualization on the Navtest Benchmark

GIF 1
GIF 2
GIF 3
GIF 4

Video Demonstrations in CARLA

Town03

Town04

Town05

Town06

Abstract

End-to-end learning has emerged as a transformative paradigm in autonomous driving. However, the inherently multimodal nature of driving behaviors and the generalization challenges of long-tail scenarios remain critical obstacles to robust deployment. We propose DiffE2E, a diffusion-based end-to-end autonomous driving framework. It first performs multi-scale alignment of multi-sensor perception features through a hierarchical bidirectional cross-attention mechanism, then introduces a Transformer-based hybrid diffusion-supervision decoder trained with a collaborative paradigm that integrates the strengths of both diffusion and supervised policies. DiffE2E models a structured latent space in which diffusion captures the distribution of future trajectories while supervision enhances controllability and robustness. A global condition integration module deeply fuses perception features with high-level navigation targets, significantly improving the quality of trajectory generation, and a cross-attention mechanism enables efficient interaction between the integrated features and the hybrid latent variables, jointly optimizing the diffusion and supervision objectives for structured output generation and ultimately more robust control. Experiments demonstrate that DiffE2E achieves state-of-the-art performance on both the CARLA closed-loop and NAVSIM benchmarks. The integrated diffusion-supervision policy offers a generalizable paradigm for hybrid action representation, with strong potential to extend to broader domains such as embodied intelligence.
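To make the hybrid objective concrete, the following is a toy NumPy sketch of combining a diffusion (noise-prediction) loss over future trajectories with a supervised waypoint-regression loss. It is not the paper's implementation: the linear "denoiser", the linear supervised head, the noise schedule, and the loss weight `lam` are all placeholder assumptions chosen only to illustrate the two-term training signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: B trajectories of H future waypoints (x, y),
# conditioned on a fused perception/goal feature vector.
B, H, D_COND = 4, 8, 16
traj = rng.normal(size=(B, H, 2))    # ground-truth future trajectories
cond = rng.normal(size=(B, D_COND))  # global condition features (placeholder)

# Simple linear variance schedule (assumption: T = 100 diffusion steps).
T = 100
betas = np.linspace(1e-4, 2e-2, T)
alphas_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    """Forward diffusion: noise clean trajectories x0 to step t."""
    a = np.sqrt(alphas_bar[t])[:, None, None]
    s = np.sqrt(1.0 - alphas_bar[t])[:, None, None]
    return a * x0 + s * eps

# Toy "denoiser": linear map of (noisy traj, condition) -> predicted noise.
W_eps = rng.normal(scale=0.01, size=(H * 2 + D_COND, H * 2))

def predict_eps(x_t, cond):
    inp = np.concatenate([x_t.reshape(len(x_t), -1), cond], axis=1)
    return (inp @ W_eps).reshape(-1, H, 2)

# One training step's losses.
t = rng.integers(0, T, size=B)
eps = rng.normal(size=traj.shape)
x_t = q_sample(traj, t, eps)

# Diffusion branch: predict the injected noise.
eps_hat = predict_eps(x_t, cond)
loss_diff = np.mean((eps_hat - eps) ** 2)

# Supervised branch: directly regress waypoints from the condition.
W_sup = rng.normal(scale=0.01, size=(D_COND, H * 2))
traj_hat = (cond @ W_sup).reshape(B, H, 2)
loss_sup = np.mean((traj_hat - traj) ** 2)

lam = 0.5  # assumed weighting between the two objectives
loss = loss_diff + lam * loss_sup
```

In a real model the two linear maps would be a shared Transformer decoder, and the diffusion branch would be sampled iteratively at inference time; the point here is only that both losses act on the same conditioned latent inputs.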

Framework


The main architecture consists of a Transformer-based perception module and a Hybrid Diffusion and Supervision Decoder. The blue arrows indicate the data flow exclusively used for the CARLA benchmark, while the black arrows represent the data flow shared between both the CARLA and NAVSIM benchmarks.
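The bidirectional cross-attention used to align multi-sensor features can be sketched as each modality querying the other and residual-adding the result. This is a minimal single-head NumPy illustration, assuming flattened camera and LiDAR token sets of a common width; the token counts, dimension, and shared projection weights are placeholders, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(q_tokens, kv_tokens, Wq, Wk, Wv):
    """Single-head cross-attention: q_tokens attend to kv_tokens."""
    Q, K, V = q_tokens @ Wq, kv_tokens @ Wk, kv_tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return attn @ V

d = 32
cam = rng.normal(size=(64, d))    # camera tokens (assumed flattened features)
lidar = rng.normal(size=(48, d))  # LiDAR tokens (assumed BEV features)

Wq, Wk, Wv = (rng.normal(scale=d ** -0.5, size=(d, d)) for _ in range(3))

# Bidirectional exchange: each modality queries the other, with a
# residual connection preserving its own features.
cam_fused = cam + cross_attend(cam, lidar, Wq, Wk, Wv)
lidar_fused = lidar + cross_attend(lidar, cam, Wq, Wk, Wv)
```

A hierarchical version would repeat this exchange at several feature scales; here a single scale suffices to show the query/key-value roles swapping between sensors.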

Comparative Visualization of Different Methods
