AC-DiT: Adaptive Coordination Diffusion Transformer for Mobile Manipulation

Sixiang Chen1,4*, Jiaming Liu1*, Siyuan Qian1,4*, Han Jiang2, Lily Li1, Renrui Zhang3, Zhuoyang Liu1, Chenyang Gu1,4, Chengkai Hou1,4, Pengwei Wang4, Zhongyuan Wang4, Shanghang Zhang1,4
1State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University, 2Nanjing University (NJU), 3The Chinese University of Hong Kong (CUHK), 4Beijing Academy of Artificial Intelligence (BAAI)

Demonstrations (Task 1 & Task 2)

Demonstrations (Task 3 & Task 4)

Abstract

Recently, mobile manipulation has attracted increasing attention for enabling language-conditioned robotic control in household tasks. However, existing methods still struggle to coordinate the mobile base and the manipulator, primarily due to two limitations. On the one hand, they fail to explicitly model the influence of the mobile base on manipulator control, which easily leads to error accumulation under the system's high degrees of freedom. On the other hand, they treat the entire mobile manipulation process with a single visual observation modality (e.g., all 2D or all 3D), overlooking the distinct multimodal perception requirements at different stages of mobile manipulation.

To address this, we propose the Adaptive Coordination Diffusion Transformer (AC-DiT), which enhances coordination between the mobile base and the manipulator for end-to-end mobile manipulation. First, since the motion of the mobile base directly influences the manipulator's actions, we introduce a mobility-to-body conditioning mechanism that guides the model to first extract base motion representations, which are then used as a contextual prior for predicting whole-body actions. This enables whole-body control that accounts for the potential impact of the mobile base's motion. Second, to meet the perception requirements at different stages of mobile manipulation, we design a perception-aware multimodal conditioning strategy that dynamically adjusts the fusion weights between 2D visual images and 3D point clouds, yielding visual features tailored to the current perceptual needs. This allows the model to, for example, adaptively rely more on 2D inputs when semantic information is crucial for action prediction, and place greater emphasis on 3D geometric information when precise spatial understanding is required.

We empirically validate AC-DiT through extensive experiments on both simulated and real-world mobile manipulation tasks, demonstrating superior performance compared to existing methods.

Video

Overview

[Figure: AC-DiT teaser]

AC-DiT, the proposed end-to-end mobile manipulation framework, enhances the coordination between the mobile base and the manipulator by introducing two key mechanisms: mobility-to-body conditioning and perception-aware multimodal adaptation. The former conditions action prediction on how upcoming mobile base movements may affect manipulator control, thereby reducing error accumulation. The latter constructs multimodal features tailored to the perception requirements at different stages of the mobile manipulation process. Under this paradigm, AC-DiT demonstrates superior performance in both simulation and real-world environments.

AC-DiT Framework

[Figure: AC-DiT framework overview]

We first train the modules in the grey-shaded region under the supervision of mobile base actions, allowing the lightweight mobility action head to learn to extract latent mobility features. After this, we optimize the entire AC-DiT model, enabling the mobile manipulation action head to predict both mobile base and manipulator actions.

With the Mobility-to-Body Conditioning mechanism, this action head conditions on the extracted latent mobility features, allowing whole-body action prediction to account for the influence of mobile base motion.
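To make the two-stage schedule and the Mobility-to-Body Conditioning mechanism concrete, the sketch below shows a minimal PyTorch-style training loop. All module names, dimensions, and the plain regression losses are illustrative assumptions; the actual AC-DiT heads are diffusion-transformer-based and are not reproduced here.

    import torch
    import torch.nn as nn

    # Minimal sketch (illustrative names/dimensions; plain regression losses
    # stand in for the actual diffusion objective).
    feat_dim, latent_dim = 256, 128
    base_dim, body_dim = 3, 17                     # e.g. base: (x, y, yaw); body: base + arm

    encoder = nn.Linear(512, feat_dim)             # stands in for the visual-language encoder
    mobility_head = nn.Sequential(                 # lightweight mobility action head
        nn.Linear(feat_dim, latent_dim), nn.ReLU(), nn.Linear(latent_dim, base_dim))
    whole_body_head = nn.Linear(feat_dim + latent_dim, body_dim)  # conditions on mobility latent

    obs = torch.randn(8, 512)                      # pooled multimodal observation features
    base_gt, body_gt = torch.randn(8, base_dim), torch.randn(8, body_dim)

    # Stage 1: supervise the encoder and mobility head with mobile base actions,
    # so the lightweight head learns to extract latent mobility features.
    opt1 = torch.optim.AdamW(list(encoder.parameters()) + list(mobility_head.parameters()), lr=1e-4)
    loss_base = nn.functional.mse_loss(mobility_head(encoder(obs)), base_gt)
    opt1.zero_grad(); loss_base.backward(); opt1.step()

    # Stage 2: optimize the entire model; the whole-body head takes the latent
    # mobility feature as an extra condition (mobility-to-body conditioning).
    params = [p for m in (encoder, mobility_head, whole_body_head) for p in m.parameters()]
    opt2 = torch.optim.AdamW(params, lr=1e-4)
    feat = encoder(obs)
    mobility_latent = mobility_head[:-1](feat)     # intermediate activation used as the latent
    body_pred = whole_body_head(torch.cat([feat, mobility_latent], dim=-1))
    loss = nn.functional.mse_loss(body_pred, body_gt)
    opt2.zero_grad(); loss.backward(); opt2.step()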

Meanwhile, the Perception-Aware Multimodal Adaptation mechanism enables this action head to adaptively assign different importance weights to various visual input features, resulting in a perception-aware visual condition tailored to the perception needs of different manipulation stages.
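The sketch below illustrates one way such adaptive fusion could be realized. Conditioning the gating network on the language embedding, as well as all names and dimensions, are assumptions for illustration rather than the paper's exact design.

    import torch
    import torch.nn as nn

    # Hypothetical sketch of perception-aware multimodal adaptation: a small
    # gating network predicts importance weights for the 2D and 3D streams;
    # their weighted sum serves as the perception-aware visual condition.
    feat_dim, lang_dim = 256, 512

    gate = nn.Sequential(nn.Linear(lang_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    feat_2d = torch.randn(8, feat_dim)   # pooled 2D image features (semantics)
    feat_3d = torch.randn(8, feat_dim)   # pooled 3D point-cloud features (geometry)
    lang_emb = torch.randn(8, lang_dim)  # language / task embedding

    weights = gate(lang_emb).softmax(dim=-1)                         # (B, 2): [w_2d, w_3d]
    visual_cond = weights[:, :1] * feat_2d + weights[:, 1:] * feat_3d
    # visual_cond is fed to the mobile manipulation action head; intuitively,
    # w_3d should grow when precise spatial understanding is required.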

BibTeX

@misc{chen2025acditadaptivecoordinationdiffusion,
      title={AC-DiT: Adaptive Coordination Diffusion Transformer for Mobile Manipulation}, 
      author={Sixiang Chen and Jiaming Liu and Siyuan Qian and Han Jiang and Lily Li and Renrui Zhang and Zhuoyang Liu and Chenyang Gu and Chengkai Hou and Pengwei Wang and Zhongyuan Wang and Shanghang Zhang},
      year={2025},
      eprint={2507.01961},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2507.01961}, 
}