Zeyu Feng1, Hao Luan1, Kevin Yuchen Ma1, Harold Soh1,2
1Department of Computer Science, School of Computing, National University of Singapore, 2Smart Systems Institute, National University of Singapore
[Video overview: doppler.mp4]
This repository contains the implementation of the offline hierarchical planning method under LTL constraints proposed in the ICRA 2025 paper Diffusion Meets Options: Hierarchical Generative Skill Composition for Temporally-Extended Tasks.
If you find this repo or the ideas presented in our paper useful for your research, please consider citing our paper.
@INPROCEEDINGS{11127641,
author = {Feng, Zeyu and Luan, Hao and Ma, Kevin Yuchen and Soh, Harold},
booktitle = {2025 IEEE International Conference on Robotics and Automation (ICRA)},
title = {Diffusion Meets Options: Hierarchical Generative Skill Composition for Temporally-Extended Tasks},
year = {2025},
volume = {},
number = {},
pages = {10854-10860},
doi = {10.1109/ICRA55743.2025.11127641}
}
Safe and successful deployment of robots requires not only the ability to generate complex plans but also the capacity to frequently replan and correct execution errors. This paper addresses the challenge of long-horizon trajectory planning under temporally extended objectives in a receding-horizon manner. To this end, we propose DOPPLER, a data-driven hierarchical framework that generates and updates plans based on instructions specified in linear temporal logic (LTL). Our method decomposes temporal tasks into a chain of options via hierarchical reinforcement learning from offline non-expert datasets, and leverages diffusion models to generate options with low-level actions. We devise a determinantal-guided posterior sampling technique for batch generation, which improves the speed and diversity of diffusion-generated options, leading to more efficient querying. Experiments on robot navigation and manipulation tasks demonstrate that DOPPLER can generate sequences of trajectories that progressively satisfy the specified formulae for obstacle avoidance and sequential visitation.
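To give a feel for the determinantal-guided idea, the sketch below greedily selects a diverse subset of candidate trajectories by maximizing the log-determinant of an RBF similarity kernel, as in determinantal point process (DPP) MAP inference. This is a toy illustration of the diversity objective only; the function name, kernel choice, and greedy procedure are our assumptions and are not the paper's actual posterior-sampling implementation.

```python
import numpy as np

def select_diverse_batch(samples, k, sigma=1.0):
    """Greedily pick k samples whose RBF-kernel similarity matrix has
    maximal (log-)determinant, favoring mutually dissimilar samples.

    Toy DPP-style selection, not DOPPLER's guided-sampling code.
    """
    flat = np.asarray(samples, dtype=float).reshape(len(samples), -1)
    # Pairwise squared distances -> RBF similarity kernel L.
    d2 = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    L = np.exp(-d2 / (2 * sigma**2))

    selected, remaining = [], list(range(len(flat)))
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in remaining:
            idx = selected + [i]
            # slogdet of the kernel submatrix for the candidate subset.
            sign, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if sign > 0 and logdet > best_logdet:
                best, best_logdet = i, logdet
        if best is None:
            break  # remaining candidates are near-duplicates
        selected.append(best)
        remaining.remove(best)
    return selected
```

For example, given two near-duplicate pairs of 1-D "trajectories", the greedy selection picks one representative from each pair rather than two near-identical ones.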
For training and testing on the Maze2d and PushT tasks, please see the specific instructions in the Maze2d and PushT folders, respectively.
We have released pre-trained diffusion models and critic networks for both tasks. You can download them here: Pretrained_Models_GoogleDrive.
To download our augmented dataset of trajectories for the PushT task, go to: PushT_GoogleDrive.
This repository is released under the MIT license. See LICENSE for additional details.
- Our Maze2d and PushT implementations are based on Diffuser, Diffusion Policy and LTLDoG.
