This is the code for the paper "Benchmarking Constraint Inference in Inverse Reinforcement Learning" published in ICLR 2023. Note that:
- Our benchmark relies on MuJoCo and CommonRoad. These environments are publicly available.
- The implementation of the baselines is based on the code from ICRL.
- Make sure you have downloaded & installed (mini)conda before proceeding.
- Create a conda environment and install the packages:
```
mkdir ./save_model
mkdir ./evaluate_model
conda env create -n cn-py37 python=3.7 -f python_environment.yml
conda activate cn-py37
```
- Install PyTorch (version 1.12.1) in the conda env.
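For example, inside the activated env (the exact version and build here are assumptions; pick the CUDA-specific wheel that matches your machine if needed):
```
conda activate cn-py37
# install PyTorch 1.12.1 (version assumed; adjust to your CUDA setup)
pip install torch==1.12.1
```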
Note that we provide pre-generated expert data for ease of use, but users can generate their own datasets by adding extra settings (we will show how to generate the expert data later).
```
cd ./data
# download expert_data.zip from the OneDrive sharing link into ./data
unzip expert_data.zip
rm expert_data.zip
cd ../
```
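To sanity-check the setup, you can list the extracted files (this does not assume any particular layout inside the archive, it only confirms the data was unpacked):
```
# list the contents of the data directory after unzipping
ls ./data
```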
Now we are ready to run the ICRL baselines on our benchmark. For more details, please follow the tutorials below (a rough launch sketch follows the list):
Part I. ICRL in Virtual Environment.
Part II. ICRL in Realistic Environment.
Part III. ICRL in Discrete Environment.
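As a rough sketch of what launching a baseline looks like (the script name, config path, and flags below are hypothetical placeholders; the per-part tutorials list the actual entry points and configs):
```
# hypothetical example: train an ICRL baseline with a chosen config file and random seed
python train_icrl.py --config config/example_env.yaml --seed 123
```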
If you find this benchmark helpful, please use the following citation:
```
@inproceedings{liu2023benchmarking,
  title={Benchmarking Constraint Inference in Inverse Reinforcement Learning},
  author={Guiliang Liu and Yudong Luo and Ashish Gaurav and Kasra Rezaee and Pascal Poupart},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=vINj_Hv9szL}
}
```