Meta-Inverse Reinforcement Learning with Probabilistic Context Variables

Lantao Yu*, Tianhe Yu*, Chelsea Finn, Stefano Ermon.
The 33rd Conference on Neural Information Processing Systems (NeurIPS 2019).
[Paper] [Website]

Usage

Requirement: The rllab package used in this project is provided here.

To get expert trajectories for downstream tasks:

python scripts/maze_data_collect.py
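
Conceptually, this step rolls out an expert in the maze environment and stores per-trajectory (observation, action) pairs for later meta-IRL training. Below is a minimal sketch of that idea only; the `env`, `expert_policy`, and output file name are hypothetical placeholders, and the actual script's interface and output format may differ.

```python
import pickle
import numpy as np

def collect_trajectories(env, expert_policy, n_trajs=100, horizon=100):
    """Roll out an expert policy and record (observation, action) pairs."""
    trajectories = []
    for _ in range(n_trajs):
        obs_list, act_list = [], []
        obs = env.reset()
        for _ in range(horizon):
            act = expert_policy(obs)
            obs_list.append(obs)
            act_list.append(act)
            obs, _, done, _ = env.step(act)
            if done:
                break
        trajectories.append({
            "observations": np.array(obs_list),
            "actions": np.array(act_list),
        })
    return trajectories

# Hypothetical usage: save the demonstrations for the meta-IRL step.
# with open("expert_trajs.pkl", "wb") as f:
#     pickle.dump(collect_trajectories(env, expert_policy), f)
```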

After collecting expert trajectories, run Meta-Inverse RL to learn context-dependent reward functions:

python scripts/maze_wall_meta_irl.py
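
The learned reward is conditioned on a probabilistic context variable inferred from a demonstration, so the same state-action pair can be scored differently depending on the inferred task. The toy illustration below uses a made-up linear parameterization purely to show the conditioning; the actual reward trained by the script is not this function.

```python
import numpy as np

def context_conditioned_reward(theta, obs, act, context):
    """Toy reward r_theta(s, a, m): a linear function of the state, the
    action, and a latent context variable m inferred from a demonstration."""
    features = np.concatenate([obs, act, context])
    return float(np.dot(theta, features))

# The same (state, action) pair receives different rewards under different
# inferred contexts (e.g., depending on which goal the expert was pursuing).
obs = np.array([0.2, -0.5])
act = np.array([1.0, 0.0])
theta = np.array([1.0, -1.0, 0.5, 0.5, 2.0, -2.0])  # made-up parameters
print(context_conditioned_reward(theta, obs, act, np.array([1.0, 0.0])))
print(context_conditioned_reward(theta, obs, act, np.array([0.0, 1.0])))
```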

We provide a pretrained IRL model here, which the following scripts load by default.

To visualize the context-dependent reward function (Figure 2 in the paper):

python scripts/maze_visualize_reward.py

To use the context-dependent reward function to train a new policy under new dynamics:

python scripts/maze_wall_meta_irl_test.py
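
The idea in this step is to train a new policy with the learned context-conditioned reward in place of the environment's reward, even though the dynamics differ from those seen in the demonstrations. The sketch below shows reward relabeling during rollouts; `env`, `policy`, and `reward_fn` are hypothetical arguments, and the actual script builds on rllab rather than this loop.

```python
import numpy as np

def rollout_with_learned_reward(env, policy, reward_fn, context, horizon=100):
    """Collect one episode in which the environment's reward is replaced by
    the learned context-conditioned reward, so a new policy can be trained
    under dynamics that differ from the demonstration environment."""
    obs = env.reset()
    path = {"observations": [], "actions": [], "rewards": []}
    for _ in range(horizon):
        act = policy(obs)
        next_obs, _, done, _ = env.step(act)  # environment reward is ignored
        path["observations"].append(obs)
        path["actions"].append(act)
        path["rewards"].append(reward_fn(obs, act, context))
        obs = next_obs
        if done:
            break
    return {k: np.array(v) for k, v in path.items()}
```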