forked from rlworkgroup/garage

A toolkit for reproducible reinforcement learning research


This branch is 714 commits behind rlworkgroup/garage:master.


garage

garage is a framework for developing and evaluating reinforcement learning algorithms. It includes a wide range of continuous control tasks plus implementations of algorithms.

garage is fully compatible with OpenAI Gym. All garage environments implement gym.Env, so all garage components can also be used with any environment implementing gym.Env.
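The compatibility contract is simply the gym.Env interface: reset() returns an initial observation, and step(action) returns an (observation, reward, done, info) tuple. The sketch below implements that interface in plain Python for a toy environment (the class and task are illustrative, not part of garage or Gym), showing the minimal surface any environment needs for garage components to consume it:

```python
class CornerWorld:
    """A toy environment exposing the gym.Env-style interface:
    reset() -> observation, step(action) -> (obs, reward, done, info).
    Illustrative only; a real environment would subclass gym.Env and
    declare observation_space and action_space."""

    def __init__(self, size=5):
        self.size = size   # positions 0 .. size-1; goal is the far corner
        self.pos = 0

    def reset(self):
        """Start a new episode at position 0 and return the observation."""
        self.pos = 0
        return self.pos

    def step(self, action):
        """action: 0 = stay, 1 = move right. Reward 1.0 on reaching the goal."""
        if action == 1:
            self.pos = min(self.size - 1, self.pos + 1)
        done = self.pos == self.size - 1
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}


# Roll out one episode with a fixed "always move right" policy.
env = CornerWorld()
obs = env.reset()
total_reward = 0.0
done = False
while not done:
    obs, reward, done, info = env.step(1)
    total_reward += reward
```

Because garage algorithms only call this interface, the same rollout loop works unchanged whether the environment comes from garage, from Gym, or from user code.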

garage officially supports only Python 3.5+.

garage comes with support for running reinforcement learning experiments on an EC2 cluster, and tools for visualizing the results. See the documentation for details.

garage uses TensorFlow as its neural network framework. TensorFlow-based modules can be found under garage/tf.

Documentation

Documentation is available online at https://rlgarage.readthedocs.org/en/latest/.

Citing garage

If you use garage for academic research, you are highly encouraged to cite the paper on the original rllab implementation, "Benchmarking Deep Reinforcement Learning for Continuous Control" (Duan et al., ICML 2016).
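For convenience, a standard BibTeX entry for that paper (page numbers taken from the ICML 2016 proceedings listing):

```bibtex
@inproceedings{duan2016benchmarking,
  title     = {Benchmarking Deep Reinforcement Learning for Continuous Control},
  author    = {Duan, Yan and Chen, Xi and Houthooft, Rein and Schulman, John and Abbeel, Pieter},
  booktitle = {International Conference on Machine Learning},
  pages     = {1329--1338},
  year      = {2016}
}
```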

Credits

garage is based on a predecessor project called rllab. The garage project is grateful for the contributions of the original rllab authors, and hopes to continue advancing the state of reproducibility in RL research in the same spirit.

rllab was originally developed by Rocky Duan (UC Berkeley/OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley/OpenAI), John Schulman (UC Berkeley/OpenAI), and Pieter Abbeel (UC Berkeley/OpenAI).
