
Model-Based Meta-Policy Optimization (MB-MPO)

This repository contains code corresponding to the paper "Model-Based Reinforcement Learning via Meta-Policy Optimization".

Dependencies

This code is based on the rllab code repository as well as the maml_rl repository and can be installed in the same way (see below). The codebase is not necessarily backwards compatible with rllab. It uses the TensorFlow version of rllab, which can be found in the folder sandbox, so be sure to install TensorFlow v1.0+. Implementations of PPO, ACKTR, and DDPG from OpenAI Baselines, which are used as baselines in the paper, are also included.

Installation

To install all necessary packages and dependencies, please follow the instructions in the rllab documentation. Also be aware that running the experiments requires version 1.3 of the MuJoCo physics simulator, which requires a license.

Usage

The core components of our code, such as the MB-MPO algorithm itself, can be found in the directory sandbox/ours/.

Scripts for running the experiments in the paper are located in experiments/run_scripts, with one folder of run scripts per experiment.

For instance, to run MB-MPO on your local machine, execute the following command from the root of this repository:

python experiments/run_scripts/mb_mpo_train.py --mode local

The hyperparameters and the environment(s) on which to run the experiments can be specified in the same file.
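As a rough illustration of what such a configuration looks like, rllab-style run scripts typically collect hyperparameters in a variant dictionary and launch training via run_experiment_lite. The variant keys and values below are illustrative assumptions, not the repository's actual settings; consult experiments/run_scripts/mb_mpo_train.py for the real ones.

from rllab.misc.instrument import run_experiment_lite

def run_task(variant):
    # Build the environment, the ensemble of dynamics models, and the
    # meta-policy here, then train with the hyperparameters in `variant`.
    ...

# Illustrative hyperparameters only; the actual keys live in the run script.
variant = {
    'num_models': 5,        # size of the dynamics-model ensemble
    'meta_batch_size': 5,   # models (tasks) sampled per meta-update
    'discount': 0.99,
}

run_experiment_lite(
    run_task,
    mode='local',           # corresponds to the --mode local flag above
    variant=variant,
    exp_prefix='mb-mpo',
    snapshot_mode='last',
)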

The results and logs of the experiment run are saved into the folder data/local/.
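Assuming rllab's standard logger output, each run folder under data/local/ contains a progress.csv with per-iteration diagnostics. A quick sketch for inspecting it (the folder name and column names are placeholders):

import pandas as pd

# The path and column names below are placeholders; check your run folder
# and the CSV header for the diagnostics actually logged.
progress = pd.read_csv('data/local/<exp_name>/progress.csv')
print(progress.columns.tolist())      # list the available diagnostics
print(progress['AverageReturn'].tail())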

rllab

rllab is a framework for developing and evaluating reinforcement learning algorithms. It includes a wide range of continuous control tasks as well as implementations of many standard reinforcement learning algorithms.

rllab is fully compatible with OpenAI Gym. See the rllab documentation for instructions and examples.
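As a minimal sketch, a Gym environment can be wrapped for use with rllab algorithms via the GymEnv wrapper (the environment name is just an example):

from rllab.envs.gym_env import GymEnv
from rllab.envs.normalized_env import normalize

# Wrap an OpenAI Gym environment so that rllab algorithms can consume it;
# normalize() rescales the action space as rllab policies expect.
env = normalize(GymEnv("Pendulum-v0"))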

rllab only officially supports Python 3.5+. For an older snapshot of rllab sitting on Python 2, please use the py2 branch.

rllab comes with support for running reinforcement learning experiments on an EC2 cluster, and tools for visualizing the results. See the documentation for details.

The main modules use Theano as the underlying framework, and we have support for TensorFlow under sandbox/rocky/tf.
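For instance, a minimal TRPO setup on the TensorFlow code path might look like the following sketch (class names follow the sandbox/rocky/tf layout; the hyperparameter values are illustrative):

from sandbox.rocky.tf.algos.trpo import TRPO
from sandbox.rocky.tf.envs.base import TfEnv
from sandbox.rocky.tf.policies.gaussian_mlp_policy import GaussianMLPPolicy
from rllab.baselines.linear_feature_baseline import LinearFeatureBaseline
from rllab.envs.gym_env import GymEnv
from rllab.envs.normalized_env import normalize

# TfEnv adapts an rllab environment to the TensorFlow stack.
env = TfEnv(normalize(GymEnv("Pendulum-v0")))

policy = GaussianMLPPolicy(name="policy", env_spec=env.spec, hidden_sizes=(32, 32))
baseline = LinearFeatureBaseline(env_spec=env.spec)

algo = TRPO(
    env=env,
    policy=policy,
    baseline=baseline,
    batch_size=4000,      # illustrative values
    max_path_length=100,
    n_itr=40,
    discount=0.99,
)
algo.train()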

Documentation

Documentation is available online: https://rllab.readthedocs.org/en/latest/.

Citing rllab

If you use rllab for academic research, you are highly encouraged to cite the following paper:

Yan Duan, Xi Chen, Rein Houthooft, John Schulman, Pieter Abbeel. "Benchmarking Deep Reinforcement Learning for Continuous Control". Proceedings of the 33rd International Conference on Machine Learning (ICML), 2016.

Credits

rllab was originally developed by Rocky Duan (UC Berkeley / OpenAI), Peter Chen (UC Berkeley), Rein Houthooft (UC Berkeley / OpenAI), John Schulman (UC Berkeley / OpenAI), and Pieter Abbeel (UC Berkeley / OpenAI). The library continues to be jointly developed by people at OpenAI and UC Berkeley.

Slides

Slides presented at ICML 2016: https://www.dropbox.com/s/rqtpp1jv2jtzxeg/ICML2016_benchmarking_slides.pdf?dl=0
