JonathanYang0127/iris_robot_learning


README last updated on: 01/24/2018

railrl

Reinforcement learning framework with implementations of several algorithms.

To get started, check out the example scripts in examples/.

Installation

Some dependencies:

  • sudo apt-get install swig

Create Conda Env

Install and use the included Anaconda environment:

$ conda env create -f docker/railrl/railrl-env.yml
$ source activate railrl-env
(railrl-env) $ # Ready to run examples/ddpg_cheetah_no_doodad.py

Alternatively, you can use the included Docker image.

Download Simulation Env Code

  • multiworld (contains environments): git clone https://github.com/vitchyr/multiworld

Testing

More tests are in progress. Run the existing tests with:

nose2 -v -B -s tests/regression
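Tests follow the standard unittest shape that nose2 discovers. As a minimal sketch (the class name and what it checks are hypothetical, not taken from this repo):

```python
# Hypothetical regression test in the shape nose2 discovers under tests/.
# TestReplayBufferShapes and its contents are illustrative only.
import unittest

class TestReplayBufferShapes(unittest.TestCase):
    def test_sampled_batch_has_requested_size(self):
        # Stand-in for a railrl component: a trivial buffer of transitions.
        buffer = list(range(100))
        batch = buffer[:32]
        self.assertEqual(len(batch), 32)
```

nose2 discovers any test_* methods on unittest.TestCase subclasses, so tests written this way run under the command above.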

(Optional) Install doodad

I recommend installing doodad to launch jobs. Some of its nice features include:

  • Easily switch between running code locally, on a remote computer with Docker, or on EC2 with Docker
  • Easily add your dependencies that can't be installed via pip (e.g. you borrowed someone's code)

If you install doodad, also modify CODE_DIRS_TO_MOUNT in config.py to include:

  • Path to rllab directory
  • Path to railrl directory
  • Path to any other code you want to use

You'll probably also need to update the other variables besides the Docker image/instance settings.
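For instance, the mount list might end up looking like this (only the CODE_DIRS_TO_MOUNT name comes from config.py; all paths are placeholders for your machine):

```python
import os.path as osp

HOME = osp.expanduser("~")

# Directories doodad will mount into the container; paths are examples only.
CODE_DIRS_TO_MOUNT = [
    osp.join(HOME, "code/rllab"),   # path to rllab directory
    osp.join(HOME, "code/railrl"),  # path to railrl directory
    # ...plus any other code you want mounted (e.g. borrowed repos)
]
```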

Setup Config File

You must set up the config file for launching experiments, providing paths to your code and data directories. Inside railrl/config/launcher_config.py, fill in the appropriate paths. You can use railrl/config/launcher_config_template.py as a reference.

cp railrl/launchers/config-template.py railrl/launchers/config.py
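A sketch of what the copied config might contain. LOCAL_LOG_DIR is the name the rest of this README refers to; the other variable name and all paths are illustrative placeholders:

```python
import os.path as osp

# Where experiment results are written; referenced below as LOCAL_LOG_DIR.
LOCAL_LOG_DIR = osp.expanduser("~/railrl-data")

# Example of a path-style setting you would point at your own checkout.
DOODAD_PATH = osp.expanduser("~/code/doodad")  # illustrative name
```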

Visualizing a policy and seeing results

During training, results are saved under

LOCAL_LOG_DIR/<exp_prefix>/<foldername>

  • LOCAL_LOG_DIR is the directory set by railrl.launchers.config.LOCAL_LOG_DIR
  • <exp_prefix> is the prefix given to setup_logger.
  • <foldername> is auto-generated based on exp_prefix.

Inside this folder, you should see a file called params.pkl. To visualize a policy, run

(railrl) $ python scripts/sim_policy LOCAL_LOG_DIR/<exp_prefix>/<foldername>/params.pkl
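Under the hood, visualization amounts to unpickling the snapshot and querying its policy for actions. A self-contained sketch, with a dummy policy standing in for a trained one (the "policy" key and this whole setup are illustrative, not the exact snapshot format):

```python
import io
import pickle
import random

class DummyPolicy:
    """Stand-in for a trained policy stored in params.pkl."""
    def get_action(self, obs):
        return random.uniform(-1.0, 1.0)

# Pretend contents of params.pkl; training would write this to disk.
snapshot = {"policy": DummyPolicy()}
buf = io.BytesIO()
pickle.dump(snapshot, buf)
buf.seek(0)

# What a sim_policy-style script does: load the snapshot, pull out the
# policy, and query it for an action at each step of a rollout.
loaded = pickle.load(buf)
policy = loaded["policy"]
actions = [policy.get_action(obs=None) for _ in range(5)]
```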

If you have rllab installed, you can also visualize the results using rllab's viskit. tl;dr, run:

python rllab/viskit/frontend.py LOCAL_LOG_DIR/<exp_prefix>/

Add paths

export PYTHONPATH=$PYTHONPATH:/path/to/multiworld/repo
export PYTHONPATH=$PYTHONPATH:/path/to/doodad/repo
export PYTHONPATH=$PYTHONPATH:/path/to/viskit/repo
export PYTHONPATH=$PYTHONPATH:/path/to/railrl-private/repo

Credit

This repository was initially developed primarily by Vitchyr Pong until July 2021, at which point it was transferred to the RAIL Berkeley organization; it is now primarily maintained by Ashvin Nair, along with other major collaborators and contributors.

A lot of the coding infrastructure is based on rllab. The serialization and logger code are essentially carbon copies of the rllab versions.

The Dockerfile is based on the OpenAI mujoco-py Dockerfile.

The SMAC code builds off of the PEARL code, which built off of an older RLKit version.

About

Robot learning repository for IRIS robots.
