noreward-rl

[ICML 2017] TensorFlow code for Curiosity-driven Exploration for Deep Reinforcement Learning


Curiosity-driven Exploration by Self-supervised Prediction

In ICML 2017
Project website: http://pathak22.github.io/noreward-rl/
Demo video: http://pathak22.github.io/noreward-rl/index.html#demoVideo

Deepak Pathak, Pulkit Agrawal, Alexei A. Efros, Trevor Darrell
University of California, Berkeley

This is a TensorFlow-based implementation of our ICML 2017 paper on curiosity-driven exploration for reinforcement learning. The idea is to train the agent with an intrinsic, curiosity-based reward, computed by an Intrinsic Curiosity Module (ICM), when extrinsic rewards from the environment are sparse. Surprisingly, ICM works even when no rewards are available from the environment at all, in which case the agent learns to explore purely out of curiosity: 'RL without rewards'. If you find this work useful in your research, please cite:

@inproceedings{pathakICMl17curiosity,
    Author = {Pathak, Deepak and Agrawal, Pulkit and
              Efros, Alexei A. and Darrell, Trevor},
    Title = {Curiosity-driven Exploration by Self-supervised Prediction},
    Booktitle = {International Conference on Machine Learning ({ICML})},
    Year = {2017}
}
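
For intuition about the method itself: the ICM learns a forward model that predicts the features of the next state, and the curiosity reward is that model's prediction error. Below is a minimal NumPy sketch of the reward, not the repo's actual TensorFlow code; the scale factor eta follows the paper's notation (its value here is an arbitrary placeholder), and 288 is the paper's feature-embedding size.

import numpy as np

def intrinsic_reward(phi_next, phi_next_pred, eta=0.01):
    # curiosity reward: scaled squared error of the forward model's
    # prediction of the next state's features phi(s_{t+1})
    return 0.5 * eta * np.sum((phi_next_pred - phi_next) ** 2)

# The agent maximizes extrinsic + intrinsic reward; with --noReward the
# extrinsic term is dropped and curiosity alone drives exploration.
phi_next = np.random.randn(288)                         # features of s_{t+1}
phi_next_pred = phi_next + 0.1 * np.random.randn(288)   # forward-model output
print(intrinsic_reward(phi_next, phi_next_pred))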

1) Installation and Usage

  1. This code is based on TensorFlow. To install, run these commands:
# you might not need many of these, e.g., fceux is only for mario
sudo apt-get install -y python-numpy python-dev cmake zlib1g-dev libjpeg-dev xvfb \
libav-tools xorg-dev python-opengl libboost-all-dev libsdl2-dev swig python3-dev \
python3-venv make golang libjpeg-turbo8-dev gcc wget unzip git fceux virtualenv \
tmux

# install the code
git clone -b master --single-branch https://github.com/pathak22/noreward-rl.git
cd noreward-rl/
virtualenv curiosity
source $PWD/curiosity/bin/activate
pip install numpy
pip install -r src/requirements.txt
python curiosity/src/go-vncdriver/build.py

# download models
bash models/download_models.sh

# setup customized doom environment
cd doomFiles/
# then follow commands in doomFiles/README.md
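
A quick sanity check that the core dependencies resolved (optional, not part of the repo): run these lines in a Python shell inside the activated virtualenv.

import tensorflow as tf   # this 2017 codebase targets TF 1.x-era APIs
import gym                # an ImportError here means the env deps are missing
print(tf.__version__)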
  2. Running demo
cd noreward-rl/src/
python demo.py --ckpt ../models/doom/doom_ICM
python demo.py --env-id SuperMarioBros-1-1-v0 --ckpt ../models/mario/mario_ICM
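
If the demo complains about a missing model, it can help to verify that the download produced a restorable checkpoint. A small check, assuming TF 1.x-style checkpoints under the directory passed to --ckpt (if doom_ICM is a file prefix rather than a folder, point at its parent directory instead):

import tensorflow as tf

# prints the newest checkpoint path, or None if nothing restorable is found
print(tf.train.latest_checkpoint('../models/doom/doom_ICM'))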
  3. Training code
cd noreward-rl/src/
# For Doom: doom or doomSparse or doomVerySparse
python train.py --default --env-id doom

# For Mario, change src/constants.py as follows:
# PREDICTION_BETA = 0.2
# ENTROPY_BETA = 0.0005
python train.py --default --env-id mario --noReward
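
# Spelled out, the Mario settings from the comments above might look like
# this in src/constants.py (a sketch; the file may keep these values in a
# dict, so match its existing layout rather than pasting verbatim):
#   PREDICTION_BETA = 0.2    # scale of the curiosity (prediction-error) bonus
#   ENTROPY_BETA = 0.0005    # strength of the policy-entropy regularizer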

xvfb-run -s "-screen 0 1400x900x24" bash  # only for remote desktops
# useful xvfb link: http://stackoverflow.com/a/30336424
python inference.py --default --env-id doom --record
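
As a mental model for --record (an assumption, not a reading of the repo's code): recording in gym of that era typically means wrapping the environment in gym.wrappers.Monitor, roughly as follows.

import gym
from gym import wrappers

# CartPole stands in for the Doom/Mario envs, which need extra packages;
# saving video also requires ffmpeg on the PATH.
env = wrappers.Monitor(gym.make('CartPole-v0'), '/tmp/rollouts', force=True)
obs = env.reset()
done = False
while not done:
    obs, reward, done, info = env.step(env.action_space.sample())
env.close()   # videos and episode stats land in /tmp/rollouts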

2) Other helpful pointers

3) Acknowledgement

The vanilla A3C code is based on the open-source implementation of universe-starter-agent.
