PARL

A high-performance distributed training framework

English | 简体中文
Documentation

PARL is a flexible and highly efficient reinforcement learning framework.

Features

Reproducible. We provide implementations that stably reproduce the results of many influential reinforcement learning algorithms.

Large Scale. Supports high-performance parallel training with thousands of CPUs and multiple GPUs.

Reusable. Algorithms provided in the repository can be adapted directly to a new task by defining a forward network; the training mechanism is then built automatically.

Extensible. Build new algorithms quickly by inheriting the framework's abstract classes.

Abstractions

Model

Model abstracts the forward network: it defines a policy network or critic network that takes the state as input.

Algorithm

Algorithm describes the mechanism for updating the parameters in a Model and typically contains at least one model.

Agent

Agent, the data bridge between the environment and the algorithm, is responsible for data I/O with the environment and for preprocessing data before it is fed into the training process.

Note: For more information about base classes, please visit our tutorial and API documentation.
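
To make these abstractions concrete, here is a minimal sketch in the spirit of the CartPole quick-start. It assumes PARL >= 2.0 with the PaddlePaddle 2.x dynamic-graph API (older fluid-based releases use a different Agent interface); names such as CartpoleModel and the layer sizes are illustrative only.

#============cartpole_sketch.py (illustrative)=================
import numpy as np
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
import parl
from parl.algorithms import PolicyGradient

class CartpoleModel(parl.Model):
    # Model: the forward network; maps a state to action probabilities.
    def __init__(self, obs_dim, act_dim):
        super(CartpoleModel, self).__init__()
        self.fc1 = nn.Linear(obs_dim, 64)
        self.fc2 = nn.Linear(64, act_dim)

    def forward(self, obs):
        h = paddle.tanh(self.fc1(obs))
        return F.softmax(self.fc2(h), axis=-1)

class CartpoleAgent(parl.Agent):
    # Agent: the data bridge; preprocesses observations, then queries the algorithm.
    def sample(self, obs):
        obs = paddle.to_tensor(obs, dtype='float32')
        prob = self.alg.predict(obs).numpy()  # self.alg is set by parl.Agent.__init__
        return np.random.choice(len(prob), p=prob)

model = CartpoleModel(obs_dim=4, act_dim=2)
# Algorithm: the update mechanism; PolicyGradient is one of the built-in algorithms.
alg = PolicyGradient(model, lr=1e-3)
agent = CartpoleAgent(alg)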

Parallelization

PARL provides a compact API for distributed training that lets users turn their code into a parallelized version simply by adding a decorator. For more information about our APIs for parallel training, please visit our documentation.
Here is a Hello World example demonstrating how easy it is to leverage external computation resources.

#============Agent.py=================
import parl

@parl.remote_class
class Agent(object):

    def say_hello(self):
        print("Hello World!")

    def sum(self, a, b):
        return a + b

# Connect to the cluster before instantiating any remote object.
parl.connect('localhost:8037')
agent = Agent()  # the actor runs on a remote worker, not in the local process
agent.say_hello()
ans = agent.sum(1, 5)  # executed remotely; consumes no local computation resources

Two steps to use external computation resources:

  1. Decorate a class with parl.remote_class. The decorated class becomes a new class whose instances can run on other CPUs or machines.
  2. Call parl.connect to initialize parallel communication before creating any object of that class. Calling methods on such objects consumes no local computation resources, since they are executed elsewhere; see the multi-actor sketch after this list.
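
Scaling beyond one actor follows the same pattern. Remote method calls block like ordinary Python calls, so local threads can drive several remote actors concurrently. The sketch below is illustrative (the Actor class and its workload are hypothetical) and assumes a PARL cluster is already listening on localhost:8037, e.g. one started with the xparl command-line tool.

#============multi_actor_sketch.py (illustrative)=================
import threading

import parl

@parl.remote_class
class Actor(object):

    def compute(self, actor_id):
        # Stand-in for heavy work; this runs on a remote CPU, not locally.
        return actor_id + sum(i * i for i in range(10 ** 6))

parl.connect('localhost:8037')

results = [None] * 4

def run(i, actor):
    results[i] = actor.compute(i)  # blocking remote call, hence one thread per actor

threads = []
for i in range(4):
    actor = Actor()  # each instance occupies one CPU in the cluster
    t = threading.Thread(target=run, args=(i, actor))
    t.start()
    threads.append(t)
for t in threads:
    t.join()
print(results)  # results gathered from the four remote actors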

Users can write code in a simple way, much as they would write multi-threaded code, but with actors consuming remote resources. We also provide examples of parallelized algorithms such as IMPALA, A2C, and GA3C; for usage details, please refer to these examples.

Installation

Dependencies

  • Python 2.7 or 3.5+.
  • PaddlePaddle >= 1.5.1 (optional; not required if you only use the parallelization APIs)

pip install parl
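
PaddlePaddle itself is installed separately if you want the full set of algorithm examples. A minimal sketch, assuming a CPU-only machine (see the PaddlePaddle installation guide for the command matching your platform and the required version):

pip install paddlepaddle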

Examples


