sru

Training RNNs as Fast as CNNs (https://arxiv.org/abs/1709.02755)

GitHub star tracking chart

About

SRU is a recurrent unit that can run over 10 times faster than cuDNN LSTM, with no loss of accuracy on the many tasks we tested.
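
For intuition, here is a minimal, illustrative sketch of one SRU time step (single layer, unidirectional, input and hidden dimensions equal), following the recurrence described in the paper; all names are local to this sketch and not part of the package API. The matrix multiplications depend only on the input, so they can be batched across all time steps, leaving only cheap element-wise operations in the sequential loop.

import torch

def sru_step(x_t, c_prev, W, W_f, W_r, v_f, v_r, b_f, b_r):
    # One SRU time step; parameter names are illustrative, not the library's.
    f_t = torch.sigmoid(x_t @ W_f.t() + v_f * c_prev + b_f)  # forget gate
    c_t = f_t * c_prev + (1.0 - f_t) * (x_t @ W.t())         # internal state
    r_t = torch.sigmoid(x_t @ W_r.t() + v_r * c_prev + b_r)  # reset (highway) gate
    h_t = r_t * c_t + (1.0 - r_t) * x_t                      # highway output
    return h_t, c_t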

The paper has multiple versions; please check the latest one.

Reference:

Simple Recurrent Units for Highly Parallelizable Recurrence

@inproceedings{lei2018sru,
  title={Simple Recurrent Units for Highly Parallelizable Recurrence},
  author={Tao Lei and Yu Zhang and Sida I. Wang and Hui Dai and Yoav Artzi},
  booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
  year={2018}
}

Requirements

Install requirements via pip install -r requirements.txt. CuPy and pynvrtc are needed to support training / testing on GPU.
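
As a quick, generic sanity check (plain PyTorch, nothing SRU-specific), you can confirm a CUDA device is visible before attempting GPU training:

import torch

print(torch.cuda.is_available())   # True if a GPU is usable for training / testing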

Installation

From source:

SRU can be installed as a regular package via python setup.py install or pip install . from the repository root.

From PyPi:

pip install sru

pip install sru[cuda] additionally installs CuPy and pynvrtc.

pip install sru[cpu] additionally installs ninja.

Directly use the source without installation:

Make sure this repo and the CUDA library can be found by the system, e.g.

export PYTHONPATH=path_to_repo/sru
export LD_LIBRARY_PATH=/usr/local/cuda/lib64
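
A simple way to confirm the setup, assuming the paths above are set, is to import SRU from outside the repo:

python -c "from sru import SRU; print(SRU)"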

Examples

The usage of SRU is similar to nn.LSTM. SRU likely requires more stacked layers than LSTM. We recommend starting with 2 layers and using more if necessary (see our report for more experimental details).

import torch
from sru import SRU, SRUCell

# input has length 20, batch size 32 and dimension 128
x = torch.randn(20, 32, 128).cuda()

input_size, hidden_size = 128, 128

rnn = SRU(input_size, hidden_size,
    num_layers = 2,          # number of stacking RNN layers
    dropout = 0.0,           # dropout applied between RNN layers
    bidirectional = False,   # bidirectional RNN
    layer_norm = False,      # apply layer normalization on the output of each layer
    highway_bias = 0,        # initial bias of highway gate (<= 0)
    rescale = True,          # whether to use scaling correction
)
rnn.cuda()

output_states, c_states = rnn(x)      # forward pass

# output_states is (length, batch size, number of directions * hidden size)
# c_states is (layers, batch size, number of directions * hidden size)
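
As a further usage sketch based on the shape comments above, a bidirectional SRU doubles the feature dimension of both returned tensors:

birnn = SRU(input_size, hidden_size,
    num_layers = 2,
    bidirectional = True,
)
birnn.cuda()

h, c = birnn(x)
print(h.shape)   # torch.Size([20, 32, 256])  i.e. (length, batch size, 2 * hidden size)
print(c.shape)   # torch.Size([2, 32, 256])   i.e. (layers, batch size, 2 * hidden size)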

Contributors

https://github.com/taolei87/sru/graphs/contributors

Other Implementations

@musyoku has a very nice SRU implementation in Chainer.

@adrianbg implemented the first CPU version.

Key Metrics

Overview
Name and owner: asappresearch/sru
Primary language: Python
Languages: Python (number of languages: 5)
Platform:
License: MIT License

Owner Activity
Created at: 2017-08-28 20:37:41
Last pushed at: 2022-01-04 21:17:53
Last commit: 2021-05-19 11:52:48
Releases: 36
Latest release name: v2.7.0-rc1 (released )
First release name: 2.0.0 (released )

User Engagement
Stars: 2.1k
Watchers: 63
Forks: 303
Commits: 400
Issues enabled?
Issues: 134
Open issues: 65
Pull requests: 63
Open pull requests: 3
Closed pull requests: 12

Project Settings
Wiki enabled?
Archived?
Is a fork?
Locked?
Is a mirror?
Is private?