DeepMind-Teaching-Machines-to-Read-and-Comprehend

Implementation of "Teaching Machines to Read and Comprehend" proposed by Google DeepMind


DeepMind : Teaching Machines to Read and Comprehend

This repository contains an implementation of the two models (the Deep LSTM and the Attentive Reader) described in Teaching Machines to Read and Comprehend by Karl Moritz Hermann et al., NIPS, 2015. This repository also contains an implementation of a Deep Bidirectional LSTM.

The three models implemented in this repository are:

  • deepmind_deep_lstm reproduces the experimental settings of the DeepMind paper for the LSTM reader
  • deepmind_attentive_reader reproduces the experimental settings of the DeepMind paper for the Attentive reader
  • deep_bidir_lstm_2x128 implements a two-layer bidirectional LSTM reader
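As a rough illustration of the attention step at the heart of the Attentive Reader, here is a minimal plain-Python sketch (illustrative only; the repository's actual models are built with Theano/Blocks, and the scoring function here is a simple dot product rather than the paper's learned bilinear/tanh scoring):

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of raw scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attentive_read(token_vecs, query_vec):
    # Score each document token vector against the query (dot product here,
    # for illustration), normalize with softmax, and return the
    # attention-weighted sum (the "read" vector) plus the weights.
    scores = [sum(t * q for t, q in zip(tok, query_vec)) for tok in token_vecs]
    weights = softmax(scores)
    dim = len(query_vec)
    read = [sum(w * tok[i] for w, tok in zip(weights, token_vecs))
            for i in range(dim)]
    return read, weights

# Toy example: three 2-d token vectors and a query aligned with the first one.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [1.0, 0.0]
read, weights = attentive_read(tokens, query)
```

Tokens whose vectors align with the query receive larger attention weights, which is the mechanism visualized by the attention-weight figure below.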

Our results

We trained each of the three models for 2 to 4 days on a Titan Black GPU. (The results table, and an example of the attention weights used by the attentive reader on a sample question, are included as images in the original repository.)

Requirements

Software dependencies:

  • Theano GPU computing library
  • Blocks deep learning framework
  • Fuel data pipeline for Blocks

Optional dependencies:

  • Blocks Extras and a Bokeh server for the plot

We recommend using Anaconda 2 and installing them with the following commands (where pip refers to the pip command from Anaconda):

pip install git+git://github.com/Theano/Theano.git
pip install git+git://github.com/mila-udem/fuel.git
pip install git+git://github.com/mila-udem/blocks.git -r https://raw.githubusercontent.com/mila-udem/blocks/master/requirements.txt

Anaconda also includes a Bokeh server, but you still need to install blocks-extras if you want to have the plot:

pip install git+git://github.com/mila-udem/blocks-extras.git

The corresponding dataset is provided by DeepMind, but if the script does not work (or you are tired of waiting) you can use the preprocessed version of the dataset provided by Kyunghyun Cho.

Running

Set the environment variable DATAPATH to the folder containing the DeepMind QA dataset. The training questions are expected to be in $DATAPATH/deepmind-qa/cnn/questions/training.
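For example (the path below is a placeholder; point it at wherever you extracted the dataset):

```shell
# DATAPATH must be the parent directory of deepmind-qa/ (example path).
export DATAPATH="$HOME/data"
# The training questions are then expected here:
echo "$DATAPATH/deepmind-qa/cnn/questions/training"
```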

Run:

cp deepmind-qa/* $DATAPATH/deepmind-qa/

This will copy our vocabulary list vocab.txt, which contains a subset of all the words appearing in the dataset.
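The vocabulary file is one token per line; a typical way to turn such a file into a word-to-index mapping looks like the following (an illustrative sketch, not the repository's own loading code):

```python
def load_vocab(lines):
    # Map each token (one per line, whitespace stripped) to its line index.
    return {word.strip(): idx for idx, word in enumerate(lines)}

# In practice the lines would come from open('vocab.txt'); a toy list is
# used here for illustration.
vocab = load_vocab(["<unk>", "the", "machine", "reads"])
```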

To train a model (see list of models at the beginning of this file), run:

./train.py model_name

Be careful to set your THEANO_FLAGS correctly! For instance, you might want to use THEANO_FLAGS=device=gpu0 if you have a GPU (highly recommended).
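A typical setup before launching training might look like this (device names such as gpu0 depend on your Theano version and CUDA setup; floatX=float32 is the usual choice for GPU training):

```shell
# Select the first GPU and single-precision floats for Theano.
export THEANO_FLAGS=device=gpu0,floatX=float32
echo "$THEANO_FLAGS"
```

With the flags exported, ./train.py model_name picks them up automatically.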

Reference

Teaching Machines to Read and Comprehend, by Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman and Phil Blunsom, Neural Information Processing Systems, 2015.

Credits

Thomas Mesnard

Alex Auvolat

Étienne Simon

Acknowledgments

We would like to thank the developers of Theano, Blocks and Fuel at MILA for their excellent work.

We thank Simon Lacoste-Julien from SIERRA team at INRIA, for providing us access to two Titan Black GPUs.
