finetune-transformer-lm

Code and model for the paper "Improving Language Understanding by Generative Pre-Training"

Status: Archive (code is provided as-is, no updates expected)

Currently this code implements the ROCStories Cloze Test result reported in the paper by running:
python train.py --dataset rocstories --desc rocstories --submit --analysis --data_dir [path to data here]

Note: The code is currently non-deterministic due to various GPU ops. The median accuracy of 10 runs with this codebase (using default hyperparameters) is 85.8% - slightly lower than the reported single run of 86.5% from the paper.
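Because individual runs vary, reproducing the median figure means repeating training and aggregating the results. A minimal sketch of that loop, assuming train.py accepts a --seed flag and that you transcribe each run's final accuracy from its log (both are assumptions; check train.py's argument parser and output format):

```python
import statistics
import subprocess

DATA_DIR = "path/to/rocstories"  # hypothetical; point at your local copy

accuracies = []
for seed in range(10):
    # Give every run its own --desc so result files do not collide.
    # --seed is an assumption; confirm it against train.py's argument parser.
    subprocess.run(
        ["python", "train.py", "--dataset", "rocstories",
         "--desc", f"rocstories_seed{seed}", "--submit", "--analysis",
         "--data_dir", DATA_DIR, "--seed", str(seed)],
        check=True,
    )
    # Transcribe the test accuracy this run reported (e.g. from its log).
    accuracies.append(float(input(f"accuracy for seed {seed}: ")))

# The median over several runs smooths out the GPU nondeterminism.
print(f"median over {len(accuracies)} runs: {statistics.median(accuracies):.3f}")
```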

The ROCStories dataset can be downloaded from the associated website.
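After downloading, point --data_dir at the folder holding the cloze-test CSVs. A quick sanity check of that folder, assuming the file names below (taken from the spring-2016 ROCStories cloze-test release) are what the repo's datasets.py loads; verify against datasets.py if loading fails:

```python
import csv
import os

DATA_DIR = "path/to/rocstories"  # hypothetical; point at your local copy

# Assumed file names from the spring-2016 ROCStories cloze-test release;
# check datasets.py if train.py cannot find the data.
EXPECTED = [
    "cloze_test_val__spring2016 - cloze_test_ALL_val.csv",
    "cloze_test_test__spring2016 - cloze_test_ALL_test.csv",
]

for name in EXPECTED:
    path = os.path.join(DATA_DIR, name)
    if not os.path.exists(path):
        raise SystemExit(f"missing {path}; download it from the ROCStories site")
    with open(path, encoding="utf-8") as f:
        n_rows = sum(1 for _ in csv.reader(f)) - 1  # minus the header row
    print(f"{name}: {n_rows} examples")
```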

Main metrics

Overview
  • Name With Owner: openai/finetune-transformer-lm
  • Primary Language: Python
  • Program Languages: Python (Language Count: 1)
  • License: MIT License

Owner Activity
  • Created At: 2018-06-11 06:04:40
  • Pushed At: 2019-01-25 09:52:16
  • Last Commit At: 2018-11-21 22:05:06
  • Release Count: 0

User Engagement
  • Stargazers Count: 2.2k
  • Watchers Count: 73
  • Fork Count: 506
  • Commits Count: 6
  • Has Issues Enabled: yes
  • Issues Count: 42
  • Open Issues Count: 25
  • Pull Requests Count: 1
  • Open Pull Requests Count: 3
  • Closed Pull Requests Count: 3

Project Settings
  • Has Wiki Enabled
  • Is Archived: yes
  • Is Fork
  • Is Locked
  • Is Mirror
  • Is Private