finetune-transformer-lm

Code and model for the paper "Improving Language Understanding by Generative Pre-Training"


Status: Archive (code is provided as-is, no updates expected)

Currently this code reproduces the ROCStories Cloze Test result reported in the paper when run as:
python train.py --dataset rocstories --desc rocstories --submit --analysis --data_dir [path to data here]

Note: The code is currently non-deterministic due to various GPU ops. The median accuracy of 10 runs with this codebase (using the default hyperparameters) is 85.8%, slightly lower than the single-run result of 86.5% reported in the paper.
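
Because individual runs vary, a median over repeated runs is a more stable summary than any single run. Below is a minimal bookkeeping sketch, assuming you have collected one test accuracy per run from train.py's output; the values shown are hypothetical placeholders.

    # Median accuracy over repeated runs. The accuracies below are
    # hypothetical placeholders; substitute the figure reported by
    # each run of train.py.
    from statistics import median

    run_accuracies = [0.861, 0.855, 0.858, 0.849, 0.862,
                      0.857, 0.853, 0.860, 0.858, 0.856]

    print("median accuracy over %d runs: %.3f"
          % (len(run_accuracies), median(run_accuracies)))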

The ROCStories dataset can be downloaded from the associated website.
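
For reference, here is a sketch of parsing the cloze-test file into (context, candidate endings, label) triples. The column names assume the standard ROCStories cloze-test CSV release (InputSentence1-4, RandomFifthSentenceQuiz1/2, AnswerRightEnding) rather than anything in this repository's code, so verify them against the file you download.

    # Parse a ROCStories cloze-test CSV. Column names are assumptions
    # based on the standard dataset release, not on this repository.
    import csv

    def load_cloze(path):
        examples = []
        with open(path, encoding="utf-8") as f:
            for row in csv.DictReader(f):
                # Four-sentence story context followed by two candidate endings.
                context = " ".join(row["InputSentence%d" % i] for i in range(1, 5))
                endings = (row["RandomFifthSentenceQuiz1"],
                           row["RandomFifthSentenceQuiz2"])
                label = int(row["AnswerRightEnding"]) - 1  # 1/2 -> 0/1
                examples.append((context, endings, label))
        return examples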

Key metrics

Overview
  • Name and owner: openai/finetune-transformer-lm
  • Primary programming language: Python
  • Programming languages: Python (1 language)
  • Platform:
  • License: MIT License

Owner activity
  • Created: 2018-06-11 06:04:40
  • Last pushed: 2019-01-25 09:52:16
  • Last commit: 2018-11-21 22:05:06
  • Releases: 0

User engagement
  • Stars: 2.2k
  • Watchers: 73
  • Forks: 506
  • Commits: 6
  • Issues enabled? Yes
  • Issues: 42
  • Open issues: 25
  • Pull requests: 1
  • Open pull requests: 3
  • Closed pull requests: 3

Project settings
  • Wiki enabled?
  • Archived? Yes (per the status note above)
  • Is a fork?
  • Locked?
  • Is a mirror?
  • Private?