finetune-transformer-lm

Code and model for the paper "Improving Language Understanding by Generative Pre-Training"

Status: Archive (code is provided as-is, no updates expected)

Currently this code implements the ROCStories Cloze Test result reported in the paper by running:
python train.py --dataset rocstories --desc rocstories --submit --analysis --data_dir [path to data here]

Note: The code is currently non-deterministic due to various GPU ops. The median accuracy of 10 runs with this codebase (using default hyperparameters) is 85.8% - slightly lower than the reported single run of 86.5% from the paper.
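
Because runs vary, reproducing the median figure means repeating training and aggregating the scores yourself. The loop below is a minimal sketch of that, not part of this repo: it assumes each run prints a line like "accuracy: 85.8", which is a hypothetical format, so adjust the regex to whatever train.py actually logs.

    import re
    import statistics
    import subprocess

    accuracies = []
    for run in range(10):
        # Each invocation retrains and evaluates on ROCStories; a distinct
        # --desc keeps the runs' outputs from overwriting each other.
        result = subprocess.run(
            ["python", "train.py", "--dataset", "rocstories",
             "--desc", f"rocstories_run{run}", "--submit", "--analysis",
             "--data_dir", "path/to/data"],
            capture_output=True, text=True, check=True,
        )
        # Hypothetical log line "accuracy: 85.8"; adapt to the real output.
        match = re.search(r"accuracy:\s*([\d.]+)", result.stdout)
        if match is None:
            raise RuntimeError(f"no accuracy found in output of run {run}")
        accuracies.append(float(match.group(1)))

    print(f"median accuracy over {len(accuracies)} runs: "
          f"{statistics.median(accuracies):.1f}%")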

The ROCStories dataset can be downloaded from the associated website.
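
For orientation, here is a minimal sketch of reading a cloze-test CSV into (context, ending1, ending2, label) examples. It is illustrative rather than the repo's own loader, and the column names follow the published cloze-test files, so verify them against the copy you download.

    import csv

    def load_rocstories(path):
        """Read a ROCStories cloze-test CSV into (context, ending1, ending2, label) tuples."""
        examples = []
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                # Four context sentences followed by two candidate endings.
                context = " ".join(row[f"InputSentence{i}"] for i in range(1, 5))
                ending1 = row["RandomFifthSentenceQuiz1"]
                ending2 = row["RandomFifthSentenceQuiz2"]
                # AnswerRightEnding is 1 or 2 in the released files; map to 0/1.
                label = int(row["AnswerRightEnding"]) - 1
                examples.append((context, ending1, ending2, label))
        return examples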

Key Metrics

Overview

Name and owner: openai/finetune-transformer-lm
Primary programming language: Python
Programming languages: Python (1 language)
Platform:
License: MIT License

Owner Activity

Created: 2018-06-11 06:04:40
Last pushed: 2019-01-25 09:52:16
Last commit: 2018-11-21 22:05:06
Releases: 0

User Engagement

Stars: 2.2k
Watchers: 73
Forks: 510
Commits: 6
Issues enabled? Yes
Issues: 42
Open issues: 25
Pull requests: 1
Open pull requests: 3
Closed pull requests: 3

Project Settings

Wiki enabled?
Archived? Yes
Is a fork?
Locked?
Is a mirror?
Private?