Tokenizers

💥 Fast State-of-the-Art Tokenizers optimized for Research and Production


Provides an implementation of today's most used tokenizers, with a focus on performance and
versatility.

Main features:

  • Train new vocabularies and tokenize, using today's most used tokenizers.
  • Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes
    less than 20 seconds to tokenize a GB of text on a server's CPU.
  • Easy to use, but also extremely versatile.
  • Designed for research and production.
  • Normalization comes with alignment tracking: it's always possible to recover the part of the
    original sentence that corresponds to a given token (see the sketch after this list).
  • Does all the pre-processing: truncate, pad, and add the special tokens your model needs.
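
Both of those features are exposed directly on the tokenizer object. A minimal sketch of each, assuming network access to pull the pretrained "bert-base-uncased" tokenizer from the Hugging Face Hub:

from tokenizers import Tokenizer

tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

# Alignment tracking: each token carries its (start, end) character span
text = "Hello, y'all!"
output = tokenizer.encode(text)
for token, (start, end) in zip(output.tokens, output.offsets):
    print(token, "->", repr(text[start:end]))  # special tokens map to empty spans

# Truncation and padding are opt-in pre-processing steps
tokenizer.enable_truncation(max_length=8)
tokenizer.enable_padding(pad_token="[PAD]", pad_id=0)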

Performance

Performance can vary depending on hardware, but running ~/bindings/python/benches/test_tiktoken.py on a g6 AWS instance should give the following:
[Benchmark chart]
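
The numbers will differ on your machine. To get a rough throughput figure on your own hardware, a minimal sketch (the tokenizer file and corpus paths are placeholders):

import time
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file("tokenizer.json")  # any serialized tokenizer
with open("corpus.txt", encoding="utf-8") as f:    # any large text file
    lines = f.readlines()

start = time.perf_counter()
tokenizer.encode_batch(lines)  # batch encoding is parallelized across cores
elapsed = time.perf_counter() - start

total_mb = sum(len(line.encode("utf-8")) for line in lines) / 1e6
print(f"{total_mb / elapsed:.1f} MB/s")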

Bindings

We provide bindings to the following languages (more to come!):

  • Rust (the original implementation)
  • Python
  • Node.js
  • Ruby (contributed, external repo)

Installation

You can install from source using:

pip install git+https://github.com/huggingface/tokenizers.git#subdirectory=bindings/python

or install the released version with:

pip install tokenizers

Quick example using Python:

Choose your model from Byte-Pair Encoding (BPE), WordPiece, or Unigram and instantiate a tokenizer:

from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))  # unk_token maps unknown characters to [UNK]
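
The other models follow the same pattern; a minimal sketch (each model has a matching trainer in tokenizers.trainers):

from tokenizers import Tokenizer
from tokenizers.models import WordPiece, Unigram

# WordPiece, as used by BERT-style models
wordpiece_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))

# Unigram, as used by SentencePiece-style models
unigram_tokenizer = Tokenizer(Unigram())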

You can customize how pre-tokenization (e.g., splitting into words) is done:

from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
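
To inspect what the pre-tokenizer does on its own, you can call pre_tokenize_str directly; it returns the word-level splits along with their character offsets:

from tokenizers.pre_tokenizers import Whitespace

print(Whitespace().pre_tokenize_str("Hello, y'all!"))
# [('Hello', (0, 5)), (',', (5, 6)), ('y', (7, 8)), ("'", (8, 9)), ('all', (9, 12)), ('!', (12, 13))]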

Then training your tokenizer on a set of files takes just two lines of code:

from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
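
Once trained, the whole pipeline can be serialized to a single JSON file and reloaded later:

tokenizer.save("tokenizer.json")
tokenizer = Tokenizer.from_file("tokenizer.json")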

Once your tokenizer is trained, encode any text with just one line:

output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]

Check the documentation
or the quicktour to learn more!

Main metrics

Overview
  Name With Owner: huggingface/tokenizers
  Primary Language: Rust
  Program Language: Rust (Language Count: 8)
  License: Apache License 2.0

Owner Activity
  Created At: 2019-11-01 17:52:20
  Pushed At: 2025-04-16 15:30:11
  Release Count: 142
  Last Release Name: v0.21.1
  First Release Name: v0.0.3

User Engagement
  Stargazers Count: 9.6k
  Watchers Count: 122
  Fork Count: 883
  Commits Count: 1.9k
  Issues Count: 1046 (68 open)
  Pull Requests Count: 530 (21 open, 153 closed)