BPE (Byte-Pair Encoding)

#deepLearning/BPE

How BPE Works

BPE builds its vocabulary incrementally by merging the most frequent pair of adjacent symbols (characters or previously merged units) and then using these merged units to represent the original text. After each merge, pair frequencies are recomputed over the updated corpus, and the next iteration again picks the most frequent pair to merge.

Suppose the corpus contains the following five words:

"hug", "pug", "pun", "bun", "hugs"
The base vocabulary will then be ["b", "g", "h", "n", "p", "s", "u"].

In practice, the base vocabulary will contain at least all the ASCII characters. If a character that did not appear in the training corpus shows up at tokenization time, it is converted to the unknown token.

The BPE algorithm then searches for the most frequent pair of existing tokens (a "pair" here means two consecutive tokens within a word). That most frequent pair is merged, and the process is repeated.

Suppose the words occur with the following frequencies:

("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5)

Training starts by splitting each word into characters, treating each word as a list of tokens:

("h" "u" "g", 10), ("p" "u" "g", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "u" "g" "s", 5)

The most frequent pair is ("u", "g"), which appears 20 times in total across "hug", "pug", and "hugs".

So the first merge rule the tokenizer learns is ("u", "g") -> "ug": "ug" is added to the vocabulary, and the pair is merged in every word of the corpus.

词汇表: ["b", "g", "h", "n", "p", "s", "u", "ug"]
语料库: ("h" "ug", 10), ("p" "ug", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "ug" "s", 5)

The next most frequent pair is ("u", "n"), which appears 16 times in the corpus, so the second merge rule learned is ("u", "n") -> "un".

词汇表: ["b", "g", "h", "n", "p", "s", "u", "ug", "un"]
语料库: ("h" "ug", 10), ("p" "ug", 5), ("p" "un", 12), ("b" "un", 4), ("h" "ug" "s", 5)

The most frequent pair is then ("h", "ug"), appearing 15 times, so the third merge rule learned is ("h", "ug") -> "hug".

词汇表: ["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"]
语料库: ("hug", 10), ("p" "ug", 5), ("p" "un", 12), ("b" "un", 4), ("hug" "s", 5)

Merging continues like this until the desired vocabulary size is reached (a small code sketch of this loop follows below).
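
The walkthrough above can be reproduced with a short self-contained sketch (plain Python; toy_corpus and the helper names below are illustrative, not part of any library):

from collections import defaultdict

# Toy corpus: each word is stored as a tuple of symbols plus its frequency.
toy_corpus = {
    ("h", "u", "g"): 10,
    ("p", "u", "g"): 5,
    ("p", "u", "n"): 12,
    ("b", "u", "n"): 4,
    ("h", "u", "g", "s"): 5,
}

def most_frequent_pair(corpus):
    # Count every pair of adjacent symbols, weighted by the word frequency.
    pair_freqs = defaultdict(int)
    for symbols, freq in corpus.items():
        for pair in zip(symbols, symbols[1:]):
            pair_freqs[pair] += freq
    return max(pair_freqs, key=pair_freqs.get)

def apply_merge(corpus, pair):
    # Replace every occurrence of the pair with the concatenated symbol.
    merged = {}
    for symbols, freq in corpus.items():
        out, i = [], 0
        while i < len(symbols):
            if i < len(symbols) - 1 and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged[tuple(out)] = freq
    return merged

for _ in range(3):
    pair = most_frequent_pair(toy_corpus)
    print(pair, "->", pair[0] + pair[1])
    toy_corpus = apply_merge(toy_corpus, pair)

# expected: ('u', 'g') -> 'ug', then ('u', 'n') -> 'un', then ('h', 'ug') -> 'hug'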

Implementing the BPE Algorithm

The corpus:

corpus = [
    "This is the Hugging Face Course.",
    "This chapter is about tokenization.",
    "This section shows several tokenizer algorithms.",
    "Hopefully, you will be able to understand how they are trained and generate tokens.",
]

Pre-tokenize with the gpt2 tokenizer:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
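
As a quick illustration of what this backend pre-tokenizer returns, each piece comes paired with its character offsets, and a leading space is encoded as the 'Ġ' symbol (the output below is what we expect for this sample sentence):

tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str("This is the Hugging Face Course.")

# expected output (offsets are character spans that include the leading space):
[('This', (0, 4)), ('Ġis', (4, 7)), ('Ġthe', (7, 11)), ('ĠHugging', (11, 19)), ('ĠFace', (19, 24)),
 ('ĠCourse', (24, 31)), ('.', (31, 32))]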

Count the frequency of each word in the corpus:

# Count word frequencies
from collections import defaultdict

word_freqs = defaultdict(int)

for text in corpus:
    words_with_offsets = tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(text)
    new_words = [word for word, offset in words_with_offsets]
    for word in new_words:
        word_freqs[word] += 1

print(word_freqs)

# outputs:
defaultdict(int, {'This': 3, 'Ġis': 2, 'Ġthe': 1, 'ĠHugging': 1, 'ĠFace': 1, 'ĠCourse': 1, '.': 4, 'Ġchapter': 1,
'Ġabout': 1, 'Ġtokenization': 1, 'Ġsection': 1, 'Ġshows': 1, 'Ġseveral': 1, 'Ġtokenizer': 1, 'Ġalgorithms': 1,
'Hopefully': 1, ',': 1, 'Ġyou': 1, 'Ġwill': 1, 'Ġbe': 1, 'Ġable': 1, 'Ġto': 1, 'Ġunderstand': 1, 'Ġhow': 1,
'Ġthey': 1, 'Ġare': 1, 'Ġtrained': 1, 'Ġand': 1, 'Ġgenerate': 1, 'Ġtokens': 1})

Build the base vocabulary from all the characters used in the corpus:

alphabet = []

for word in word_freqs.keys():
    for letter in word:
        if letter not in alphabet:
            alphabet.append(letter)
alphabet.sort()
alphabet

# outputs:
[ ',', '.', 'C', 'F', 'H', 'T', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'k', 'l', 'm', 'n', 'o', 'p', 'r', 's',
't', 'u', 'v', 'w', 'y', 'z', 'Ġ']

Add the special tokens used by the model. For GPT-2, the only special token is "<|endoftext|>":

vocab = ["<|endoftext|>"] + alphabet.copy()

Split each word into individual characters:

# Split each word into its characters
splits = {word: [c for c in word] for word in word_freqs.keys()}
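
For instance, the split of a single word is just its list of characters (taking one of the words from word_freqs above):

print(splits["Ġtokens"])

# outputs:
['Ġ', 't', 'o', 'k', 'e', 'n', 's']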

Count the frequency of each pair of adjacent tokens:

# Count pair frequencies, weighted by word frequency
def compute_pair_freqs(splits):
    pair_freqs = defaultdict(int)
    for word, freq in word_freqs.items():
        split = splits[word]
        if len(split) == 1:
            continue
        for i in range(len(split) - 1):
            pair = (split[i], split[i + 1])
            pair_freqs[pair] += freq
    return pair_freqs

pair_freqs = compute_pair_freqs(splits)
pair_freqs

# outputs (first few entries shown):
('T', 'h'): 3
('h', 'i'): 3
('i', 's'): 5
('Ġ', 'i'): 2
('Ġ', 't'): 7
('t', 'h'): 3

Find the most frequent pair:

# Find the pair with the highest frequency
best_pair = ""
max_freq = None

for pair, freq in pair_freqs.items():
    if max_freq is None or max_freq < freq:
        best_pair = pair
        max_freq = freq

print(best_pair, max_freq)

# outputs:
('Ġ', 't') 7

So the first merge rule to learn is ('Ġ', 't') -> 'Ġt'; add 'Ġt' to the vocabulary:

merges = {("Ġ", "t"): "Ġt"}
vocab.append("Ġt")

Perform the merge in the splits dictionary:

# Merge the pair (a, b) in all the word splits
def merge_pair(a, b, splits):
    for word in word_freqs:
        split = splits[word]
        if len(split) == 1:
            continue

        i = 0
        while i < len(split) - 1:
            if split[i] == a and split[i + 1] == b:
                split = split[:i] + [a + b] + split[i + 2 :]
            else:
                i += 1
        splits[word] = split
    return splits

splits = merge_pair("Ġ", "t", splits)
print(splits["Ġtrained"])

# outputs:
['Ġt', 'r', 'a', 'i', 'n', 'e', 'd']

Now loop until all the merges we want have been learned:

vocab_size = 50

while len(vocab) < vocab_size:
    pair_freqs = compute_pair_freqs(splits)
    best_pair = ""
    max_freq = None
    for pair, freq in pair_freqs.items():
        if max_freq is None or max_freq < freq:
            best_pair = pair
            max_freq = freq
    splits = merge_pair(*best_pair, splits)
    merges[best_pair] = best_pair[0] + best_pair[1]
    vocab.append(best_pair[0] + best_pair[1])

# In the end 19 merge rules are learned (the initial vocabulary size was 31: the 30 characters of the alphabet plus the special token):
print(merges)

# outputs:
{('Ġ', 't'): 'Ġt', ('i', 's'): 'is', ('e', 'r'): 'er', ('Ġ', 'a'): 'Ġa', ('Ġt', 'o'): 'Ġto', ('e', 'n'): 'en',
('T', 'h'): 'Th', ('Th', 'is'): 'This', ('o', 'u'): 'ou', ('s', 'e'): 'se', ('Ġto', 'k'): 'Ġtok',
('Ġtok', 'en'): 'Ġtoken', ('n', 'd'): 'nd', ('Ġ', 'is'): 'Ġis', ('Ġt', 'h'): 'Ġth', ('Ġth', 'e'): 'Ġthe',
('i', 'n'): 'in', ('Ġa', 'b'): 'Ġab', ('Ġtoken', 'i'): 'Ġtokeni'}

The vocabulary consists of the special token, the initial alphabet, and all the merge results:

print(vocab)

# outputs:
['<|endoftext|>', ',', '.', 'C', 'F', 'H', 'T', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'k', 'l', 'm', 'n', 'o',
'p', 'r', 's', 't', 'u', 'v', 'w', 'y', 'z', 'Ġ', 'Ġt', 'is', 'er', 'Ġa', 'Ġto', 'en', 'Th', 'This', 'ou', 'se',
'Ġtok', 'Ġtoken', 'nd', 'Ġis', 'Ġth', 'Ġthe', 'in', 'Ġab', 'Ġtokeni']

Finally, use all the learned merge rules to tokenize new text:

def tokenize(text):
    pre_tokenize_result = tokenizer._tokenizer.pre_tokenizer.pre_tokenize_str(text)
    pre_tokenized_text = [word for word, offset in pre_tokenize_result]
    splits = [[l for l in word] for word in pre_tokenized_text]
    for pair, merge in merges.items():
        for idx, split in enumerate(splits):
            i = 0
            while i < len(split) - 1:
                if split[i] == pair[0] and split[i + 1] == pair[1]:
                    split = split[:i] + [merge] + split[i + 2 :]
                else:
                    i += 1
            splits[idx] = split

    return sum(splits, [])

tokenize("This is not a token.")

# outputs:
['This', 'Ġis', 'Ġ', 'n', 'o', 't', 'Ġa', 'Ġtoken', '.']
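
Note that this toy implementation does nothing special for characters outside the base vocabulary; the earlier remark about unknown tokens would apply in a real tokenizer (GPT-2 itself is byte-level and never needs an unknown token). With the merges learned here, an unseen character simply passes through as a single symbol. For example (hypothetical input):

tokenize("hug?")

# expected output: no learned merge applies to "hug", and '?' is not in the base alphabet
['h', 'u', 'g', '?']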
