Training a Llama Model from Scratch
This script is deprecated! transformers has received many updates since this was published!
In this tutorial, we will walk step by step through training a language model with the Llama architecture and the Transformers library.
1. Install the Required Libraries
First, we install the necessary libraries with pip.
!pip install -q datasets accelerate evaluate trl transformers jinja2
2. Log in to the Hugging Face Hub
Next, we log in to the Hugging Face Hub to access the models and datasets we need.
from huggingface_hub import notebook_login
notebook_login()
3. Load the Required Libraries and Model
We import the required libraries and build the Llama model and tokenizer.
This part is fairly involved, so read carefully.
from datasets import load_dataset
dataset = load_dataset("your_dataset_name", split="train") # load the dataset
Here, we build the corpus iterator that we will feed to the tokenizer.
def get_training_corpus():
    for i in range(0, len(dataset), 1000):
        yield dataset[i : i + 1000]["text"]
training_corpus = get_training_corpus()
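The generator above streams the corpus in chunks of 1,000 rows rather than materializing everything at once (slicing an HF `Dataset` returns a dict of columns, which is why `["text"]` is indexed after the slice). A minimal pure-Python stand-in, using a plain list of row dicts instead of a real `Dataset`:

```python
rows = [{"text": f"example {i}"} for i in range(2500)]  # stand-in for the HF dataset

def chunked_texts(data, chunk_size=1000):
    """Yield lists of up to chunk_size 'text' values, like get_training_corpus."""
    for i in range(0, len(data), chunk_size):
        yield [row["text"] for row in data[i : i + chunk_size]]

chunks = list(chunked_texts(rows))
print(len(chunks), len(chunks[0]), len(chunks[-1]))  # 3 1000 500
```

Streaming like this keeps memory flat no matter how large the dataset is, which is exactly why `train_from_iterator` accepts a generator.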
The choice of base tokenizer is up to you. I train a fresh one from scratch here, but many people start from an existing tokenizer such as gpt2.
from tokenizers import ByteLevelBPETokenizer
tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(
    training_corpus,
    vocab_size=3200,
    min_frequency=2,
    special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>", "<|user|>", "<|bot|>", "<|end|>"]  # you can pick the last two or three, as you'll see next
)
Next, we define the tokenizer's special tokens and chat template. Note that `add_special_tokens` with a dict, `chat_template`, and `apply_chat_template` are transformers-side features, so we first wrap the trained tokenizer in a `PreTrainedTokenizerFast`.
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer._tokenizer)  # wrap the raw tokenizers object for transformers
special_tokens = {
    "bos_token": "<s>",
    "eos_token": "</s>",
    "unk_token": "<unk>",
    "pad_token": "<pad>",
    "mask_token": "<mask>",
    "additional_special_tokens": ["<|user|>", "<|bot|>", "<|end|>"]  # same here
}
tokenizer.add_special_tokens(special_tokens)
tokenizer.user_token_id = tokenizer.convert_tokens_to_ids("<|user|>") # here
tokenizer.assistant_token_id = tokenizer.convert_tokens_to_ids("<|bot|>") # too
chat_template = "{{ bos_token }}{% for message in messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% if message['role'] == 'user' %}{{ '<|user|>\n' + message['content'] + '<|end|>\n' }}{% elif message['role'] == 'assistant' %}{{ '<|bot|>\n' + message['content'] + '<|end|>\n' }}{% else %}{{ raise_exception('Only user and assistant roles are supported!') }}{% endif %}{% endfor %}{{ eos_token }}" # this is where you define the chat template, so you can go crazy here. Something a lot of people do, for whatever reason, is add seemingly random newline characters
tokenizer.chat_template = chat_template
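To make the template's output concrete, here is a pure-Python mock of the same Jinja logic (an illustrative sketch only; the real rendering is done by `apply_chat_template`):

```python
def render_chat(messages, bos_token="<s>", eos_token="</s>"):
    """Mimic the Jinja chat template above in plain Python."""
    out = bos_token
    for i, message in enumerate(messages):
        # roles must alternate, starting with user (even indices = user turns)
        if (message["role"] == "user") != (i % 2 == 0):
            raise ValueError("Conversation roles must alternate user/assistant/...")
        if message["role"] == "user":
            out += "<|user|>\n" + message["content"] + "<|end|>\n"
        elif message["role"] == "assistant":
            out += "<|bot|>\n" + message["content"] + "<|end|>\n"
        else:
            raise ValueError("Only user and assistant roles are supported!")
    return out + eos_token

print(render_chat([
    {"role": "user", "content": "Why is the sky blue?"},
    {"role": "assistant", "content": "Due to Rayleigh scattering."},
]))
```

Every turn is framed by a role token and closed with `<|end|>`, so at inference time you can stop generation on `<|end|>`.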
Now, at last, we define the model.
from transformers import LlamaConfig, LlamaForCausalLM
print(tokenizer.apply_chat_template([{"role": "user", "content": "Why is the sky blue?"}, {"role": "assistant", "content": "Due to rayleigh scattering."}], tokenize=False)) # test to see if the chat template worked
config = LlamaConfig(
    vocab_size=len(tokenizer),  # use len(tokenizer), not tokenizer.vocab_size, so added special tokens are counted
    hidden_size=512,
    intermediate_size=1024,
    num_hidden_layers=8,
    num_attention_heads=8,
    max_position_embeddings=512,
    rms_norm_eps=1e-6,
    initializer_range=0.02,
    use_cache=True,
    pad_token_id=tokenizer.pad_token_id,
    bos_token_id=tokenizer.bos_token_id,
    eos_token_id=tokenizer.eos_token_id,
    tie_word_embeddings=False,
)
model = LlamaForCausalLM(config)
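With these hyperparameters you can estimate the model size by hand (a back-of-the-envelope sketch assuming a vocabulary of about 3200; the exact count comes from `model.num_parameters()`):

```python
def llama_param_count(vocab, hidden, intermediate, layers):
    """Rough parameter count for a Llama-style decoder (no biases, untied embeddings)."""
    embeddings = vocab * hidden              # input embedding table
    attention = 4 * hidden * hidden          # q, k, v, o projections
    mlp = 3 * hidden * intermediate          # gate, up, down projections
    norms = 2 * hidden                       # two RMSNorm weights per layer
    per_layer = attention + mlp + norms
    final_norm = hidden
    lm_head = vocab * hidden                 # untied output head
    return embeddings + layers * per_layer + final_norm + lm_head

total = llama_param_count(vocab=3200, hidden=512, intermediate=1024, layers=8)
print(f"{total:,}")  # 24,257,024 -- roughly a 24M-parameter model
```

At roughly 24M parameters this is a toy-scale model, which is what makes training it from scratch on a single GPU feasible.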
4. Format the Dataset
We define a function to format the prompts in the dataset, then map it over the dataset.
def format_prompts(examples):
    """
    Define the format for your dataset.
    This function should return a dictionary with a 'text' key containing the formatted prompts.
    """
    pass
dataset = dataset.map(format_prompts, batched=True)
print(dataset['text'][2]) # Check to see if the fields were formatted correctly
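As an illustration, here is one possible `format_prompts` for a dataset with hypothetical `prompt` and `response` columns (the column names are assumptions; adapt them to your data). It builds the same chat format as the template above:

```python
def format_prompts(examples):
    """Build a 'text' column by rendering each prompt/response pair in the chat format."""
    texts = []
    for prompt, response in zip(examples["prompt"], examples["response"]):
        texts.append(
            "<s><|user|>\n" + prompt + "<|end|>\n"
            "<|bot|>\n" + response + "<|end|>\n</s>"
        )
    return {"text": texts}

batch = {"prompt": ["Why is the sky blue?"], "response": ["Due to Rayleigh scattering."]}
print(format_prompts(batch)["text"][0])
```

You could equally call `tokenizer.apply_chat_template(..., tokenize=False)` inside the loop; the plain string version is shown here only to keep the example self-contained.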
5. Set the Training Arguments
Define the training arguments.
from transformers import TrainingArguments
args = TrainingArguments(
    output_dir="your_output_dir",
    num_train_epochs=4,  # replace this, depending on your dataset
    per_device_train_batch_size=16,
    learning_rate=1e-4,
    optim="sgd"  # sgd, my beloved
)
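These numbers imply a simple step budget. A quick sanity check, assuming a hypothetical dataset of 10,000 examples on a single device with no gradient accumulation:

```python
import math

def total_training_steps(num_examples, per_device_batch_size, num_epochs, num_devices=1):
    """Optimizer steps the trainer will run (no gradient accumulation)."""
    steps_per_epoch = math.ceil(num_examples / (per_device_batch_size * num_devices))
    return steps_per_epoch * num_epochs

print(total_training_steps(10_000, 16, 4))  # 625 steps/epoch -> 2500 total
```

Knowing the total step count up front helps when choosing warmup and logging intervals.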
6. Create the Trainer
We create an `SFTTrainer` instance from the `trl` library.
from trl import SFTTrainer
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=dataset,
    dataset_text_field='text',
    max_seq_length=512
)
7. Train the Model
Finally, we start the training process.
trainer.train()
8. Push the Trained Model to the Hugging Face Hub
Once training is finished, you can push the trained model to the Hugging Face Hub with:
trainer.push_to_hub()
This uploads the model to your Hugging Face Hub account for future use or sharing.
And that's it!