The Ettin Suite: State-of-the-Art Paired Encoders and Decoders

Published July 16, 2025

Summary

What happens if you apply ModernBERT's training recipe to a decoder-only model? It turns out you get a state-of-the-art decoder language model that beats Llama 3.2 1B and SmolLM2!

We introduce a new, fully open-data training recipe that reproduces the encoder-only model ModernBERT (and actually surpasses it!). We then apply the exact same recipe to decoder-only models. For the first time, we have trained two state-of-the-art models in an identical setup with two different training objectives: masked language modeling (MLM) and causal language modeling (CLM).

This blog post introduces Ettin, the first suite of state-of-the-art paired encoder-only and decoder-only models (from 17M to 1B parameters) trained with exactly the same data (2 trillion tokens), architecture, and training recipe. Ettin enables true apples-to-apples comparisons between the two architectures and provides the best-performing open-data models in both categories. We also go further and explore whether you can get a competitive encoder from a decoder, and vice versa.

If you'd like to try out these models, some boilerplate code is provided at the end of this post!

Attention patterns comparison between encoder and decoder models

Encoders vs. Decoders: The Architecture Divide

The LLM community has largely converged on decoder-only models like GPT, Llama, and Qwen. Their generative capabilities are impressive, but this focus has drawn attention away from other families of models, such as encoder-only models like BERT.

Yet BERT-like encoder models remain the workhorses of production systems for classification, retrieval, and embedding tasks. They are faster, more memory-efficient, and often more accurate on discriminative tasks. The key difference lies in their attention patterns:

  • Encoder models use bidirectional attention, allowing every token to "see" all other tokens in the sequence (full visibility).
  • Decoder models use causal attention, where each token can only "see" the tokens before it, enabling autoregressive generation (sketched below).
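
To make the difference concrete, here is a minimal PyTorch sketch of the two attention masks (illustrative only; it is not taken from Ettin's training code):

import torch

seq_len = 5

# Encoder-style (bidirectional) mask: every token can attend to every other token.
bidirectional_mask = torch.ones(seq_len, seq_len, dtype=torch.bool)

# Decoder-style (causal) mask: token i can only attend to tokens 0..i.
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

print(bidirectional_mask.int())  # all ones
print(causal_mask.int())         # lower-triangular ones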

Despite rapid innovation on decoder models, encoder development stalled until recent efforts such as ModernBERT modernized them. But which architecture is better? Previous comparisons of encoders and decoders used different datasets, architectures, and training recipes, making it hard to tell.

Ettin, named after the two-headed giant of Norse mythology, provides a controlled comparison by training both architectures on the same data, with the same model shapes, and the same training recipe. The only differences are the attention pattern and the training objective!

Training Recipe: Modern Techniques for Both Architectures

We build on ModernBERT's training recipe, which borrowed modern techniques from decoder-only models and brought them to encoder training. This provides a solid foundation for training both architectures.

Model Sizes

We train six model sizes, ranging from 17M to 1B parameters. This lets us study scaling effects and gives you a range of models to choose from: whether you need a blazing-fast on-device model or a larger, more capable (but slower) one, we've got you covered!

Sizes of Ettin models

A Three-Phase Training Process

We use a comprehensive three-phase training approach to maximize performance:

Phase 1 - Pre-training (1.7T tokens): We start with a diverse mixture of high-quality data sources, training on shorter contexts (1024 tokens) to build solid foundational knowledge.

Phase 2 - Context extension (250B tokens): We increase the context length to 8K tokens using higher-quality filtered data, enabling the models to handle longer documents and more complex relationships.

Phase 3 - Decay (100B tokens): We finish training on premium data sources, including scientific papers, textbooks, and curated content, while gradually decaying the learning rate.
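
For reference, the schedule above can be summarized in a small configuration sketch (the token counts come from the description above; the field names, and the assumption that the decay phase keeps the 8K context, are ours, not Ettin's actual training config):

# Illustrative summary of the three-phase schedule; not Ettin's actual training config.
training_phases = [
    {"phase": "pre-training",      "tokens": 1.7e12, "context_length": 1024},
    {"phase": "context extension", "tokens": 250e9,  "context_length": 8192},
    {"phase": "decay",             "tokens": 100e9,  "context_length": 8192},  # 8K context assumed
]

total_tokens = sum(p["tokens"] for p in training_phases)
print(f"Total tokens: {total_tokens / 1e12:.2f}T")  # roughly the 2T tokens quoted above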

Modern Architecture Components

Our encoder models get all of ModernBERT's speed advantages, making them much faster than previous generations of encoders.

Data Sources and Quality

Unlike ModernBERT, all of our training data is public and reproducible.

Data used to train Ettin models

You can continue training these models on new data, or propose new recipes to push the results even further!

Encoder Results: Beating ModernBERT

Our encoder models outperform ModernBERT across all tasks and model sizes, while using fully open training data. Since we provide several sizes, you can now use ModernBERT-style models at smaller scales (great for on-device use or fast inference), or crush the competition with a 1B-parameter encoder.

Encoder performance comparison showing Ettin models beating ModernBERT

Decoder Results: Beating Llama 3.2 and SmolLM2

Applying the same training recipe to decoder models produces equally impressive results, with our models outperforming or matching existing baselines such as Llama 3.2 and SmolLM2.

Decoder performance comparison showing Ettin models beating Llama 3.2 and SmolLM2

The gains are especially pronounced on knowledge-intensive tasks like SciQ, reflecting the strength of our high-quality training data mixture. These results show that our training recipe produces genuinely strong models in both architectural paradigms.

A Fair Fight: Encoders vs. Decoders Head-to-Head

For the first time, we can fairly compare encoder and decoder architectures trained with identical data and training recipes. The results reveal fundamental architectural advantages that persist even when everything else is controlled for.

Encoder vs decoder comparison across model sizes and tasks

Architecture-Specific Advantages Persist

The results show clear patterns:

Encoders dominate classification and retrieval: On MNLI classification, a 150M-parameter encoder (89.2) even outperforms a 400M-parameter decoder (88.2). On retrieval, the gap is smaller but still notable, especially when the decoder has not been trained with MNTP.

Decoders excel at generation: Decoders keep a consistent edge on generative tasks, and the gap actually widens as model size increases.

Size is not always decisive: A 400M-parameter encoder beats a 1B-parameter decoder on classification tasks, while a 400M-parameter decoder beats a 1B-parameter encoder on generative tasks.

Cross-Objective Training Falls Short

Given the lack of new encoder models, works like LLM2Vec have proposed continuing to pre-train decoders with MLM. We can now test how well this strategy works!

We switched objectives and continued training our models on the opposite objective for an additional 50B tokens. Here is what we found:

  • Decoder-to-encoder: still generally lags behind native encoders on classification/retrieval tasks.
  • Encoder-to-decoder: much worse than native decoders, especially at larger sizes. This is likely because the encoders were trained with MLM rather than MNTP (masked next-token prediction), which LLM2Vec (and our decoder-to-encoder recipe) recommends.

This suggests that the architecture choice itself matters, not just the training objective.
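
To illustrate the difference between the two masked objectives, here is a toy sketch (our own, not LLM2Vec's code) showing where the label for a masked token is placed under MLM versus MNTP:

import torch

# Toy example: a 5-token sequence where position 2 has been replaced by [MASK].
input_ids = torch.tensor([[101, 2054, 103, 2003, 102]])  # illustrative ids; 103 = [MASK]
original_token = 2307  # id of the token that was masked out (illustrative)
masked_pos = 2

labels_mlm = torch.full_like(input_ids, -100)   # -100 = ignored by the loss
labels_mntp = torch.full_like(input_ids, -100)

# MLM: the masked token is predicted at the masked position itself.
labels_mlm[0, masked_pos] = original_token

# MNTP: the masked token is predicted at the *previous* position, matching the
# causal "predict the next token" convention used by decoders.
labels_mntp[0, masked_pos - 1] = original_token

print(labels_mlm)   # label sits at position 2
print(labels_mntp)  # label sits at position 1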

Beyond Performance: Understanding Model Behavior

Because the training data is identical, we can study how the different objectives affect what the models learn. For example, analyzing gender bias with the WinoGender benchmark reveals:

  • Encoder models prefer gender-neutral pronouns more often (60%+ neutral pronouns vs. 30%+ for decoders).
  • Both architectures show a male-skewed bias, with decoders slightly more so.
  • Cross-objective training shifts the bias patterns in measurable ways.

This opens the door to systematic studies of how training objectives shape model behavior, not just accuracy metrics.
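
As an example of what such an analysis can look like, here is a hedged sketch that probes pronoun preferences with an Ettin encoder (the template and the pronoun set are illustrative, not the exact WinoGender evaluation setup):

from transformers import pipeline

# Probe pronoun preferences on a WinoGender-style template (illustrative template).
fill = pipeline("fill-mask", model="jhu-clsp/ettin-encoder-150m")

template = "The engineer told the client that [MASK] would finish the design by Friday."

# Restrict scoring to a small, assumed pronoun set and compare the probabilities.
for prediction in fill(template, targets=["he", "she", "they"]):
    print(prediction["token_str"], round(prediction["score"], 4))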

Usage Examples

You can use these models with just a few lines of code!

Encoders

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the encoder with its masked language modeling head
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-encoder-150m")
model = AutoModelForMaskedLM.from_pretrained("jhu-clsp/ettin-encoder-150m")

def predict_masked_token(text):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # Get predictions for the [MASK] tokens
    mask_indices = torch.where(inputs["input_ids"] == tokenizer.mask_token_id)
    predictions = outputs.logits[mask_indices]

    # Get the top 5 predictions
    top_tokens = torch.topk(predictions, 5, dim=-1)
    return [tokenizer.decode(token) for token in top_tokens.indices[0]]

# Example
masked_text = "The capital of France is [MASK]."
predictions = predict_masked_token(masked_text)
print(f"Predictions: {predictions}")

For classification and retrieval tasks, use the encoder models. You may also want to use versions fine-tuned for those tasks.
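
For example, here is a minimal sketch for getting sentence embeddings out of the base encoder with mean pooling (the pooling choice is ours; the fine-tuned retrieval checkpoints may use a different setup):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-encoder-150m")
model = AutoModel.from_pretrained("jhu-clsp/ettin-encoder-150m")

sentences = ["What is the capital of France?", "Paris is the capital of France."]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq, dim)

# Mean-pool over non-padding tokens to get one vector per sentence.
mask = inputs["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(f"Cosine similarity: {similarity.item():.3f}")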

Decoders

For text generation tasks, use the decoder models:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load decoder for generation
tokenizer = AutoTokenizer.from_pretrained("jhu-clsp/ettin-decoder-150m")
model = AutoModelForCausalLM.from_pretrained("jhu-clsp/ettin-decoder-150m")

# Generate text
prompt = "The future of artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(inputs.input_ids, max_length=50, do_sample=True, temperature=0.7)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)

Fine-tuning Examples

Encoders

Click to see how to fine-tune this into a dense embedding model using Sentence Transformers
import argparse

from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.evaluation import TripletEvaluator
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

def main():
    # parse the lr & model name
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=8e-5)
    parser.add_argument("--model_name", type=str, default="jhu-clsp/ettin-encoder-150m")
    args = parser.parse_args()
    lr = args.lr
    model_name = args.model_name
    model_shortname = model_name.split("/")[-1]

    # 1. Load a model to finetune
    model = SentenceTransformer(model_name)

    # 2. Load a dataset to finetune on
    dataset = load_dataset(
        "sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1",
        "triplet-hard",
        split="train",
    )
    dataset_dict = dataset.train_test_split(test_size=1_000, seed=12)
    train_dataset = dataset_dict["train"].select(range(1_250_000))
    eval_dataset = dataset_dict["test"]

    # 3. Define a loss function
    loss = CachedMultipleNegativesRankingLoss(model, mini_batch_size=16)  # Increase mini_batch_size if you have enough VRAM

    run_name = f"{model_shortname}-DPR-{lr}"
    # 4. (Optional) Specify training arguments
    args = SentenceTransformerTrainingArguments(
        # Required parameter:
        output_dir=f"output/{model_shortname}/{run_name}",
        # Optional training parameters:
        num_train_epochs=1,
        per_device_train_batch_size=512,
        per_device_eval_batch_size=512,
        warmup_ratio=0.05,
        fp16=False,  # Set to False if GPU can't handle FP16
        bf16=True,  # Set to True if GPU supports BF16
        batch_sampler=BatchSamplers.NO_DUPLICATES,  # (Cached)MultipleNegativesRankingLoss benefits from no duplicates
        learning_rate=lr,
        # Optional tracking/debugging parameters:
        save_strategy="steps",
        save_steps=500,
        save_total_limit=2,
        logging_steps=500,
        run_name=run_name,  # Used in `wandb`, `tensorboard`, `neptune`, etc. if installed
    )

    # 5. (Optional) Create an evaluator & evaluate the base model
    dev_evaluator = TripletEvaluator(
        anchors=eval_dataset["query"],
        positives=eval_dataset["positive"],
        negatives=eval_dataset["negative"],
        name="msmarco-co-condenser-dev",
    )
    dev_evaluator(model)

    # 6. Create a trainer & train
    trainer = SentenceTransformerTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
        loss=loss,
        evaluator=dev_evaluator,
    )
    trainer.train()

    # 7. (Optional) Evaluate the trained model on the evaluator after training
    dev_evaluator(model)

    # 8. Save the model
    model.save_pretrained(f"output/{model_shortname}/{run_name}/final")

    # 9. (Optional) Push it to the Hugging Face Hub
    model.push_to_hub(run_name, private=False)

if __name__ == "__main__":
    main()
Click to see how to fine-tune this into a multi-vector embedding model using PyLate
from datasets import load_dataset
from pylate import losses, models, utils
from sentence_transformers import (
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

def main():
    # Load the datasets required for knowledge distillation (train, queries, documents)
    train = load_dataset(
        path="lightonai/ms-marco-en-bge",
        name="train",
    )

    queries = load_dataset(
        path="lightonai/ms-marco-en-bge",
        name="queries",
    )

    documents = load_dataset(
        path="lightonai/ms-marco-en-bge",
        name="documents",
    )

    # Set the transformation to load the documents/queries texts using the corresponding ids on the fly
    train.set_transform(
        utils.KDProcessing(queries=queries, documents=documents).transform,
    )

    # Define the base model, training parameters, and output directory
    num_train_epochs = 1
    lr = 8e-5
    batch_size = 16
    accum_steps = 1
    model_name = "jhu-clsp/ettin-encoder-150m"
    model_shortname = model_name.split("/")[-1]

    # Set the run name for logging and output directory
    run_name = f"{model_shortname}-colbert-KD-{lr}"
    output_dir = f"output/{model_shortname}/{run_name}"

    # Initialize the ColBERT model from the base model
    model = models.ColBERT(model_name_or_path=model_name)

    # Configure the training arguments (e.g., epochs, batch size, learning rate)
    args = SentenceTransformerTrainingArguments(
        output_dir=output_dir,
        num_train_epochs=num_train_epochs,
        per_device_train_batch_size=batch_size,
        fp16=False,  # Set to False if you get an error that your GPU can't run on FP16
        bf16=True,  # Set to True if you have a GPU that supports BF16
        run_name=run_name,
        logging_steps=10,
        learning_rate=lr,
        gradient_accumulation_steps=accum_steps,
        warmup_ratio=0.05,
    )

    # Use the Distillation loss function for training
    train_loss = losses.Distillation(model=model)

    # Initialize the trainer
    trainer = SentenceTransformerTrainer(
        model=model,
        args=args,
        train_dataset=train,
        loss=train_loss,
        data_collator=utils.ColBERTCollator(tokenize_fn=model.tokenize),
    )

    # Start the training process
    trainer.train()

    model.save_pretrained(f"{output_dir}/final")

if __name__ == "__main__":
    main()
Click to see how to fine-tune this into a sparse retrieval model using Sentence Transformers
import logging

from datasets import load_dataset

from sentence_transformers import (
    SparseEncoder,
    SparseEncoderModelCardData,
    SparseEncoderTrainer,
    SparseEncoderTrainingArguments,
)
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator
from sentence_transformers.sparse_encoder.losses import SparseMultipleNegativesRankingLoss, SpladeLoss
from sentence_transformers.training_args import BatchSamplers

logging.basicConfig(format="%(asctime)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO)

# 1. Load a model to finetune with 2. (Optional) model card data
model = SparseEncoder(
    "jhu-clsp/ettin-encoder-150m",
    model_card_data=SparseEncoderModelCardData(
        language="en",
        license="apache-2.0",
    )
)

# 3. Load a dataset to finetune on
full_dataset = load_dataset("sentence-transformers/natural-questions", split="train").select(range(100_000))
dataset_dict = full_dataset.train_test_split(test_size=1_000, seed=12)
train_dataset = dataset_dict["train"]
eval_dataset = dataset_dict["test"]

# 4. Define a loss function
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model),
    query_regularizer_weight=5e-5,
    document_regularizer_weight=3e-5,
)

# 5. (Optional) Specify training arguments
run_name = "splade-distilbert-base-uncased-nq"
args = SparseEncoderTrainingArguments(
    # Required parameter:
    output_dir=f"models/{run_name}",
    # Optional training parameters:
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,  # Set to False if you get an error that your GPU can't run on FP16
    bf16=False,  # Set to True if you have a GPU that supports BF16
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # MultipleNegativesRankingLoss benefits from no duplicate samples in a batch
    # Optional tracking/debugging parameters:
    eval_strategy="steps",
    eval_steps=1000,
    save_strategy="steps",
    save_steps=1000,
    save_total_limit=2,
    logging_steps=200,
    run_name=run_name,  # Will be used in W&B if `wandb` is installed
)

# 6. (Optional) Create an evaluator & evaluate the base model
dev_evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"], batch_size=16)

# 7. Create a trainer & train
trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
    evaluator=dev_evaluator,
)
trainer.train()

# 8. Evaluate the model performance again after training
dev_evaluator(model)

# 9. Save the trained model
model.save_pretrained(f"models/{run_name}/final")

# 10. (Optional) Push it to the Hugging Face Hub
model.push_to_hub(run_name)
Click to see how to fine-tune this into a reranker using Sentence Transformers
import logging
import traceback

import torch
from datasets import load_dataset

from sentence_transformers import SentenceTransformer
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderModelCardData,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.evaluation import (
    CrossEncoderNanoBEIREvaluator,
    CrossEncoderRerankingEvaluator,
)
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss
from sentence_transformers.evaluation import SequentialEvaluator
from sentence_transformers.util import mine_hard_negatives

# Set the log level to INFO to get more information
logging.basicConfig(format="%(asctime)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO)


def main():
    model_name = "jhu-clsp/ettin-encoder-150m"

    train_batch_size = 64
    num_epochs = 1
    num_hard_negatives = 5  # How many hard negatives should be mined for each question-answer pair

    # 1a. Load a model to finetune with 1b. (Optional) model card data
    model = CrossEncoder(
        model_name,
        model_card_data=CrossEncoderModelCardData(
            language="en",
            license="apache-2.0",
        ),
    )
    print("Model max length:", model.max_length)
    print("Model num labels:", model.num_labels)

    # 2a. Load the GooAQ dataset: https://huggingface.co/datasets/sentence-transformers/gooaq
    logging.info("Read the gooaq training dataset")
    full_dataset = load_dataset("sentence-transformers/gooaq", split="train").select(range(100_000))
    dataset_dict = full_dataset.train_test_split(test_size=1_000, seed=12)
    train_dataset = dataset_dict["train"]
    eval_dataset = dataset_dict["test"]
    logging.info(train_dataset)
    logging.info(eval_dataset)

    # 2b. Modify our training dataset to include hard negatives using a very efficient embedding model
    embedding_model = SentenceTransformer("sentence-transformers/static-retrieval-mrl-en-v1", device="cpu")
    hard_train_dataset = mine_hard_negatives(
        train_dataset,
        embedding_model,
        num_negatives=num_hard_negatives,  # How many negatives per question-answer pair
        margin=0,  # Similarity between query and negative samples should be x lower than query-positive similarity
        range_min=0,  # Skip the x most similar samples
        range_max=100,  # Consider only the x most similar samples
        sampling_strategy="top",  # Sample the top negatives from the range
        batch_size=4096,  # Use a batch size of 4096 for the embedding model
        output_format="labeled-pair",  # The output format is (query, passage, label), as required by BinaryCrossEntropyLoss
        use_faiss=True,
    )
    logging.info(hard_train_dataset)

    # 2c. (Optionally) Save the hard training dataset to disk
    # hard_train_dataset.save_to_disk("gooaq-hard-train")
    # Load again with:
    # hard_train_dataset = load_from_disk("gooaq-hard-train")

    # 3. Define our training loss.
    # pos_weight is recommended to be set as the ratio between positives to negatives, a.k.a. `num_hard_negatives`
    loss = BinaryCrossEntropyLoss(model=model, pos_weight=torch.tensor(num_hard_negatives))

    # 4a. Define evaluators. We use the CrossEncoderNanoBEIREvaluator, which is a light-weight evaluator for English reranking
    nano_beir_evaluator = CrossEncoderNanoBEIREvaluator(
        dataset_names=["msmarco", "nfcorpus", "nq"],
        batch_size=train_batch_size,
    )

    # 4b. Define a reranking evaluator by mining hard negatives given query-answer pairs
    # We include the positive answer in the list of negatives, so the evaluator can use the performance of the
    # embedding model as a baseline.
    hard_eval_dataset = mine_hard_negatives(
        eval_dataset,
        embedding_model,
        corpus=full_dataset["answer"],  # Use the full dataset as the corpus
        num_negatives=30,  # How many documents to rerank
        batch_size=4096,
        include_positives=True,
        output_format="n-tuple",
        use_faiss=True,
    )
    logging.info(hard_eval_dataset)
    reranking_evaluator = CrossEncoderRerankingEvaluator(
        samples=[
            {
                "query": sample["question"],
                "positive": [sample["answer"]],
                "documents": [sample[column_name] for column_name in hard_eval_dataset.column_names[2:]],
            }
            for sample in hard_eval_dataset
        ],
        batch_size=train_batch_size,
        name="gooaq-dev",
        # Realistic setting: only rerank the positives that the retriever found
        # Set to True to rerank *all* positives
        always_rerank_positives=False,
    )

    # 4c. Combine the evaluators & run the base model on them
    evaluator = SequentialEvaluator([reranking_evaluator, nano_beir_evaluator])
    evaluator(model)

    # 5. Define the training arguments
    short_model_name = model_name if "/" not in model_name else model_name.split("/")[-1]
    run_name = f"reranker-{short_model_name}-gooaq-bce"
    args = CrossEncoderTrainingArguments(
        # Required parameter:
        output_dir=f"models/{run_name}",
        # Optional training parameters:
        num_train_epochs=num_epochs,
        per_device_train_batch_size=train_batch_size,
        per_device_eval_batch_size=train_batch_size,
        learning_rate=2e-5,
        warmup_ratio=0.1,
        fp16=False,  # Set to False if you get an error that your GPU can't run on FP16
        bf16=True,  # Set to True if you have a GPU that supports BF16
        dataloader_num_workers=4,
        load_best_model_at_end=True,
        metric_for_best_model="eval_gooaq-dev_ndcg@10",
        # Optional tracking/debugging parameters:
        eval_strategy="steps",
        eval_steps=1000,
        save_strategy="steps",
        save_steps=1000,
        save_total_limit=2,
        logging_steps=200,
        logging_first_step=True,
        run_name=run_name,  # Will be used in W&B if `wandb` is installed
        seed=12,
    )

    # 6. Create the trainer & start training
    trainer = CrossEncoderTrainer(
        model=model,
        args=args,
        train_dataset=hard_train_dataset,
        loss=loss,
        evaluator=evaluator,
    )
    trainer.train()

    # 7. Evaluate the final model, useful to include these in the model card
    evaluator(model)

    # 8. Save the final model
    final_output_dir = f"models/{run_name}/final"
    model.save_pretrained(final_output_dir)

    # 9. (Optional) save the model to the Hugging Face Hub!
    # It is recommended to run `huggingface-cli login` to log into your Hugging Face account first
    try:
        model.push_to_hub(run_name)
    except Exception:
        logging.error(
            f"Error uploading model to the Hugging Face Hub:\n{traceback.format_exc()}To upload it manually, you can run "
            f"`huggingface-cli login`, followed by loading the model using `model = CrossEncoder({final_output_dir!r})` "
            f"and saving it using `model.push_to_hub('{run_name}')`."
        )


if __name__ == "__main__":
    main()

Decoders

Click to expand the decoder training code

Full training

python trl/scripts/sft.py \
    --model_name_or_path jhu-clsp/ettin-decoder-17m \
    --dataset_name trl-lib/Capybara \
    --learning_rate 2.0e-5 \
    --num_train_epochs 1 \
    --packing \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --gradient_checkpointing \
    --eos_token '<|im_end|>' \
    --eval_strategy steps \
    --eval_steps 100 \
    --output_dir ettin-decoder-17m \
    --push_to_hub

LoRA

python trl/scripts/sft.py \
    --model_name_or_path jhu-clsp/ettin-decoder-17m \
    --dataset_name trl-lib/Capybara \
    --learning_rate 2.0e-4 \
    --num_train_epochs 1 \
    --packing \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 8 \
    --gradient_checkpointing \
    --eos_token '<|im_end|>' \
    --eval_strategy steps \
    --eval_steps 100 \
    --use_peft \
    --lora_r 32 \
    --lora_alpha 16 \
    --output_dir ettin-decoder-17m \
    --push_to_hub

Using sft.py

import argparse

from datasets import load_dataset
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
from transformers.models.auto.modeling_auto import MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES

from trl import (
    ModelConfig,
    ScriptArguments,
    SFTConfig,
    SFTTrainer,
    TrlParser,
    clone_chat_template,
    get_kbit_device_map,
    get_peft_config,
    get_quantization_config,
)


def main(script_args, training_args, model_args):
    ################
    # Model init kwargs & Tokenizer
    ################
    quantization_config = get_quantization_config(model_args)
    model_kwargs = dict(
        revision=model_args.model_revision,
        trust_remote_code=model_args.trust_remote_code,
        attn_implementation=model_args.attn_implementation,
        torch_dtype=model_args.torch_dtype,
        use_cache=False if training_args.gradient_checkpointing else True,
        device_map=get_kbit_device_map() if quantization_config is not None else None,
        quantization_config=quantization_config,
    )

    # Create model
    config = AutoConfig.from_pretrained(model_args.model_name_or_path)
    valid_image_text_architectures = MODEL_FOR_IMAGE_TEXT_TO_TEXT_MAPPING_NAMES.values()

    if config.architectures and any(arch in valid_image_text_architectures for arch in config.architectures):
        from transformers import AutoModelForImageTextToText

        model_kwargs.pop("use_cache", None)  # Image models do not support cache
        model = AutoModelForImageTextToText.from_pretrained(model_args.model_name_or_path, **model_kwargs)
    else:
        model = AutoModelForCausalLM.from_pretrained(model_args.model_name_or_path, **model_kwargs)

    # Create tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
        model_args.model_name_or_path, trust_remote_code=model_args.trust_remote_code, use_fast=True
    )

    # Set default chat template if needed
    if tokenizer.chat_template is None:
        # TODO: source should be passed as an argument
        model, tokenizer = clone_chat_template(model, tokenizer, "Qwen/Qwen3-0.6B")

    ################
    # Dataset
    ################
    dataset = load_dataset(script_args.dataset_name, name=script_args.dataset_config)

    ################
    # Training
    ################
    trainer = SFTTrainer(
        model=model,
        args=training_args,
        train_dataset=dataset[script_args.dataset_train_split],
        eval_dataset=dataset[script_args.dataset_test_split] if training_args.eval_strategy != "no" else None,
        processing_class=tokenizer,
        peft_config=get_peft_config(model_args),
    )

    trainer.train()

    # Save and push to hub
    trainer.save_model(training_args.output_dir)
    if training_args.push_to_hub:
        trainer.push_to_hub(dataset_name=script_args.dataset_name)


def make_parser(subparsers: argparse._SubParsersAction = None):
    dataclass_types = (ScriptArguments, SFTConfig, ModelConfig)
    if subparsers is not None:
        parser = subparsers.add_parser("sft", help="Run the SFT training script", dataclass_types=dataclass_types)
    else:
        parser = TrlParser(dataclass_types)
    return parser


if __name__ == "__main__":
    parser = make_parser()
    # When using the trl cli, this script may be run with additional arguments, corresponding accelerate arguments.
    # To ensure that their parsing does not interfere with the script arguments, parse the arguments with
    # `return_remaining_strings=True`, then ignore the remaining strings.
    script_args, training_args, model_args, _ = parser.parse_args_and_config(return_remaining_strings=True)
    main(script_args, training_args, model_args)

Model Family and Links

The full Ettin suite includes models at six sizes (both encoders and decoders):

Standard models

Research resources

Community

Would it also be possible to add trained sentence transformer versions of these models? That would be really useful (and I don't have the compute to train them from scratch :))

Article author:

Yes, we are working with some people to get trained embedding versions out. Hopefully you won't have to wait too long :)

