Fine-Tuning SigLIP 2 for Single-Label Image Classification

SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features
SigLIP 2 introduces new multilingual vision-language encoders that build on the success of the original SigLIP. In this second iteration, the original image-text training objective is extended by unifying several independently developed techniques into one recipe, including captioning-based pretraining and self-supervised losses such as self-distillation and masked prediction.
The following script fine-tunes the SigLIP 2 base model on a single-label image classification problem.
Fine-Tuning Notebooks
Last updated: July 2025
| Notebook Name | Description | Notebook Link |
| --- | --- | --- |
| notebook-siglip2-finetune-type1 | Train/test splits | Download |
| notebook-siglip2-finetune-type2 | Train split only | Download |
This notebook demonstrates how to fine-tune SigLIP 2, a powerful multilingual vision-language model, for single-label image classification. The fine-tuning recipe combines advanced techniques such as captioning-based pretraining, self-distillation, and masked prediction, unified into a streamlined training pipeline. The workflow supports datasets in both structured and unstructured forms, making it applicable across a wide range of domains and resource levels.
The notebook covers two data-handling scenarios. In the first, the dataset ships with predefined train and test splits, enabling conventional supervised learning and evaluation of generalization. In the second, only a training split is available; in that case the training split is either partially held out for validation or reused entirely for evaluation. This flexibility supports experimentation in constrained or domain-specific settings where standard test annotations may not exist. A minimal sketch of carving a validation split out of a train-only dataset is shown right below.
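For the train-only scenario, a minimal sketch (with a placeholder dataset name) of holding out part of the training split for validation might look like this:
from datasets import load_dataset

# Hold out 10% of the training split for validation (placeholder dataset name).
dataset = load_dataset("--your--dataset--goes--here--", split="train")
splits = dataset.train_test_split(test_size=0.1, shuffle=True, seed=42)
train_data, val_data = splits["train"], splits["test"]
print(len(train_data), len(val_data))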
Example
For example, I built an Augmented Waste Classifier with SigLIP 2 that classifies the following categories: battery, biological, cardboard, clothes, glass, metal, paper, plastic, shoes, and trash.
https://huggingface.co/prithivMLmods/Augmented-Waste-Classifier-SigLIP2
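As an illustration only (not part of the training script below), a checkpoint published this way can be used for inference through the image-classification pipeline; the label names are read from the model's config, and the image path here is hypothetical:
from transformers import pipeline

# Inference sketch using the published waste-classifier checkpoint.
classifier = pipeline("image-classification", model="prithivMLmods/Augmented-Waste-Classifier-SigLIP2")
print(classifier("example_waste_image.jpg"))  # top predicted categories with scores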
A step-by-step walkthrough of the training script. 👇
ID-to-Label Mapping
!pip install -qqq datasets
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("YOUR-DATASET-HERE")
# Extract unique labels
labels = dataset["train"].features["label"].names
# Create id2label mapping
id2label = {str(i): label for i, label in enumerate(labels)}
# Print the mapping
print(id2label)
[ OR ]
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("---your--dataset--here--")
# Extract unique label values (assuming it's a string field)
labels = sorted(set(example["label"] for example in dataset["train"]))
# Create id2label mapping
id2label = {str(i): label for i, label in enumerate(labels)}
# Print the mapping
print(id2label)
Fine-Tuning
1. Install the Required Packages
We start by installing all the required packages: libraries for evaluation, data handling, model training, image processing, and other utilities. If you are running in Google Colab, some of these installations can be skipped.
!pip install -q evaluate datasets accelerate
!pip install -q git+https://github.com/huggingface/transformers.git
!pip install -q huggingface_hub
!pip install -q imbalanced-learn
#Skip the installation if your runtime is in Google Colab notebooks.
!pip install -q numpy
#Skip the installation if your runtime is in Google Colab notebooks.
!pip install -q pillow==11.0.0
#Skip the installation if your runtime is in Google Colab notebooks.
!pip install -q torchvision
#Skip the installation if your runtime is in Google Colab notebooks.
!pip install -q matplotlib
!pip install -q scikit-learn
#Skip the installation if your runtime is in Google Colab notebooks.
Explanation
These commands install `evaluate`, `datasets`, `transformers`, and `huggingface_hub` for model training and evaluation; `imbalanced-learn` for handling imbalanced datasets; and other common libraries such as `numpy`, `pillow`, `torchvision`, `matplotlib`, and `scikit-learn`.
2. Import Libraries and Configure Warnings
Next, we import the standard libraries and configure warnings to keep the output clean.
import warnings
warnings.filterwarnings("ignore")
Explanation
We import the `warnings` module and set it to ignore warnings so that the notebook output stays clean.
3. Additional Imports
Here we import the modules needed for data manipulation, model training, and image preprocessing.
import gc
import numpy as np
import pandas as pd
import itertools
from collections import Counter
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix, classification_report, f1_score
from imblearn.over_sampling import RandomOverSampler
import evaluate
from datasets import Dataset, Image, ClassLabel
from transformers import (
TrainingArguments,
Trainer,
#....................................................................
#Retain this part if you are working on ViTForImageClassification.
ViTImageProcessor,
ViTForImageClassification,
#....................................................................
DefaultDataCollator
)
from transformers import AutoModel, AutoProcessor
from transformers.image_utils import load_image
import torch
from torch.utils.data import DataLoader
from torchvision.transforms import (
CenterCrop,
Compose,
Normalize,
RandomRotation,
RandomResizedCrop,
RandomHorizontalFlip,
RandomAdjustSharpness,
Resize,
ToTensor
)
Explanation
This block brings in a range of libraries and functions:
- Data handling: `numpy`, `pandas`, and `itertools`
- Visualization: `matplotlib.pyplot`
- Metrics: accuracy, F1 score, and confusion matrix from scikit-learn
- Oversampling: `RandomOverSampler` for balancing classes
- Datasets and Transformers: for loading datasets and training the model
- Torch and torchvision: for tensor handling and image transformations
4. Handle Image Metadata and Truncated Images
For image handling outside Colab, we import additional modules to work with image metadata and enable loading of truncated images.
#.......................................................................
#Retain this part if you're working outside Google Colab notebooks.
from PIL import Image, ExifTags
#.......................................................................
from PIL import Image as PILImage
from PIL import ImageFile
# Enable loading truncated images
ImageFile.LOAD_TRUNCATED_IMAGES = True
Explanation
This part uses the Python Imaging Library (PIL) to handle images. Enabling the loading of truncated images ensures that the script does not crash when it encounters slightly corrupted image files.
5. Load and Prepare the Dataset
We load a predefined dataset and extract the file paths and labels into lists, then build a DataFrame for further processing.
from datasets import load_dataset
dataset = load_dataset("--your--dataset--goes--here--", split="train")
from pathlib import Path
file_names = []
labels = []
for example in dataset:
file_path = str(example['image'])
label = example['label']
file_names.append(file_path)
labels.append(label)
print(len(file_names), len(labels))
Explanation
- The dataset is loaded with Hugging Face's `load_dataset` function.
- We iterate over the dataset to extract the image file paths and labels.
- Finally, we print the number of images and labels to verify the extraction.
6. Create a DataFrame and Balance the Dataset
The next step converts the lists into a pandas DataFrame and balances the classes with oversampling.
df = pd.DataFrame.from_dict({"image": file_names, "label": labels})
print(df.shape)
df.head()
df['label'].unique()
y = df[['label']]
df = df.drop(['label'], axis=1)
ros = RandomOverSampler(random_state=83)
df, y_resampled = ros.fit_resample(df, y)
del y
df['label'] = y_resampled
del y_resampled
gc.collect()
Explanation
- A DataFrame is created from the file names and labels.
- We inspect the DataFrame's shape, head, and unique labels.
- To handle imbalanced classes, `RandomOverSampler` is used to balance the dataset.
- Garbage collection is invoked to free unused memory.
An optional class-count check after oversampling is sketched right below.
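As an optional sanity check (not part of the original script), you can count the samples per class after oversampling; every class should now contain roughly the same number of rows:
from collections import Counter

# Verify class balance on the oversampled DataFrame created above.
print("Class counts after oversampling:", Counter(df["label"]))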
7. Inspect Dataset Images
We look at a couple of images from the dataset to make sure they load correctly.
dataset[0]["image"]
dataset[9999]["image"]
Explanation
This simple check confirms that images can be accessed by index, ensuring the dataset loaded correctly.
8. Work with a Subset of Labels
We print a subset of the labels and then define the full list of labels used for classification.
labels_subset = labels[:5]
print(labels_subset)
labels_list = ['example_label_1', 'example_label_2']
label2id, id2label = {}, {}
for i, label in enumerate(labels_list):
label2id[label] = i
id2label[i] = label
ClassLabels = ClassLabel(num_classes=len(labels_list), names=labels_list)
print("Mapping of IDs to Labels:", id2label, '\n')
print("Mapping of Labels to IDs:", label2id)
Explanation
- A small subset of labels is printed to preview the data.
- `labels_list` is defined for the single-label classification problem.
- Two dictionaries (`label2id` and `id2label`) map labels to numeric IDs and vice versa.
- A `ClassLabel` object is created to standardize the dataset's label format.
9. Map and Cast Labels
We convert the string labels to integer values and cast the dataset's label column.
def map_label2id(example):
example['label'] = ClassLabels.str2int(example['label'])
return example
dataset = dataset.map(map_label2id, batched=True)
dataset = dataset.cast_column('label', ClassLabels)
Explanation
- The `map_label2id` function converts label strings to integers.
- The `map` call applies this conversion to the entire dataset.
- Finally, the label column is cast with the `ClassLabel` object for consistency.
10. Split the Dataset
The dataset is split into training and test subsets with a 60/40 ratio while keeping the classes stratified.
dataset = dataset.train_test_split(test_size=0.4, shuffle=True, stratify_by_column="label")
train_data = dataset['train']
test_data = dataset['test']
Explanation
- The dataset is split into a training set and a test set.
- Stratification keeps the class proportions consistent across both splits.
11. Set Up the Model and Processor
We load the SigLIP 2 model and its corresponding image processor. The processor provides the preprocessing parameters for the images.
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
# Use AutoImageProcessor instead of AutoProcessor
model_str = "google/siglip2-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(model_str)
# Extract preprocessing parameters
image_mean, image_std = processor.image_mean, processor.image_std
size = processor.size["height"]
Explanation
- The SigLIP 2 model is loaded from Hugging Face by its model identifier.
- `AutoImageProcessor` retrieves the preprocessing configuration (mean, standard deviation, and image size) needed to normalize the inputs.
12. Define Data Transformations
We define the training and validation image transformations with `torchvision.transforms`. They include resizing, random rotation, sharpness adjustment, and normalization.
# Define training transformations
_train_transforms = Compose([
Resize((size, size)),
RandomRotation(90),
RandomAdjustSharpness(2),
ToTensor(),
Normalize(mean=image_mean, std=image_std)
])
# Define validation transformations
_val_transforms = Compose([
Resize((size, size)),
ToTensor(),
Normalize(mean=image_mean, std=image_std)
])
Explanation
- The training transformations add data augmentation (rotation and sharpness adjustment) to improve generalization.
- The validation transformations only resize and normalize the images, keeping evaluation consistent.
A quick shape check on a single transformed image is sketched right below.
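As an optional sanity check (not part of the original script), you can apply the training transform to one sample and confirm the tensor shape; this assumes `train_data` still exposes PIL images under the "image" key at this point:
# Apply the training transform to a single image and inspect the result.
sample_image = train_data[0]["image"].convert("RGB")
sample_tensor = _train_transforms(sample_image)
print(sample_tensor.shape)  # expected: torch.Size([3, size, size]), e.g. [3, 224, 224]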
13. Apply the Transformations to the Dataset
We define functions that apply the transformations above and set them as the transform functions for the train and test datasets.
# Apply transformations to dataset
def train_transforms(examples):
examples['pixel_values'] = [_train_transforms(image.convert("RGB")) for image in examples['image']]
return examples
def val_transforms(examples):
examples['pixel_values'] = [_val_transforms(image.convert("RGB")) for image in examples['image']]
return examples
# Assuming train_data and test_data are loaded datasets
train_data.set_transform(train_transforms)
test_data.set_transform(val_transforms)
Explanation
- The `train_transforms` and `val_transforms` functions convert the images to RGB and apply the corresponding transformations.
- These functions are then set on the train and test datasets so that each image is preprocessed on the fly during training and evaluation.
14. Create the Data Collator
A custom collate function prepares training batches by stacking the images and labels into tensors.
def collate_fn(examples):
pixel_values = torch.stack([example["pixel_values"] for example in examples])
labels = torch.tensor([example['label'] for example in examples])
return {"pixel_values": pixel_values, "labels": labels}
Explanation
This function collects individual examples into batches by stacking the processed image tensors and converting the labels into a tensor. It is passed to the Trainer to ensure correct batching. A quick batch-shape check with a plain DataLoader is sketched right below.
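As an optional sanity check (not part of the original script), you can pull one batch through a plain DataLoader with the same collate function and confirm the tensor shapes before handing everything to the Trainer:
# Run a single batch through the collate function and inspect the shapes.
sanity_loader = DataLoader(train_data, batch_size=4, collate_fn=collate_fn)
batch = next(iter(sanity_loader))
print(batch["pixel_values"].shape)  # expected: torch.Size([4, 3, 224, 224]) for the 224px checkpoint
print(batch["labels"])              # a tensor of 4 integer class IDs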
15. Initialize the Model
We load the SigLIP 2 model for image classification, configure the label mappings, and print the number of trainable parameters.
model = SiglipForImageClassification.from_pretrained(model_str, num_labels=len(labels_list))
model.config.id2label = id2label
model.config.label2id = label2id
print(model.num_parameters(only_trainable=True) / 1e6)
Explanation
- The SigLIP 2 model is instantiated for image classification with the specified number of classes.
- The label mappings are assigned to the model configuration.
- The number of trainable parameters (in millions) is printed to give a sense of the model size.
16. Define the Metric and Compute Function
We load the accuracy metric and define a function that computes accuracy during evaluation.
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
predictions = eval_pred.predictions
label_ids = eval_pred.label_ids
predicted_labels = predictions.argmax(axis=1)
acc_score = accuracy.compute(predictions=predicted_labels, references=label_ids)['accuracy']
return {
"accuracy": acc_score
}
Explanation
- The `evaluate` library is used to load the accuracy metric.
- The `compute_metrics` function computes accuracy by comparing the predicted labels with the ground-truth labels.
If you also want a macro F1 score reported during evaluation, a possible variant is sketched right below.
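A possible drop-in variant (not part of the original script) that reports macro F1 alongside accuracy, reusing the `f1_score` import from scikit-learn above:
# Variant of compute_metrics that also reports macro F1.
def compute_metrics(eval_pred):
    predictions = eval_pred.predictions
    label_ids = eval_pred.label_ids
    predicted_labels = predictions.argmax(axis=1)
    acc_score = accuracy.compute(predictions=predicted_labels, references=label_ids)['accuracy']
    macro_f1 = f1_score(label_ids, predicted_labels, average='macro')
    return {"accuracy": acc_score, "macro_f1": macro_f1}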
17. Set Up the Training Arguments
The training parameters are defined with the `TrainingArguments` class. They include the batch sizes, learning rate, number of epochs, logging details, and more.
args = TrainingArguments(
output_dir="siglip2-finetune",
logging_dir='./logs',
evaluation_strategy="epoch",
learning_rate=2e-6,
per_device_train_batch_size=32,
per_device_eval_batch_size=8,
num_train_epochs=6,
weight_decay=0.02,
warmup_steps=50,
remove_unused_columns=False,
save_strategy='epoch',
load_best_model_at_end=True,
save_total_limit=1,
report_to="none"
)
Explanation
These arguments configure:
- The output directory and logging.
- The evaluation strategy (evaluate after every epoch).
- The learning rate, batch sizes, number of training epochs, and weight decay.
- Loading the best model at the end and limiting how many checkpoints are kept.
18. Initialize the Trainer
We now initialize the Hugging Face Trainer with the model, training arguments, datasets, data collator, metrics function, and the image processor (passed as the tokenizer).
trainer = Trainer(
model,
args,
train_dataset=train_data,
eval_dataset=test_data,
data_collator=collate_fn,
compute_metrics=compute_metrics,
tokenizer=processor,
)
Explanation
The `Trainer` is the main interface for training and evaluation. All the required components (model, arguments, datasets, collate function, and metrics) are passed to it.
19. Evaluate, Train, and Predict
Before training, we evaluate the model on the test set. Training then starts, followed by a second evaluation and a prediction pass.
trainer.evaluate()
trainer.train()
trainer.evaluate()
outputs = trainer.predict(test_data)
print(outputs.metrics)
Explanation
- The initial evaluation provides a baseline before fine-tuning.
- After training, the model is evaluated again.
- Predictions on the test set are obtained and the resulting metrics are printed.
20. Compute Additional Metrics and Plot the Confusion Matrix
We compute accuracy and the F1 score, plot a confusion matrix (when the number of classes is small), and print a full classification report.
y_true = outputs.label_ids
y_pred = outputs.predictions.argmax(1)
def plot_confusion_matrix(cm, classes, title='Confusion Matrix', cmap=plt.cm.Blues, figsize=(10, 8)):
plt.figure(figsize=figsize)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.0f'
thresh = cm.max() / 2.0
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
plt.show()
accuracy = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average='macro')
print(f"Accuracy: {accuracy:.4f}")
print(f"F1 Score: {f1:.4f}")
if len(labels_list) <= 150:
cm = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(cm, labels_list, figsize=(8, 6))
print()
print("Classification report:")
print()
print(classification_report(y_true, y_pred, target_names=labels_list, digits=4))
Explanation
- The predictions are compared with the true labels to compute accuracy and the macro F1 score.
- If the number of classes is small, a custom function plots the confusion matrix.
- Finally, a detailed classification report is printed.
21. Save the Model and Upload It to the Hugging Face Hub
The fine-tuned model is saved locally and then uploaded to the Hugging Face Hub.
trainer.save_model()
#upload to hub
from huggingface_hub import notebook_login
notebook_login()
from huggingface_hub import HfApi
api = HfApi()
repo_id = f"prithivMLmods/siglip2-finetune"
try:
api.create_repo(repo_id)
print(f"Repo {repo_id} created")
except Exception:
print(f"Repo {repo_id} already exists")
api.upload_folder(
folder_path="siglip2-finetune/",
path_in_repo=".",
repo_id=repo_id,
repo_type="model",
revision="main"
)
Explanation
- The `trainer.save_model()` call saves the best fine-tuned model.
- `notebook_login()` starts the Hugging Face Hub login.
- `HfApi` is used to create the repository (or confirm that it exists).
- Finally, the model folder is uploaded to the repository.
A minimal sketch of loading the uploaded model back for inference follows right below.
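As a minimal inference sketch (not part of the original script), the fine-tuned checkpoint can be loaded back from the Hub; the repo name and image path below are placeholders for your own upload:
import torch
from PIL import Image as PILImage
from transformers import AutoImageProcessor, SiglipForImageClassification

# Load the fine-tuned checkpoint from the Hub (placeholder repo name).
repo_id = "your-username/siglip2-finetune"
processor = AutoImageProcessor.from_pretrained(repo_id)
model = SiglipForImageClassification.from_pretrained(repo_id)

image = PILImage.open("example.jpg").convert("RGB")  # hypothetical local image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred_id = logits.argmax(-1).item()
print(model.config.id2label[pred_id])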
Final Script
Below is the complete script, without the dotted separators.
!pip install -q evaluate datasets accelerate
!pip install -q git+https://github.com/huggingface/transformers.git
!pip install -q huggingface_hub
!pip install -q imbalanced-learn
#Skip the installation if your runtime is in Google Colab notebooks.
!pip install -q numpy
#Skip the installation if your runtime is in Google Colab notebooks.
!pip install -q pillow==11.0.0
#Skip the installation if your runtime is in Google Colab notebooks.
!pip install -q torchvision
#Skip the installation if your runtime is in Google Colab notebooks.
!pip install -q matplotlib
!pip install -q scikit-learn
#Skip the installation if your runtime is in Google Colab notebooks.
import warnings
warnings.filterwarnings("ignore")
import gc
import numpy as np
import pandas as pd
import itertools
from collections import Counter
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix, classification_report, f1_score
from imblearn.over_sampling import RandomOverSampler
import evaluate
from datasets import Dataset, Image, ClassLabel
from transformers import (
TrainingArguments,
Trainer,
ViTImageProcessor,
ViTForImageClassification,
DefaultDataCollator
)
from transformers import AutoModel, AutoProcessor
from transformers.image_utils import load_image
import torch
from torch.utils.data import DataLoader
from torchvision.transforms import (
CenterCrop,
Compose,
Normalize,
RandomRotation,
RandomResizedCrop,
RandomHorizontalFlip,
RandomAdjustSharpness,
Resize,
ToTensor
)
from PIL import Image, ExifTags
from PIL import Image as PILImage
from PIL import ImageFile
# Enable loading truncated images
ImageFile.LOAD_TRUNCATED_IMAGES = True
from datasets import load_dataset
dataset = load_dataset("--your--dataset--goes--here--", split="train")
from pathlib import Path
file_names = []
labels = []
for example in dataset:
file_path = str(example['image'])
label = example['label']
file_names.append(file_path)
labels.append(label)
print(len(file_names), len(labels))
df = pd.DataFrame.from_dict({"image": file_names, "label": labels})
print(df.shape)
df.head()
df['label'].unique()
y = df[['label']]
df = df.drop(['label'], axis=1)
ros = RandomOverSampler(random_state=83)
df, y_resampled = ros.fit_resample(df, y)
del y
df['label'] = y_resampled
del y_resampled
gc.collect()
dataset[0]["image"]
dataset[9999]["image"]
labels_subset = labels[:5]
print(labels_subset)
labels_list = ['example_label_1', 'example_label_2']
label2id, id2label = {}, {}
for i, label in enumerate(labels_list):
label2id[label] = i
id2label[i] = label
ClassLabels = ClassLabel(num_classes=len(labels_list), names=labels_list)
print("Mapping of IDs to Labels:", id2label, '\n')
print("Mapping of Labels to IDs:", label2id)
def map_label2id(example):
example['label'] = ClassLabels.str2int(example['label'])
return example
dataset = dataset.map(map_label2id, batched=True)
dataset = dataset.cast_column('label', ClassLabels)
dataset = dataset.train_test_split(test_size=0.4, shuffle=True, stratify_by_column="label")
train_data = dataset['train']
test_data = dataset['test']
from transformers import AutoImageProcessor
from transformers import SiglipForImageClassification
# Use AutoImageProcessor instead of AutoProcessor
model_str = "google/siglip2-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(model_str)
# Extract preprocessing parameters
image_mean, image_std = processor.image_mean, processor.image_std
size = processor.size["height"]
# Define training transformations
_train_transforms = Compose([
Resize((size, size)),
RandomRotation(90),
RandomAdjustSharpness(2),
ToTensor(),
Normalize(mean=image_mean, std=image_std)
])
# Define validation transformations
_val_transforms = Compose([
Resize((size, size)),
ToTensor(),
Normalize(mean=image_mean, std=image_std)
])
# Apply transformations to dataset
def train_transforms(examples):
examples['pixel_values'] = [_train_transforms(image.convert("RGB")) for image in examples['image']]
return examples
def val_transforms(examples):
examples['pixel_values'] = [_val_transforms(image.convert("RGB")) for image in examples['image']]
return examples
# Assuming train_data and test_data are loaded datasets
train_data.set_transform(train_transforms)
test_data.set_transform(val_transforms)
def collate_fn(examples):
pixel_values = torch.stack([example["pixel_values"] for example in examples])
labels = torch.tensor([example['label'] for example in examples])
return {"pixel_values": pixel_values, "labels": labels}
model = SiglipForImageClassification.from_pretrained(model_str, num_labels=len(labels_list))
model.config.id2label = id2label
model.config.label2id = label2id
print(model.num_parameters(only_trainable=True) / 1e6)
accuracy = evaluate.load("accuracy")
def compute_metrics(eval_pred):
predictions = eval_pred.predictions
label_ids = eval_pred.label_ids
predicted_labels = predictions.argmax(axis=1)
acc_score = accuracy.compute(predictions=predicted_labels, references=label_ids)['accuracy']
return {
"accuracy": acc_score
}
args = TrainingArguments(
output_dir="siglip2-finetune",
logging_dir='./logs',
evaluation_strategy="epoch",
learning_rate=2e-6,
per_device_train_batch_size=32,
per_device_eval_batch_size=8,
num_train_epochs=6,
weight_decay=0.02,
warmup_steps=50,
remove_unused_columns=False,
save_strategy='epoch',
load_best_model_at_end=True,
save_total_limit=1,
report_to="none"
)
trainer = Trainer(
model,
args,
train_dataset=train_data,
eval_dataset=test_data,
data_collator=collate_fn,
compute_metrics=compute_metrics,
tokenizer=processor,
)
trainer.evaluate()
trainer.train()
trainer.evaluate()
outputs = trainer.predict(test_data)
print(outputs.metrics)
y_true = outputs.label_ids
y_pred = outputs.predictions.argmax(1)
def plot_confusion_matrix(cm, classes, title='Confusion Matrix', cmap=plt.cm.Blues, figsize=(10, 8)):
plt.figure(figsize=figsize)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.0f'
thresh = cm.max() / 2.0
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt), horizontalalignment="center", color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
plt.show()
accuracy = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average='macro')
print(f"Accuracy: {accuracy:.4f}")
print(f"F1 Score: {f1:.4f}")
if len(labels_list) <= 150:
cm = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(cm, labels_list, figsize=(8, 6))
print()
print("Classification report:")
print()
print(classification_report(y_true, y_pred, target_names=labels_list, digits=4))
trainer.save_model()
#upload to hub
from huggingface_hub import notebook_login
notebook_login()
from huggingface_hub import HfApi
api = HfApi()
repo_id = f"prithivMLmods/siglip2-finetune"
try:
api.create_repo(repo_id)
print(f"Repo {repo_id} created")
except Exception:
print(f"Repo {repo_id} already exists")
api.upload_folder(
folder_path="siglip2-finetune/",
path_in_repo=".",
repo_id=repo_id,
repo_type="model",
revision="main"
)
The final script for fine-tuning SigLIP 2 end-to-end on the full dataset (no train/test split).
!pip install -q evaluate datasets accelerate
!pip install -q git+https://github.com/huggingface/transformers.git
!pip install -q huggingface_hub
!pip install -q imbalanced-learn
!pip install -q numpy
!pip install -q pillow==11.0.0
!pip install -q torchvision
!pip install -q matplotlib
!pip install -q scikit-learn
import warnings
warnings.filterwarnings("ignore")
import gc
import numpy as np
import pandas as pd
import itertools
from collections import Counter
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix, classification_report, f1_score
from imblearn.over_sampling import RandomOverSampler
import evaluate
from datasets import Dataset, Image, ClassLabel, load_dataset
from transformers import (
TrainingArguments,
Trainer,
ViTImageProcessor,
ViTForImageClassification,
DefaultDataCollator,
AutoModel,
AutoProcessor
)
from transformers.image_utils import load_image
import torch
from torch.utils.data import DataLoader
from torchvision.transforms import (
CenterCrop,
Compose,
Normalize,
RandomRotation,
RandomResizedCrop,
RandomHorizontalFlip,
RandomAdjustSharpness,
Resize,
ToTensor
)
from PIL import Image, ExifTags, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
# -----------------------
# Load and preprocess dataset
# -----------------------
dataset = load_dataset("--your--dataset--goes--here--", split="train")
# Build DataFrame from dataset (for oversampling)
file_names = []
labels = []
for example in dataset:
file_path = str(example['image'])
label = example['label']
file_names.append(file_path)
labels.append(label)
print(len(file_names), len(labels))
df = pd.DataFrame.from_dict({"image": file_names, "label": labels})
print("DataFrame shape:", df.shape)
print(df.head())
print("Unique labels:", df['label'].unique())
# Oversample to balance classes
y = df[['label']]
df_no_label = df.drop(['label'], axis=1)
ros = RandomOverSampler(random_state=83)
df_resampled, y_resampled = ros.fit_resample(df_no_label, y)
df_resampled['label'] = y_resampled
df = df_resampled # use the oversampled DataFrame
del y, y_resampled, df_no_label
gc.collect()
# Define label mappings (adjust labels_list as needed)
labels_list = ['example_label_1', 'example_label_2']
label2id = {label: i for i, label in enumerate(labels_list)}
id2label = {i: label for i, label in enumerate(labels_list)}
ClassLabels = ClassLabel(num_classes=len(labels_list), names=labels_list)
print("Mapping of IDs to Labels:", id2label)
print("Mapping of Labels to IDs:", label2id)
# Update dataset with label mapping
def map_label2id(example):
example['label'] = ClassLabels.str2int(example['label'])
return example
dataset = dataset.map(map_label2id, batched=True)
dataset = dataset.cast_column('label', ClassLabels)
# Use the full dataset for fine-tuning (no train-test split)
full_data = dataset
# -----------------------
# Define image processing and transformations
# -----------------------
from transformers import AutoImageProcessor, SiglipForImageClassification
model_str = "google/siglip2-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(model_str)
# Extract parameters from processor
image_mean, image_std = processor.image_mean, processor.image_std
size = processor.size["height"]
# Define training and validation transforms
_train_transforms = Compose([
Resize((size, size)),
RandomRotation(90),
RandomAdjustSharpness(2),
ToTensor(),
Normalize(mean=image_mean, std=image_std)
])
_val_transforms = Compose([
Resize((size, size)),
ToTensor(),
Normalize(mean=image_mean, std=image_std)
])
def train_transforms(examples):
examples['pixel_values'] = [_train_transforms(image.convert("RGB")) for image in examples['image']]
return examples
def val_transforms(examples):
examples['pixel_values'] = [_val_transforms(image.convert("RGB")) for image in examples['image']]
return examples
# Create training and evaluation datasets with different transforms
train_data = full_data.with_transform(train_transforms)
eval_data = full_data.with_transform(val_transforms)
def collate_fn(examples):
pixel_values = torch.stack([example["pixel_values"] for example in examples])
labels = torch.tensor([example['label'] for example in examples])
return {"pixel_values": pixel_values, "labels": labels}
# -----------------------
# Load model and set configuration
# -----------------------
model = SiglipForImageClassification.from_pretrained(model_str, num_labels=len(labels_list))
model.config.id2label = id2label
model.config.label2id = label2id
print("Trainable parameters (in millions):", model.num_parameters(only_trainable=True) / 1e6)
# -----------------------
# Define compute_metrics
# -----------------------
accuracy_metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
predictions = eval_pred.predictions
label_ids = eval_pred.label_ids
predicted_labels = predictions.argmax(axis=1)
acc_score = accuracy_metric.compute(predictions=predicted_labels, references=label_ids)['accuracy']
return {"accuracy": acc_score}
# -----------------------
# Set up TrainingArguments and Trainer
# -----------------------
args = TrainingArguments(
output_dir="siglip2-finetune-full",
logging_dir='./logs',
evaluation_strategy="epoch", # Evaluate at the end of each epoch on eval_data
learning_rate=2e-6,
per_device_train_batch_size=32,
per_device_eval_batch_size=8,
num_train_epochs=6,
weight_decay=0.02,
warmup_steps=50,
remove_unused_columns=False,
save_strategy='epoch',
load_best_model_at_end=True,
save_total_limit=1,
report_to="none"
)
trainer = Trainer(
model=model,
args=args,
train_dataset=train_data,
eval_dataset=eval_data,
data_collator=collate_fn,
compute_metrics=compute_metrics,
tokenizer=processor,
)
# -----------------------
# Fine-tuning: Evaluation, Training, and Prediction
# -----------------------
# Optional evaluation before training
trainer.evaluate()
# Fine-tune the model on the full dataset
trainer.train()
# Evaluate after training
trainer.evaluate()
# Get predictions and compute metrics
outputs = trainer.predict(eval_data)
print("Prediction metrics:", outputs.metrics)
y_true = outputs.label_ids
y_pred = outputs.predictions.argmax(1)
# -----------------------
# Plot confusion matrix and print classification report
# -----------------------
def plot_confusion_matrix(cm, classes, title='Confusion Matrix', cmap=plt.cm.Blues, figsize=(10, 8)):
plt.figure(figsize=figsize)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.0f'
thresh = cm.max() / 2.0
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
plt.show()
acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average='macro')
print(f"Accuracy: {acc:.4f}")
print(f"F1 Score: {f1:.4f}")
if len(labels_list) <= 150:
cm = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(cm, labels_list, figsize=(8, 6))
print("\nClassification Report:")
print(classification_report(y_true, y_pred, target_names=labels_list, digits=4))
# -----------------------
# Save and upload the model
# -----------------------
trainer.save_model()
from huggingface_hub import notebook_login, HfApi
notebook_login()
api = HfApi()
repo_id = "prithivMLmods/siglip2-finetune-full"
try:
api.create_repo(repo_id)
print(f"Repo {repo_id} created")
except Exception as e:
print(f"Repo {repo_id} already exists or could not be created: {e}")
api.upload_folder(
folder_path="siglip2-finetune-full/",
path_in_repo=".",
repo_id=repo_id,
repo_type="model",
revision="main"
)
Updated Plot Visuals
import matplotlib.pyplot as plt
import seaborn as sns
def plot_confusion_matrix(cm, classes, title='Confusion Matrix', figsize=(12, 10)):
plt.figure(figsize=figsize)
# Use a minimalistic style with the 'viridis' colormap
ax = sns.heatmap(cm, annot=True, fmt='g', cmap='viridis',
xticklabels=classes, yticklabels=classes,
cbar=True, annot_kws={"size":8})
ax.set_xlabel('Predicted label', fontsize=12)
ax.set_ylabel('True label', fontsize=12)
ax.set_title(title, fontsize=14)
plt.xticks(rotation=90)
plt.yticks(rotation=0)
plt.tight_layout()
plt.show()
# Example usage remains the same:
acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average='macro')
print(f"Accuracy: {acc:.4f}")
print(f"F1 Score: {f1:.4f}")
if len(labels_list) <= 150:
cm = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(cm, labels_list, figsize=(12, 10))
print("\nClassification Report:")
print(classification_report(y_true, y_pred, target_names=labels_list, digits=4))
This complete script fine-tunes the SigLIP 2 model on a single-label classification problem (deepfake image quality in the original example). Every part, from package installation to model upload, is explained in detail so you can follow each step of the process.
Compatibility with the latest Transformers release [or] the development version
The `evaluation_strategy` and `save_strategy` arguments have been removed for compatibility with your Transformers version.
If your Transformers version is ≤ 4.49.0, use the older training script above. For Transformers ≥ 4.50.0, follow the updated script below.
Last updated: April 15, 2025
import warnings
warnings.filterwarnings("ignore")
import gc
import numpy as np
import pandas as pd
import itertools
from collections import Counter
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score, roc_auc_score, confusion_matrix, classification_report, f1_score
from imblearn.over_sampling import RandomOverSampler
import evaluate
from datasets import Dataset, Image, ClassLabel, load_dataset
from transformers import (
TrainingArguments,
Trainer,
ViTImageProcessor,
ViTForImageClassification,
DefaultDataCollator,
AutoModel,
AutoProcessor
)
from transformers.image_utils import load_image
import torch
from torch.utils.data import DataLoader
from torchvision.transforms import (
CenterCrop,
Compose,
Normalize,
RandomRotation,
RandomResizedCrop,
RandomHorizontalFlip,
RandomAdjustSharpness,
Resize,
ToTensor
)
from PIL import Image, ExifTags, ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
# -----------------------
# Load and preprocess dataset
# -----------------------
dataset = load_dataset("--your--dataset--goes--here--", split="train")
# Build DataFrame from dataset (for oversampling)
file_names = []
labels = []
for example in dataset:
file_path = str(example['image'])
label = example['label']
file_names.append(file_path)
labels.append(label)
print(len(file_names), len(labels))
df = pd.DataFrame.from_dict({"image": file_names, "label": labels})
print("DataFrame shape:", df.shape)
print(df.head())
print("Unique labels:", df['label'].unique())
# Oversample to balance classes
y = df[['label']]
df_no_label = df.drop(['label'], axis=1)
ros = RandomOverSampler(random_state=83)
df_resampled, y_resampled = ros.fit_resample(df_no_label, y)
df_resampled['label'] = y_resampled
df = df_resampled # use the oversampled DataFrame
del y, y_resampled, df_no_label
gc.collect()
# Define label mappings (adjust labels_list as needed)
labels_list = ['example_label_1', 'example_label_2']
label2id = {label: i for i, label in enumerate(labels_list)}
id2label = {i: label for i, label in enumerate(labels_list)}
ClassLabels = ClassLabel(num_classes=len(labels_list), names=labels_list)
print("Mapping of IDs to Labels:", id2label)
print("Mapping of Labels to IDs:", label2id)
# Update dataset with label mapping
def map_label2id(example):
example['label'] = ClassLabels.str2int(example['label'])
return example
dataset = dataset.map(map_label2id, batched=True)
dataset = dataset.cast_column('label', ClassLabels)
# Use the full dataset for fine-tuning (no train-test split)
full_data = dataset
# -----------------------
# Define image processing and transformations
# -----------------------
from transformers import AutoImageProcessor, SiglipForImageClassification
model_str = "google/siglip2-base-patch16-224"
processor = AutoImageProcessor.from_pretrained(model_str)
# Extract parameters from processor
image_mean, image_std = processor.image_mean, processor.image_std
size = processor.size["height"]
# Define training and validation transforms
_train_transforms = Compose([
Resize((size, size)),
RandomRotation(90),
RandomAdjustSharpness(2),
ToTensor(),
Normalize(mean=image_mean, std=image_std)
])
_val_transforms = Compose([
Resize((size, size)),
ToTensor(),
Normalize(mean=image_mean, std=image_std)
])
def train_transforms(examples):
examples['pixel_values'] = [_train_transforms(image.convert("RGB")) for image in examples['image']]
return examples
def val_transforms(examples):
examples['pixel_values'] = [_val_transforms(image.convert("RGB")) for image in examples['image']]
return examples
# Create training and evaluation datasets with different transforms
train_data = full_data.with_transform(train_transforms)
eval_data = full_data.with_transform(val_transforms)
def collate_fn(examples):
pixel_values = torch.stack([example["pixel_values"] for example in examples])
labels = torch.tensor([example['label'] for example in examples])
return {"pixel_values": pixel_values, "labels": labels}
# -----------------------
# Load model and set configuration
# -----------------------
model = SiglipForImageClassification.from_pretrained(model_str, num_labels=len(labels_list))
model.config.id2label = id2label
model.config.label2id = label2id
print("Trainable parameters (in millions):", model.num_parameters(only_trainable=True) / 1e6)
# -----------------------
# Define compute_metrics
# -----------------------
accuracy_metric = evaluate.load("accuracy")
def compute_metrics(eval_pred):
predictions = eval_pred.predictions
label_ids = eval_pred.label_ids
predicted_labels = predictions.argmax(axis=1)
acc_score = accuracy_metric.compute(predictions=predicted_labels, references=label_ids)['accuracy']
return {"accuracy": acc_score}
# -----------------------
# Set up TrainingArguments and Trainer
# -----------------------
args = TrainingArguments(
output_dir="siglip2-finetune-full",
logging_dir='./logs',
learning_rate=2e-6,
per_device_train_batch_size=32,
per_device_eval_batch_size=8,
num_train_epochs=6,
weight_decay=0.02,
warmup_steps=50,
remove_unused_columns=False,
load_best_model_at_end=True,
save_total_limit=1,
report_to="none"
)
trainer = Trainer(
model=model,
args=args,
train_dataset=train_data,
eval_dataset=eval_data,
data_collator=collate_fn,
compute_metrics=compute_metrics,
tokenizer=processor,
)
# -----------------------
# Fine-tuning: Evaluation, Training, and Prediction
# -----------------------
# Optional evaluation before training
print("Evaluating before training...")
trainer.evaluate()
# Fine-tune the model on the full dataset
print("Starting training...")
trainer.train()
# Evaluate after training
print("Evaluating after training...")
trainer.evaluate()
# Get predictions and compute metrics
outputs = trainer.predict(eval_data)
print("Prediction metrics:", outputs.metrics)
y_true = outputs.label_ids
y_pred = outputs.predictions.argmax(axis=1)
# -----------------------
# Plot confusion matrix and print classification report
# -----------------------
def plot_confusion_matrix(cm, classes, title='Confusion Matrix', cmap=plt.cm.Blues, figsize=(10, 8)):
plt.figure(figsize=figsize)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=90)
plt.yticks(tick_marks, classes)
fmt = '.0f'
thresh = cm.max() / 2.0
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
plt.show()
acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average='macro')
print(f"Accuracy: {acc:.4f}")
print(f"F1 Score: {f1:.4f}")
if len(labels_list) <= 150:
cm = confusion_matrix(y_true, y_pred)
plot_confusion_matrix(cm, labels_list, figsize=(8, 6))
print("\nClassification Report:")
print(classification_report(y_true, y_pred, target_names=labels_list, digits=4))
# -----------------------
# Save and upload the model
# -----------------------
trainer.save_model()
Fix Notes
- Unsupported arguments removed: the `evaluation_strategy="epoch"` and `save_strategy='epoch'` arguments were removed from `TrainingArguments`. This avoids errors about unexpected keyword arguments while still allowing manual evaluation and model saving.
- Manual evaluation calls: calling `trainer.evaluate()` before and after training, together with the prediction step, ensures you still obtain evaluation metrics during fine-tuning.
If your installed version supports the renamed arguments, a possible alternative is sketched right below.
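As an assumption to verify against your installed version: recent Transformers releases accept `eval_strategy` in place of the older `evaluation_strategy` keyword, so if your version supports it you can restore per-epoch evaluation and checkpointing roughly like this:
# Possible alternative for newer Transformers versions (verify the accepted
# keyword names against your installed TrainingArguments signature).
args = TrainingArguments(
    output_dir="siglip2-finetune-full",
    logging_dir='./logs',
    eval_strategy="epoch",      # renamed from evaluation_strategy in newer releases
    save_strategy="epoch",
    learning_rate=2e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    num_train_epochs=6,
    weight_decay=0.02,
    warmup_steps=50,
    remove_unused_columns=False,
    load_best_model_at_end=True,
    save_total_limit=1,
    report_to="none"
)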
Computer Vision and Pattern Recognition
Paper Reference

| Title | Link (Abstract) | Link (PDF) |
| --- | --- | --- |
| SigLIP 2: Multilingual Vision-Language Encoders | arXiv:2502.14786 | |
Details and Advantages
SigLIP 2 is built on Vision Transformers and remains backward compatible with earlier versions, so users can swap in the new model weights without overhauling their entire system. Unlike a conventional contrastive loss, SigLIP 2 uses a sigmoid loss, which enables more balanced learning of both global and local features.
In addition to the sigmoid loss, SigLIP 2 incorporates a decoder-based loss that strengthens tasks such as image captioning and region-specific localization, leading to better performance on dense prediction tasks. The model also includes a MAP head that pools features from the image and text components, ensuring robust and detailed representations.
A key innovation in SigLIP 2 is the *NaFlex variant*, which supports native aspect ratios by processing images at various resolutions with a single checkpoint. This approach preserves the spatial integrity of images, making it particularly useful for applications such as document understanding and optical character recognition (OCR).
Furthermore, self-distillation and masked prediction improve the quality of local features. By training the model to predict masked patches, it learns to attend to the fine details that matter for tasks such as segmentation and depth estimation. This refined design allows even smaller models to achieve strong performance through advanced distillation techniques.
Conclusion
SigLIP 2 represents a carefully engineered and thoughtful advance in vision-language models. By combining established techniques with deliberate innovations, it effectively addresses key challenges such as fine-grained localization, dense prediction, and multilingual support. Moving beyond a purely contrastive loss, SigLIP 2 incorporates self-supervised objectives, yielding a more balanced and nuanced representation of visual data. Its careful handling of native aspect ratios through the NaFlex variant further strengthens its applicability to real-world settings where preserving image integrity matters.
The model's inclusion of multilingual data and de-biasing measures shows an awareness of the diverse contexts in which it operates. This approach not only improves performance across a range of benchmarks but also aligns with broader ethical considerations in AI. Ultimately, the release of SigLIP 2 marks an important step for vision-language research. It offers a versatile, backward-compatible framework that integrates smoothly into existing systems. With its ability to deliver reliable performance across diverse tasks while prioritizing fairness and inclusivity, SigLIP 2 sets a strong baseline for future advances in the field.
Happy fine-tuning! 🤗