X-LoRA
Mixture of LoRA Experts (X-LoRA) is a PEFT method enabling sparse or dense mixture of LoRA experts based on a high-granularity (token, layer, sequence) scalings matrix. This leverages frozen LoRA adapters and a frozen base model to drastically reduce the number of parameters that need to be fine-tuned.
A unique aspect of X-LoRA is its versatility: it can be applied to any transformers base model with LoRA adapters. This means that, despite the mixture-of-experts strategy, no changes to the model code need to be made.
The graphic below demonstrates how the scalings change for different prompts for each token. This highlights the activation of different adapters as the generation progresses and the sequence creates new context.
The abstract from the paper is:
We report a mixture of expert strategy to create fine-tuned large language models using a deep layer-wise token-level approach based on low-rank adaptation (LoRA). Starting with a set of pre-trained LoRA adapters, our gating strategy uses the hidden states to dynamically mix adapted layers, allowing the resulting X-LoRA model to draw upon different capabilities and create never-before-used deep layer-wise combinations to solve tasks. The design is inspired by the biological principles of universality and diversity, where neural network building blocks are reused in different hierarchical manifestations. Hence, the X-LoRA model can be easily implemented for any existing large language model without a need for modifications of the underlying structure. We develop a tailored X-LoRA model that offers scientific capabilities, including forward/inverse analysis tasks and enhanced reasoning capability, focused on biomaterial analysis, protein mechanics, and design. The impact of this work includes access to readily expandable and adaptable models with strong domain knowledge and the capability to integrate across areas of knowledge. Featuring experts in biology, mathematics, reasoning, bio-inspired materials, mechanics and materials, chemistry, protein biophysics, mechanics, and quantum-mechanics based molecular properties, we conduct a series of physics-focused case studies. We examine knowledge recall, protein mechanics forward/inverse tasks, protein design, adversarial agentic modeling including ontological knowledge graph construction, and molecular design. The model is capable not only of making quantitative predictions of nanomechanical properties of proteins or quantum mechanical molecular properties but also reasoning over the results and correctly predicting likely mechanisms that explain distinct molecular behaviors.
Please cite X-LoRA as:
@article{10.1063/5.0203126,
author = {Buehler, Eric L. and Buehler, Markus J.},
title = "{X-LoRA: Mixture of low-rank adapter experts, a flexible framework for large language models with applications in protein mechanics and molecular design}",
journal = {APL Machine Learning},
volume = {2},
number = {2},
pages = {026119},
year = {2024},
month = {05},
abstract = "{We report a mixture of expert strategy to create fine-tuned large language models using a deep layer-wise token-level approach based on low-rank adaptation (LoRA). Starting with a set of pre-trained LoRA adapters, our gating strategy uses the hidden states to dynamically mix adapted layers, allowing the resulting X-LoRA model to draw upon different capabilities and create never-before-used deep layer-wise combinations to solve tasks. The design is inspired by the biological principles of universality and diversity, where neural network building blocks are reused in different hierarchical manifestations. Hence, the X-LoRA model can be easily implemented for any existing large language model without a need for modifications of the underlying structure. We develop a tailored X-LoRA model that offers scientific capabilities, including forward/inverse analysis tasks and enhanced reasoning capability, focused on biomaterial analysis, protein mechanics, and design. The impact of this work includes access to readily expandable and adaptable models with strong domain knowledge and the capability to integrate across areas of knowledge. Featuring experts in biology, mathematics, reasoning, bio-inspired materials, mechanics and materials, chemistry, protein biophysics, mechanics, and quantum-mechanics based molecular properties, we conduct a series of physics-focused case studies. We examine knowledge recall, protein mechanics forward/inverse tasks, protein design, adversarial agentic modeling including ontological knowledge graph construction, and molecular design. The model is capable not only of making quantitative predictions of nanomechanical properties of proteins or quantum mechanical molecular properties but also reasoning over the results and correctly predicting likely mechanisms that explain distinct molecular behaviors.}",
issn = {2770-9019},
doi = {10.1063/5.0203126},
url = {https://doi.org/10.1063/5.0203126},
eprint = {https://pubs.aip.org/aip/aml/article-pdf/doi/10.1063/5.0203126/19964043/026119\_1\_5.0203126.pdf},
}
XLoraConfig
class peft.XLoraConfig
( peft_type: Union = None, auto_mapping: Optional = None, base_model_name_or_path: Optional = None, revision: Optional = None, task_type: Union = None, inference_mode: bool = False, hidden_size: int = None, adapters: dict[str, str] = None, enable_softmax: bool = True, enable_softmax_topk: bool = False, layerwise_scalings: bool = False, xlora_depth: int = 1, xlora_size: int = 2048, xlora_dropout_p: float = 0.2, use_trainable_adapters: bool = False, softmax_temperature: float = 1.0, top_k_lora: Optional[int] = None, scaling_pass_value: float = 0.0, global_scaling_weight: float = 1.0 )
Parameters

- hidden_size (int) — Hidden size of the base model.
- adapters (dict) — Mapping of adapter names to the LoRA adapter IDs, as per PeftModel.load_adapter. They will be automatically loaded, to be used as LoRA experts. When using from_pretrained, pass the new adapters dict as a keyword argument.
- softmax_temperature (float, optional, defaults to 1.0) — Softmax temperature; lower values yield sharper predictions.
- xlora_depth (int, optional, defaults to 1) — Depth of the X-LoRA classifier.
- use_trainable_adapters (bool, optional, defaults to False) — Make the adapters trainable.
This is the configuration class to store the configuration of an XLoraModel. When the config is reloaded, the paths in the adapters field are disregarded in favor of the saved adapters; as such, only the keys matter during loading.
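To make the reloading note concrete, here is a minimal sketch (not from the original document) of restoring a saved X-LoRA model with PeftModel.from_pretrained. The checkpoint and adapter paths are placeholders, and passing the adapters dict as a keyword argument follows the parameter note above; the exact keyword forwarding may vary across PEFT versions.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder base model and paths; substitute your own checkpoints.
base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0",
)

# Only the adapter names (keys) need to match the saved config;
# the paths given here are used instead of the ones stored in it.
xlora_model = PeftModel.from_pretrained(
    base_model,
    "./path/to/saved/xlora/",  # hypothetical X-LoRA checkpoint directory
    adapters={
        "adapter_1": "./path/to/the/checkpoint/",
        "adapter_2": "./path/to/the/checkpoint/",
    },
)
```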
XLoraModel
Creates an X-LoRA (Mixture of LoRA experts) model from a pretrained transformers model. Currently, this X-LoRA implementation only works with models that have a transformer architecture.
The method is described in detail in https://arxiv.org/abs/2402.07148.
Example:
```python
>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoConfig, BitsAndBytesConfig
>>> from peft import XLoraConfig, get_peft_model, prepare_model_for_kbit_training
>>> model_config = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
>>> config = XLoraConfig(
... task_type="CAUSAL_LM",
... hidden_size=model_config.hidden_size,
... xlora_depth=4,
... adapters={
... "adapter_1": "./path/to/the/checkpoint/",
... "adapter_2": "./path/to/the/checkpoint/",
... "adapter_n": "./path/to/the/checkpoint/",
... },
... )
>>> int8_config = BitsAndBytesConfig(load_in_8bit=True)
>>> model = AutoModelForCausalLM.from_pretrained(
... "mistralai/Mistral-7B-Instruct-v0.1",
... trust_remote_code=True,
... attn_implementation="flash_attention_2",
... device_map="cuda:0",
... torch_dtype=torch.bfloat16,
... quantization_config=int8_config,
... )
>>> model = prepare_model_for_kbit_training(model)
>>> xlora_model = get_peft_model(model, config)
```
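As a follow-up that is not part of the original example, a minimal sketch of running generation with the resulting X-LoRA model; the tokenizer matches the base model above and the prompt is purely illustrative.

```python
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
>>> inputs = tokenizer("What is the tensile strength of spider silk?", return_tensors="pt").to("cuda:0")
>>> outputs = xlora_model.generate(**inputs, max_new_tokens=64)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```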
clear_scalings_log
Clear the scalings log.

disable_scalings_logging
Disable scalings logging, without clearing the log.

enable_scalings_logging
Enable scalings logging.

get_bucketed_scalings_log
Returns bucketed scalings, bucketed by seq_len. Each value consists of the positions (the first) and the associated tensors. The positions are paired with the associated tensors and give the position in the scaling log.

get_global_scale_weight
Get the global LoRA weight.

get_latest_scalings
Returns the latest scalings prediction, or None if no scalings have been predicted. The tensor is of shape (batch_size, seq_len, n_layers, n_classes).

get_scalings_log
Returns a shallow copy (only the list itself is copied, not the tensors) of the list containing the scalings log. Editing the list does not change the underlying log. The tensors are of shape (batch_size, seq_len, n_layers, n_classes); the seq_len dim may vary with the input dimension.

set_global_scale_weight
Set the global LoRA weight, a scalar to multiply the output of each LoRA adapter by. This defaults to 1. This is reflected in the config.

set_scaling_pass_value
Set the scaling pass value, the value to set the scalings to during the scaling pass. If the value is None, the scaling pass value will be 1/n where n is the number of adapters.

set_topk_lora
Sparsely select the specified top_k LoRA experts instead of the default dense method. Set to None to use dense selection. This is reflected in the config.
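To tie these methods together, a minimal sketch (assuming xlora_model was created as in the example above and inputs is a tokenized prompt) of logging and inspecting the per-token scalings:

```python
# Record the predicted scalings while generating.
xlora_model.enable_scalings_logging()
outputs = xlora_model.generate(**inputs, max_new_tokens=32)

# Latest prediction: tensor of shape (batch_size, seq_len, n_layers, n_classes), or None.
latest = xlora_model.get_latest_scalings()

# Full log (shallow copy) and a version bucketed by sequence length.
log = xlora_model.get_scalings_log()
bucketed = xlora_model.get_bucketed_scalings_log()

# Optionally switch to sparse top-2 expert selection and adjust the global weight;
# both changes are reflected in the config.
xlora_model.set_topk_lora(2)
xlora_model.set_global_scale_weight(1.0)

# Stop logging and clear the recorded scalings.
xlora_model.disable_scalings_logging()
xlora_model.clear_scalings_log()
```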