Optimization
Transformation
class optimum.fx.optimization.Transformation
< source >( )
A torch.fx graph transformation.
It must implement the transform() method, and be used as a callable.
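As a standalone sketch of what a transform() body typically does, here is a hand-rolled version of the pattern using plain torch.fx (the Square module and the power-to-multiplication rewrite are illustrative only, not part of the library):

```python
import operator

import torch
import torch.fx

# Hypothetical module to trace: its forward contains an x ** 2 node.
class Square(torch.nn.Module):
    def forward(self, x):
        return x ** 2

# A transform() body typically iterates over graph_module.graph.nodes,
# rewrites the matching nodes in place, then lints and recompiles
# (__call__ runs the last two steps when lint_and_recompile=True).
def transform(graph_module: torch.fx.GraphModule) -> torch.fx.GraphModule:
    for node in graph_module.graph.nodes:
        if node.op == "call_function" and node.target is operator.pow and node.args[1] == 2:
            node.target = operator.mul
            node.args = (node.args[0], node.args[0])
    graph_module.graph.lint()
    graph_module.recompile()
    return graph_module

gm = transform(torch.fx.symbolic_trace(Square()))
x = torch.randn(4)
assert torch.allclose(gm(x), x ** 2)  # same result, pow node replaced by mul
```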
__call__
< source >( graph_module: GraphModule lint_and_recompile: bool = True ) → torch.fx.GraphModule
get_transformed_nodes
< source >( graph_module: GraphModule ) → List[torch.fx.Node]
mark_as_transformed
< source >( node: Node )
Marks a node as transformed by this transformation.
transform
< source >( graph_module: GraphModule ) → GraphModule (torch.fx.GraphModule)
transformed
< source >( node: Node ) → bool
Reversible transformation
class optimum.fx.optimization.ReversibleTransformation
< source >( )
A reversible torch.fx graph transformation.
It must implement the transform() and reverse() methods, and be used as a callable.
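The transform()/reverse() pair such a class must provide can be sketched on a plain torch.fx graph (a hypothetical rewrite, not the library's own code): the forward direction rewrites x * 2 into x + x, and the reverse restores the multiplication.

```python
import operator

import torch
import torch.fx

class Doubler(torch.nn.Module):
    def forward(self, x):
        return x * 2

# transform(): rewrite x * 2 into x + x.
def transform(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target is operator.mul and node.args[1] == 2:
            node.target, node.args = operator.add, (node.args[0], node.args[0])
    gm.recompile()
    return gm

# reverse(): restore x + x back to x * 2.
def reverse(gm: torch.fx.GraphModule) -> torch.fx.GraphModule:
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target is operator.add and node.args[0] is node.args[1]:
            node.target, node.args = operator.mul, (node.args[0], 2)
    gm.recompile()
    return gm

gm = torch.fx.symbolic_trace(Doubler())
x = torch.randn(3)
assert torch.allclose(transform(gm)(x), x * 2)  # semantics preserved
assert torch.allclose(reverse(gm)(x), x * 2)    # original graph restored
```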
__call__
< source >( graph_module: GraphModule lint_and_recompile: bool = True reverse: bool = False ) → torch.fx.GraphModule
mark_as_restored
< source >( node: Node )
Marks a node as restored back to its original state.
reverse
< source >( graph_module: GraphModule ) → GraphModule (torch.fx.GraphModule)
optimum.fx.optimization.compose
< source >( *args: Transformation inplace: bool = True )
Parameters
- args (Transformation) — The transformations to compose together.
- inplace (bool, defaults to True) — Whether the resulting transformation should modify the module in place, or create a new graph module.
Composes a list of transformations together.
Example
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import ChangeTrueDivToMulByInverse, MergeLinears, compose
>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
... model,
... input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> composition = compose(ChangeTrueDivToMulByInverse(), MergeLinears())
>>> transformed_model = composition(traced)
Transformations
class optimum.fx.optimization.MergeLinears
< source >( )
A transformation that merges linear layers taking the same input into one big linear layer.
Example
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import MergeLinears
>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
... model,
... input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> transformation = MergeLinears()
>>> transformed_model = transformation(traced)
>>> restored_model = transformation(transformed_model, reverse=True)
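The effect of the merge can be checked numerically, independent of the library (a hand-rolled sketch of the idea, not MergeLinears itself): two linear layers sharing an input are equivalent to one bigger layer whose weights and biases are concatenated along the output dimension.

```python
import torch

torch.manual_seed(0)
linear1 = torch.nn.Linear(4, 3)
linear2 = torch.nn.Linear(4, 5)
x = torch.randn(2, 4)

# Build the merged layer by stacking weights and biases along the output dim.
merged = torch.nn.Linear(4, 3 + 5)
with torch.no_grad():
    merged.weight.copy_(torch.cat([linear1.weight, linear2.weight], dim=0))
    merged.bias.copy_(torch.cat([linear1.bias, linear2.bias], dim=0))

# One matmul now produces both outputs; splitting recovers the originals.
y1, y2 = merged(x).split([3, 5], dim=1)
assert torch.allclose(y1, linear1(x), atol=1e-6)
assert torch.allclose(y2, linear2(x), atol=1e-6)
```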
class optimum.fx.optimization.FuseBiasInLinear
< source >( )
A transformation that fuses the bias into the weight in torch.nn.Linear.
Example
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import FuseBiasInLinear
>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
... model,
... input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> transformation = FuseBiasInLinear()
>>> transformed_model = transformation(traced)
>>> restored_model = transformation(transformed_model, reverse=True)
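The identity behind the fusion can be verified by hand (a numeric sketch of the idea, not the transformation's actual implementation): y = x Wᵀ + b equals [x, 1] [W | b]ᵀ, so the bias becomes one extra column of the weight matrix.

```python
import torch

torch.manual_seed(0)
linear = torch.nn.Linear(4, 3)
x = torch.randn(2, 4)

# Append the bias as an extra weight column, and a constant 1 to the input.
fused_weight = torch.cat([linear.weight, linear.bias.unsqueeze(1)], dim=1)  # (3, 5)
x_aug = torch.cat([x, torch.ones(x.size(0), 1)], dim=1)                     # (2, 5)

assert torch.allclose(linear(x), x_aug @ fused_weight.T, atol=1e-6)
```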
class optimum.fx.optimization.ChangeTrueDivToMulByInverse
< source >( )
A transformation that changes truediv nodes to multiplication-by-inverse nodes when the denominator is static. For example, this is a common pattern for the scaling factor in attention layers.
Example
>>> from transformers import BertModel
>>> from transformers.utils.fx import symbolic_trace
>>> from optimum.fx.optimization import ChangeTrueDivToMulByInverse
>>> model = BertModel.from_pretrained("bert-base-uncased")
>>> traced = symbolic_trace(
... model,
... input_names=["input_ids", "attention_mask", "token_type_ids"],
... )
>>> transformation = ChangeTrueDivToMulByInverse()
>>> transformed_model = transformation(traced)
>>> restored_model = transformation(transformed_model, reverse=True)
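The rewrite itself can be sketched on a plain torch.fx graph (the Scale module and the hand-rolled loop are illustrative, not the library's implementation): a truediv node with a static denominator becomes a mul node with the precomputed inverse.

```python
import operator

import torch
import torch.fx

class Scale(torch.nn.Module):
    def forward(self, x):
        return x / 8.0  # static denominator, e.g. an attention scaling factor

gm = torch.fx.symbolic_trace(Scale())
for node in gm.graph.nodes:
    if node.op == "call_function" and node.target is operator.truediv:
        denominator = node.args[1]
        if isinstance(denominator, (int, float)):  # only rewrite static denominators
            node.target = operator.mul
            node.args = (node.args[0], 1.0 / denominator)
gm.recompile()

x = torch.randn(3)
assert torch.allclose(gm(x), x / 8.0)  # 1/8 is exact in floating point
```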
class optimum.fx.optimization.FuseBatchNorm2dInConv2d
< source >( )
A transformation that fuses an nn.BatchNorm2d following an nn.Conv2d into a single nn.Conv2d. The fusion is performed only when the batch normalization is the sole node consuming the convolution's output.
For example, the fusion is not performed when the convolution's output is also consumed by another node.
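A hypothetical module illustrating such a non-fusable pattern (names are illustrative only): the convolution output feeds both the batch normalization and a later addition, so folding the batch norm into the convolution would change the result.

```python
import torch

class NotFusable(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 3, kernel_size=1)
        self.bn = torch.nn.BatchNorm2d(3)

    def forward(self, x):
        y = self.conv(x)
        z = self.bn(y)
        return y + z  # `y` is also used here, so conv + bn cannot be collapsed
```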
Example
>>> from transformers.utils.fx import symbolic_trace
>>> from transformers import AutoModelForImageClassification
>>> from optimum.fx.optimization import FuseBatchNorm2dInConv2d
>>> model = AutoModelForImageClassification.from_pretrained("microsoft/resnet-50")
>>> model.eval()
>>> traced_model = symbolic_trace(
... model,
... input_names=["pixel_values"],
... disable_check=True
... )
>>> transformation = FuseBatchNorm2dInConv2d()
>>> transformed_model = transformation(traced_model)
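The arithmetic behind the fusion can be checked numerically (a sketch under eval-mode statistics, not the transformation's code): with scale = γ / sqrt(running_var + ε), the fused convolution uses w' = w · scale and b' = (b − running_mean) · scale + β.

```python
import torch

torch.manual_seed(0)
conv = torch.nn.Conv2d(3, 4, kernel_size=3, padding=1)
bn = torch.nn.BatchNorm2d(4).eval()
with torch.no_grad():
    bn.running_mean.uniform_(-1.0, 1.0)  # non-trivial eval statistics
    bn.running_var.uniform_(0.5, 1.5)

# Fold the batch norm parameters into the convolution's weight and bias.
scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
fused = torch.nn.Conv2d(3, 4, kernel_size=3, padding=1)
with torch.no_grad():
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    fused.bias.copy_((conv.bias - bn.running_mean) * scale + bn.bias)

x = torch.randn(2, 3, 8, 8)
assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
```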
class optimum.fx.optimization.FuseBatchNorm1dInLinear
< source >( )
A transformation that fuses an nn.BatchNorm1d following or preceding an nn.Linear into a single nn.Linear. The fusion is performed only when the batch normalization is the sole node consuming the linear layer's output, or the linear layer is the sole node consuming the batch normalization's output.
For example, the fusion is not performed when the intermediate output is also consumed by another node.
Example
>>> from transformers.utils.fx import symbolic_trace
>>> from transformers import AutoModel
>>> from optimum.fx.optimization import FuseBatchNorm1dInLinear
>>> model = AutoModel.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model.eval()
>>> traced_model = symbolic_trace(
... model,
... input_names=["input_ids", "attention_mask", "pixel_values"],
... disable_check=True
... )
>>> transformation = FuseBatchNorm1dInLinear()
>>> transformed_model = transformation(traced_model)