Exporting a model to TFLite with optimum.exporters.tflite
Summary

Exporting a model to TFLite is as simple as:

```bash
optimum-cli export tflite --model bert-base-uncased --sequence_length 128 bert_tflite/
```
Check out the help for more options:

```bash
optimum-cli export tflite --help
```
Exporting a model to TFLite using the CLI
To export a 🤗 Transformers model to TFLite, you will first need to install some extra dependencies:

```bash
pip install optimum[exporters-tf]
```
The Optimum TFLite export is available through the Optimum command line. Since only static input shapes are supported for now, they need to be specified during the export:

```bash
optimum-cli export tflite --help
```
```text
usage: optimum-cli <command> [<args>] export tflite [-h] -m MODEL [--task TASK] [--atol ATOL] [--pad_token_id PAD_TOKEN_ID] [--cache_dir CACHE_DIR]
                                                    [--trust-remote-code] [--batch_size BATCH_SIZE] [--sequence_length SEQUENCE_LENGTH]
                                                    [--num_choices NUM_CHOICES] [--width WIDTH] [--height HEIGHT] [--num_channels NUM_CHANNELS]
                                                    [--feature_size FEATURE_SIZE] [--nb_max_frames NB_MAX_FRAMES]
                                                    [--audio_sequence_length AUDIO_SEQUENCE_LENGTH]
                                                    output

optional arguments:
  -h, --help            show this help message and exit

Required arguments:
  -m MODEL, --model MODEL
                        Model ID on huggingface.co or path on disk to load model from.
  output                Path indicating the directory where to store generated TFLite model.

Optional arguments:
  --task TASK           The task to export the model for. If not specified, the task will be auto-inferred based on the model. Available tasks depend on the
                        model, but are among: ['default', 'fill-mask', 'text-generation', 'text2text-generation', 'text-classification',
                        'token-classification', 'multiple-choice', 'object-detection', 'question-answering', 'image-classification', 'image-segmentation',
                        'masked-im', 'semantic-segmentation', 'automatic-speech-recognition', 'audio-classification', 'audio-frame-classification',
                        'audio-xvector', 'vision2seq-lm', 'zero-shot-object-detection', 'text-to-image', 'image-to-image', 'inpainting']. For decoder models,
                        use `xxx-with-past` to export the model using past key values in the decoder.
  --atol ATOL           If specified, the absolute difference tolerance when validating the model. Otherwise, the default atol for the model will be used.
  --pad_token_id PAD_TOKEN_ID
                        This is needed by some models, for some tasks. If not provided, will attempt to use the tokenizer to guess it.
  --cache_dir CACHE_DIR
                        Path indicating where to store cache.
  --trust-remote-code   Allow to use custom code for the modeling hosted in the model repository. This option should only be set for repositories you trust
                        and in which you have read the code, as it will execute on your local machine arbitrary code present in the model repository.

Input shapes:
  --batch_size BATCH_SIZE
                        Batch size that the TFLite exported model will be able to take as input.
  --sequence_length SEQUENCE_LENGTH
                        Sequence length that the TFLite exported model will be able to take as input.
  --num_choices NUM_CHOICES
                        Only for the multiple-choice task. Num choices that the TFLite exported model will be able to take as input.
  --width WIDTH         Vision tasks only. Image width that the TFLite exported model will be able to take as input.
  --height HEIGHT       Vision tasks only. Image height that the TFLite exported model will be able to take as input.
  --num_channels NUM_CHANNELS
                        Vision tasks only. Number of channels used to represent the image that the TFLite exported model will be able to take as input.
                        (GREY = 1, RGB = 3, ARGB = 4)
  --feature_size FEATURE_SIZE
                        Audio tasks only. Feature dimension of the features extracted by the feature extractor that the TFLite exported model will be able to
                        take as input.
  --nb_max_frames NB_MAX_FRAMES
                        Audio tasks only. Maximum number of frames that the TFLite exported model will be able to take as input.
  --audio_sequence_length AUDIO_SEQUENCE_LENGTH
                        Audio tasks only. Audio sequence length that the TFLite exported model will be able to take as input.
```
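Once the export has finished, the resulting `.tflite` file can be run with the stock TensorFlow Lite interpreter, and the static shapes fixed at export time are visible in its input details. The sketch below shows that inference loop; to stay self-contained it converts a toy graph in-memory instead of loading an Optimum export, so the model, file path, and shapes in the comments are illustrative assumptions, not Optimum specifics.

```python
# Sketch: running a TFLite model with the stock TF Lite interpreter.
# A toy graph with static shapes (batch_size=1, sequence_length=128) is
# converted in-memory so the example runs on its own; for a real export
# you would instead load the file Optimum wrote, e.g. (path assumed):
#   interpreter = tf.lite.Interpreter(model_path="bert_tflite/model.tflite")
import numpy as np
import tensorflow as tf


class Toy(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 128], tf.float32)])
    def __call__(self, x):
        # Stand-in for the exported graph: 128 features -> 2 "logits".
        return tf.matmul(x, tf.ones([128, 2]))


toy = Toy()
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [toy.__call__.get_concrete_function()], toy
)
tflite_bytes = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# The static input shapes chosen at export time show up here.
print(input_details[0]["shape"])  # [  1 128]

# Feed a dummy input of exactly that shape and run inference.
dummy = np.zeros(input_details[0]["shape"], dtype=np.float32)
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
logits = interpreter.get_tensor(output_details[0]["index"])
print(logits.shape)  # (1, 2)
```

Because the shapes are static, any real input must be padded or truncated to exactly the `sequence_length` (here 128) given at export time before calling `set_tensor`.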