🚀 Build a Qwen 2.5 VL API Endpoint with Hugging Face Spaces and Docker!
Community Article · Published January 29, 2025
Vision-language models are making waves, but no provider currently offers an API-ready deployment of Qwen2.5-VL. This guide walks through building a proof-of-concept API that hosts Qwen2.5-VL (the 3B-Instruct checkpoint) on Hugging Face Spaces using Docker. Let's roll up our sleeves and deploy a model that can understand images and text through a single API call!
📌 What you'll end up with: a live API that takes an image URL and a text prompt, processes them with Qwen2.5-VL, and returns a response.
1️⃣ Set Up Your Space
Head over to Hugging Face Spaces and create a new Space. Choose Docker as the SDK, and be sure to attach a GPU to the Space for faster inference!
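If you prefer to script this step, the Space can also be created with huggingface_hub. This is a minimal sketch, assuming you are logged in with a write token; the repo id and hardware tier below are placeholders to adapt:

from huggingface_hub import create_repo

# Create a Docker Space; the GPU tier is optional and "t4-small" is only one example.
create_repo(
    "<uname>/<spacename>",      # placeholder: your username / Space name
    repo_type="space",
    space_sdk="docker",
    space_hardware="t4-small",  # assumption: pick whichever GPU tier you want attached
)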
2️⃣ Write the FastAPI Server
We'll expose an endpoint that accepts an image_url and a prompt, runs inference with Qwen2.5-VL, and returns the text response.
📜 main.py
from fastapi import FastAPI, Query
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

app = FastAPI()

checkpoint = "Qwen/Qwen2.5-VL-3B-Instruct"
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28

# The processor handles chat templating and image preprocessing;
# the pixel bounds control how images are resized before encoding.
processor = AutoProcessor.from_pretrained(
    checkpoint,
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)

# Load the model in bfloat16 and let accelerate place it on the available GPU.
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    checkpoint,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    # attn_implementation="flash_attention_2",
)


@app.get("/")
def read_root():
    return {"message": "API is live. Use the /predict endpoint."}


@app.get("/predict")
def predict(image_url: str = Query(...), prompt: str = Query(...)):
    # Pair the image URL with the text prompt in a chat-style message.
    messages = [
        {"role": "system", "content": "You are a helpful assistant with vision abilities."},
        {"role": "user", "content": [{"type": "image", "image": image_url}, {"type": "text", "text": prompt}]},
    ]
    text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    image_inputs, video_inputs = process_vision_info(messages)
    inputs = processor(
        text=[text],
        images=image_inputs,
        videos=video_inputs,
        padding=True,
        return_tensors="pt",
    ).to(model.device)
    with torch.no_grad():
        generated_ids = model.generate(**inputs, max_new_tokens=128)
    # Drop the prompt tokens so only the newly generated text is decoded.
    generated_ids_trimmed = [out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)]
    output_texts = processor.batch_decode(
        generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
    )
    return {"response": output_texts[0]}
🔹 Endpoint: GET /predict?image_url=<URL>&prompt=<TEXT>
🔹 Returns: a response generated from the image and the prompt
3️⃣ Build the Dockerfile
We'll install the dependencies directly in the Dockerfile, so there's no need for a requirements.txt.
📜 Dockerfile
# Use Python 3.12 as the base image
FROM python:3.12

# Install system dependencies
RUN apt-get update && apt-get install -y \
    ffmpeg \
    git \
    && rm -rf /var/lib/apt/lists/*

# Create a non-root user
RUN useradd -m -u 1000 user

WORKDIR /app

# Install Python dependencies directly
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir \
    torch \
    torchvision \
    git+https://github.com/huggingface/transformers \
    accelerate \
    qwen-vl-utils[decord]==0.0.8 \
    fastapi \
    uvicorn[standard]

# Copy application files
COPY --chown=user . /app

# Switch to the non-root user
USER user

# Set environment variables
ENV HOME=/home/user \
    PATH=/home/user/.local/bin:$PATH

# Command to run the application
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
Once the deployment succeeds, it should look like this 👇🏻
4️⃣ Test the API
✅ With curl
curl -G "https://<uname>-<spacename>.hf.space/predict" \
  --data-urlencode "image_url=https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg" \
  --data-urlencode "prompt=Describe this image."
✅ With Python
import requests

url = "https://<uname>-<spacename>.hf.space/predict"

# Define the parameters
params = {
    "image_url": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
    "prompt": "describe"
}

# Send the GET request
response = requests.get(url, params=params)

if response.status_code == 200:
    print("Response:", response.json())
else:
    print("Error:", response.status_code, response.text)
🎯 Wrapping Up
In just a few steps, we deployed Qwen2.5-VL-3B-Instruct as an API on Hugging Face Spaces using FastAPI & Docker.
🔹 Next steps? Feel free to fork this Space and experiment!
- Add a frontend to interact with the API!
- Optimize model inference for faster responses (see the FlashAttention sketch below).
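For that last bullet, one natural starting point is the attn_implementation line left commented out in main.py. This is only a sketch: it assumes you add the flash-attn package to the Dockerfile's pip install and that the Space's GPU supports FlashAttention 2.

# In main.py: enable FlashAttention 2 (assumes `flash-attn` is installed in the image
# and the GPU supports it, e.g. Ampere or newer).
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    checkpoint,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="flash_attention_2",
)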