Huggingface.js documentation


// Programmatically interact with the Hub

await createRepo({
  repo: { type: "model", name: "my-user/nlp-model" },
  accessToken: HF_TOKEN
});

await uploadFile({
  repo: "my-user/nlp-model",
  accessToken: HF_TOKEN,
  // Can work with native File in browsers
  file: {
    path: "pytorch_model.bin",
    content: new Blob(...)
  }
});

// Use HF Inference API, or external Inference Providers!

await inference.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  provider: "sambanova", // or together, fal-ai, replicate, cohere …
  messages: [
    {
      role: "user",
      content: "Hello, nice to meet you!",
    },
  ],
  max_tokens: 512,
  temperature: 0.5,
});

await inference.textToImage({
  model: "black-forest-labs/FLUX.1-dev",
  provider: "replicate",
  inputs: "a picture of a green bird",
});

// and much more…

Hugging Face JS libraries

This is a collection of JS libraries to interact with the Hugging Face API, with TS types included.

We use modern features to avoid polyfills and dependencies, so the libraries will only work on modern browsers / Node.js >= 18 / Bun / Deno.

The libraries are still very young, please help us by filing issues!

Installation

From NPM

To install via NPM, you can download the libraries as needed:

npm install @huggingface/inference
npm install @huggingface/hub
npm install @huggingface/agents

Then import the libraries in your code:

import { InferenceClient } from "@huggingface/inference";
import { HfAgent } from "@huggingface/agents";
import { createRepo, commit, deleteRepo, listFiles } from "@huggingface/hub";
import type { RepoId } from "@huggingface/hub";

From CDN or static hosting

You can run our packages with vanilla JS, without any bundler, by using a CDN or static hosting. Using ES modules, i.e. <script type="module">, you can import the libraries in your code:

<script type="module">
    import { InferenceClient } from 'https://cdn.jsdelivr.net/npm/@huggingface/inference@3.7.0/+esm';
    import { createRepo, commit, deleteRepo, listFiles } from "https://cdn.jsdelivr.net/npm/@huggingface/hub@1.1.2/+esm";
</script>

Deno

// esm.sh
import { InferenceClient } from "https://esm.sh/@huggingface/inference"
import { HfAgent } from "https://esm.sh/@huggingface/agents";

import { createRepo, commit, deleteRepo, listFiles } from "https://esm.sh/@huggingface/hub"
// or npm:
import { InferenceClient } from "npm:@huggingface/inference"
import { HfAgent } from "npm:@huggingface/agents";

import { createRepo, commit, deleteRepo, listFiles } from "npm:@huggingface/hub"

Usage examples

Get your HF access token in your account settings.
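Rather than hardcoding the token in your source, you can read it from an environment variable. A minimal sketch, assuming the token is exported as HF_TOKEN in your shell:

import { InferenceClient } from "@huggingface/inference";

// read the access token from the environment instead of committing it to code
const inference = new InferenceClient(process.env.HF_TOKEN);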

@huggingface/inference examples

import { InferenceClient } from "@huggingface/inference";

const HF_TOKEN = "hf_...";

const inference = new InferenceClient(HF_TOKEN);

// Chat completion API
const out = await inference.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512
});
console.log(out.choices[0].message);

// Streaming chat completion API
for await (const chunk of inference.chatCompletionStream({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512
})) {
  console.log(chunk.choices[0].delta.content);
}

// Using a third-party provider:
await inference.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512,
  provider: "sambanova", // or together, fal-ai, replicate, cohere …
})

await inference.textToImage({
  model: "black-forest-labs/FLUX.1-dev",
  inputs: "a picture of a green bird",
  provider: "fal-ai",
})

// You can also omit "model" to use the recommended model for the task
await inference.translation({
  inputs: "My name is Wolfgang and I live in Amsterdam",
  parameters: {
    src_lang: "en",
    tgt_lang: "fr",
  },
});

// pass multimodal files or URLs as inputs
await inference.imageToText({
  model: 'nlpconnect/vit-gpt2-image-captioning',
  data: await (await fetch('https://picsum.photos/300/300')).blob(),
})

// Using your own dedicated inference endpoint: https://hf.co/docs/inference-endpoints/
const gpt2 = inference.endpoint('https://xyz.eu-west-1.aws.endpoints.huggingface.cloud/gpt2');
const { generated_text } = await gpt2.textGeneration({ inputs: 'The answer to the universe is' });

// Chat Completion
const llamaEndpoint = inference.endpoint(
  "https://router.huggingface.co/hf-inference/models/meta-llama/Llama-3.1-8B-Instruct"
);
const out = await llamaEndpoint.chatCompletion({
  model: "meta-llama/Llama-3.1-8B-Instruct",
  messages: [{ role: "user", content: "Hello, nice to meet you!" }],
  max_tokens: 512,
});
console.log(out.choices[0].message);

@huggingface/hub examples

import { createRepo, uploadFile, deleteFiles } from "@huggingface/hub";

const HF_TOKEN = "hf_...";

await createRepo({
  repo: "my-user/nlp-model", // or { type: "model", name: "my-user/nlp-test" },
  accessToken: HF_TOKEN
});

await uploadFile({
  repo: "my-user/nlp-model",
  accessToken: HF_TOKEN,
  // Can work with native File in browsers
  file: {
    path: "pytorch_model.bin",
    content: new Blob(...)
  }
});

await deleteFiles({
  repo: { type: "space", name: "my-user/my-space" }, // or "spaces/my-user/my-space"
  accessToken: HF_TOKEN,
  paths: ["README.md", ".gitattributes"]
});
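@huggingface/hub also exports listing helpers such as listFiles (imported in the NPM section above). A minimal sketch of iterating over a repo's files, reusing the my-user/nlp-model repo from the examples:

import { listFiles } from "@huggingface/hub";

// listFiles returns an async generator over the repo's file entries
for await (const fileInfo of listFiles({
  repo: "my-user/nlp-model",
  accessToken: HF_TOKEN, // can be omitted for public repos
})) {
  console.log(fileInfo.path, fileInfo.size);
}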

@huggingface/agents examples

import { HfAgent, LLMFromHub, defaultTools } from '@huggingface/agents';

const HF_TOKEN = "hf_...";

const agent = new HfAgent(
  HF_TOKEN,
  LLMFromHub(HF_TOKEN),
  [...defaultTools]
);

// you can generate the code, inspect it and then run it
const code = await agent.generateCode("Draw a picture of a cat wearing a top hat. Then caption the picture and read it out loud.");
console.log(code);
const messages = await agent.evaluateCode(code);
console.log(messages); // contains the data

// or run the code directly; note that you can't check that the generated code is safe to execute this way, use at your own risk
const result = await agent.run("Draw a picture of a cat wearing a top hat. Then caption the picture and read it out loud.");
console.log(result);

There are more features of course, check each library's README!

Formatting & testing

sudo corepack enable
pnpm install

pnpm -r format:check
pnpm -r lint:check
pnpm -r test

Build

pnpm -r build

This will generate ESM and CJS javascript files in `packages/*/dist`, e.g. `packages/inference/dist/index.mjs`.
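As a quick sanity check of the dual output, a consumer can load either flavor through the package entry points (a hypothetical sketch, not part of the repo's tooling):

// ESM consumer (e.g. check.mjs), resolves to dist/index.mjs
import { InferenceClient } from "@huggingface/inference";
console.log(typeof InferenceClient); // "function"

// CJS consumer (e.g. check.cjs) would resolve to the CJS build instead:
// const { InferenceClient } = require("@huggingface/inference");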
