Advanced RAG on Hugging Face documentation using LangChain

Authored by: Aymeric Roucher

This notebook demonstrates how you can build an advanced RAG (Retrieval Augmented Generation) system with LangChain to answer user questions about a specific knowledge base (here, the Hugging Face documentation).

For an introduction to RAG, you can check this other cookbook!

RAG systems are complex, with many moving parts: here is a RAG diagram, where we noted in blue all possibilities for system enhancement.

💡 As you can see, there are many steps to tune in this architecture: tuning the system properly will yield significant performance gains.

In this notebook, we will take a look at many of these blue notes to see how to tune your RAG system and get the best performance.

Let's dig into the model building! First, we install the required dependencies.

!pip install -q torch transformers accelerate bitsandbytes langchain sentence-transformers faiss-gpu openpyxl pacmap datasets langchain-community ragatouille
%reload_ext dotenv
%dotenv
from tqdm.notebook import tqdm
import pandas as pd
from typing import Optional, List, Tuple
from datasets import Dataset
import matplotlib.pyplot as plt

pd.set_option("display.max_colwidth", None)  # This will be helpful when visualizing retriever outputs

Load your knowledge base

import datasets

ds = datasets.load_dataset("m-ric/huggingface_doc", split="train")
from langchain.docstore.document import Document as LangchainDocument

RAW_KNOWLEDGE_BASE = [
    LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]}) for doc in tqdm(ds)
]

1. Retriever - embeddings 🗂️

The retriever acts like an internal search engine: given the user query, it returns a few relevant snippets from your knowledge base.

These snippets will then be fed to the Reader Model to help it generate its answer.

So our objective here is, given a user question, to find the most relevant snippets from our knowledge base to answer that question.

This is a broad objective, and it leaves open some questions. How many snippets should we retrieve? This parameter will be named `top_k`.

How long should these snippets be? This is called the `chunk size`. There is no one-size-fits-all answer, but here are a few elements:

  • 🔀 Your `chunk size` is allowed to vary from one snippet to the other.
  • Since there will always be some noise in your retrieval, increasing `top_k` increases the chance of getting a relevant element among the retrieved snippets. 🎯 Shooting more arrows increases your probability of hitting the target.
  • Meanwhile, the summed length of the retrieved documents should not be too high: for instance, for most current models, 16k tokens of retrieved documents will likely drown your reader model in information due to the lost-in-the-middle phenomenon. 🎯 Give your reader model only the most relevant insights, not a huge pile of books!

In this notebook, we use the LangChain library since it offers a huge variety of options for vector databases and lets us keep document metadata throughout the processing.

1.1 Split the documents into chunks

  • In this part, we split the documents from our knowledge base into smaller chunks: these will be the snippets on which the reader LLM will base its answer.
  • The goal is to prepare a collection of semantically relevant snippets, so their size should be adapted to precise ideas: too small will truncate ideas, too large will dilute them.

💡 Many options exist for text splitting: splitting on words, on sentence boundaries, recursive chunking that processes documents in a tree-like way to preserve structure information... To learn more about chunking, I recommend you read this great notebook by Greg Kamradt.

  • Recursive chunking breaks down the text into smaller parts step by step, using a given list of separators sorted from the most important to the least important separator. If the first split doesn't give the right size or shape of chunks, the method repeats itself on the new chunks using a different separator. For instance, with the list of separators ["\n\n", "\n", ".", ""]:

    • The method will first break down the document wherever there is a double line break "\n\n".
    • The resulting documents will be split again on simple line breaks "\n", then on sentence ends ".".
    • Finally, if some chunks are still too big, they will be split whenever they overflow the maximum size.
  • With this method, the global structure is well preserved, at the expense of slight variations in chunk size.

This space lets you visualize how different splitting options affect the chunks you get.

🔬 Let's experiment a bit with chunk sizes, beginning with an arbitrary size, and see how the splits work. We use LangChain's implementation of recursive chunking with `RecursiveCharacterTextSplitter`.

  • The parameter `chunk_size` controls the length of individual chunks: by default, this length is counted as the number of characters in the chunk.
  • The parameter `chunk_overlap` lets adjacent chunks overlap slightly with each other. This reduces the probability that an idea gets cut in half by the split between two adjacent chunks. We ~arbitrarily set it to 1/10th of the chunk size; you could try different values!
from langchain.text_splitter import RecursiveCharacterTextSplitter

# We use a hierarchical list of separators specifically tailored for splitting Markdown documents
# This list is taken from LangChain's MarkdownTextSplitter class
MARKDOWN_SEPARATORS = [
    "\n#{1,6} ",
    "```\n",
    "\n\\*\\*\\*+\n",
    "\n---+\n",
    "\n___+\n",
    "\n\n",
    "\n",
    " ",
    "",
]

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # The maximum number of characters in a chunk: we selected this value arbitrarily
    chunk_overlap=100,  # The number of characters to overlap between chunks
    add_start_index=True,  # If `True`, includes chunk's start index in metadata
    strip_whitespace=True,  # If `True`, strips whitespace from the start and end of every document
    separators=MARKDOWN_SEPARATORS,
)

docs_processed = []
for doc in RAW_KNOWLEDGE_BASE:
    docs_processed += text_splitter.split_documents([doc])

We also have to keep in mind that, when embedding documents, we will use an embedding model that accepts a certain maximum sequence length `max_seq_length`.

So we should make sure that our chunk sizes stay below this limit, because any longer chunk will be truncated before processing, thus losing relevance.

>>> from sentence_transformers import SentenceTransformer

>>> # To get the value of the max sequence length, we query the underlying `SentenceTransformer` object of our embedding model
>>> print(f"Model's maximum sequence length: {SentenceTransformer('thenlper/gte-small').max_seq_length}")

>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small")
>>> lengths = [len(tokenizer.encode(doc.page_content)) for doc in tqdm(docs_processed)]

>>> # Plot the distribution of document lengths, counted as the number of tokens
>>> fig = pd.Series(lengths).hist()
>>> plt.title("Distribution of document lengths in the knowledge base (in count of tokens)")
>>> plt.show()
Model's maximum sequence length: 512

👀 As you can see, the chunk lengths are not aligned with our limit of 512 tokens, and some documents exceed the limit, so part of them will be lost to truncation!

  • So we should change the `RecursiveCharacterTextSplitter` class to count length in number of tokens instead of number of characters.
  • Then we can choose a specific chunk size, here we will pick a threshold below 512:
    • Smaller documents could allow the split to focus more on specific ideas.
    • But chunks that are too small will split sentences in half, thus losing meaning again: the proper tuning is a matter of balance.
>>> from langchain.text_splitter import RecursiveCharacterTextSplitter
>>> from transformers import AutoTokenizer

>>> EMBEDDING_MODEL_NAME = "thenlper/gte-small"


>>> def split_documents(
...     chunk_size: int,
...     knowledge_base: List[LangchainDocument],
...     tokenizer_name: Optional[str] = EMBEDDING_MODEL_NAME,
... ) -> List[LangchainDocument]:
...     """
...     Split documents into chunks of maximum size `chunk_size` tokens and return a list of documents.
...     """
...     text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
...         AutoTokenizer.from_pretrained(tokenizer_name),
...         chunk_size=chunk_size,
...         chunk_overlap=int(chunk_size / 10),
...         add_start_index=True,
...         strip_whitespace=True,
...         separators=MARKDOWN_SEPARATORS,
...     )

...     docs_processed = []
...     for doc in knowledge_base:
...         docs_processed += text_splitter.split_documents([doc])

...     # Remove duplicates
...     unique_texts = {}
...     docs_processed_unique = []
...     for doc in docs_processed:
...         if doc.page_content not in unique_texts:
...             unique_texts[doc.page_content] = True
...             docs_processed_unique.append(doc)

...     return docs_processed_unique


>>> docs_processed = split_documents(
...     512,  # We choose a chunk size adapted to our model
...     RAW_KNOWLEDGE_BASE,
...     tokenizer_name=EMBEDDING_MODEL_NAME,
... )

>>> # Let's visualize the chunk sizes we would have in tokens from a common model
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained(EMBEDDING_MODEL_NAME)
>>> lengths = [len(tokenizer.encode(doc.page_content)) for doc in tqdm(docs_processed)]
>>> fig = pd.Series(lengths).hist()
>>> plt.title("Distribution of document lengths in the knowledge base (in count of tokens)")
>>> plt.show()

➡️ Now the chunk length distribution looks better!

1.2 Building the vector database

We want to compute the embeddings for all the chunks of our knowledge base: to learn more about sentence embeddings, we recommend reading this guide.

How does retrieval work?

Once the chunks are all embedded, we store them in a vector database. When the user types in a query, it gets embedded by the same model previously used, and a similarity search returns the closest documents from the vector database.

The technical challenge is thus, given a query vector, to quickly find the nearest neighbors of that vector in a vector database containing thousands of records. For this, we need to choose two things: a distance, and a search algorithm to find the nearest neighbors quickly.

Nearest Neighbor search algorithm

There are plenty of choices for the nearest neighbor search algorithm: we go with Facebook's FAISS, since FAISS is performant enough for most use cases, and it is well known and thus widely implemented.

Distances

Regarding distances, you can find a good guide here. In short:

  • Cosine similarity computes the similarity between two vectors as the cosine of their relative angle: it lets us compare vector directions regardless of their magnitude. Using it requires normalizing all vectors, rescaling them to unit norm.
  • Dot product takes magnitude into account, with the sometimes undesirable effect that increasing a vector's length makes it more similar to all other vectors.
  • Euclidean distance is the distance between the ends of the vectors.

You can try this small exercise to check your understanding of these concepts. But once vectors are normalized, the choice of a specific distance does not matter much.
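To make the normalization point concrete, here is a minimal sketch (plain NumPy, with made-up toy vectors) showing that once vectors are rescaled to unit norm, the dot product gives exactly the cosine similarity, so both rank documents identically:

```python
import numpy as np

# Toy document vectors and a toy query vector (made-up values, for illustration only)
doc_a, doc_b = np.array([3.0, 4.0]), np.array([1.0, 0.5])
query = np.array([0.5, 1.0])

def cosine_sim(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Normalize every vector to unit norm
doc_a_n, doc_b_n, query_n = (v / np.linalg.norm(v) for v in (doc_a, doc_b, query))

# After normalization, the dot product equals the cosine similarity
print(cosine_sim(doc_a, query), np.dot(doc_a_n, query_n))
print(cosine_sim(doc_b, query), np.dot(doc_b_n, query_n))
```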

Our particular model works well with cosine similarity, so we choose that distance and set it up both in the embedding model and in the `distance_strategy` argument of our FAISS index. With cosine similarity, we have to normalize our embeddings.

🚨👇 The cell below takes a few minutes to run on an A10G!

from langchain.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.utils import DistanceStrategy

embedding_model = HuggingFaceEmbeddings(
    model_name=EMBEDDING_MODEL_NAME,
    multi_process=True,
    model_kwargs={"device": "cuda"},
    encode_kwargs={"normalize_embeddings": True},  # Set `True` for cosine similarity
)

KNOWLEDGE_VECTOR_DATABASE = FAISS.from_documents(
    docs_processed, embedding_model, distance_strategy=DistanceStrategy.COSINE
)

👀 To visualize the search for the closest documents, let's project our embeddings from 384 dimensions down to 2 dimensions using PaCMAP.

💡 We chose PaCMAP rather than other techniques such as t-SNE or UMAP because it is efficient (preserves local and global structure), robust to initialization parameters, and fast.

# Embed a user query in the same space
user_query = "How to create a pipeline object?"
query_vector = embedding_model.embed_query(user_query)
import pacmap
import numpy as np
import plotly.express as px

embedding_projector = pacmap.PaCMAP(n_components=2, n_neighbors=None, MN_ratio=0.5, FP_ratio=2.0, random_state=1)

embeddings_2d = [
    list(KNOWLEDGE_VECTOR_DATABASE.index.reconstruct_n(idx, 1)[0]) for idx in range(len(docs_processed))
] + [query_vector]

# Fit the data (the index of transformed data corresponds to the index of the original data)
documents_projected = embedding_projector.fit_transform(np.array(embeddings_2d), init="pca")
df = pd.DataFrame.from_dict(
    [
        {
            "x": documents_projected[i, 0],
            "y": documents_projected[i, 1],
            "source": docs_processed[i].metadata["source"].split("/")[1],
            "extract": docs_processed[i].page_content[:100] + "...",
            "symbol": "circle",
            "size_col": 4,
        }
        for i in range(len(docs_processed))
    ]
    + [
        {
            "x": documents_projected[-1, 0],
            "y": documents_projected[-1, 1],
            "source": "User query",
            "extract": user_query,
            "size_col": 100,
            "symbol": "star",
        }
    ]
)

# Visualize the embedding
fig = px.scatter(
    df,
    x="x",
    y="y",
    color="source",
    hover_data="extract",
    size="size_col",
    symbol="symbol",
    color_discrete_map={"User query": "black"},
    width=1000,
    height=700,
)
fig.update_traces(
    marker=dict(opacity=1, line=dict(width=0, color="DarkSlateGrey")),
    selector=dict(mode="markers"),
)
fig.update_layout(
    legend_title_text="<b>Chunk source</b>",
    title="<b>2D Projection of Chunk Embeddings via PaCMAP</b>",
)
fig.show()

➡️ On the graph above, you can see a spatial representation of the knowledge base documents. Since the vector embeddings represent a document's meaning, closeness in meaning should be reflected in closeness between their embeddings.

The user query's embedding is also shown: we want to find the k documents with the closest meaning, so we pick the k closest vectors.

In the LangChain vector database implementation, this search operation is performed by the method `vector_database.similarity_search(query)`.

Here is the result:

>>> print(f"\nStarting retrieval for {user_query=}...")
>>> retrieved_docs = KNOWLEDGE_VECTOR_DATABASE.similarity_search(query=user_query, k=5)
>>> print("\n==================================Top document==================================")
>>> print(retrieved_docs[0].page_content)
>>> print("==================================Metadata==================================")
>>> print(retrieved_docs[0].metadata)
Starting retrieval for user_query='How to create a pipeline object?'...

==================================Top document==================================
```

## Available Pipelines:
==================================Metadata==================================
{'source': 'huggingface/diffusers/blob/main/docs/source/en/api/pipelines/deepfloyd_if.md', 'start_index': 16887}

2. Reader - LLM 💬

In this part, the **LLM Reader reads the retrieved context to formulate its answer**.

There are sub-steps here that can all be tuned:

  1. The content of the retrieved documents is aggregated together into the "context", with many processing options such as *prompt compression*.
  2. The context and the user query are aggregated into a prompt, which is then given to the LLM to generate its answer.

2.1. Reader model

The choice of a reader model is important in a few aspects:

  • The reader model's `max_seq_length` must accommodate our prompt, which includes the context output by the retriever call: the context consists of 5 documents of 512 tokens each, so we aim for a context length of at least 4k tokens.
  • The reader model

For this example, we chose HuggingFaceH4/zephyr-7b-beta, a small but powerful model.

With many models being released every week, you may want to substitute this model with the latest and greatest. The best way to keep track of open-source LLMs is to check the Open-source LLM leaderboard.

To speed up inference, we will load the quantized version of this model:

from transformers import pipeline
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

READER_MODEL_NAME = "HuggingFaceH4/zephyr-7b-beta"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(READER_MODEL_NAME, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(READER_MODEL_NAME)

READER_LLM = pipeline(
    model=model,
    tokenizer=tokenizer,
    task="text-generation",
    do_sample=True,
    temperature=0.2,
    repetition_penalty=1.1,
    return_full_text=False,
    max_new_tokens=500,
)
READER_LLM("What is 4+4? Answer:")

2.2. Prompt

The RAG prompt template below is what we will feed to the Reader LLM: it is important to format it in the Reader LLM's chat template.

We give it our context and the user's question.

>>> prompt_in_chat_format = [
...     {
...         "role": "system",
...         "content": """Using the information contained in the context,
... give a comprehensive answer to the question.
... Respond only to the question asked, response should be concise and relevant to the question.
... Provide the number of the source document when relevant.
... If the answer cannot be deduced from the context, do not give an answer.""",
...     },
...     {
...         "role": "user",
...         "content": """Context:
... {context}
... ---
... Now here is the question you need to answer.

... Question: {question}""",
...     },
... ]
>>> RAG_PROMPT_TEMPLATE = tokenizer.apply_chat_template(
...     prompt_in_chat_format, tokenize=False, add_generation_prompt=True
... )
>>> print(RAG_PROMPT_TEMPLATE)
<|system|>
Using the information contained in the context, 
give a comprehensive answer to the question.
Respond only to the question asked, response should be concise and relevant to the question.
Provide the number of the source document when relevant.
If the answer cannot be deduced from the context, do not give an answer.
<|user|>
Context:
{context}
---
Now here is the question you need to answer.

Question: {question}
<|assistant|>

Let's test our Reader on the documents we retrieved previously!

>>> retrieved_docs_text = [doc.page_content for doc in retrieved_docs]  # We only need the text of the documents
>>> context = "\nExtracted documents:\n"
>>> context += "".join([f"Document {str(i)}:::\n" + doc for i, doc in enumerate(retrieved_docs_text)])

>>> final_prompt = RAG_PROMPT_TEMPLATE.format(question="How to create a pipeline object?", context=context)

>>> # Generate an answer
>>> answer = READER_LLM(final_prompt)[0]["generated_text"]
>>> print(answer)
To create a pipeline object, follow these steps:

1. Define the inputs and outputs of your pipeline. These could be strings, dictionaries, or any other format that best suits your use case.

2. Inherit the `Pipeline` class from the `transformers` module and implement the following methods:

   - `preprocess`: This method takes the raw inputs and returns a preprocessed dictionary that can be passed to the model.

   - `_forward`: This method performs the actual inference using the model and returns the output tensor.

   - `postprocess`: This method takes the output tensor and returns the final output in the desired format.

   - `_sanitize_parameters`: This method is used to sanitize the input parameters before passing them to the model.

3. Load the necessary components, such as the model and scheduler, into the pipeline object.

4. Instantiate the pipeline object and return it.

Here's an example implementation based on the given context:

```python
from transformers import Pipeline
import torch
from diffusers import StableDiffusionPipeline

class MyPipeline(Pipeline):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.pipe = StableDiffusionPipeline.from_pretrained("my_model")

    def preprocess(self, inputs):
        # Preprocess the inputs as needed
        return {"input_ids":...}

    def _forward(self, inputs):
        # Run the forward pass of the model
        return self.pipe(**inputs).images[0]

    def postprocess(self, outputs):
        # Postprocess the outputs as needed
        return outputs["sample"]

    def _sanitize_parameters(self, params):
        # Sanitize the input parameters
        return params

my_pipeline = MyPipeline()
result = my_pipeline("My input string")
print(result)
```

Note that this implementation assumes that the model and scheduler are already loaded into memory. If they need to be loaded dynamically, you can modify the `__init__` method accordingly.

2.3. Reranking

A good option for RAG is to retrieve more documents than you want in the end, then rerank the results with a more powerful retrieval model before keeping only the `top_k`.

For this, ColBERTv2 is a great choice: instead of a bi-encoder like our classical embedding model, it is a cross-encoder that computes more fine-grained interactions between the query tokens and each document's tokens.

It is easy to use thanks to the RAGatouille library.

from ragatouille import RAGPretrainedModel

RERANKER = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
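As a quick check, here is a minimal sketch reusing the `retrieved_docs` from the earlier similarity search; the `rerank` call and the `"content"` field are the same ones used in `answer_with_rag` below:

```python
# Rerank the previously retrieved documents against the user query and keep the top 3
reranked_docs = RERANKER.rerank(user_query, [doc.page_content for doc in retrieved_docs], k=3)

# Each result is a dict whose "content" field holds the passage text
for rank, doc in enumerate(reranked_docs):
    print(f"Rank {rank}: {doc['content'][:100]}...")
```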

3. Assembling it all!

from transformers import Pipeline


def answer_with_rag(
    question: str,
    llm: Pipeline,
    knowledge_index: FAISS,
    reranker: Optional[RAGPretrainedModel] = None,
    num_retrieved_docs: int = 30,
    num_docs_final: int = 5,
) -> Tuple[str, List[LangchainDocument]]:
    # Gather documents with retriever
    print("=> Retrieving documents...")
    relevant_docs = knowledge_index.similarity_search(query=question, k=num_retrieved_docs)
    relevant_docs = [doc.page_content for doc in relevant_docs]  # Keep only the text

    # Optionally rerank results
    if reranker:
        print("=> Reranking documents...")
        relevant_docs = reranker.rerank(question, relevant_docs, k=num_docs_final)
        relevant_docs = [doc["content"] for doc in relevant_docs]

    relevant_docs = relevant_docs[:num_docs_final]

    # Build the final prompt
    context = "\nExtracted documents:\n"
    context += "".join([f"Document {str(i)}:::\n" + doc for i, doc in enumerate(relevant_docs)])

    final_prompt = RAG_PROMPT_TEMPLATE.format(question=question, context=context)

    # Generate an answer
    print("=> Generating answer...")
    answer = llm(final_prompt)[0]["generated_text"]

    return answer, relevant_docs

Let's see how our RAG pipeline answers a user query.

>>> question = "how to create a pipeline object?"

>>> answer, relevant_docs = answer_with_rag(question, READER_LLM, KNOWLEDGE_VECTOR_DATABASE, reranker=RERANKER)
=> Retrieving documents...
>>> print("==================================Answer==================================")
>>> print(f"{answer}")
>>> print("==================================Source docs==================================")
>>> for i, doc in enumerate(relevant_docs):
...     print(f"Document {i}------------------------------------------------------------")
...     print(doc)
==================================Answer==================================
To create a pipeline object, follow these steps:

1. Import the `pipeline` function from the `transformers` module:

   ```python
   from transformers import pipeline
   ```

2. Choose the task you want to perform, such as object detection, sentiment analysis, or image generation, and pass it as an argument to the `pipeline` function:

   - For object detection:

     ```python
     >>> object_detector = pipeline('object-detection')
     >>> object_detector(image)
     [{'score': 0.9982201457023621,
       'label':'remote',
       'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
     ...]
     ```

   - For sentiment analysis:

     ```python
     >>> classifier = pipeline("sentiment-analysis")
     >>> classifier("This is a great product!")
     {'labels': ['POSITIVE'],'scores': tensor([0.9999], device='cpu', dtype=torch.float32)}
     ```

   - For image generation:

     ```python
     >>> image = pipeline(
    ... "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k"
    ... ).images[0]
     >>> image
     PILImage mode RGB size 7680x4320 at 0 DPI
     ```

Note that the exact syntax may vary depending on the specific pipeline being used. Refer to the documentation for more details on how to use each pipeline.

In general, the process involves importing the necessary modules, selecting the desired pipeline task, and passing it to the `pipeline` function along with any required arguments. The resulting pipeline object can then be used to perform the selected task on input data.
==================================Source docs==================================
Document 0------------------------------------------------------------
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
  'label': 'remote',
  'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
 {'score': 0.9960021376609802,
  'label': 'remote',
  'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
 {'score': 0.9954745173454285,
  'label': 'couch',
  'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
 {'score': 0.9988006353378296,
  'label': 'cat',
  'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
 {'score': 0.9986783862113953,
  'label': 'cat',
  'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
Document 1------------------------------------------------------------
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object_detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
  'label': 'remote',
  'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
 {'score': 0.9960021376609802,
  'label': 'remote',
  'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
 {'score': 0.9954745173454285,
  'label': 'couch',
  'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
 {'score': 0.9988006353378296,
  'label': 'cat',
  'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
 {'score': 0.9986783862113953,
  'label': 'cat',
  'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
Document 2------------------------------------------------------------
Start by creating an instance of [`pipeline`] and specifying a task you want to use it for. In this guide, you'll use the [`pipeline`] for sentiment analysis as an example:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("sentiment-analysis")
Document 3------------------------------------------------------------
```

## Add the pipeline to 🤗 Transformers

If you want to contribute your pipeline to 🤗 Transformers, you will need to add a new module in the `pipelines` submodule
with the code of your pipeline, then add it to the list of tasks defined in `pipelines/__init__.py`.

Then you will need to add tests. Create a new file `tests/test_pipelines_MY_PIPELINE.py` with examples of the other tests.

The `run_pipeline_test` function will be very generic and run on small random models on every possible
architecture as defined by `model_mapping` and `tf_model_mapping`.

This is very important to test future compatibility, meaning if someone adds a new model for
`XXXForQuestionAnswering` then the pipeline test will attempt to run on it. Because the models are random it's
impossible to check for actual values, that's why there is a helper `ANY` that will simply attempt to match the
output of the pipeline TYPE.

You also *need* to implement 2 (ideally 4) tests.

- `test_small_model_pt` : Define 1 small model for this pipeline (doesn't matter if the results don't make sense)
  and test the pipeline outputs. The results should be the same as `test_small_model_tf`.
- `test_small_model_tf` : Define 1 small model for this pipeline (doesn't matter if the results don't make sense)
  and test the pipeline outputs. The results should be the same as `test_small_model_pt`.
- `test_large_model_pt` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to
  make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make
  sure there is no drift in future releases.
- `test_large_model_tf` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to
  make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make
  sure there is no drift in future releases.
Document 4------------------------------------------------------------
```

2. Pass a prompt to the pipeline to generate an image:

```py
image = pipeline(
	"stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k"
).images[0]
image

✅ We now have a fully functional, performant RAG system. That's it for today! Congratulations for making it to the end 🥳

To go further 🗺️

This is not the end of the journey! You can try many steps to improve your RAG system. We recommend doing so in an iterative way: bring small changes to the system and see what improves performance.

Setting up an evaluation pipeline

  • 💬 "You cannot improve the model performance that you do not measure", said Gandhi... or at least Llama2 told me he said it. Anyway, you should absolutely start by measuring performance: this means building a small evaluation dataset, and then monitoring the performance of your RAG system on this evaluation dataset. A minimal sketch of such a loop follows below.
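For illustration only, here is a minimal sketch of such a loop; the tiny `eval_dataset` list and the keyword-overlap scoring are made-up placeholders rather than a recommended metric (in practice you would rather use human ratings or an LLM-as-a-judge):

```python
# Hypothetical mini evaluation set: (question, expected keywords in the reference answer)
eval_dataset = [
    ("How to create a pipeline object?", ["pipeline", "transformers"]),
    ("How to load a dataset from the Hub?", ["load_dataset", "datasets"]),
]

scores = []
for question, expected_keywords in eval_dataset:
    answer, _ = answer_with_rag(question, READER_LLM, KNOWLEDGE_VECTOR_DATABASE, reranker=RERANKER)
    # Crude placeholder metric: fraction of expected keywords that appear in the generated answer
    hits = sum(keyword.lower() in answer.lower() for keyword in expected_keywords)
    scores.append(hits / len(expected_keywords))

print(f"Average keyword coverage: {sum(scores) / len(scores):.2f}")
```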

Improving the retriever

🛠️ You can use these options to tune the results:

  • Tune the chunking method:
    • Size of the chunks
    • Method: split on different separators, use semantic chunking...
  • Change the embedding model

👷‍♀️ More could be considered:

  • Try another chunking method, like semantic chunking
  • Change the index used (here, FAISS)
  • Query expansion: reformulate the user query in slightly different ways to retrieve more documents; see the sketch right after this list.
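As an illustration of query expansion, here is a minimal sketch assuming we reuse `READER_LLM` to paraphrase the query and then simply pool and deduplicate the retrieved documents; the paraphrasing prompt and the parameter values are made up for this example:

```python
def retrieve_with_query_expansion(query, knowledge_index, llm, n_variants=2, k_per_query=5):
    # Ask the LLM for a few paraphrases of the user query (ad-hoc prompt, not chat-formatted here)
    prompt = f"Rewrite the following question in {n_variants} different ways, one per line:\n{query}\n"
    variants = [line.strip() for line in llm(prompt)[0]["generated_text"].split("\n") if line.strip()]

    # Retrieve for the original query and each paraphrase, then deduplicate by chunk content
    seen, pooled_docs = set(), []
    for q in [query] + variants[:n_variants]:
        for doc in knowledge_index.similarity_search(query=q, k=k_per_query):
            if doc.page_content not in seen:
                seen.add(doc.page_content)
                pooled_docs.append(doc)
    return pooled_docs


expanded_docs = retrieve_with_query_expansion(
    "How to create a pipeline object?", KNOWLEDGE_VECTOR_DATABASE, READER_LLM
)
print(f"Retrieved {len(expanded_docs)} unique chunks")
```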

Improving the reader

🛠️ Here you could try the following options to improve results:

  • Tweak the prompt
  • Switch reranking on or off
  • Choose a more powerful reader model

💡 Many options could be considered here to further improve the results:

  • Compress the retrieved context to keep only the parts most relevant to answering the query (a sketch follows this list).
  • Extend the RAG system to make it more user-friendly:
    • Cite sources
    • Make it conversational
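To give an idea of context compression, here is a minimal sketch, assuming we reuse `embedding_model` to score the sentences of each retrieved document against the query and keep only the best-scoring ones; the naive sentence split and the `top_n` value are arbitrary choices for illustration:

```python
import numpy as np


def compress_context(query, docs_text, top_n=5):
    # Split the retrieved documents into rough "sentences" (naive split on periods, for illustration)
    sentences = [s.strip() for doc in docs_text for s in doc.split(".") if len(s.strip()) > 20]

    # Embed the query and the sentences with the same embedding model (embeddings are normalized)
    query_emb = np.array(embedding_model.embed_query(query))
    sentence_embs = np.array(embedding_model.embed_documents(sentences))

    # Score each sentence by cosine similarity (a dot product, since embeddings are normalized)
    scores = sentence_embs @ query_emb
    best_idx = sorted(np.argsort(scores)[::-1][:top_n])
    return ". ".join(sentences[i] for i in best_idx)


compressed = compress_context(
    "How to create a pipeline object?", [doc.page_content for doc in retrieved_docs]
)
print(compressed)
```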