Advanced RAG on Hugging Face documentation using LangChain

Authored by: Aymeric Roucher

This notebook demonstrates how to build an advanced RAG (Retrieval Augmented Generation) system for answering a user's question about a specific knowledge base (here, the Hugging Face documentation), using LangChain.

For an introduction to RAG, you can check out this other cookbook!

RAG systems are complex, with many moving parts: here is a RAG diagram, where we noted in blue all possibilities for system enhancement:

💡 As you can see, there are many steps to tune in this architecture: tuning the system properly will yield significant performance gains.

In this notebook, we will take a close look at many of these blue notes to see how to tune your RAG system and get the best performance.

Let's dive into the model building! First, we install the required model dependencies.

!pip install -q torch transformers accelerate bitsandbytes langchain sentence-transformers faiss-cpu openpyxl pacmap datasets langchain-community ragatouille
from tqdm.notebook import tqdm
import pandas as pd
from typing import Optional, List, Tuple
from datasets import Dataset
import matplotlib.pyplot as plt

pd.set_option("display.max_colwidth", None)  # This will be helpful when visualizing retriever outputs

Load your knowledge base

import datasets

ds = datasets.load_dataset("m-ric/huggingface_doc", split="train")
from langchain.docstore.document import Document as LangchainDocument

RAW_KNOWLEDGE_BASE = [
    LangchainDocument(page_content=doc["text"], metadata={"source": doc["source"]}) for doc in tqdm(ds)
]

1. Retriever - embeddings 🗂️

The retriever acts like an internal search engine: given the user query, it returns a few relevant snippets from your knowledge base.

These snippets will then be fed to the Reader Model to help it generate its answer.

So our objective here is, given a user question, to find the most relevant snippets from our knowledge base to answer that question.

This is a broad objective, and it leaves open some questions. How many snippets should we retrieve? This parameter will be named top_k.

How long should these snippets be? This is called the chunk size. There is no one-size-fits-all answer, but here are a few elements:

  • 🔀 Your chunk size is allowed to vary from one snippet to the next.
  • Since there will always be some noise in your retrieval, increasing top_k increases the chance of getting relevant elements among your retrieved snippets. 🎯 Shooting more arrows increases your probability of hitting the target.
  • Meanwhile, the summed length of your retrieved documents should not be too high: for instance, for most current models, 16k tokens will probably drown your reader model in information due to the Lost-in-the-middle phenomenon. 🎯 Give your reader model only the most relevant insights, not a huge pile of books! (A quick token-budget sketch follows this list.)
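
To get a feel for this trade-off, here is a back-of-the-envelope token budget (a minimal sketch; the top_k values, chunk size, and context window below are illustrative assumptions, not recommendations):

# Rough context budget for the reader model (illustrative values only)
chunk_size_tokens = 512  # assumed tokens per retrieved chunk
reader_context_window = 4096  # assumed reader max_seq_length

for top_k in [5, 10, 30]:
    retrieved_tokens = top_k * chunk_size_tokens
    verdict = "fits" if retrieved_tokens < reader_context_window else "exceeds"
    print(f"top_k={top_k}: ~{retrieved_tokens} context tokens ({verdict} a {reader_context_window}-token window)")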

In this notebook, we use the LangChain library, since it offers a huge variety of options for vector databases and lets us keep document metadata throughout the processing.

1.1 Split the documents into chunks

  • In this part, we split the documents from our knowledge base into smaller chunks: these will be the snippets on which the reader LLM bases its answer.
  • The goal is to prepare a collection of semantically relevant snippets. Their size should therefore be adapted to precise ideas: too small will truncate ideas, too large will dilute them.

💡 Many options exist for text splitting: splitting on words, on sentence boundaries, recursive chunking that processes documents in a tree-like way to preserve structure information... To learn more about chunking, I recommend you read this great notebook by Greg Kamradt.

  • Recursive chunking breaks the text down progressively into smaller parts using a given list of separators, sorted from the most important to the least important. If the first split doesn't give chunks of the right size or shape, the method repeats itself on the new chunks with a different separator. For instance, with the separator list ["\n\n", "\n", ".", ""]:

    • the method will first break the document down wherever there is a double line break "\n\n";
    • the resulting documents will be split again on simple line breaks "\n", then on sentence ends ".";
    • finally, if some chunks are still too large, they will be split whenever they exceed the maximum size.
  • With this method, the global structure is nicely preserved, at the expense of slight variations in chunk size.

This Space lets you visualize how different splitting options affect the chunks you get.

🔬 Let's experiment a bit with chunk sizes, starting with an arbitrary size, and see how the splits work. We use LangChain's implementation of recursive chunking, RecursiveCharacterTextSplitter.

  • The parameter chunk_size controls the length of individual chunks: by default, this length is counted as the number of characters in the chunk.
  • The parameter chunk_overlap lets adjacent chunks overlap slightly with each other. This reduces the chance that an idea gets cut in two by the split between two adjacent chunks. We arbitrarily set this to 1/10th of the chunk size; you can try different values!
from langchain.text_splitter import RecursiveCharacterTextSplitter

# We use a hierarchical list of separators specifically tailored for splitting Markdown documents
# This list is taken from LangChain's MarkdownTextSplitter class
MARKDOWN_SEPARATORS = [
    "\n#{1,6} ",
    "```\n",
    "\n\\*\\*\\*+\n",
    "\n---+\n",
    "\n___+\n",
    "\n\n",
    "\n",
    " ",
    "",
]

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,  # The maximum number of characters in a chunk: we selected this value arbitrarily
    chunk_overlap=100,  # The number of characters to overlap between chunks
    add_start_index=True,  # If `True`, includes chunk's start index in metadata
    strip_whitespace=True,  # If `True`, strips whitespace from the start and end of every document
    separators=MARKDOWN_SEPARATORS,
)

docs_processed = []
for doc in RAW_KNOWLEDGE_BASE:
    docs_processed += text_splitter.split_documents([doc])

We also have to keep in mind that, when embedding documents, we will use an embedding model that accepts a certain maximum sequence length max_seq_length.

So we should make sure that our chunk sizes stay below this limit, because any longer chunk will be truncated before processing, thus losing relevance.

>>> from sentence_transformers import SentenceTransformer

>>> # To get the value of the max sequence length, we query the underlying `SentenceTransformer` object of our embedding model
>>> print(f"Model's maximum sequence length: {SentenceTransformer('thenlper/gte-small').max_seq_length}")

>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("thenlper/gte-small")
>>> lengths = [len(tokenizer.encode(doc.page_content)) for doc in tqdm(docs_processed)]

>>> # Plot the distribution of document lengths, counted as the number of tokens
>>> fig = pd.Series(lengths).hist()
>>> plt.title("Distribution of document lengths in the knowledge base (in count of tokens)")
>>> plt.show()
Model's maximum sequence length: 512

👀 As you can see, the chunk lengths are not aligned with our limit of 512 tokens, and some documents exceed the limit, so part of their content will be lost in truncation!

  • So we should change the RecursiveCharacterTextSplitter class to count length in number of tokens instead of number of characters.
  • Then we can choose a specific chunk size; here we pick a threshold lower than 512:
    • smaller documents allow the split to focus on more specific ideas;
    • but chunks that are too small will split sentences in half, losing meaning again: the proper tuning is a matter of balance.
>>> from langchain.text_splitter import RecursiveCharacterTextSplitter
>>> from transformers import AutoTokenizer

>>> EMBEDDING_MODEL_NAME = "thenlper/gte-small"


>>> def split_documents(
...     chunk_size: int,
...     knowledge_base: List[LangchainDocument],
...     tokenizer_name: Optional[str] = EMBEDDING_MODEL_NAME,
... ) -> List[LangchainDocument]:
...     """
...     Split documents into chunks of maximum size `chunk_size` tokens and return a list of documents.
...     """
...     text_splitter = RecursiveCharacterTextSplitter.from_huggingface_tokenizer(
...         AutoTokenizer.from_pretrained(tokenizer_name),
...         chunk_size=chunk_size,
...         chunk_overlap=int(chunk_size / 10),
...         add_start_index=True,
...         strip_whitespace=True,
...         separators=MARKDOWN_SEPARATORS,
...     )

...     docs_processed = []
...     for doc in knowledge_base:
...         docs_processed += text_splitter.split_documents([doc])

...     # Remove duplicates
...     unique_texts = {}
...     docs_processed_unique = []
...     for doc in docs_processed:
...         if doc.page_content not in unique_texts:
...             unique_texts[doc.page_content] = True
...             docs_processed_unique.append(doc)

...     return docs_processed_unique


>>> docs_processed = split_documents(
...     512,  # We choose a chunk size adapted to our model
...     RAW_KNOWLEDGE_BASE,
...     tokenizer_name=EMBEDDING_MODEL_NAME,
... )

>>> # Let's visualize the chunk sizes we would have in tokens from a common model
>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained(EMBEDDING_MODEL_NAME)
>>> lengths = [len(tokenizer.encode(doc.page_content)) for doc in tqdm(docs_processed)]
>>> fig = pd.Series(lengths).hist()
>>> plt.title("Distribution of document lengths in the knowledge base (in count of tokens)")
>>> plt.show()

➡️ Now the chunk length distribution looks much better!

1.2 Building the vector database

We want to compute the embeddings for all the chunks of our knowledge base: to learn more about sentence embeddings, we recommend reading this guide.

How does retrieval work?

Once all the chunks are embedded, we store them in a vector database. When the user types in a query, it gets embedded by the same model previously used, and a similarity search returns the closest documents from the vector database.

The technical challenge is thus, given a query vector, to quickly find the nearest neighbors of this vector in a database of thousands of records. To do this we need to choose two things: a distance, and a search algorithm.

Nearest Neighbor search algorithm

There are plenty of choices for a nearest neighbor search algorithm: we go with Facebook's FAISS, since FAISS is performant enough for most use cases, and it is well known and thus widely implemented.

Distances

Regarding distances, you can find a good guide here. In short:

  • Cosine similarity computes the similarity between two vectors as the cosine of their relative angle: it allows us to compare vector directions regardless of their magnitude. Using it requires normalizing all vectors, rescaling them to unit norm.
  • Dot product takes magnitude into account, with the sometimes undesirable effect that increasing a vector's length makes it more similar to all other vectors.
  • Euclidean distance is the distance between the ends of the vectors.

You can try this small exercise to check your understanding of these concepts. But once vectors are normalized, the choice of a specific distance doesn't matter much (see the short numerical check below).
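
As a quick numerical check of this point, the sketch below (plain numpy, independent of the rest of the notebook) compares the three measures on unit-normalized vectors: cosine similarity equals the dot product, and the squared Euclidean distance is just 2 - 2 * dot, so all three rank neighbors identically.

import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=384), rng.normal(size=384)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)  # rescale both vectors to unit norm

cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
dot = np.dot(a, b)
euclidean = np.linalg.norm(a - b)

print(f"cosine={cosine:.4f}  dot={dot:.4f}  euclidean={euclidean:.4f}")
print(np.isclose(euclidean**2, 2 - 2 * dot))  # True: nearest-neighbor orderings coincide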

Our particular model works well with cosine similarity, so we choose this distance, and we set it up both in the embedding model and in the distance_strategy argument of our FAISS index. With cosine similarity, we have to normalize our embeddings.

🚨👇 The cell below takes a few minutes to run on an A10G!

from langchain.vectorstores import FAISS
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores.utils import DistanceStrategy

embedding_model = HuggingFaceEmbeddings(
    model_name=EMBEDDING_MODEL_NAME,
    multi_process=True,
    model_kwargs={"device": "cuda"},
    encode_kwargs={"normalize_embeddings": True},  # Set `True` for cosine similarity
)

KNOWLEDGE_VECTOR_DATABASE = FAISS.from_documents(
    docs_processed, embedding_model, distance_strategy=DistanceStrategy.COSINE
)

👀 To visualize the search for the closest documents, let's project our embeddings from 384 dimensions down to 2 dimensions using PaCMAP.

💡 We chose PaCMAP rather than other techniques such as t-SNE or UMAP, since it is efficient (preserves local and global structure), robust to initialization parameters, and fast.

# Embed a user query in the same space
user_query = "How to create a pipeline object?"
query_vector = embedding_model.embed_query(user_query)
import pacmap
import numpy as np
import plotly.express as px

embedding_projector = pacmap.PaCMAP(n_components=2, n_neighbors=None, MN_ratio=0.5, FP_ratio=2.0, random_state=1)

embeddings_2d = [
    list(KNOWLEDGE_VECTOR_DATABASE.index.reconstruct_n(idx, 1)[0]) for idx in range(len(docs_processed))
] + [query_vector]

# Fit the data (the index of transformed data corresponds to the index of the original data)
documents_projected = embedding_projector.fit_transform(np.array(embeddings_2d), init="pca")
df = pd.DataFrame.from_dict(
    [
        {
            "x": documents_projected[i, 0],
            "y": documents_projected[i, 1],
            "source": docs_processed[i].metadata["source"].split("/")[1],
            "extract": docs_processed[i].page_content[:100] + "...",
            "symbol": "circle",
            "size_col": 4,
        }
        for i in range(len(docs_processed))
    ]
    + [
        {
            "x": documents_projected[-1, 0],
            "y": documents_projected[-1, 1],
            "source": "User query",
            "extract": user_query,
            "size_col": 100,
            "symbol": "star",
        }
    ]
)

# Visualize the embedding
fig = px.scatter(
    df,
    x="x",
    y="y",
    color="source",
    hover_data="extract",
    size="size_col",
    symbol="symbol",
    color_discrete_map={"User query": "black"},
    width=1000,
    height=700,
)
fig.update_traces(
    marker=dict(opacity=1, line=dict(width=0, color="DarkSlateGrey")),
    selector=dict(mode="markers"),
)
fig.update_layout(
    legend_title_text="<b>Chunk source</b>",
    title="<b>2D Projection of Chunk Embeddings via PaCMAP</b>",
)
fig.show()

➡️ On the graph above, you can see a spatial representation of the knowledge base documents. Since the vector embeddings represent the documents' meaning, closeness in meaning should be reflected in closeness of the embeddings.

The user query's embedding is also shown: we want to find the k documents with the closest meaning, so we pick the k closest vectors.

In the LangChain vector database implementation, this search operation is performed by the method vector_database.similarity_search(query).

Here is the result:

>>> print(f"\nStarting retrieval for {user_query=}...")
>>> retrieved_docs = KNOWLEDGE_VECTOR_DATABASE.similarity_search(query=user_query, k=5)
>>> print("\n==================================Top document==================================")
>>> print(retrieved_docs[0].page_content)
>>> print("==================================Metadata==================================")
>>> print(retrieved_docs[0].metadata)
Starting retrieval for user_query='How to create a pipeline object?'...

==================================Top document==================================
```

## Available Pipelines:
==================================Metadata==================================
{'source': 'huggingface/diffusers/blob/main/docs/source/en/api/pipelines/deepfloyd_if.md', 'start_index': 16887}

2. Reader - LLM 💬

In this part, the LLM Reader reads the retrieved context to formulate its answer.

All the substeps can be tuned:

  1. The content of the retrieved documents is aggregated together into the "context", with many processing options such as prompt compression.
  2. The context and the user query are aggregated into a prompt and then given to the LLM to generate its answer.

2.1. Reader model

The choice of a reader model is important in a few aspects:

  • the reader model's max_seq_length must accommodate our prompt, which includes the context output by the retriever call: the context consists of 5 documents of 512 tokens each, so we aim for a context length of at least 4k tokens.
  • the reader model

For this example, we chose HuggingFaceH4/zephyr-7b-beta, a small but powerful model.

With many models being released every week, you may want to substitute this model with the latest and greatest. The best way to keep track of open source LLMs is to check the Open-source LLM leaderboard.

To make inference faster, we will load the quantized version of the model:

from transformers import pipeline
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

READER_MODEL_NAME = "HuggingFaceH4/zephyr-7b-beta"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(READER_MODEL_NAME, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(READER_MODEL_NAME)

READER_LLM = pipeline(
    model=model,
    tokenizer=tokenizer,
    task="text-generation",
    do_sample=True,
    temperature=0.2,
    repetition_penalty=1.1,
    return_full_text=False,
    max_new_tokens=500,
)
READER_LLM("What is 4+4? Answer:")

2.2. Prompt

The RAG prompt template below is what we will feed to the Reader LLM: it is important to have it formatted in the Reader LLM's chat template.

We give it our context and the user's question.

>>> prompt_in_chat_format = [
...     {
...         "role": "system",
...         "content": """Using the information contained in the context,
... give a comprehensive answer to the question.
... Respond only to the question asked, response should be concise and relevant to the question.
... Provide the number of the source document when relevant.
... If the answer cannot be deduced from the context, do not give an answer.""",
...     },
...     {
...         "role": "user",
...         "content": """Context:
... {context}
... ---
... Now here is the question you need to answer.

... Question: {question}""",
...     },
... ]
>>> RAG_PROMPT_TEMPLATE = tokenizer.apply_chat_template(
...     prompt_in_chat_format, tokenize=False, add_generation_prompt=True
... )
>>> print(RAG_PROMPT_TEMPLATE)
<|system|>
Using the information contained in the context, 
give a comprehensive answer to the question.
Respond only to the question asked, response should be concise and relevant to the question.
Provide the number of the source document when relevant.
If the answer cannot be deduced from the context, do not give an answer.
<|user|>
Context:
{context}
---
Now here is the question you need to answer.

Question: {question}
<|assistant|>

Let's test our Reader on our previously retrieved documents!

>>> retrieved_docs_text = [doc.page_content for doc in retrieved_docs]  # We only need the text of the documents
>>> context = "\nExtracted documents:\n"
>>> context += "".join([f"Document {str(i)}:::\n" + doc for i, doc in enumerate(retrieved_docs_text)])

>>> final_prompt = RAG_PROMPT_TEMPLATE.format(question="How to create a pipeline object?", context=context)

>>> # Generate an answer
>>> answer = READER_LLM(final_prompt)[0]["generated_text"]
>>> print(answer)
To create a pipeline object, follow these steps:

1. Define the inputs and outputs of your pipeline. These could be strings, dictionaries, or any other format that best suits your use case.

2. Inherit the `Pipeline` class from the `transformers` module and implement the following methods:

   - `preprocess`: This method takes the raw inputs and returns a preprocessed dictionary that can be passed to the model.

   - `_forward`: This method performs the actual inference using the model and returns the output tensor.

   - `postprocess`: This method takes the output tensor and returns the final output in the desired format.

   - `_sanitize_parameters`: This method is used to sanitize the input parameters before passing them to the model.

3. Load the necessary components, such as the model and scheduler, into the pipeline object.

4. Instantiate the pipeline object and return it.

Here's an example implementation based on the given context:

```python
from transformers import Pipeline
import torch
from diffusers import StableDiffusionPipeline

class MyPipeline(Pipeline):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.pipe = StableDiffusionPipeline.from_pretrained("my_model")

    def preprocess(self, inputs):
        # Preprocess the inputs as needed
        return {"input_ids":...}

    def _forward(self, inputs):
        # Run the forward pass of the model
        return self.pipe(**inputs).images[0]

    def postprocess(self, outputs):
        # Postprocess the outputs as needed
        return outputs["sample"]

    def _sanitize_parameters(self, params):
        # Sanitize the input parameters
        return params

my_pipeline = MyPipeline()
result = my_pipeline("My input string")
print(result)
```

Note that this implementation assumes that the model and scheduler are already loaded into memory. If they need to be loaded dynamically, you can modify the `__init__` method accordingly.

2.3. Reranking

A good option for RAG is to retrieve more documents than you want in the end, then rerank the results with a more powerful retrieval model before keeping only the top_k.

For this, Colbertv2 is a great choice: instead of a bi-encoder like our classical embedding models, it is a cross-encoder that computes more fine-grained interactions between the query tokens and each document's tokens.

It is easily usable thanks to the RAGatouille library.

from ragatouille import RAGPretrainedModel

RERANKER = RAGPretrainedModel.from_pretrained("colbert-ir/colbertv2.0")
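
As a quick standalone sanity check (this assumes user_query and the retrieved_docs from the earlier similarity search are still in memory), we can ask the reranker to reorder those snippets before wiring everything together; each result exposes its text under the 'content' key, as used further below:

# Rerank the snippets retrieved earlier for the same user query
docs_to_rerank = [doc.page_content for doc in retrieved_docs]
reranked = RERANKER.rerank(user_query, docs_to_rerank, k=3)
for rank, doc in enumerate(reranked):
    print(f"Rank {rank}: {doc['content'][:100]}...")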

3. Assembling it all!

from transformers import Pipeline


def answer_with_rag(
    question: str,
    llm: Pipeline,
    knowledge_index: FAISS,
    reranker: Optional[RAGPretrainedModel] = None,
    num_retrieved_docs: int = 30,
    num_docs_final: int = 5,
) -> Tuple[str, List[LangchainDocument]]:
    # Gather documents with retriever
    print("=> Retrieving documents...")
    relevant_docs = knowledge_index.similarity_search(query=question, k=num_retrieved_docs)
    relevant_docs = [doc.page_content for doc in relevant_docs]  # Keep only the text

    # Optionally rerank results
    if reranker:
        print("=> Reranking documents...")
        relevant_docs = reranker.rerank(question, relevant_docs, k=num_docs_final)
        relevant_docs = [doc["content"] for doc in relevant_docs]

    relevant_docs = relevant_docs[:num_docs_final]

    # Build the final prompt
    context = "\nExtracted documents:\n"
    context += "".join([f"Document {str(i)}:::\n" + doc for i, doc in enumerate(relevant_docs)])

    final_prompt = RAG_PROMPT_TEMPLATE.format(question=question, context=context)

    # Generate an answer
    print("=> Generating answer...")
    answer = llm(final_prompt)[0]["generated_text"]

    return answer, relevant_docs

Let's see how our RAG pipeline answers a user query.

>>> question = "how to create a pipeline object?"

>>> answer, relevant_docs = answer_with_rag(question, READER_LLM, KNOWLEDGE_VECTOR_DATABASE, reranker=RERANKER)
=> Retrieving documents...
>>> print("==================================Answer==================================")
>>> print(f"{answer}")
>>> print("==================================Source docs==================================")
>>> for i, doc in enumerate(relevant_docs):
...     print(f"Document {i}------------------------------------------------------------")
...     print(doc)
==================================Answer==================================
To create a pipeline object, follow these steps:

1. Import the `pipeline` function from the `transformers` module:

   ```python
   from transformers import pipeline
   ```

2. Choose the task you want to perform, such as object detection, sentiment analysis, or image generation, and pass it as an argument to the `pipeline` function:

   - For object detection:

     ```python
     >>> object_detector = pipeline('object-detection')
     >>> object_detector(image)
     [{'score': 0.9982201457023621,
       'label':'remote',
       'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
     ...]
     ```

   - For sentiment analysis:

     ```python
     >>> classifier = pipeline("sentiment-analysis")
     >>> classifier("This is a great product!")
     {'labels': ['POSITIVE'],'scores': tensor([0.9999], device='cpu', dtype=torch.float32)}
     ```

   - For image generation:

     ```python
     >>> image = pipeline(
    ... "stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k"
    ... ).images[0]
     >>> image
     PILImage mode RGB size 7680x4320 at 0 DPI
     ```

Note that the exact syntax may vary depending on the specific pipeline being used. Refer to the documentation for more details on how to use each pipeline.

In general, the process involves importing the necessary modules, selecting the desired pipeline task, and passing it to the `pipeline` function along with any required arguments. The resulting pipeline object can then be used to perform the selected task on input data.
==================================Source docs==================================
Document 0------------------------------------------------------------
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
  'label': 'remote',
  'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
 {'score': 0.9960021376609802,
  'label': 'remote',
  'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
 {'score': 0.9954745173454285,
  'label': 'couch',
  'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
 {'score': 0.9988006353378296,
  'label': 'cat',
  'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
 {'score': 0.9986783862113953,
  'label': 'cat',
  'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
Document 1------------------------------------------------------------
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object_detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
  'label': 'remote',
  'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
 {'score': 0.9960021376609802,
  'label': 'remote',
  'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
 {'score': 0.9954745173454285,
  'label': 'couch',
  'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
 {'score': 0.9988006353378296,
  'label': 'cat',
  'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
 {'score': 0.9986783862113953,
  'label': 'cat',
  'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
Document 2------------------------------------------------------------
Start by creating an instance of [`pipeline`] and specifying a task you want to use it for. In this guide, you'll use the [`pipeline`] for sentiment analysis as an example:

```py
>>> from transformers import pipeline

>>> classifier = pipeline("sentiment-analysis")
Document 3------------------------------------------------------------
```

## Add the pipeline to 🤗 Transformers

If you want to contribute your pipeline to 🤗 Transformers, you will need to add a new module in the `pipelines` submodule
with the code of your pipeline, then add it to the list of tasks defined in `pipelines/__init__.py`.

Then you will need to add tests. Create a new file `tests/test_pipelines_MY_PIPELINE.py` with examples of the other tests.

The `run_pipeline_test` function will be very generic and run on small random models on every possible
architecture as defined by `model_mapping` and `tf_model_mapping`.

This is very important to test future compatibility, meaning if someone adds a new model for
`XXXForQuestionAnswering` then the pipeline test will attempt to run on it. Because the models are random it's
impossible to check for actual values, that's why there is a helper `ANY` that will simply attempt to match the
output of the pipeline TYPE.

You also *need* to implement 2 (ideally 4) tests.

- `test_small_model_pt` : Define 1 small model for this pipeline (doesn't matter if the results don't make sense)
  and test the pipeline outputs. The results should be the same as `test_small_model_tf`.
- `test_small_model_tf` : Define 1 small model for this pipeline (doesn't matter if the results don't make sense)
  and test the pipeline outputs. The results should be the same as `test_small_model_pt`.
- `test_large_model_pt` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to
  make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make
  sure there is no drift in future releases.
- `test_large_model_tf` (`optional`): Tests the pipeline on a real pipeline where the results are supposed to
  make sense. These tests are slow and should be marked as such. Here the goal is to showcase the pipeline and to make
  sure there is no drift in future releases.
Document 4------------------------------------------------------------
```

2. Pass a prompt to the pipeline to generate an image:

```py
image = pipeline(
	"stained glass of darth vader, backlight, centered composition, masterpiece, photorealistic, 8k"
).images[0]
image

✅ We now have a fully functional, performant RAG system. That's it for today! Congratulations for making it to the end 🥳

To go further 🗺️

This is not the end of the journey! There are many steps you can try to improve your RAG system. We recommend doing so in an iterative way: bring small changes to the system and see what improves performance.

Setting up an evaluation pipeline

  • 💬 "You cannot improve the model performance that you do not measure", said Gandhi... or at least Llama2 told me he said it. Anyway, you should definitely start by measuring performance: this means building a small evaluation dataset and then monitoring the performance of your RAG system on it.
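
As a minimal illustration (not a full evaluation pipeline), one could keep a handful of hand-written question/keyword pairs and spot-check whether the generated answers mention the expected facts; the sample entry below is a purely hypothetical placeholder:

# Minimal sketch of an evaluation loop over a tiny hand-built dataset (hypothetical examples)
eval_dataset = [
    {"question": "How to create a pipeline object?", "must_contain": ["pipeline"]},
    # add more question / expected-keyword pairs from your own domain
]

scores = []
for sample in eval_dataset:
    answer, _ = answer_with_rag(sample["question"], READER_LLM, KNOWLEDGE_VECTOR_DATABASE, reranker=RERANKER)
    # Crude correctness proxy: does the answer mention every expected keyword?
    scores.append(all(kw.lower() in answer.lower() for kw in sample["must_contain"]))

print(f"Keyword-match rate: {sum(scores) / len(scores):.0%} over {len(scores)} questions")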

Improving the retriever

🛠️ You can use these options to tune the results:

  • Tune the chunking method:
    • Size of the chunks
    • Method: split on different separators, use semantic chunking...
  • Change the embedding model

👷‍♀️ More could be considered:

  • Try another chunking method, like semantic chunking
  • Change the index used (here, FAISS)
  • Query expansion: reformulate the user query in slightly different ways to retrieve more documents (see the sketch after this list).
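
For instance, query expansion could look roughly like the sketch below: ask the reader LLM for a few paraphrases of the question, retrieve for each variant, and merge the results. The prompt wording, the helper name expanded_retrieval, and the deduplication strategy are illustrative assumptions, not a tested recipe:

# Illustrative query-expansion sketch: paraphrase the question, retrieve for each variant, merge results
def expanded_retrieval(question: str, n_variants: int = 2, k_per_query: int = 5) -> List[str]:
    prompt = f"Rewrite the following question in {n_variants} different ways, one per line:\n{question}"
    generated = READER_LLM(prompt)[0]["generated_text"]
    variants = [question] + [line.strip() for line in generated.split("\n") if line.strip()][:n_variants]

    seen, merged = set(), []
    for query in variants:
        for doc in KNOWLEDGE_VECTOR_DATABASE.similarity_search(query=query, k=k_per_query):
            if doc.page_content not in seen:  # deduplicate chunks retrieved by several variants
                seen.add(doc.page_content)
                merged.append(doc.page_content)
    return merged

print(len(expanded_retrieval("how to create a pipeline object?")))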

Improving the reader

🛠️ Here you can try the following options to improve results:

  • Tweak the prompt
  • Switch reranking on or off
  • Choose a more powerful reader model

💡 Many options could be considered here to further improve the results:

  • Compress the retrieved context to keep only the parts most relevant to answering the query (a sketch follows after this list).
  • Extend the RAG system to make it more user-friendly:
    • Cite sources
    • Make it conversational
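
As one possible take on context compression (a sketch only; the prompt wording and the helper name compress_context are illustrative assumptions), each retrieved snippet could be condensed by the reader LLM before being assembled into the final context:

# Illustrative context-compression sketch: condense each retrieved snippet with the reader LLM
def compress_context(question: str, docs: List[str], max_new_tokens: int = 100) -> List[str]:
    compressed = []
    for doc in docs:
        prompt = (
            "Extract only the sentences from the passage below that help answer the question.\n"
            f"Question: {question}\nPassage:\n{doc}\nRelevant sentences:"
        )
        compressed.append(READER_LLM(prompt, max_new_tokens=max_new_tokens)[0]["generated_text"])
    return compressed

compressed_docs = compress_context("how to create a pipeline object?", relevant_docs[:2])
print(compressed_docs[0])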