
Question Answering


Question answering over local search results

This example demonstrates question answering over search results.

RetrievalQAChain is a chain that combines a Retriever with a QA chain (covered earlier). It retrieves documents from the Retriever, then uses the QA chain to answer the question based on the retrieved documents.

Usage

In the example below, we use a VectorStore as the Retriever. By default, a StuffDocumentsChain is used as the QA chain.

import { OpenAI } from "langchain/llms/openai";
import { RetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as fs from "fs";

// Initialize the LLM to use to answer the question.
const model = new OpenAI({});
const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// Create a vector store from the documents.
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

// Initialize a retriever wrapper around the vector store.
const vectorStoreRetriever = vectorStore.asRetriever();

// Create a chain that uses the OpenAI LLM and HNSWLib vector store.
const chain = RetrievalQAChain.fromLLM(model, vectorStoreRetriever);
const res = await chain.call({
  query: "总统对Breyer法官有什么说法?",
});
console.log({ res });
/*
{
  res: {
    text: 'The president said that Justice Breyer is a veteran, a constitutional scholar, and a retiring Justice of the United States Supreme Court, and thanked him for his service.'
  }
}
*/
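
By default, the retriever returns only the few most similar chunks (four in most versions). If a question needs broader context, the count can be adjusted when creating the retriever. A minimal sketch, assuming this version of asRetriever accepts a result count and reusing the model and vectorStore defined above:

// Retrieve the 8 most similar chunks instead of the default.
// (Assumes `model` and `vectorStore` from the example above; the
// numeric argument to `asRetriever` is the number of chunks to return.)
const widerRetriever = vectorStore.asRetriever(8);
const widerChain = RetrievalQAChain.fromLLM(model, widerRetriever);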

Custom QA chains

In the example below, we use a VectorStore as the Retriever and a MapReduceDocumentsChain as the QA chain.

import { OpenAI } from "langchain/llms/openai";
import { RetrievalQAChain, loadQAMapReduceChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as fs from "fs";

// Initialize the LLM to use to answer the question.
const model = new OpenAI({});
const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// Create a vector store from the documents.
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

// Create a chain that uses a map-reduce chain and the HNSWLib vector store.
const chain = new RetrievalQAChain({
  combineDocumentsChain: loadQAMapReduceChain(model),
  retriever: vectorStore.asRetriever(),
});
const res = await chain.call({
  query: "总统对Breyer法官有什么说法?",
});
console.log({ res });
/*
{
  res: {
    text: 'The president said that Justice Breyer has dedicated himself to serving the country, and thanked him for his service. He also said...'
  }
}
*/
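
The same constructor pattern accepts any document-combining chain. For instance, a refine chain walks the retrieved documents one at a time, refining its answer at each step, which can help when the relevant passages would overflow a single prompt. A minimal sketch, assuming loadQARefineChain is exported from "langchain/chains" in this version and reusing the model and vectorStore from above:

import { loadQARefineChain } from "langchain/chains";

// Refine the answer document by document instead of map-reducing.
// (Assumes `model` and `vectorStore` from the example above.)
const refineChain = new RetrievalQAChain({
  combineDocumentsChain: loadQARefineChain(model),
  retriever: vectorStore.asRetriever(),
});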

Custom prompts

You can pass in custom prompts for question answering. These are the same prompts that you can pass into the base question-answering chain.

import { OpenAI } from "langchain/llms/openai";
import { RetrievalQAChain, loadQAStuffChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { PromptTemplate } from "langchain/prompts";
import * as fs from "fs";

const promptTemplate = `Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.

{context}

Question: {question}
Answer in Italian:`;
const prompt = PromptTemplate.fromTemplate(promptTemplate);

// Initialize the LLM to use to answer the question.
const model = new OpenAI({});
const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// 从文档创建向量存储。
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

// Create a chain that uses the stuff chain and HNSWLib vector store.
const chain = new RetrievalQAChain({
  combineDocumentsChain: loadQAStuffChain(model, { prompt }),
  retriever: vectorStore.asRetriever(),
});
const res = await chain.call({
  query: "What did the president say about Justice Breyer?",
});
console.log({ res });
/*
{
  res: {
    text: ' Il presidente ha elogiato Justice Breyer per il suo servizio e lo ha ringraziato.'
  }
}
*/
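
Because the custom prompt lives on the combine-documents chain, it composes freely with the other constructor options, such as the source-document output described in the next section. A minimal sketch reusing the model, prompt, and vectorStore from above:

// Answer with the custom prompt and also return the supporting documents.
// (Assumes `model`, `prompt`, and `vectorStore` from the example above.)
const chainWithSources = new RetrievalQAChain({
  combineDocumentsChain: loadQAStuffChain(model, { prompt }),
  retriever: vectorStore.asRetriever(),
  returnSourceDocuments: true,
});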

Returning source documents

Additionally, we can return the source documents used to answer the question by specifying an optional parameter when constructing the chain.

import { OpenAI } from "langchain/llms/openai";
import { RetrievalQAChain } from "langchain/chains";
import { HNSWLib } from "langchain/vectorstores/hnswlib";
import { OpenAIEmbeddings } from "langchain/embeddings/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import * as fs from "fs";

// Initialize the LLM to use to answer the question.
const model = new OpenAI({});
const text = fs.readFileSync("data/state_of_the_union_2022.txt", "utf8");
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// Create a vector store from the documents.
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

// Create a chain that uses a map reduce chain and HNSWLib vector store.
const chain = RetrievalQAChain.fromLLM(model, vectorStore.asRetriever(), {
  returnSourceDocuments: true, // Can also be passed into the constructor
});
const res = await chain.call({
  query: "What did the president say about Justice Breyer?",
});
console.log(JSON.stringify(res, null, 2));
/*
{
  "text": " The president thanked Justice Breyer for his service and asked him to stand so he could be seen.",
  "sourceDocuments": [
    {
      "pageContent": "Justice Breyer, thank you for your service. .....",
      "metadata": {
        "loc": {
          "lines": {
            "from": 481,
            "to": 487
          }
        }
      }
    },
    {
      "pageContent": "Since she’s been nominated, ...",
      "metadata": {
        "loc": {
          "lines": {
            "from": 487,
            "to": 499
          }
        }
      }
    },
    {
      "pageContent": "These laws don’t infringe on the...",
      "metadata": {
        "loc": {
          "lines": {
            "from": 468,
            "to": 481
          }
        }
      }
    },
    {
      "pageContent": "If you want to go forward not backwards, ...",
      "metadata": {
        "loc": {
          "lines": {
            "from": 511,
            "to": 523
          }
        }
      }
    }
  ]
}
*/
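
Each source document carries its page content plus the loader's metadata (here, the line range the chunk came from), which makes it straightforward to show where an answer is grounded. A minimal sketch over the res object from the call above:

// Print where each supporting chunk came from.
// (Assumes `res` from the example above.)
for (const doc of res.sourceDocuments) {
  const { from, to } = doc.metadata.loc.lines;
  console.log(`lines ${from}-${to}: ${doc.pageContent.slice(0, 60)}...`);
}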

