
LangChain: Splitting Code Snippets


Code Splitting

This chapter introduces LangChain's code text splitters. If you need to break source code into smaller snippets, study this chapter carefully: CodeTextSplitter supports splitting code written in many programming languages.

Install Dependencies

%pip install -qU langchain-text-splitters

First, import the Language enum and check which programming languages are supported for code splitting.

from langchain_text_splitters import (
    Language,
    RecursiveCharacterTextSplitter,
)
# Print the supported programming languages
[e.value for e in Language]
['cpp',
     'go',
     'java',
     'js',
     'php',
     'proto',
     'python',
     'rst',
     'ruby',
     'rust',
     'scala',
     'swift',
     'markdown',
     'latex',
     'html',
     'sol']
# You can also inspect the separators used for a given language
RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON)
['\nclass ', '\ndef ', '\n\tdef ', '\n\n', '\n', ' ', '']
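
from_language is essentially a convenience wrapper: it builds a regular RecursiveCharacterTextSplitter around the language-specific separators shown above (recent versions also treat them as regular expressions). A rough, hand-rolled sketch of the same idea for Python code, assuming you only want to reuse or tweak the separator list:

# Rough sketch of what from_language does for Python: feed the language-specific
# separators into a plain RecursiveCharacterTextSplitter. The real implementation
# may set additional options (e.g. treating separators as regexes), so this is
# illustrative only.
python_separators = RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON)
custom_python_splitter = RecursiveCharacterTextSplitter(
    separators=python_separators,
    chunk_size=50,
    chunk_overlap=0,
)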

Python

Here is an example of splitting Python code with a splitter configured for the Python language:

PYTHON_CODE = """
def hello_world():
    print("Hello, World!")

# Call the function
hello_world()
"""
python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=50, chunk_overlap=0
)
python_docs = python_splitter.create_documents([PYTHON_CODE])
python_docs
[Document(page_content='def hello_world():\n    print("Hello, World!")', metadata={}),
     Document(page_content='# Call the function\nhello_world()', metadata={})]
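
If you only need plain string chunks rather than Document objects, the splitter also provides split_text:

# Return raw string chunks instead of Document objects
python_splitter.split_text(PYTHON_CODE)
# ['def hello_world():\n    print("Hello, World!")', '# Call the function\nhello_world()']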

JS

An example of splitting JavaScript code with the JS text splitter:

JS_CODE = """
function helloWorld() {
  console.log("Hello, World!");
}

// Call the function
helloWorld();
"""

js_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.JS, chunk_size=60, chunk_overlap=0
)
js_docs = js_splitter.create_documents([JS_CODE])
js_docs
[Document(page_content='function helloWorld() {\n  console.log("Hello, World!");\n}', metadata={}),
     Document(page_content='// Call the function\nhelloWorld();', metadata={})]

Markdown

Here is an example of splitting Markdown text:

markdown_text = """
# 🦜️🔗 LangChain

⚡ Building applications with LLMs through composability ⚡

## Quick Install

```bash
# Hopefully this code block isn't split
pip install langchain
```

As an open source project in a rapidly developing field, we are extremely open to contributions.
"""
md_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0
)
md_docs = md_splitter.create_documents([markdown_text])
md_docs
[Document(page_content='# 🦜️🔗 LangChain', metadata={}),
     Document(page_content='⚡ Building applications with LLMs through composability ⚡', metadata={}),
     Document(page_content='## Quick Install', metadata={}),
     Document(page_content="```bash\n# Hopefully this code block isn't split", metadata={}),
     Document(page_content='pip install langchain', metadata={}),
     Document(page_content='```', metadata={}),
     Document(page_content='As an open source project in a rapidly developing field, we', metadata={}),
     Document(page_content='are extremely open to contributions.', metadata={})]
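
The metadata field is {} in all of the examples above because we only passed raw text. create_documents also accepts a metadatas list (one dict per input text) if you want every chunk to carry source information; the "README.md" value below is just an illustrative placeholder:

# Attach metadata to every chunk produced from the corresponding input text
md_docs_with_meta = md_splitter.create_documents(
    [markdown_text], metadatas=[{"source": "README.md"}]
)
md_docs_with_meta[0].metadata
# {'source': 'README.md'}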

LaTeX

An example of splitting LaTeX text:

latex_text = """
\documentclass{article}

\begin{document}

\maketitle

\section{Introduction}
Large language models (LLMs) are a type of machine learning model that can be trained on vast amounts of text data to generate human-like language. In recent years, LLMs have made significant advances in a variety of natural language processing tasks, including language translation, text generation, and sentiment analysis.

\subsection{History of LLMs}
The earliest LLMs were developed in the 1980s and 1990s, but they were limited by the amount of data that could be processed and the computational power available at the time. In the past decade, however, advances in hardware and software have made it possible to train LLMs on massive datasets, leading to significant improvements in performance.

\subsection{Applications of LLMs}
LLMs have many applications in industry, including chatbots, content creation, and virtual assistants. They can also be used in academia for research in linguistics, psychology, and computational linguistics.

\end{document}
"""
latex_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.MARKDOWN, chunk_size=60, chunk_overlap=0
)
latex_docs = latex_splitter.create_documents([latex_text])
latex_docs
[Document(page_content='\\documentclass{article}\n\n\x08egin{document}\n\n\\maketitle', metadata={}),
     Document(page_content='\\section{Introduction}', metadata={}),
     Document(page_content='Large language models (LLMs) are a type of machine learning', metadata={}),
     Document(page_content='model that can be trained on vast amounts of text data to', metadata={}),
     Document(page_content='generate human-like language. In recent years, LLMs have', metadata={}),
     Document(page_content='made significant advances in a variety of natural language', metadata={}),
     Document(page_content='processing tasks, including language translation, text', metadata={}),
     Document(page_content='generation, and sentiment analysis.', metadata={}),
     Document(page_content='\\subsection{History of LLMs}', metadata={}),
     Document(page_content='The earliest LLMs were developed in the 1980s and 1990s,', metadata={}),
     Document(page_content='but they were limited by the amount of data that could be', metadata={}),
     Document(page_content='processed and the computational power available at the', metadata={}),
     Document(page_content='time. In the past decade, however, advances in hardware and', metadata={}),
     Document(page_content='software have made it possible to train LLMs on massive', metadata={}),
     Document(page_content='datasets, leading to significant improvements in', metadata={}),
     Document(page_content='performance.', metadata={}),
     Document(page_content='\\subsection{Applications of LLMs}', metadata={}),
     Document(page_content='LLMs have many applications in industry, including', metadata={}),
     Document(page_content='chatbots, content creation, and virtual assistants. They', metadata={}),
     Document(page_content='can also be used in academia for research in linguistics,', metadata={}),
     Document(page_content='psychology, and computational linguistics.', metadata={}),
     Document(page_content='\\end{document}', metadata={})]
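
Note the \x08egin in the first chunk above: latex_text is a regular (non-raw) Python string, so the \b in \begin is interpreted as a backspace escape before the splitter ever sees it. If you want the literal backslash preserved, define such text as a raw string:

# A raw string keeps "\begin" intact instead of turning "\b" into a backspace
latex_snippet = r"\begin{document}"
print(latex_snippet)
# \begin{document}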

HTML

An example of splitting HTML:

html_text = """
<!DOCTYPE html>
<html>
    <head>
        <title>🦜️🔗 LangChain</title>
        <style>
            body {
                font-family: Arial, sans-serif;
            }
            h1 {
                color: darkblue;
            }
        </style>
    </head>
    <body>
        <div>
            <h1>🦜️🔗 LangChain</h1>
            <p>⚡ Building applications with LLMs through composability ⚡</p>
        </div>
        <div>
            As an open source project in a rapidly developing field, we are extremely open to contributions.
        </div>
    </body>
</html>
"""
html_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.HTML, chunk_size=60, chunk_overlap=0
)
html_docs = html_splitter.create_documents([html_text])
html_docs
[Document(page_content='<!DOCTYPE html>\n<html>', metadata={}),
     Document(page_content='<head>\n        <title>🦜️🔗 LangChain</title>', metadata={}),
     Document(page_content='<style>\n            body {\n                font-family: Aria', metadata={}),
     Document(page_content='l, sans-serif;\n            }\n            h1 {', metadata={}),
     Document(page_content='color: darkblue;\n            }\n        </style>\n    </head', metadata={}),
     Document(page_content='>', metadata={}),
     Document(page_content='<body>', metadata={}),
     Document(page_content='<div>\n            <h1>🦜️🔗 LangChain</h1>', metadata={}),
     Document(page_content='<p>⚡ Building applications with LLMs through composability ⚡', metadata={}),
     Document(page_content='</p>\n        </div>', metadata={}),
     Document(page_content='<div>\n            As an open source project in a rapidly dev', metadata={}),
     Document(page_content='eloping field, we are extremely open to contributions.', metadata={}),
     Document(page_content='</div>\n    </body>\n</html>', metadata={})]
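
As a quick sanity check, you can inspect how long the resulting chunks actually are; with chunk_size=60 they should all come in at or under 60 characters:

# Inspect the length of each chunk produced above
[len(d.page_content) for d in html_docs]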

Solidity

An example of splitting Solidity code:

SOL_CODE = """
pragma solidity ^0.8.20;
contract HelloWorld {
   function add(uint a, uint b) pure public returns(uint) {
       return a + b;
   }
}
"""

sol_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.SOL, chunk_size=128, chunk_overlap=0
)
sol_docs = sol_splitter.create_documents([SOL_CODE])
sol_docs
[
    Document(page_content='pragma solidity ^0.8.20;', metadata={}),
    Document(page_content='contract HelloWorld {\n   function add(uint a, uint b) pure public returns(uint) {\n       return a + b;\n   }\n}', metadata={})
]
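
To wrap up, here is a small, hypothetical helper (not part of LangChain) that picks a language-aware splitter based on a file's extension and falls back to a generic splitter otherwise. The EXT_TO_LANGUAGE map and the splitter_for_file name are illustrative only:

from pathlib import Path

from langchain_text_splitters import Language, RecursiveCharacterTextSplitter

# Hypothetical mapping from file extension to Language value (extend as needed)
EXT_TO_LANGUAGE = {
    ".py": Language.PYTHON,
    ".js": Language.JS,
    ".md": Language.MARKDOWN,
    ".tex": Language.LATEX,
    ".html": Language.HTML,
    ".sol": Language.SOL,
}

def splitter_for_file(path, chunk_size=60, chunk_overlap=0):
    """Return a splitter tuned to the file's language, or a generic one."""
    language = EXT_TO_LANGUAGE.get(Path(path).suffix.lower())
    if language is None:
        return RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
    return RecursiveCharacterTextSplitter.from_language(
        language=language, chunk_size=chunk_size, chunk_overlap=chunk_overlap
    )

# Example: split the Solidity snippet from above as if it were read from a file
file_docs = splitter_for_file("contracts/HelloWorld.sol", chunk_size=128).create_documents([SOL_CODE])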

