Series index: LangChain Tutorial - Series Articles
LangChain provides a flexible and powerful expression language, the LangChain Expression Language (LCEL), for building complex logic chains. By composing different runnable objects, LCEL supports advanced patterns such as sequential chains, nested chains, parallel chains, routing, and dynamic chain construction, covering a wide range of scenarios. This article walks through these features and how to implement them.
Sequential Chains
The core capability of LCEL is composing runnables in sequence, where the output of each step is automatically passed as the input to the next. Sequential chains can be built with the pipe operator (|) or with the explicit .pipe() method.
Here is a simple example:
from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

model = OllamaLLM(model="qwen2.5:0.5b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

chain = prompt | model | StrOutputParser()

result = chain.invoke({"topic": "bears"})
print(result)

Output:
Here's a bear joke for you:

Why did the bear dissolve in water?
Because it was a polar bear!

In this example, the prompt template formats the input for the chat model, the chat model generates the joke, and the output parser converts the result into a string.
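The pipe operator also has an explicit equivalent. The sketch below, reusing the prompt and model objects from the example above, builds the same chain with the .pipe() method mentioned earlier:

# Same chain, built with the explicit .pipe() method instead of |
chain_via_pipe = prompt.pipe(model).pipe(StrOutputParser())

print(chain_via_pipe.invoke({"topic": "bears"}))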
Nested Chains
Nested chains let you combine multiple chains into more complex logic. For example, the joke-generating chain above can be composed with a second chain that judges how funny the joke is.
analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")

composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()

result = composed_chain.invoke({"topic": "bears"})
print(result)

Output:
Haha, that's a clever play on words! Using "polar" to imply the bear dissolved or became polar/polarized when put in water. Not the most hilarious joke ever, but it has a cute, groan-worthy pun that makes it mildly amusing.
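The {"joke": chain} mapping in composed_chain is what feeds the first chain's output into the analysis prompt. As a rough equivalent (a sketch reusing the chain, analysis_prompt, and model objects above), the same wiring can be written with an inline lambda:

# Wrap the generated joke into the dict that analysis_prompt expects
composed_chain_with_lambda = (
    chain
    | (lambda joke_text: {"joke": joke_text})
    | analysis_prompt
    | model
    | StrOutputParser()
)

print(composed_chain_with_lambda.invoke({"topic": "bears"}))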
Parallel Chains

RunnableParallel runs multiple chains in parallel and combines their results into a single dictionary. This is useful when several tasks need to be handled for the same input at the same time.
from langchain_core.runnables import RunnableParallel

joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model

parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)

result = parallel_chain.invoke({"topic": "bear"})
print(result)

Output:
{
    'joke': "Why don't bears like fast food? Because they can't catch it!",
    'poem': 'In the quiet of the forest, the bear roams free\nMajestic and wild, a sight to see.'
}
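RunnableParallel can equivalently be constructed from a plain mapping of runnables. A minimal sketch, reusing the joke_chain and poem_chain defined above:

# Equivalent construction: RunnableParallel also accepts a dict of runnables
parallel_chain = RunnableParallel({"joke": joke_chain, "poem": poem_chain})

# A bare dict is likewise coerced to RunnableParallel when it appears inside a
# chain, as in the {"joke": chain} step of the nested-chain example above.
result = parallel_chain.invoke({"topic": "bear"})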
Routing

Routing selects which sub-chain to run based on the input at runtime. LCEL offers two ways to implement routing:
Using a custom function
Dynamic routing can be implemented by wrapping a routing function in RunnableLambda:
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about LangChain, Anthropic, or Other.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | OllamaLLM(model="qwen2.5:0.5b")
    | StrOutputParser()
)

langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

def route(info):
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain

full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)

result = full_chain.invoke({"question": "how do I use LangChain?"})
print(result)
Using RunnableBranch
RunnableBranch selects a branch by matching conditions:
from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
    (lambda x: "langchain" in x["topic"].lower(), langchain_chain),
    general_chain,
)

full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch

result = full_chain.invoke({"question": "how do I use Anthropic?"})
print(result)
Dynamic Chain Construction

Dynamic construction lets parts of a chain be generated at runtime based on the input. A function decorated with @chain (or wrapped in RunnableLambda) may return a new Runnable, which is then executed as part of the chain.
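Before the full example, here is a minimal, hypothetical sketch of that mechanism (the pick_step, upper_step, and plain_step names and the shout flag are invented for illustration): when the decorated function returns a Runnable, LCEL invokes that Runnable with the same input.

from langchain_core.runnables import RunnableLambda, chain

upper_step = RunnableLambda(lambda x: x["text"].upper())
plain_step = RunnableLambda(lambda x: x["text"])

@chain
def pick_step(input_: dict):
    # Returning a Runnable here causes it to be invoked with the same input
    return upper_step if input_.get("shout") else plain_step

print(pick_step.invoke({"text": "hello", "shout": True}))  # HELLO
print(pick_step.invoke({"text": "hello"}))                 # hello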
from operator import itemgetter

from langchain_core.runnables import chain, RunnablePassthrough

llm = OllamaLLM(model="qwen2.5:0.5b")

contextualize_instructions = "Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()

@chain
def contextualize_if_needed(input_: dict):
    if input_.get("chat_history"):
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")

@chain
def fake_retriever(input_: dict):
    return "egypt's population in 2024 is about 111 million"

qa_instructions = (
    "Answer the user question given the following context:\n\n{context}."
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)

full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

result = full_chain.invoke(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
)
print(result)

Output:
According to the context provided, Egypt's population in 2024 is estimated to be about 111 million.
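Because the dynamically constructed parts expose the standard Runnable interface, the composed chain can also be streamed instead of invoked. A minimal sketch, reusing the full_chain object above:

# Stream the answer chunk by chunk rather than waiting for the full string
for chunk in full_chain.stream(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
):
    print(chunk, end="", flush=True)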
Complete Code Example

from operator import itemgetter

from langchain_ollama import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

print("\n-----------------------------------\n")

# Simple demo
model = OllamaLLM(model="qwen2.5:0.5b")
prompt = ChatPromptTemplate.from_template("tell me a joke about {topic}")

chain = prompt | model | StrOutputParser()

result = chain.invoke({"topic": "bears"})
print(result)

print("\n-----------------------------------\n")

# Compose demo
analysis_prompt = ChatPromptTemplate.from_template("is this a funny joke? {joke}")
composed_chain = {"joke": chain} | analysis_prompt | model | StrOutputParser()

result = composed_chain.invoke({"topic": "bears"})
print(result)

print("\n-----------------------------------\n")

# Parallel demo
from langchain_core.runnables import RunnableParallel

joke_chain = ChatPromptTemplate.from_template("tell me a joke about {topic}") | model
poem_chain = ChatPromptTemplate.from_template("write a 2-line poem about {topic}") | model

parallel_chain = RunnableParallel(joke=joke_chain, poem=poem_chain)

result = parallel_chain.invoke({"topic": "bear"})
print(result)

print("\n-----------------------------------\n")

# Route demo
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableLambda

chain = (
    PromptTemplate.from_template(
        """Given the user question below, classify it as either being about LangChain, Anthropic, or Other.

Do not respond with more than one word.

<question>
{question}
</question>

Classification:"""
    )
    | OllamaLLM(model="qwen2.5:0.5b")
    | StrOutputParser()
)

langchain_chain = PromptTemplate.from_template(
    """You are an expert in langchain. \
Always answer questions starting with "As Harrison Chase told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

anthropic_chain = PromptTemplate.from_template(
    """You are an expert in anthropic. \
Always answer questions starting with "As Dario Amodei told me". \
Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

general_chain = PromptTemplate.from_template(
    """Respond to the following question:

Question: {question}
Answer:"""
) | OllamaLLM(model="qwen2.5:0.5b")

def route(info):
    if "anthropic" in info["topic"].lower():
        return anthropic_chain
    elif "langchain" in info["topic"].lower():
        return langchain_chain
    else:
        return general_chain

full_chain = {"topic": chain, "question": lambda x: x["question"]} | RunnableLambda(route)

result = full_chain.invoke({"question": "how do I use LangChain?"})
print(result)

print("\n-----------------------------------\n")

# Branch demo
from langchain_core.runnables import RunnableBranch

branch = RunnableBranch(
    (lambda x: "anthropic" in x["topic"].lower(), anthropic_chain),
    (lambda x: "langchain" in x["topic"].lower(), langchain_chain),
    general_chain,
)

full_chain = {"topic": chain, "question": lambda x: x["question"]} | branch

result = full_chain.invoke({"question": "how do I use Anthropic?"})
print(result)

print("\n-----------------------------------\n")

# Dynamic demo
# Note: this import shadows the earlier `chain` variable with the @chain decorator
from langchain_core.runnables import chain, RunnablePassthrough

llm = OllamaLLM(model="qwen2.5:0.5b")

contextualize_instructions = "Convert the latest user question into a standalone question given the chat history. Don't answer the question, return the question and nothing else (no descriptive text)."
contextualize_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", contextualize_instructions),
        ("placeholder", "{chat_history}"),
        ("human", "{question}"),
    ]
)
contextualize_question = contextualize_prompt | llm | StrOutputParser()

@chain
def contextualize_if_needed(input_: dict):
    if input_.get("chat_history"):
        return contextualize_question
    else:
        return RunnablePassthrough() | itemgetter("question")

@chain
def fake_retriever(input_: dict):
    return "egypt's population in 2024 is about 111 million"

qa_instructions = (
    "Answer the user question given the following context:\n\n{context}."
)
qa_prompt = ChatPromptTemplate.from_messages(
    [("system", qa_instructions), ("human", "{question}")]
)

full_chain = (
    RunnablePassthrough.assign(question=contextualize_if_needed).assign(
        context=fake_retriever
    )
    | qa_prompt
    | llm
    | StrOutputParser()
)

result = full_chain.invoke(
    {
        "question": "what about egypt",
        "chat_history": [
            ("human", "what's the population of indonesia"),
            ("ai", "about 276 million"),
        ],
    }
)
print(result)

print("\n-----------------------------------\n")

J-LangChain Implementation of the Above Examples
J-LangChain - Intelligent Chain Construction
Summary
LangChain's LCEL gives developers a powerful toolkit for building complex language tasks through sequential chains, nested chains, parallel chains, routing, and dynamic chain construction. Whether the logic is a simple linear flow or involves complex runtime decisions, LCEL handles it efficiently. Used well, these features let developers quickly assemble efficient, flexible chains that support applications across a wide range of scenarios.