Project Overview
LangChain official docs:
https://docs.langchain.com/oss/python/langchain/quickstart
Project repository:
https://github.com/2915475627/stream-langchian-demo
About: Following the LangChain official docs, this project builds a simple frontend page with streaming responses, closing the loop between frontend and backend.
Tech stack: React on the frontend; Python + FastAPI + LangChain on the backend. The LangChain docs use useStream() on the frontend to invoke a backend agent hook, but the mainstream approach is a separated frontend and backend, so I wrap agent.stream() in an SSE API with FastAPI and have the frontend fetch that API.
---
title: Architecture Diagram
---
flowchart TB
subgraph LLM["🤖 LLM (Large Language Model)"]
LLM_API["LLM API"]
end
subgraph Backend["⚙️ Backend Service"]
LangChain["LangChain stream function"]
SSE["SSE (Server-Sent Events)"]
end
subgraph Frontend["🖥️ Frontend App"]
Buffer["Buffer"]
UI["User Interface"]
end
LLM_API -->|"Stream Response"| LangChain
LangChain -->|"SSE Stream"| SSE
SSE -->|"Real-time data stream"| Buffer
Buffer -->|"Render data"| UI
style LLM fill:#e1f5fe,stroke:#01579b
style Backend fill:#fff3e0,stroke:#e65100
style Frontend fill:#e8f5e9,stroke:#2e7d32
style LangChain fill:#ffccbc,stroke:#bf360c
style SSE fill:#ffe0b2,stroke:#e65100
style Buffer fill:#c8e6c9,stroke:#388e3c
style UI fill:#c8e6c9,stroke:#388e3c
Project Breakdown
Backend Streaming Implementation
FastAPI StreamingResponse()
This class is the key to streaming from the backend to the frontend. We pass it a stream generator and the stream format via media_type.
IODemo
First, build a basic streaming API with io.
import io
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/stream")
async def stream(request: Request):
    data = b"this is stream output \n" * 10000
    stream_generator = io.BytesIO(data)
    return StreamingResponse(stream_generator, media_type="text/plain")
Request localhost:8024/stream, open the browser DevTools, and you will see data arriving on the page incrementally.
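StreamingResponse is not limited to file-like objects such as BytesIO: it accepts any sync or async iterator of bytes or str, so a generator can produce chunks on the fly. A minimal sketch (function names are hypothetical, and it is run without FastAPI just to show the chunking):

```python
import asyncio

async def number_stream():
    # Yield one chunk at a time; StreamingResponse would forward each
    # chunk to the client as soon as it is yielded.
    for i in range(3):
        yield f"chunk {i}\n".encode()
        await asyncio.sleep(0)  # stand-in for real async work

async def main():
    chunks = [c async for c in number_stream()]
    print(b"".join(chunks).decode())

asyncio.run(main())
```

This is exactly the shape the LangChain endpoint below uses: an async generator passed straight into StreamingResponse.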
LangChainDemo
Next, build a stream generator with LangChain and pass it to StreamingResponse.
This generator also needs to emit SSE frames carrying custom JSON payloads.
@app.post("/agent/runs/stream")
async def stream_run(request: Request):
    body = await request.json()
    input_data = body.get("input", {})

    async def event_generator():
        try:
            async for chunk, metadata in agent.astream(input_data, stream_mode="messages"):
                content = chunk.content
                if isinstance(content, str) and content:
                    data = {"type": "text", "content": content}
                    yield f"data: {json.dumps(data, ensure_ascii=False)}\n\n"
                elif isinstance(content, list):
                    for item in content:
                        if isinstance(item, dict) and item.get("type") == "text":
                            data = {"type": "text", "content": item.get("text", "")}
                            yield f"data: {json.dumps(data, ensure_ascii=False)}\n\n"
        except Exception as e:
            yield f"data: {json.dumps({'type': 'error', 'content': str(e)})}\n\n"

    return StreamingResponse(
        event_generator(),
        media_type="text/event-stream",
        headers={"Access-Control-Allow-Origin": "*"},
    )
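On the wire, each SSE event is simply a `data:` line terminated by a blank line; that is what the f-strings in event_generator produce. A small framing helper sketch (sse_event is a hypothetical name, not part of the project):

```python
import json

def sse_event(payload: dict) -> str:
    # Frame one SSE event: a "data:" line followed by an empty line.
    # ensure_ascii=False keeps Chinese text readable on the wire.
    return f"data: {json.dumps(payload, ensure_ascii=False)}\n\n"

print(sse_event({"type": "text", "content": "你好"}), end="")
```

Keeping the framing in one place makes it easy to add other event types (e.g. error events) later without repeating the format string.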
Frontend Implementation
1. Project Setup
Scaffold a React + TypeScript project with Vite, keeping main.tsx and App.tsx.
Create a custom Chat.tsx component and import it into App.tsx.
2. Layout
A chat container at the top displays the message history; below it, an input container holds the text field and a send button.
Every streamed chunk triggers a state update, which re-renders the page. That is the core of streaming rendering: each time a chunk arrives, update the message state, and the message list re-renders.
<div className="chat-messages">
  {messages.length === 0 && !isLoading && (
    <div className="chat-empty">开始对话吧!</div>
  )}
  {messages.map((msg, i) => (
    <div key={i} className="chat-message">
      <strong>{msg.role === "human" ? "你" : "AI"}:</strong> {msg.content}
    </div>
  ))}
</div>
3. State Management
React state is dynamic data; changing it triggers a re-render of the page.
const [messages, setMessages] = useState<{role: string; content: string}[]>([]);
// Message list: [{role: "human", content: "你好"}, {role: "ai", content: "你好,有什么可以帮你?"}]
const [input, setInput] = useState(""); // input box content
const [isLoading, setIsLoading] = useState(false); // waiting for a reply?
const [currentContent, setCurrentContent] = useState(""); // in-flight streamed content
4. Event Binding
Define a handleSubmit method that fetches the backend API and updates the messages state.
The method first appends the user input via setMessages(); that state update triggers a render.
const handleSubmit = async () => {
  if (!input.trim() || isLoading) return;

  const userMessage = { role: "human", content: input.trim() };
  setMessages(prev => [...prev, userMessage]);
  setInput("");
  setIsLoading(true);
  setCurrentContent("");
Then it fetches the API; each received chunk updates currentContent via setCurrentContent(), and when the stream ends the full reply is appended via setMessages(). Each state update triggers a render.
  try {
    const response = await fetch(`${AGENT_URL}/agent/runs/stream`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        input: { messages: [{ type: "human", content: userMessage.content }] },
      }),
    });

    const reader = response.body?.getReader();
    const decoder = new TextDecoder();
    let fullContent = "";
    let buffer = "";

    if (reader) {
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const lines = buffer.split("\n");
        buffer = lines.pop() || "";

        for (const line of lines) {
          if (line.startsWith("data: ")) {
            try {
              const data = JSON.parse(line.slice(5));
              if (data.type === "text") {
                fullContent += data.content;
                setCurrentContent(fullContent);
              }
            } catch (e) {
              // ignore parse errors
            }
          }
        }
      }
      setMessages(prev => [...prev, { role: "ai", content: fullContent }]);
    }
  } catch (error) {
    console.error("Error:", error);
  } finally {
    setIsLoading(false);
    setCurrentContent("");
  }
};
Finally, remember to bind the method to the button:
<button onClick={handleSubmit} disabled={isLoading} className="chat-button">
发送
</button>
LLM Message Parsing
LangChain's streaming functions return different formats depending on the function and mode. This project uses agent.astream() with stream_mode="messages"; the raw output looks like this.
============================================================
Starting streaming response...
============================================================
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content='', additional_kwargs={}, response_metadata={'model_name': 'MiniMax-M2.7', 'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content=[{'thinking': '用户问', 'type': 'thinking', 'index': 0}], additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content=[{'thinking': '今天的天气怎么样。但是我作为一个AI助手,并没有实时的天气数据获取能力,也没有办法知道用户', 'type': 'thinking', 'index': 0}], additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content=[{'thinking': '所在的具体位置。我应该诚实地告诉用户这个情况,并提供一些建议。\n\n我需要用友好的方式', 'type': 'thinking', 'index': 0}], additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content=[{'thinking': '回应,说明我无法获取实时天气信息,然后建议用户如何获取天气信息。', 'type': 'thinking', 'index': 0}], additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content=[{'signature': '698fa4d87d04882d2cfe2d9b2fad2682924f906e88a766b84b510d0ef249650f', 'type': 'thinking', 'index': 0}], additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content='您好!感谢', additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content='您的提问 😊\n\n不过很抱歉,我无法获取您所在地区的实时天气信息。作为AI', additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content='助手,我没有联网查询当前天气的能力,也不清楚您的位置。\n\n**', additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content='您可以通过以下方式获取今天的天气:**\n\n1. 手机自带的天气应用\n2. 搜索“城市名+天气”(', additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content='如“北京天气”)\n3. 天气类APP(如墨迹天气、中国天气等)\n\n如果您告诉我', additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content='您所在的城市,我可以尝试根据我训练数据中的基本信息提供一个大致的参考,但这些信息可能已经过时或不', additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content='准确。\n\n请问还有什么我可以帮您的吗?', additional_kwargs={}, response_metadata={'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], tool_call_chunks=[]), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
{'type': 'messages', 'ns': (), 'data': (AIMessageChunk(content='', additional_kwargs={}, response_metadata={'stop_reason': 'end_turn', 'stop_sequence': None, 'model_provider': 'anthropic'}, id='lc_run--019dc00d-2c91-7f02-b8a7-956b5600d5a9', tool_calls=[], invalid_tool_calls=[], usage_metadata={'input_tokens': 44, 'output_tokens': 176, 'total_tokens': 220, 'input_token_details': {}}, tool_call_chunks=[], chunk_position='last'), {'ls_integration': 'langchain_chat_model', 'langgraph_step': 1, 'langgraph_node': 'model', 'langgraph_triggers': ('branch:to:model',), 'langgraph_path': ('__pregel_pull', 'model'), 'langgraph_checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'checkpoint_ns': 'model:2adf0980-2fda-42cb-0eb7-69685217b1a2', 'ls_provider': 'anthropic', 'ls_model_name': 'MiniMax-M2.7', 'ls_model_type': 'chat', 'ls_temperature': 0.5, 'ls_max_tokens': 1000})}
This project does not need the thinking process for now, so we extract the model's answer, wrap it in our custom JSON, and return it to the frontend.
The frontend then updates the AI messages in the chat history from these responses.
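The extraction can be sketched as a small pure function: chunk.content is either a plain string or a list of typed blocks, and we keep only the text blocks while dropping the thinking blocks. This mirrors the branching in the /agent/runs/stream endpoint (extract_text is a hypothetical helper name):

```python
def extract_text(content) -> str:
    # chunk.content is either a plain string or a list of typed blocks,
    # e.g. {'type': 'thinking', ...} or {'type': 'text', 'text': ...}.
    if isinstance(content, str):
        return content
    if isinstance(content, list):
        return "".join(
            block.get("text", "")
            for block in content
            if isinstance(block, dict) and block.get("type") == "text"
        )
    return ""

print(extract_text([{"type": "thinking", "thinking": "..."},
                    {"type": "text", "text": "您好!"}]))
```

A helper like this also gives one obvious place to add handling for signature blocks or tool-call chunks later, instead of growing the endpoint's inline if/elif chain.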