
The Vercel AI SDK makes it easy to interact with LLM APIs like OpenAI and Anthropic, and to stream data so it can be displayed in your web application quickly as it loads. In this article, we'll learn how to run multiple prompts at once and view their results in parallel.
TL;DR: the GitHub repository is here.
Running multiple data-fetching requests at the same time is not unusual in a web application. For example, in a hypothetical blogging system, when the dashboard interface loads we might want to simultaneously fetch the user's profile data, the posts they've created, and the posts from other users that they've liked.
If that same dashboard is also making requests to OpenAI, we might want to simultaneously ask OpenAI for suggestions on improving the user's profile while also analyzing their latest post. In theory, we could run dozens of AI requests in parallel if we wanted to (even across entirely different platforms and models), analyzing information, generating content, and performing all kinds of other tasks at the same time.
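As a minimal sketch of this idea, several independent async requests can be started together and awaited as a group. The fetch functions below are hypothetical placeholders standing in for real API or LLM calls:

```typescript
// Hypothetical stand-ins for real data-fetching or LLM requests.
async function fetchProfile(): Promise<string> {
  return "profile data";
}
async function fetchPosts(): Promise<string[]> {
  return ["post 1", "post 2"];
}
async function fetchLikedPosts(): Promise<string[]> {
  return ["liked post"];
}

export async function loadDashboard() {
  // All three requests run concurrently; total time is roughly the slowest
  // single request, not the sum of all three.
  const [profile, posts, liked] = await Promise.all([
    fetchProfile(),
    fetchPosts(),
    fetchLikedPosts(),
  ]);
  return { profile, posts, liked };
}
```

The streaming approach in the rest of this article goes a step further: instead of waiting for everything with `Promise.all`, each result is displayed as it arrives.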
You can clone the GitHub repository containing the final result here.
Setting up from scratch:
The main component that does all the work contains a form and a few containers for the output. Using some basic shadcn/ui components, the form looks like this:
export function GenerationForm() {
  // State and other info will be defined here...

  return (
    <form onSubmit={onSubmit} className="flex flex-col gap-3 w-full">
      <div className="inline-block mb-4 w-full flex flex-row gap-1">
        <Button type="submit">Generate News & Weather</Button>
      </div>
      {isGenerating ? (
        <div className="flex flex-row w-full justify-center items-center p-4 transition-all">
          <Spinner className="h-6 w-6 text-slate-900" />
        </div>
      ) : null}
      <h3 className="font-bold">Historical Weather</h3>
      <div className="mt-4 mb-8 p-4 rounded-md shadow-md bg-blue-100">
        {weather ? weather : null}
      </div>
      <h4 className="font-bold">Historical News</h4>
      <div className="mt-4 p-4 rounded-md shadow-md bg-green-100">{news ? news : null}</div>
    </form>
  );
}
As you can see, we have a few things going on here:
For now, you can hard-code these values; they will all eventually come from our streams.
The streamAnswer server action does the work of creating and updating our streams.
The action is structured like this:
"use server";

import { createStreamableValue } from "ai/rsc";
import { streamText } from "ai";

export async function streamAnswer(question: string) {
  // Booleans indicating whether each stream is currently streaming
  const isGeneratingStream1 = createStreamableValue(true);
  const isGeneratingStream2 = createStreamableValue(true);

  // The current stream values
  const weatherStream = createStreamableValue("");
  const newsStream = createStreamableValue("");

  // Create the first stream. Notice that we don't use await here, so that we
  // don't block the rest of this function from running.
  streamText({
    // ... params, including the LLM prompt
  }).then(async (result) => {
    try {
      // Read from the async iterator. Set the stream value to each new word
      // received.
      for await (const value of result.textStream) {
        weatherStream.update(value || "");
      }
    } finally {
      // Set isGenerating to false, and close that stream.
      isGeneratingStream1.update(false);
      isGeneratingStream1.done();

      // Close the given stream so the request doesn't hang.
      weatherStream.done();
    }
  });

  // Same thing for the second stream.
  streamText({
    // ... params
  }).then(async (result) => {
    // ...
  });

  // Return any streams we want to read on the client.
  return {
    isGeneratingStream1: isGeneratingStream1.value,
    isGeneratingStream2: isGeneratingStream2.value,
    weatherStream: weatherStream.value,
    newsStream: newsStream.value,
  };
}
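The key pattern here is calling `streamText` without `await`, then using `try`/`finally` inside the `.then` callback so the stream is always closed. Here is a self-contained sketch of that fire-and-forget pattern, with a mock async iterator standing in for `streamText`'s `textStream` (no real LLM call is made):

```typescript
// Mock async iterator standing in for an LLM's word-by-word text stream.
async function* mockTextStream(words: string[]) {
  for (const word of words) {
    yield word;
  }
}

// Starts consuming a stream in the background. The caller can ignore the
// returned promise (fire-and-forget) and start other streams immediately.
function startStream(
  words: string[],
  onUpdate: (chunk: string) => void,
  onDone: () => void
): Promise<void> {
  return (async () => {
    try {
      for await (const value of mockTextStream(words)) {
        onUpdate(value);
      }
    } finally {
      // Always close the stream, even if iteration throws, so the
      // client request doesn't hang.
      onDone();
    }
  })();
}
```

Because `startStream` is not awaited at the call site, two calls in a row begin consuming both streams concurrently, just like the two un-awaited `streamText(...).then(...)` calls above.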
The form's onSubmit handler does all the work here. Here's a detailed breakdown of how it works:
"use client";
import { SyntheticEvent, useState } from "react";
import { Button } from "./ui/button";
import { readStreamableValue, useUIState } from "ai/rsc";
import { streamAnswer } from "@/app/actions";
import { Spinner } from "./svgs/Spinner";
export function GenerationForm() {
// State for loading flags
const [isGeneratingStream1, setIsGeneratingStream1] = useState<boolean>(false);
const [isGeneratingStream2, setIsGeneratingStream2] = useState<boolean>(false);
// State for the LLM output streams
const [weather, setWeather] = useState<string>("");
const [news, setNews] = useState<string>("");
// We'll hide the loader when both streams are done.
const isGenerating = isGeneratingStream1 || isGeneratingStream2;
async function onSubmit(e: SyntheticEvent) {
e.preventDefault();
// Clear previous results.
setNews("");
setWeather("");
// Call the server action. The returned object will have all the streams in it.
const result = await streamAnswer(question);
// Translate each stream into an async iterator so we can loop through
// the values as they are generated.
const isGeneratingStream1 = readStreamableValue(result.isGeneratingStream1);
const isGeneratingStream2 = readStreamableValue(result.isGeneratingStream2);
const weatherStream = readStreamableValue(result.weatherStream);
const newsStream = readStreamableValue(result.newsStream);
// Iterate through each stream, putting its values into state one by one.
// Notice the IIFEs again! As on the server, these allow us to prevent blocking
// the function, so that we can run these iterators in parallel.
(async () => {
for await (const value of isGeneratingStream1) {
if (value != null) {
setIsGeneratingStream1(value);
}
}
})();
(async () => {
for await (const value of isGeneratingStream2) {
if (value != null) {
setIsGeneratingStream2(value);
}
}
})();
(async () => {
for await (const value of weatherStream) {
setWeather((existing) => (existing + value) as string);
}
})();
(async () => {
for await (const value of newsStream) {
setNews((existing) => (existing + value) as string);
}
})();
}
return (
// ... The form code from before.
);
}
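The core of the client-side pattern can be sketched in isolation: several async iterators consumed in parallel by IIFEs, each accumulating its own result. Here plain async generators stand in for `readStreamableValue`, and local variables stand in for React state:

```typescript
// Plain async generator standing in for readStreamableValue's iterator.
async function* streamOf(chunks: string[]) {
  for (const chunk of chunks) {
    yield chunk;
  }
}

export async function consumeInParallel(): Promise<{ weather: string; news: string }> {
  let weather = "";
  let news = "";

  const weatherStream = streamOf(["sunny ", "and warm"]);
  const newsStream = streamOf(["headline ", "one"]);

  // Each IIFE returns a promise immediately, so both loops run concurrently
  // instead of one stream blocking the other until it finishes.
  const tasks = [
    (async () => {
      for await (const value of weatherStream) {
        weather += value;
      }
    })(),
    (async () => {
      for await (const value of newsStream) {
        news += value;
      }
    })(),
  ];

  await Promise.all(tasks);
  return { weather, news };
}
```

In the real component the accumulation happens via `setWeather`/`setNews` functional updates rather than local variables, but the concurrency structure is the same.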
That's all there is to implementing multiple parallel AI streams with the Vercel AI SDK.