forked from lobehub/lobe-chat

Commit d1df19a (parent fedf799)

🔨 chore: Add docs workflow (lobehub#658)

* 🔧 chore: Add docs workflow and update docs files
* 📝 docs: Update wiki docs link

Showing 46 changed files with 882 additions and 62 deletions.
@@ -0,0 +1,15 @@

# Data Statistics

To better analyze the usage of LobeChat, we have integrated several free/open-source data statistics services into LobeChat to collect user usage data; you can enable them as needed.

## Vercel Analytics

[Vercel Analytics](https://vercel.com/analytics) is a data analysis service launched by Vercel that helps you collect website visit information, including traffic, sources, and the devices used for access.

We have integrated Vercel Analytics into the code. You can enable it by setting the environment variable `NEXT_PUBLIC_ANALYTICS_VERCEL=1`; then open the Analytics tab of your Vercel deployment project to view your application's visit information.
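As a hedged sketch (LobeChat's actual gating code may differ), a flag like this is typically read from the environment and treated as enabled only when it equals `'1'`:

```typescript
// Hypothetical helper, not LobeChat's actual code: report whether the
// Vercel Analytics flag is enabled. The env object is a parameter purely
// so the example is self-contained and testable.
export const analyticsEnabled = (env: Record<string, string | undefined>): boolean =>
  env.NEXT_PUBLIC_ANALYTICS_VERCEL === '1';

// analyticsEnabled({ NEXT_PUBLIC_ANALYTICS_VERCEL: '1' }) -> true
// analyticsEnabled({}) -> false
```

In a real deployment the flag would come from `process.env` rather than an injected object.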
Vercel Analytics provides 2,500 free Web Analytics events per month (roughly equivalent to page views), which is generally sufficient for a personally deployed, self-use product.

If you need detailed instructions on using Vercel Analytics, please refer to the [Vercel Web Analytics Quickstart](https://vercel.com/docs/analytics/quickstart).

## 🚧 Posthog
@@ -0,0 +1,37 @@
# Architecture Design

LobeChat is an AI conversation application built on the Next.js framework, aiming to provide an AI productivity platform that enables users to interact with AI through natural language. The following is an overview of LobeChat's architecture design:

## Application Architecture Overview

The overall architecture of LobeChat consists of the frontend, the Edge Runtime API, the Agents Market, the Plugin Market, and independent plugins. These components collaborate to provide a complete AI experience.

## Frontend Architecture

The frontend of LobeChat adopts the Next.js framework, leveraging its powerful server-side rendering (SSR) capability and routing functionality. The frontend stack includes the antd component library, the lobe-ui AIGC component library, zustand for state management, SWR for data fetching, i18next for internationalization, and more. Together, these technologies support LobeChat's functionality and features.

The directories in the frontend architecture include app, components, config, const, features, helpers, hooks, layout, locales, migrations, prompts, services, store, styles, types, and utils. Each has specific responsibilities and collaborates with the others to implement different features.

## Edge Runtime API

The Edge Runtime API is one of the core components of LobeChat, responsible for handling the core logic of AI conversations. It provides the interaction interface with the AI engine, including natural language processing, intent recognition, and response generation. The Edge Runtime API communicates with the frontend, receiving user input and returning corresponding responses.
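For illustration only (this is not LobeChat's actual handler), an edge-style endpoint follows the standard web `Request`/`Response` shape, receiving user input and returning a response:

```typescript
// Illustrative edge-style handler: parse the user's message and answer it.
// A real implementation would run NLP / intent recognition and call the AI
// engine; this sketch simply echoes the input back.
export const POST = async (req: Request): Promise<Response> => {
  const { message } = (await req.json()) as { message: string };

  return new Response(JSON.stringify({ reply: `echo: ${message}` }), {
    headers: { 'Content-Type': 'application/json' },
  });
};
```

Because the handler only uses web-standard APIs, it can run in edge runtimes without Node-specific dependencies.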
## Agents Market

The Agents Market is a crucial part of LobeChat, providing AI agents for various scenarios to handle specific tasks and domains. It also offers discovery and upload functionality, allowing users to find agents created by others and easily share their own agents in the market.

## Plugin Market

The Plugin Market is another key component of LobeChat, offering various plugins to extend LobeChat's functionality. Plugins can be independent functional modules or integrated with agents from the Agents Market. During a conversation, the assistant automatically analyzes user input, identifies a suitable plugin, passes the request to that plugin for processing, and returns the result.
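A hedged sketch of that dispatch pattern; the registry, plugin name, and signatures below are hypothetical, not LobeChat's actual plugin API:

```typescript
// Hypothetical plugin registry, keyed by the function name the model selects.
type PluginHandler = (args: Record<string, unknown>) => Promise<string>;

const plugins: Record<string, PluginHandler> = {
  // Toy plugin standing in for a real integration.
  getWeather: async (args) => `Weather in ${String(args.city)}: sunny`,
};

// Dispatch the model's chosen function call to the matching plugin and
// return its result to be appended to the conversation.
export const runPlugin = async (
  name: string,
  args: Record<string, unknown>,
): Promise<string> => {
  const handler = plugins[name];
  if (!handler) throw new Error(`Unknown plugin: ${name}`);
  return handler(args);
};
```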
## Security and Performance Optimization

LobeChat's security strategy includes authentication and permission management. Users need to authenticate before using LobeChat, and operations are restricted based on each user's permissions.

To optimize performance, LobeChat utilizes Next.js SSR to achieve fast page loading and response times. In addition, a series of performance optimization measures are implemented, including code splitting, caching, and resource compression.

## Development and Deployment Process

LobeChat's development process includes version control, testing, continuous integration, and continuous deployment. The team uses a version control system for code management and conducts unit and integration testing to ensure code quality. Continuous integration and deployment pipelines ensure rapid delivery and deployment of code.

The above is a brief introduction to LobeChat's architecture design, covering the responsibilities and collaboration of each component, as well as the impact of design decisions on application functionality and performance.
2 changes: 1 addition & 1 deletion

docs/Development-Guide/Architecture.zh-CN.md → docs/Development/Architecture.zh-CN.md

@@ -1,4 +1,4 @@
`## 架构设计` → `# 架构设计`

LobeChat is an AI conversation application built on the Next.js framework, aiming to provide an AI productivity platform that enables users to interact with AI through natural language. The following is an introduction to LobeChat's architecture design:
@@ -0,0 +1,127 @@
# Conversation API Implementation Logic

The implementation of LobeChat's large-model AI mainly relies on OpenAI's API, including the core conversation API on the backend and the integration API on the frontend. Below, we introduce the implementation approach and code for the backend and frontend separately.

## Backend Implementation

The following code removes authentication, error handling, and other logic, retaining only the core functional logic.

### Core Conversation API

In the file `src/app/api/openai/chat/handler.ts`, we define a `POST` method, which first parses the payload data from the request (i.e., the conversation content sent by the client) and then retrieves the authorization information from the request. We then create an `openai` object and call the `createChatCompletion` method, which is responsible for sending the conversation request to OpenAI and returning the result.
```ts
export const POST = async (req: Request) => {
  const payload = await req.json();

  const { apiKey, endpoint } = getOpenAIAuthFromRequest(req);

  const openai = createOpenai(apiKey, endpoint);

  return createChatCompletion({ openai, payload });
};
```
### Conversation Result Processing

In the file `src/app/api/openai/chat/createChatCompletion.ts`, we define the `createChatCompletion` method, which first preprocesses the payload data, then calls OpenAI's `chat.completions.create` method to send the request, and uses `OpenAIStream` from the [Vercel AI SDK](https://sdk.vercel.ai/docs) to convert the returned result into a streaming response.
```ts
import { OpenAIStream, StreamingTextResponse } from 'ai';

export const createChatCompletion = async ({ payload, openai }: CreateChatCompletionOptions) => {
  const { messages, ...params } = payload;

  const formatMessages = messages.map((m) => ({
    content: m.content,
    name: m.name,
    role: m.role,
  }));

  const response = await openai.chat.completions.create(
    {
      messages: formatMessages,
      ...params,
      stream: true,
    },
    { headers: { Accept: '*/*' } },
  );

  const stream = OpenAIStream(response);

  return new StreamingTextResponse(stream);
};
```
## Frontend Implementation

### Frontend Integration

In the `src/services/chatModel.ts` file, we define the `fetchChatModel` method, which first preprocesses the payload data, then sends a POST request to the backend `/chat` endpoint and returns the result.
```ts
export const fetchChatModel = (
  { plugins: enabledPlugins, ...params }: Partial<OpenAIStreamPayload>,
  options?: FetchChatModelOptions,
) => {
  const payload = merge(
    {
      model: initialLobeAgentConfig.model,
      stream: true,
      ...initialLobeAgentConfig.params,
    },
    params,
  );

  const filterFunctions: ChatCompletionFunctions[] = pluginSelectors.enabledSchema(enabledPlugins)(
    usePluginStore.getState(),
  );

  const functions = filterFunctions.length === 0 ? undefined : filterFunctions;

  return fetch(OPENAI_URLS.chat, {
    body: JSON.stringify({ ...payload, functions }),
    headers: createHeaderWithOpenAI({ 'Content-Type': 'application/json' }),
    method: 'POST',
    signal: options?.signal,
  });
};
```
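The `merge` above is lodash's deep merge: caller-supplied `params` take precedence over the agent-config defaults, while untouched defaults survive. A minimal self-contained stand-in illustrating that precedence (not the actual lodash implementation):

```typescript
// Minimal deep merge for plain objects: values in `override` win, and
// nested objects are merged recursively. Illustrative stand-in for lodash.
const deepMerge = (
  defaults: Record<string, unknown>,
  override: Record<string, unknown>,
): Record<string, unknown> => {
  const out: Record<string, unknown> = { ...defaults };
  for (const key of Object.keys(override)) {
    const value = override[key];
    const base = out[key];
    if (
      value && typeof value === 'object' && !Array.isArray(value) &&
      base && typeof base === 'object' && !Array.isArray(base)
    ) {
      out[key] = deepMerge(base as Record<string, unknown>, value as Record<string, unknown>);
    } else if (value !== undefined) {
      out[key] = value;
    }
  }
  return out;
};

// Agent-config defaults are overridden by per-request params; top_p survives.
const payload = deepMerge(
  { model: 'gpt-3.5-turbo', params: { temperature: 0.6, top_p: 1 }, stream: true },
  { model: 'gpt-4', params: { temperature: 0 } },
) as { model: string; params: { temperature: number; top_p: number }; stream: boolean };
// payload.model === 'gpt-4'; payload.params is { temperature: 0, top_p: 1 }
```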
### Using Streaming to Get Results

In the `src/utils/fetch.ts` file, we define the `fetchSSE` method, which retrieves data as a stream. Whenever a new data chunk is read, it invokes the `onMessageHandle` callback to process the chunk, producing a typewriter-like output effect.
```ts
export const fetchSSE = async (fetchFn: () => Promise<Response>, options: FetchSSEOptions = {}) => {
  const response = await fetchFn();

  if (!response.ok) {
    const chatMessageError = await getMessageError(response);

    options.onErrorHandle?.(chatMessageError);
    return;
  }

  const returnRes = response.clone();

  const data = response.body;

  if (!data) return;

  const reader = data.getReader();
  const decoder = new TextDecoder();

  let done = false;

  while (!done) {
    const { value, done: doneReading } = await reader.read();
    done = doneReading;
    const chunkValue = decoder.decode(value);

    options.onMessageHandle?.(chunkValue);
  }

  return returnRes;
};
```
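To see the reader loop in isolation, here is a hedged usage sketch: a simplified version of the loop above, fed by a mocked streaming `Response` built from a `ReadableStream` (web-standard APIs; this is illustrative, not LobeChat's actual test code):

```typescript
// Simplified version of the reader loop in fetchSSE: read chunks until done
// and hand each decoded chunk to a callback.
const readStream = async (response: Response, onChunk: (chunk: string) => void) => {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();

  let done = false;
  while (!done) {
    const { value, done: doneReading } = await reader.read();
    done = doneReading;
    if (value) onChunk(decoder.decode(value));
  }
};

// Mock a server that streams three chunks, as an SSE-style endpoint would.
const makeMockResponse = () =>
  new Response(
    new ReadableStream<Uint8Array>({
      start(controller) {
        const encoder = new TextEncoder();
        for (const chunk of ['Hel', 'lo ', 'world']) controller.enqueue(encoder.encode(chunk));
        controller.close();
      },
    }),
  );

(async () => {
  const received: string[] = [];
  await readStream(makeMockResponse(), (chunk) => received.push(chunk));
  console.log(received.join('')); // Hello world
})();
```

Each callback invocation corresponds to one chunk arriving, which is what lets the UI render partial output as it streams in.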
The above is the core implementation of LobeChat's conversation API. With an understanding of this core code, you can further extend and optimize LobeChat's AI functionality.
2 changes: 1 addition & 1 deletion

docs/Development-Guide/Chat-API.zh-CN.md → docs/Development/Chat-API.zh-CN.md