
📝 docs: Update syntax in markdown
canisminor1990 committed Nov 16, 2023
1 parent 7edb10a commit 9778e1d
Showing 12 changed files with 79 additions and 45 deletions.
31 changes: 19 additions & 12 deletions README.md
@@ -74,7 +74,8 @@ Please be aware that LobeChat is currently under active development, and feedbac
| :---------------------------------------- | :----------------------------------------------------------------------------------------------------------------- |
| [![][discord-shield-badge]][discord-link] | Join our Discord community! This is where you can connect with developers and other enthusiastic users of LobeHub. |

-> **Important**\
+> \[!IMPORTANT]
+>
> **Star Us**, You will receive all release notifications from GitHub without any delay \~ ⭐️
![](https://gw.alipayobjects.com/zos/kitchen/0hcO8QiU9c/star.webp)
@@ -96,12 +97,11 @@ Please be aware that LobeChat is currently under active development, and feedbac
- [x] 👁️ **Visual Recognition**: With the integration of visual recognition capabilities, your agent can now analyze and understand images provided during the conversation. This allows for more interactive and context-aware conversations, enabling the dialogue agent to provide relevant and accurate responses based on visual content.
- [ ] (WIP) 📢 **Text-to-Speech (TTS) Conversation**: LobeChat is adding support for Text-to-Speech technology, allowing users to have voice-based conversations with the dialogue agent. This feature enhances the user experience by providing a more natural and immersive conversation environment. Users can choose from a variety of voices and adjust the speech rate to suit their preferences.


-> **Note**\
+> \[!NOTE]
+>
> You can find our upcoming [Roadmap][github-project-link] plans in the Projects section.

------
+---

Besides these features, LobeChat also has a much more solid technical foundation under the hood:

@@ -152,7 +152,8 @@ In our agent market, we have accumulated a large number of practical, prompt age

Utilize the Progressive Web Application ([PWA](https://support.google.com/chrome/answer/9658361)) technology to achieve a seamless LobeChat experience on your computer or mobile device.

-> **Note**\
+> \[!NOTE]
+>
> If you are unfamiliar with the installation process of PWA, you can add LobeChat as your desktop application (also applicable to mobile devices) by following these steps:
>
> - Launch the Chrome or Edge browser on your computer.
@@ -184,7 +185,8 @@ We have carried out a series of optimization designs for mobile devices to enhan

## ⚡️ Performance

-> **Note**\
+> \[!NOTE]
+>
> The complete list of reports can be found in the [📘 Lighthouse Reports](https://github.com/lobehub/lobe-chat/wiki/Lighthouse)
| Desktop | Mobile |
@@ -221,7 +223,8 @@ If you want to deploy this service yourself on Vercel, you can follow these step

If you have deployed your own project following the one-click deployment steps in the README, you might encounter constant prompts indicating "updates available." This is because Vercel defaults to creating a new project instead of forking this one, resulting in an inability to detect updates accurately.

-> **Important**\
+> \[!TIP]
+>
> We suggest you redeploy using the following steps, [📘 Maintaining Updates with LobeChat Self-Deployment](https://github.com/lobehub/lobe-chat/wiki/Upstream-Sync).
<br/>
@@ -241,7 +244,8 @@ $ docker run -d -p 3210:3210 \
lobehub/lobe-chat
```

-> **Note**\
+> \[!TIP]
+>
> If you need to use the OpenAI service through a proxy, you can configure the proxy address using the `OPENAI_PROXY_URL` environment variable:
```fish
@@ -252,7 +256,8 @@ $ docker run -d -p 3210:3210 \
lobehub/lobe-chat
```
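The middle lines of the command above are collapsed in this diff view. For reference, a complete invocation might look like the following sketch; the proxy URL is a placeholder, not a value taken from this commit:

```fish
# Hypothetical full command — substitute your own key and proxy endpoint.
$ docker run -d -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e OPENAI_PROXY_URL=https://your-proxy.example.com/v1 \
  lobehub/lobe-chat
```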

-> **Note**\
+> \[!NOTE]
+>
> For detailed instructions on deploying with Docker, please refer to the [📘 Docker Deployment Guide](https://github.com/lobehub/lobe-chat/wiki/Docker-Deployment)
<br/>
@@ -267,7 +272,8 @@ This project provides some additional configuration items set with environment v
| `OPENAI_PROXY_URL` | No | If you manually configure the OpenAI interface proxy, you can use this configuration item to override the default OpenAI API request base URL | `https://api.chatanywhere.cn/v1`<br/>The default value is<br/>`https://api.openai.com/v1` |
| `ACCESS_CODE` | No | Add a password to access this service; the password should be a 6-digit number or letter | `awCT74` or `e3@09!` |

-> **Note**\
+> \[!NOTE]
+>
> The complete list of environment variables can be found in the [📘 Environment Variables](https://github.com/lobehub/lobe-chat/wiki/Environment-Variable)
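As a concrete illustration of the table above, these variables can be passed together when starting the container. The values below reuse the documentation's own placeholders; the `OPENAI_API_KEY` line follows the Docker examples elsewhere in this README and is an assumption here:

```fish
# Hypothetical example combining the variables documented above.
$ docker run -d -p 3210:3210 \
  -e OPENAI_API_KEY=sk-xxxx \
  -e OPENAI_PROXY_URL=https://api.chatanywhere.cn/v1 \
  -e ACCESS_CODE=awCT74 \
  lobehub/lobe-chat
```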
<div align="right">
@@ -299,7 +305,8 @@ Plugins provide a means to extend the [Function Calling][fc-link] capabilities o
- [@lobehub/chat-plugin-sdk][chat-plugin-sdk]: The LobeChat Plugin SDK assists you in creating exceptional chat plugins for Lobe Chat.
- [@lobehub/chat-plugins-gateway][chat-plugins-gateway]: The LobeChat Plugins Gateway is a backend service that provides a gateway for LobeChat plugins. We deploy this service using Vercel. The primary API POST /api/v1/runner is deployed as an Edge Function.

-> **Note**\
+> \[!NOTE]
+>
> The plugin system is currently undergoing major development. You can learn more in the following issues:
>
> - [x] [**Plugin Phase 1**](https://github.com/lobehub/lobe-chat/issues/73): Implement separation of the plugin from the main body, split the plugin into an independent repository for maintenance, and realize dynamic loading of the plugin.
31 changes: 20 additions & 11 deletions README.zh-CN.md
@@ -72,7 +72,8 @@ LobeChat is an open-source, extensible ([Function Calling][fc-link]) high
| :---------------------------------------- | :--------------------------------------------------------------------------- |
| [![][discord-shield-badge]][discord-link] | Join our Discord community! This is where you can connect with developers and other enthusiastic LobeHub users |

-> **Important**\
+> \[!IMPORTANT]
+>
> **Star Us**, and you will receive all release notifications from GitHub without any delay ~ ⭐️
![](https://gw.alipayobjects.com/zos/kitchen/0hcO8QiU9c/star.webp)
@@ -94,17 +95,18 @@ LobeChat is an open-source, extensible ([Function Calling][fc-link]) high
- [x] 👁️ **Visual Recognition**: With integrated visual recognition capabilities, the AI assistant can now analyze and understand images provided during the conversation. This enables more interactive and context-aware dialogue, allowing the agent to give relevant and accurate answers based on visual content.
- [ ] (WIP) 📢 **Text-to-Speech (TTS) Conversation**: We are adding support for text-to-speech technology, allowing users to hold voice conversations with the dialogue agent. This feature provides a more natural and immersive conversation environment. Users can choose from a variety of voices and adjust the speech rate to suit their preferences.

-> **Note**\
+> \[!NOTE]
+>
> You can find our upcoming [Roadmap][github-project-link] plans in the Projects section
-----
+---

Besides the features above, our underlying technical stack brings you additional guarantees:

- [x] 💨 **Fast Deployment**: Using the Vercel platform or our Docker image, you can deploy with one click and complete the process within 1 minute without any complex configuration.
- [x] 🔒 **Privacy and Security**: All data is stored locally in the user's browser, ensuring user privacy.
- [x] 🌐 **Custom Domain**: If users own their own domain, they can bind it to the platform for quick access to the dialogue agent from anywhere.


<div align="right">

[![][back-to-top]](#readme-top)
@@ -148,7 +150,8 @@ LobeChat is an open-source, extensible ([Function Calling][fc-link]) high

Using Progressive Web App ([PWA](https://support.google.com/chrome/answer/9658361)) technology, you can enjoy a smooth LobeChat experience on your computer or mobile device.

-> **Note**\
+> \[!NOTE]
+>
> If you are unfamiliar with the PWA installation process, you can add LobeChat as a desktop application (also applicable to mobile devices) by following these steps:
>
> - Launch the Chrome or Edge browser on your computer
@@ -182,7 +185,8 @@ LobeChat provides two unique theme modes - light mode and dark mode,

## ⚡️ Performance Testing

-> **Note**\
+> \[!NOTE]
+>
> The complete test report can be found in the [📘 Lighthouse Performance Tests](https://github.com/lobehub/lobe-chat/wiki/Lighthouse.zh-CN)
| Desktop | Mobile |
@@ -221,7 +225,8 @@ LobeChat provides a self-hosted version on Vercel and a [Docker image][docker-release

If you have deployed your own project following the one-click deployment steps in the README, you may find that you are constantly prompted that "updates are available." This is because Vercel creates a new project for you by default instead of forking this one, which prevents updates from being detected accurately.

-> **Important**\
+> \[!TIP]
+>
> We recommend redeploying by following the steps in [📘 Maintaining Updates with LobeChat Self-Deployment](https://github.com/lobehub/lobe-chat/wiki/Upstream-Sync.zh-CN).
<br/>
@@ -241,7 +246,8 @@ $ docker run -d -p 3210:3210 \
lobehub/lobe-chat
```

-> **Note**\
+> \[!TIP]
+>
> If you need to use the OpenAI service through a proxy, you can configure the proxy address using the `OPENAI_PROXY_URL` environment variable:
```fish
@@ -252,7 +258,8 @@ $ docker run -d -p 3210:3210 \
lobehub/lobe-chat
```

-> **Note**\
+> \[!NOTE]
+>
> For detailed instructions on deploying with Docker, see the [📘 Docker Deployment Guide](https://github.com/lobehub/lobe-chat/wiki/Docker-Deployment.zh-CN)
<br/>
@@ -267,7 +274,8 @@ $ docker run -d -p 3210:3210 \
| `OPENAI_PROXY_URL` | Optional | If you manually configure the OpenAI API proxy, you can use this option to override the default OpenAI API request base URL | `https://api.chatanywhere.cn/v1`<br/>Default:<br/>`https://api.openai.com/v1` |
| `ACCESS_CODE` | Optional | Add a password to access this service; the password should be 6 digits or letters | `awCT74` or `e3@09!` |

-> **Note**\
+> \[!NOTE]
+>
> The complete list of environment variables can be found in [📘 Environment Variables](https://github.com/lobehub/lobe-chat/wiki/Environment-Variable.zh-CN)
<div align="right">
@@ -299,7 +307,8 @@ $ docker run -d -p 3210:3210 \
- [@lobehub/chat-plugin-sdk][chat-plugin-sdk]: The LobeChat Plugin SDK helps you create outstanding Lobe Chat plugins.
- [@lobehub/chat-plugins-gateway][chat-plugins-gateway]: The LobeChat Plugins Gateway is a backend service that serves as a gateway for LobeChat plugins. We deploy this service using Vercel. The primary API POST /api/v1/runner is deployed as an Edge Function.

-> **Note**\
+> \[!NOTE]
+>
> The plugin system is currently undergoing major development. You can learn more in the following issues:
>
> - [x] [**Plugin Phase 1**](https://github.com/lobehub/lobe-chat/issues/73): Implement separation of plugins from the main body, split plugins into an independent repository for maintenance, and enable dynamic plugin loading
3 changes: 2 additions & 1 deletion docs/Deploy-with-Azure-OpenAI.md
@@ -45,7 +45,8 @@ If you want the deployed version to be directly configured with Azure OpenAI for
| `AZURE_API_VERSION` | No | Azure's API version, follows the YYYY-MM-DD format | 2023-08-01-preview | `2023-05-15`, refer to [latest version][azure-api-version-url] |
| `ACCESS_CODE` | No | Add a password to access this service, the password should be 6 digits or letters | - | `awCT74` or `e3@09!` |

-> **Note**\
+> \[!NOTE]
+>
> When you enable `USE_AZURE_OPENAI` on the server side, users will not be able to modify and use the OpenAI key in the front-end configuration.
[azure-api-version-url]: https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions
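A minimal sketch of a self-hosted launch with Azure OpenAI enabled, using only variables named on this page; the exact set of required variables is an assumption, so consult the full table above:

```fish
# Hypothetical example: `USE_AZURE_OPENAI` and `AZURE_API_VERSION` are named in
# this document, but the complete required configuration may differ.
$ docker run -d -p 3210:3210 \
  -e USE_AZURE_OPENAI=1 \
  -e AZURE_API_VERSION=2023-08-01-preview \
  -e ACCESS_CODE=awCT74 \
  lobehub/lobe-chat
```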
3 changes: 2 additions & 1 deletion docs/Deploy-with-Azure-OpenAI.zh-CN.md
@@ -45,7 +45,8 @@ LobeChat supports using [Azure OpenAI][azure-openai-url] as OpenAI's model
| `AZURE_API_VERSION` | Optional | Azure's API version, following the YYYY-MM-DD format | 2023-08-01-preview | `2023-05-15`, see the [latest version][azure-api-verion-url] |
| `ACCESS_CODE` | Optional | Add a password to access this service; the password should be 6 digits or letters | - | `awCT74` or `e3@09!` |

-> **Note**\
+> \[!NOTE]
+>
> When you enable `USE_AZURE_OPENAI` on the server side, users will not be able to modify and use the OpenAI key in the front-end configuration.
[azure-api-verion-url]: https://learn.microsoft.com/zh-cn/azure/ai-services/openai/reference#chat-completions
5 changes: 3 additions & 2 deletions docs/Docker-Deployment.md
@@ -35,7 +35,7 @@ $ docker run -d -p 3210:3210 \
lobehub/lobe-chat
```

-> **Note**
+> \[!NOTE]
>
> - The default mapped port is `3210`. Make sure it is not occupied or manually change the port mapping.
> - Replace `sk-xxxx` in the above command with your own OpenAI API Key.
@@ -54,7 +54,8 @@ $ docker run -d -p 3210:3210 \
lobehub/lobe-chat
```

-> **Note**\
+> \[!NOTE]
+>
> As the official Docker image takes about half an hour to build, if an "update available" prompt appears after you redeploy, wait for the image build to finish and then deploy again.
### `B` Docker Compose
5 changes: 3 additions & 2 deletions docs/Docker-Deployment.zh-CN.md
@@ -35,7 +35,7 @@ $ docker run -d -p 3210:3210 \
lobehub/lobe-chat
```

-> **Note**
+> \[!NOTE]
>
> - The default mapped port is `3210`; make sure it is not occupied, or manually change the port mapping
> - Replace `sk-xxxx` in the above command with your own OpenAI API Key
@@ -54,7 +54,8 @@ $ docker run -d -p 3210:3210 \
lobehub/lobe-chat
```

-> **Note**\
+> \[!NOTE]
+>
> Since the official Docker image takes about half an hour to build, if an "update available" prompt appears after you redeploy, wait for the image build to finish and then deploy again.
### `B` Docker Compose
3 changes: 2 additions & 1 deletion docs/Upstream-Sync.md
@@ -8,7 +8,8 @@ If you have deployed your own project following the one-click deployment steps i

## Enabling Automatic Updates

-> **Note**\
+> \[!NOTE]
+>
> If you encounter an error while executing Upstream Sync, manually perform a Sync Fork once
Once you have forked the project, due to GitHub restrictions, you will need to manually enable Workflows on the Actions page of your forked project and activate the Upstream Sync Action. Once enabled, you can set up hourly automatic updates.
3 changes: 2 additions & 1 deletion docs/Upstream-Sync.zh-CN.md
@@ -8,7 +8,8 @@

## Enabling Automatic Updates

-> **Note**\
+> \[!NOTE]
+>
> If you encounter an error while executing `Upstream Sync`, manually run it once more
After you fork the project, due to GitHub restrictions, you need to manually enable Workflows on the Actions page of your forked project and activate the Upstream Sync Action. Once enabled, you can set up automatic updates every hour.
18 changes: 12 additions & 6 deletions docs/Usage-Agents.md
@@ -34,7 +34,8 @@ When you need to handle specific tasks, you'll want to consider creating a custo
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587283-a3ea8dfd-70fb-47ee-ab00-e3911ac6a939.png)
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587292-a3d102c6-f61e-4578-91f1-c0a4c97588e1.png)

-> **Note**\
+> \[!NOTE]
+>
> Quick setting tip: You can conveniently modify the prompt by using the quick edit button in the sidebar.
![](https://github-production-user-asset-6210df.s3.amazonaws.com/17870709/279587294-388d1877-193e-4a50-9fe8-8fbcc3ccefa0.png)
@@ -50,7 +51,8 @@ Generative AI is very useful, but it requires human guidance. In most cases, gen

### How to write a structured prompt

-> **Important**\
+> \[!TIP]
+>
> A structured prompt refers to the construction of the prompt having clear logic and structure. For example, if you want the model to generate an article, your prompt may need to include the topic of the article, its outline, and its style.
Let's look at a basic example of a discussion question:
@@ -77,7 +79,8 @@ The second prompt generates longer outputs with better structure. The use of the

### How to improve quality and effectiveness

-> **Important**\
+> \[!TIP]
+>
> There are several ways to improve the quality and effectiveness of prompts:
>
> - Be as clear as possible about your needs. The model will try to fulfill your requirements, so if your requirements are not clear, the output may not meet your expectations.
@@ -154,7 +157,8 @@ Controls the randomness of the model's output. Higher values increase randomness
- Lower values make the output more focused and deterministic.
- Higher values make the output more random and creative.

-> **Note**\
+> \[!NOTE]
+>
> Generally, the longer and clearer the prompt, the better the quality and confidence of the generated output. In this case, you can increase the temperature value. Conversely, if the prompt is short and ambiguous, setting a higher temperature value will make the model's output less stable.
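To make the effect concrete, here is a sketch of how temperature is passed to the OpenAI Chat Completions API that LobeChat builds on; the endpoint and payload follow the public OpenAI API and are not taken from this commit:

```fish
# Low temperature (e.g. 0.2) gives focused, repeatable answers;
# high temperature (e.g. 1.2) gives more varied, creative ones.
$ curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Name three colors."}],
    "temperature": 0.2
  }'
```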
<br/>
Expand All @@ -163,7 +167,8 @@ Controls the randomness of the model's output. Higher values increase randomness

Top-p (nucleus) sampling is a sampling parameter distinct from temperature. Before producing output, the model scores a set of candidate tokens. In top-p mode, the candidate list is dynamic: tokens are drawn from the smallest set whose cumulative probability reaches the chosen percentage. This introduces randomness into token selection, giving other high-scoring tokens a chance to be chosen instead of always picking the single highest-scoring one.

-> **Note**\
+> \[!NOTE]
+>
> Top-p is similar to randomness. In general, it is not recommended to change it together with the randomness parameter, temperature.
<br/>
Expand All @@ -175,7 +180,8 @@ The presence penalty parameter can be seen as a punishment for repetitive conten
- Increasing the originality and diversity of the generated text: In some application scenarios, such as creative writing or generating news headlines, it is desirable for the generated text to have high originality and diversity. By increasing the value of the presence penalty parameter, the probability of generating repeated content in the generated text can be effectively reduced, thereby improving its originality and diversity.
- Preventing generation loops and meaningless content: In some cases, the generative model may produce repetitive and meaningless text that fails to convey useful information. By appropriately increasing the value of the presence penalty parameter, the probability of generating this type of meaningless content can be reduced, thereby improving the readability and usefulness of the generated text.

-> **Note**\
+> \[!NOTE]
+>
> It is worth noting that the presence penalty parameter, along with other parameters such as temperature and top-p, collectively affect the quality of the generated text. Compared to other parameters, the presence penalty parameter focuses more on the originality and repetitiveness of the text, while the temperature and top-p parameters have a greater impact on the randomness and determinism of the generated text. By adjusting these parameters properly, comprehensive control of the quality of the generated text can be achieved.
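Because temperature, top-p, and presence penalty act together, a single request may set all three. Again, this is a sketch against the public OpenAI API rather than LobeChat internals; `presence_penalty` accepts values from -2.0 to 2.0:

```fish
# Moderate temperature and top_p with a positive presence_penalty
# nudges the model away from tokens it has already produced.
$ curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Write a headline about solar power."}],
    "temperature": 0.8,
    "top_p": 0.9,
    "presence_penalty": 1.0
  }'
```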
<br/>
