Optimize Docker Setup, Resolve CORS Issues, and Fix Dependency Conflicts #170

Merged · 2 commits · Aug 27, 2024
115 changes: 65 additions & 50 deletions docker/README.md
@@ -1,33 +1,33 @@
# MindSearch Docker Compose User Guide

English | [简体中文](README_zh-CN.md)

## 🚀 Quick Start with Docker Compose

MindSearch now supports quick deployment and startup using Docker Compose. This method simplifies the environment configuration process, allowing you to easily run the entire system.

### Prerequisites

- Docker (Docker Compose V2 is integrated into Docker)
- NVIDIA GPU and NVIDIA Container Toolkit (required for NVIDIA GPU support)

Note: Newer versions of Docker have integrated Docker Compose V2, so you can use the `docker compose` command directly without a separate installation of docker-compose.
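
If you are unsure whether GPU passthrough is working, a quick sanity check is to run `nvidia-smi` inside a throwaway container (the CUDA image tag below is only an example; any recent CUDA base image will do):

```bash
# Should print the GPU table if the NVIDIA Container Toolkit is set up correctly
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```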

### Usage Instructions

All commands should be executed in the `mindsearch/docker` directory.

#### First-time Startup

```bash
docker compose up --build -d
```

This will build the necessary Docker images and start the services in the background.

#### Daily Use

Start services:

```bash
docker compose up -d
```

@@ -43,71 +43,86 @@ Stop services:
docker compose down
```

#### Major Version Updates

Rebuild images after a major update:

```bash
docker compose build --no-cache
docker compose up -d
```
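
A `--no-cache` rebuild can leave dangling image layers behind; if disk space matters, they can be reclaimed with a standard Docker command:

```bash
# Optional: remove dangling layers left over from the rebuild
docker image prune -f
```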

### Configuration Details

1. **Environment Variables**:
The system automatically reads the following variables from your environment:

- `OPENAI_API_KEY`: Your OpenAI API key
- `OPENAI_API_BASE`: OpenAI API base URL (default: https://api.openai.com/v1)
- `LANG`: Language setting ('en' or 'cn')
- `MODEL_FORMAT`: Model format ('gpt4' or 'internlm_server')

Example setup:

Using local internlm2.5-7b-chat model:

```bash
export LANG=cn
export MODEL_FORMAT=internlm_server
docker compose up -d
```

Using OpenAI's API:

```bash
export OPENAI_API_KEY=your_api_key_here
export OPENAI_API_BASE=https://your-custom-endpoint.com/v1
export LANG=en
export MODEL_FORMAT=gpt4
docker compose up -d
```

Using SiliconFlow's cloud LLM service:

```bash
export SILICON_API_KEY=your_api_key_here
export LANG=en
export MODEL_FORMAT=internlm_silicon
docker compose up -d
```

2. **Model Cache**:
The container maps the `/root/.cache:/root/.cache` path to store model files. To change the storage location, modify the volume mapping in docker-compose.yaml (see the sketch after this list).

3. **GPU Support**:
The default configuration uses NVIDIA GPUs. For other GPU types, please refer to the comments in docker-compose.yaml.

4. **Service Access**:
In the Docker Compose environment, the frontend container can directly access the backend service via `http://backend:8002`.

5. **Backend Server Address Configuration**:
Currently, the method for changing the backend server address is temporary. We use a sed command in the Dockerfile to modify the vite.config.ts file to replace the server proxy address. This method is effective in the development environment but not suitable for production.
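
For reference, a minimal docker-compose.yaml sketch covering points 2 and 3 above — it assumes the `backend` service name used in this repository's compose file, and `/data/model-cache` is only a placeholder host path:

```yaml
services:
  backend:
    volumes:
      # Relocate the model cache by mapping a different host directory
      - /data/model-cache:/root/.cache
    deploy:
      resources:
        reservations:
          devices:
            # NVIDIA by default; see the comments in docker-compose.yaml
            # for other GPU types
            - driver: nvidia
              count: all
              capabilities: [gpu]
```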

### Important Notes

- The first run may take some time to download necessary model files, depending on your chosen model and network conditions.
- Ensure you have sufficient disk space to store model files and Docker images (a quick way to check is sketched below).
- If you encounter permission issues, you may need to use sudo to run Docker commands.
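
To gauge how much space Docker and the model cache are already using, the standard CLI commands below help:

```bash
# Disk usage of images, containers, and volumes
docker system df
# Free space on the filesystem holding the model cache
df -h /root/.cache
```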

### Cross-Origin Access Note

In the current version, we temporarily solve the cross-origin access issue by using Vite's development mode in the frontend Docker container:

1. The frontend Dockerfile uses the `npm start` command to start the Vite development server.
2. In the `vite.config.ts` file, we configure proxy settings to forward requests for the `/solve` path to the backend service.

Please note:

- This method is effective in the development environment but not suitable for production use.
- We plan to implement a more robust cross-origin solution suitable for production environments in future versions.
- If you plan to deploy this project in a production environment, you may need to consider other cross-origin handling methods, such as configuring backend CORS policies or using a reverse proxy server (one possible compose-level sketch follows).
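
As one illustration of the reverse-proxy route, here is a hypothetical compose fragment that puts nginx in front of both services so the browser only ever talks to a single origin. The mounted `nginx.conf` (routing `/` to the frontend and `/solve` to the backend) is an assumed file, not part of this repository:

```yaml
services:
  proxy:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      # Assumed config: proxy_pass / -> frontend:8080 and /solve -> backend:8002
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - frontend
      - backend
```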


### Conclusion

We appreciate your understanding and patience. MindSearch is still in its early stages, and we are working hard to improve various aspects of the system. Your feedback is very important to us as it helps us continuously refine the project. If you encounter any issues or have any suggestions during use, please feel free to provide feedback.

By using Docker Compose, you can quickly deploy MindSearch without worrying about complex environment configurations. This method is particularly suitable for rapid testing and development environment deployment. If you encounter any problems during deployment, please refer to our troubleshooting guide or seek community support.
117 changes: 67 additions & 50 deletions docker/README_zh-CN.md
@@ -1,33 +1,33 @@
# MindSearch Docker Compose User Guide

[English](README.md) | 简体中文

## 🚀 Quick Start with Docker Compose

MindSearch supports quick deployment and startup with Docker Compose, which simplifies environment configuration and lets you run the entire system with ease.

### Prerequisites

- Docker (with Docker Compose V2 integrated)
- NVIDIA GPU and NVIDIA Container Toolkit (required for NVIDIA GPU support)

Note: Newer versions of Docker integrate Docker Compose V2, so you can use the `docker compose` command directly.
### Usage Instructions

All commands should be executed in the `mindsearch/docker` directory.

#### First-time Startup

```bash
docker compose up --build -d
```

This builds the necessary Docker images and starts the services in the background.

#### Daily Use

Start services:

```bash
docker compose up -d
```

@@ -43,69 +43,86 @@ docker ps
docker compose down
```

#### Major Version Updates

Rebuild the images after an update:

```bash
docker compose build --no-cache
docker compose up -d
```

### Configuration Details

1. **Environment Variables**:
The system automatically reads the following environment variables:

- `OPENAI_API_KEY`: Your OpenAI API key
- `OPENAI_API_BASE`: OpenAI API base URL (default: https://api.openai.com/v1)
- `LANG`: Language setting ('en' or 'cn')
- `MODEL_FORMAT`: Model format ('gpt4' or 'internlm_server')

Example setup:

Using the local internlm2.5-7b-chat model:

```bash
export LANG=cn
export MODEL_FORMAT=internlm_server
docker compose up -d
```

Using OpenAI's LLM service:

```bash
export OPENAI_API_KEY=your_api_key_here
export OPENAI_API_BASE=https://your-custom-endpoint.com/v1
export LANG=cn
export MODEL_FORMAT=gpt4
docker compose up -d
```

Using SiliconFlow's cloud LLM service:

```bash
export SILICON_API_KEY=your_api_key_here
export LANG=cn
export MODEL_FORMAT=internlm_silicon
docker compose up -d
```

2. **Model Cache**:
The container maps the `/root/.cache:/root/.cache` path to store model files.

3. **GPU Support**:
The default configuration uses NVIDIA GPUs. For other GPU types, refer to the comments in docker-compose.yaml.

4. **Service Access**:
In the Docker Compose environment, the frontend container can access the backend service directly at `http://backend:8002`.

5. **Backend Server Address Configuration**:
Currently, the method for changing the backend server address is temporary: a sed command in the Dockerfile rewrites the vite.config.ts file to replace the server proxy address. This works in development but is not suitable for production.

### Important Notes

- The first run may take some time to download the necessary model files.
- Make sure you have enough disk space for model files and Docker images.
- If you run into permission issues, you may need to run Docker commands with sudo.

### Cross-Origin Access Note

The current version temporarily works around cross-origin issues by running the frontend in Vite's development mode:

1. The frontend Dockerfile starts the Vite development server with `npm start`.
2. `vite.config.ts` configures a proxy that forwards requests on the `/solve` path to the backend.

Note:

- This approach works for development but is not suitable for production.
- Future versions will ship a cross-origin solution better suited to production.
- Production deployments may need to consider other cross-origin handling methods.

### Conclusion

Thank you for your support. MindSearch is improving continuously, and your feedback is vital to us. If you have any questions or suggestions, please feel free to reach out.

The Docker Compose approach streamlines MindSearch deployment and is especially well suited to quick testing and development environments. If you run into problems during deployment, please consult the troubleshooting guide or seek community support.
4 changes: 3 additions & 1 deletion docker/backend.dockerfile
@@ -26,4 +26,6 @@ RUN pip install --no-cache-dir \
sse-starlette \
termcolor \
uvicorn \
git+https://github.com/InternLM/lagent.git

RUN pip install --no-cache-dir -U griffe==0.48.0
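
To confirm the pin took effect after rebuilding, one option (assuming the `backend` service name from docker-compose.yaml) is:

```bash
# Prints the installed griffe version inside a one-off backend container
docker compose run --rm backend pip show griffe
```
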
7 changes: 2 additions & 5 deletions docker/docker-compose.yaml
@@ -14,7 +14,8 @@ services:
- PYTHONUNBUFFERED=1
- OPENAI_API_KEY=${OPENAI_API_KEY:-}
- OPENAI_API_BASE=${OPENAI_API_BASE:-https://api.openai.com/v1}
- SILICON_API_KEY=${SILICON_API_KEY:-}
command: python -m mindsearch.app --lang ${LANG:-cn} --model_format ${MODEL_FORMAT:-internlm_server}
volumes:
- /root/.cache:/root/.cache
deploy:
@@ -56,9 +57,5 @@ services:
# pull: never
ports:
- "8080:8080"
depends_on:
- backend