Commit a240748: MindSearch Codespaces version (#1759)

# Deploying MindSearch (CPU-only) on GitHub Codespaces

Compared with the [original CPU-only version](https://github.com/InternLM/Tutorial/blob/camp3/docs/L2/MindSearch/readme.md), the only difference is that InternStudio has been replaced with GitHub Codespaces.

Now that SiliconFlow provides a free InternLM2.5-7B-Chat service (and the free InternLM2.5-7B-Chat is genuinely great value), MindSearch can be deployed in a pure-CPU setup, further lowering the barrier to entry. Let's walk through how to deploy MindSearch using the SiliconFlow API.

## 1. Create a Dev Machine & Set Up the Environment

Open the [Codespaces home page](https://github.com/codespaces) and choose the blank template.

![image](https://github.com/user-attachments/assets/27e7a7ef-37e0-4614-89bf-c96044e3c6f3)

The browser automatically opens a web-based VS Code in a new tab.

<img width="1591" alt="image" src="https://github.com/user-attachments/assets/58727fec-8d83-417d-88e5-eedc631444f2">

From here on, working in it is essentially no different from using desktop VS Code.

Next, create a directory to hold the MindSearch code and clone the MindSearch repository. Run the following commands in the terminal:

```bash
mkdir -p /workspaces/mindsearch
cd /workspaces/mindsearch
git clone https://github.com/InternLM/MindSearch.git
cd MindSearch && git checkout b832275 && cd ..
```

Next, we create a conda environment and install the required dependencies.

```bash
# Create the environment
conda create -n mindsearch python=3.10 -y
# Activate it
conda activate mindsearch
# Install the dependencies
pip install -r /workspaces/mindsearch/MindSearch/requirements.txt
```

## 2. Get a SiliconFlow API Key

Since we will be calling the SiliconFlow API, the next step is to register an account and obtain an API key.

First, open https://account.siliconflow.cn/login to register a SiliconFlow account (or simply log in if you already have one).

After registering, open https://cloud.siliconflow.cn/account/ak to prepare an API key: create a new API key, then click it to copy it for later use.

![image](https://github.com/user-attachments/assets/7905a2fc-ef30-4e33-b214-274bebdc9251)

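Before plugging the key into MindSearch, it can be worth a quick smoke test. The sketch below is an assumption: the endpoint URL and the model identifier follow SiliconFlow's OpenAI-compatible API convention and are not taken from this tutorial, so double-check them against SiliconFlow's own documentation.

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; verify against SiliconFlow's docs.
API_URL = "https://api.siliconflow.cn/v1/chat/completions"


def build_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build a minimal chat-completions request for the free InternLM2.5 model."""
    payload = {
        # Model identifier is an assumption; check SiliconFlow's model list.
        "model": "internlm/internlm2_5-7b-chat",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )


if __name__ == "__main__" and "SILICON_API_KEY" in os.environ:
    req = build_request(os.environ["SILICON_API_KEY"], "你好")
    with urllib.request.urlopen(req, timeout=30) as resp:
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

If the key is valid, running this with `SILICON_API_KEY` set should print a short model reply.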
## 3. Launch MindSearch

### 3.1 Start the backend

The SiliconFlow API configuration is already integrated into MindSearch, so we can start the MindSearch backend directly with the following commands.

```bash
export SILICON_API_KEY=<the key copied in step 2>
conda activate mindsearch
cd /workspaces/mindsearch/MindSearch
python -m mindsearch.app --lang cn --model_format internlm_silicon --search_engine DuckDuckGoSearch
```

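The backend serves a `/solve` endpoint on port 8002 and streams results as server-sent events; the payload shape (`{'inputs': ...}`) and the `data: ` framing below mirror what the `app.py` shown later in this tutorial does. A minimal client sketch (the helper names are mine):

```python
import json

import requests


def parse_sse_line(line: str):
    """Strip the SSE framing; return the decoded JSON payload or None for noise."""
    if line == '\r' or line.startswith(': ping - '):
        return None  # keep-alive noise from the stream
    if line.startswith('data: '):
        line = line[len('data: '):]
    return json.loads(line)


def solve(question: str, url: str = 'http://localhost:8002/solve'):
    """Stream (response, current_node) pairs for one question from the backend."""
    data = {'inputs': [dict(role='user', content=question)]}
    resp = requests.post(url, json=data, timeout=20, stream=True)
    for raw in resp.iter_lines(decode_unicode=True):
        if raw:
            payload = parse_sse_line(raw)
            if payload is not None:
                yield payload['response'], payload['current_node']


# Example (requires the backend started above to be running):
# for response, node in solve('帮我搜索一下 InternLM 开源体系'):
#     print(node)
```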
### 3.2 Start the frontend

Once the backend is up, open a new terminal and run the following commands to start the MindSearch frontend.

```bash
conda activate mindsearch
cd /workspaces/mindsearch/MindSearch
python frontend/mindsearch_gradio.py
```

With both the backend and the frontend running, you should see GitHub automatically set up port forwarding for the two processes.

<img width="1183" alt="image" src="https://github.com/user-attachments/assets/4ee76ca2-06a5-4145-829a-1310e69c0d83">

Since we are using a Codespace, no SSH port forwarding is needed here: GitHub automatically offers to open a publicly reachable frontend URL.

<img width="600" alt="image" src="https://github.com/user-attachments/assets/545d5827-6ee3-416a-a913-1be09866f29e">

Then you can try it out right away.

<img width="1489" alt="image" src="https://github.com/user-attachments/assets/28f5658c-19a6-4a46-9bc9-51f4923a012c">

If you run into timeout errors, you can switch to the Bing search API as described in the [documentation](./readme_gpu.md#2-使用-bing-的接口).

## 4. Deploy to a Hugging Face Space

Finally, let's deploy MindSearch to a Hugging Face Space.

First, open https://huggingface.co/spaces and click Create new Space, as shown below.

![image](https://github.com/user-attachments/assets/bacbe161-2d21-434e-8f78-5738b076cd74)

Enter a Space name, choose a License, and select the configuration shown below.

![image](https://github.com/user-attachments/assets/f4d98e6b-5352-4638-a3da-d090140ce3f6)

Then go to Settings and configure the SiliconFlow API key, as shown below.

![image](https://github.com/user-attachments/assets/76947d6c-eeba-4230-ab77-04a98c60d4d3)

Choose New secrets, enter SILICON_API_KEY in the name field and your API key in the value field.

![image](https://github.com/user-attachments/assets/6f4ab268-c5d6-4106-8749-ad282e17ba35)

Finally, create a new directory to hold all the files to be committed to the Hugging Face Space.

```bash
# Create the new directory
mkdir -p /workspaces/mindsearch/mindsearch_deploy
# Copy the files over
cd /workspaces/mindsearch
cp -r /workspaces/mindsearch/MindSearch/mindsearch /workspaces/mindsearch/mindsearch_deploy
cp /workspaces/mindsearch/MindSearch/requirements.txt /workspaces/mindsearch/mindsearch_deploy
# Create app.py as the entry point
touch /workspaces/mindsearch/mindsearch_deploy/app.py
```

The content of app.py is as follows:

```python
import json
import os

import gradio as gr
import requests
from lagent.schema import AgentStatusCode

# Launch the MindSearch backend in the background.
os.system("python -m mindsearch.app --lang cn --model_format internlm_silicon &")

PLANNER_HISTORY = []
SEARCHER_HISTORY = []


def rst_mem(history_planner: list, history_searcher: list):
    '''
    Reset the chatbot memory.
    '''
    history_planner = []
    history_searcher = []
    if PLANNER_HISTORY:
        PLANNER_HISTORY.clear()
    return history_planner, history_searcher


def format_response(gr_history, agent_return):
    if agent_return['state'] in [
            AgentStatusCode.STREAM_ING, AgentStatusCode.ANSWER_ING
    ]:
        gr_history[-1][1] = agent_return['response']
    elif agent_return['state'] == AgentStatusCode.PLUGIN_START:
        thought = gr_history[-1][1].split('```')[0]
        if agent_return['response'].startswith('```'):
            gr_history[-1][1] = thought + '\n' + agent_return['response']
    elif agent_return['state'] == AgentStatusCode.PLUGIN_END:
        thought = gr_history[-1][1].split('```')[0]
        if isinstance(agent_return['response'], dict):
            gr_history[-1][1] = (
                thought + '\n' +
                f'```json\n{json.dumps(agent_return["response"], ensure_ascii=False, indent=4)}\n```'  # noqa: E501
            )
    elif agent_return['state'] == AgentStatusCode.PLUGIN_RETURN:
        assert agent_return['inner_steps'][-1]['role'] == 'environment'
        item = agent_return['inner_steps'][-1]
        gr_history.append([
            None,
            f"```json\n{json.dumps(item['content'], ensure_ascii=False, indent=4)}\n```"
        ])
        gr_history.append([None, ''])
    return


def predict(history_planner, history_searcher):

    def streaming(raw_response):
        for chunk in raw_response.iter_lines(chunk_size=8192,
                                             decode_unicode=False,
                                             delimiter=b'\n'):
            if chunk:
                decoded = chunk.decode('utf-8')
                if decoded == '\r':
                    continue
                if decoded[:6] == 'data: ':
                    decoded = decoded[6:]
                elif decoded.startswith(': ping - '):
                    continue
                response = json.loads(decoded)
                yield (response['response'], response['current_node'])

    global PLANNER_HISTORY
    PLANNER_HISTORY.append(dict(role='user', content=history_planner[-1][0]))
    new_search_turn = True

    url = 'http://localhost:8002/solve'
    headers = {'Content-Type': 'application/json'}
    data = {'inputs': PLANNER_HISTORY}
    raw_response = requests.post(url,
                                 headers=headers,
                                 data=json.dumps(data),
                                 timeout=20,
                                 stream=True)

    for resp in streaming(raw_response):
        agent_return, node_name = resp
        if node_name:
            if node_name in ['root', 'response']:
                continue
            agent_return = agent_return['nodes'][node_name]['detail']
            if new_search_turn:
                history_searcher.append([agent_return['content'], ''])
                new_search_turn = False
            format_response(history_searcher, agent_return)
            if agent_return['state'] == AgentStatusCode.END:
                new_search_turn = True
            yield history_planner, history_searcher
        else:
            new_search_turn = True
            format_response(history_planner, agent_return)
            if agent_return['state'] == AgentStatusCode.END:
                PLANNER_HISTORY = agent_return['inner_steps']
            yield history_planner, history_searcher
    return history_planner, history_searcher


with gr.Blocks() as demo:
    gr.HTML("""<h1 align="center">MindSearch Gradio Demo</h1>""")
    gr.HTML("""<p style="text-align: center; font-family: Arial, sans-serif;">MindSearch is an open-source AI Search Engine Framework with Perplexity.ai Pro performance. You can deploy your own Perplexity.ai-style search engine using either closed-source LLMs (GPT, Claude) or open-source LLMs (InternLM2.5-7b-chat).</p>""")
    gr.HTML("""
    <div style="text-align: center; font-size: 16px;">
        <a href="https://github.com/InternLM/MindSearch" style="margin-right: 15px; text-decoration: none; color: #4A90E2;">🔗 GitHub</a>
        <a href="https://arxiv.org/abs/2407.20183" style="margin-right: 15px; text-decoration: none; color: #4A90E2;">📄 Arxiv</a>
        <a href="https://huggingface.co/papers/2407.20183" style="margin-right: 15px; text-decoration: none; color: #4A90E2;">📚 Hugging Face Papers</a>
        <a href="https://huggingface.co/spaces/internlm/MindSearch" style="text-decoration: none; color: #4A90E2;">🤗 Hugging Face Demo</a>
    </div>
    """)
    with gr.Row():
        with gr.Column(scale=10):
            with gr.Row():
                with gr.Column():
                    planner = gr.Chatbot(label='planner',
                                         height=700,
                                         show_label=True,
                                         show_copy_button=True,
                                         bubble_full_width=False,
                                         render_markdown=True)
                with gr.Column():
                    searcher = gr.Chatbot(label='searcher',
                                          height=700,
                                          show_label=True,
                                          show_copy_button=True,
                                          bubble_full_width=False,
                                          render_markdown=True)
            with gr.Row():
                user_input = gr.Textbox(show_label=False,
                                        placeholder='帮我搜索一下 InternLM 开源体系',
                                        lines=5,
                                        container=False)
            with gr.Row():
                with gr.Column(scale=2):
                    submitBtn = gr.Button('Submit')
                with gr.Column(scale=1, min_width=20):
                    emptyBtn = gr.Button('Clear History')

    def user(query, history):
        return '', history + [[query, '']]

    submitBtn.click(user, [user_input, planner], [user_input, planner],
                    queue=False).then(predict, [planner, searcher],
                                      [planner, searcher])
    emptyBtn.click(rst_mem, [planner, searcher], [planner, searcher],
                   queue=False)

demo.queue()
demo.launch(server_name='0.0.0.0',
            server_port=7860,
            inbrowser=True,
            share=True)
```

Finally, commit the files under the /workspaces/mindsearch/mindsearch_deploy directory to the Hugging Face Space (using git) to complete the deployment. Note that pushing code to a Hugging Face Space requires configuring a Hugging Face access token.
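The commit-and-push step can be sketched as below. The username, Space name, and `HF_TOKEN` variable are placeholders you must substitute; the remote URL with an embedded token follows Hugging Face's usual git-over-HTTPS convention, so this is a sketch rather than a verbatim recipe.

```shell
# Hypothetical values: substitute your own username and Space name.
HF_USER="${HF_USER:-your-username}"
SPACE="${SPACE:-mindsearch}"
SPACE_URL="https://huggingface.co/spaces/${HF_USER}/${SPACE}"
echo "Will push to: ${SPACE_URL}"

# The push itself needs a Hugging Face write token; skip it when unset.
if [ -n "${HF_TOKEN:-}" ]; then
  git clone "https://${HF_USER}:${HF_TOKEN}@huggingface.co/spaces/${HF_USER}/${SPACE}" /tmp/space
  cp -r /workspaces/mindsearch/mindsearch_deploy/* /tmp/space/
  cd /tmp/space && git add . && git commit -m "deploy MindSearch" && git push
fi
```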