@@ -17,10 +17,10 @@ This tutorial shows how to set up real-time weather functionality with Llama-Nex
## 1. Set Up Your MCP Server
```bash
- curl -LO https://github.com/cardea-mcp/cardea-mcp-servers/releases/download/0.7.0/cardea-mcp-servers-unknown-linux-gnu-x86_64.tar.gz
+ curl -LO https://github.com/cardea-mcp/cardea-mcp-servers/releases/download/0.8.0/cardea-mcp-servers-unknown-linux-gnu-x86_64.tar.gz
tar xvf cardea-mcp-servers-unknown-linux-gnu-x86_64.tar.gz
```
- > Download for your platform: https://github.com/cardea-mcp/cardea-mcp-servers/releases/tag/0.7.0
+ > Download for your platform: https://github.com/cardea-mcp/cardea-mcp-servers/releases/tag/0.8.0
Set the environment variables:
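The environment variables themselves are not shown in this hunk; the sketch below is hypothetical, with assumed variable names used purely as an illustration. Check the cardea-mcp-servers documentation for the exact names the weather server expects.

```bash
# Hypothetical sketch: variable names are assumptions, not from this PR.
export WEATHER_API_KEY=your-weather-provider-api-key   # key for the upstream weather API
export RUST_LOG=info                                   # optional: adjust server log verbosity
```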
@@ -45,11 +45,11 @@ Run the MCP server (accessible from external connections):
Download and extract llama-nexus:
```bash
- curl -LO https://github.com/LlamaEdge/llama-nexus/releases/download/0.5.0/llama-nexus-apple-darwin-aarch64.tar.gz
+ curl -LO https://github.com/LlamaEdge/llama-nexus/releases/download/0.6.0/llama-nexus-apple-darwin-aarch64.tar.gz
tar xvf llama-nexus-apple-darwin-aarch64.tar.gz
```
- > Download for your platform: https://github.com/LlamaEdge/llama-nexus/releases/tag/0.5.0
+ > Download for your platform: https://github.com/LlamaEdge/llama-nexus/releases/tag/0.6.0
### Configure llama-nexus
@@ -87,11 +87,13 @@ Register an LLM chat API server for the `/chat/completions` endpoint:
curl --location 'http://localhost:9095/admin/servers/register' \
--header 'Content-Type: application/json' \
--data '{
- "url": "https://0xb2962131564bc854ece7b0f7c8c9a8345847abfb.gaia.domains",
90
+ "url": "https://0xb2962131564bc854ece7b0f7c8c9a8345847abfb.gaia.domains/v1 ",
"kind": "chat"
}'
```
+ > If your API server requires API key access, you can add an `api-key` field in the registration request.
+
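For example, a registration request that includes an API key could look like the following sketch; the URL is the same Gaia node used above, and the key value is a placeholder:

```bash
# Sketch of a registration request with an API key (placeholder value).
curl --location 'http://localhost:9095/admin/servers/register' \
--header 'Content-Type: application/json' \
--data '{
    "url": "https://0xb2962131564bc854ece7b0f7c8c9a8345847abfb.gaia.domains/v1",
    "kind": "chat",
    "api-key": "your-api-key"
}'
```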
## 3. Test the Setup
Test the inference server by requesting the `/chat/completions` API endpoint, which is OpenAI-compatible:
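A minimal request sketch, assuming llama-nexus listens on port 9095 as registered above and serves the endpoint under the OpenAI-style `/v1` prefix; the model name and question are placeholders:

```bash
# OpenAI-compatible chat request; "default" is a placeholder model name.
curl --location 'http://localhost:9095/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "model": "default",
    "messages": [
        {"role": "user", "content": "What is the weather in Singapore right now?"}
    ]
}'
```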