Commit ced478e

dmr: gui docs

1 parent 5b0705f commit ced478e
2 files changed: +46 −185 lines

content/manuals/_index.md

Lines changed: 1 addition & 1 deletion

@@ -39,7 +39,7 @@ params:
   - title: Docker Model Runner
     description: View and manage your local models.
     icon: view_in_ar
-    link: /model-runner/
+    link: /ai/model-runner/
   - title: MCP Catalog and Toolkit
     description: Augment your AI workflow with MCP servers.
     icon: /assets/icons/toolbox.svg

content/manuals/ai/model-runner.md renamed to content/manuals/ai/model-runner/_index.md

Lines changed: 45 additions & 184 deletions
@@ -8,27 +8,30 @@ params:
   group: AI
   weight: 20
   description: Learn how to use Docker Model Runner to manage and run AI models.
-  keywords: Docker, ai, model runner, docker deskotp, llm
+  keywords: Docker, ai, model runner, docker desktop, llm
 aliases:
   - /desktop/features/model-runner/
-  - /ai/model-runner/
+  - /model-runner/
 ---

 {{< summary-bar feature_name="Docker Model Runner" >}}

-The Docker Model Runner plugin lets you:
+## Key features

-- [Pull models from Docker Hub](https://hub.docker.com/u/ai)
-- Run AI models directly from the command line
-- Manage local models (add, list, remove)
-- Interact with models using a submitted prompt or in chat mode in the CLI or Docker Desktop Dashboard
-- Push models to Docker Hub
+- [Pull and push models to and from Docker Hub](https://hub.docker.com/u/ai)
+- Run and interact with AI models directly from the command line or from Docker Desktop
+- Manage local models and display logs
+
+## How it works

 Models are pulled from Docker Hub the first time they're used and stored locally. They're loaded into memory only at runtime when a request is made, and unloaded when not in use to optimize resources. Since models can be large, the initial pull may take some time — but after that, they're cached locally for faster access. You can interact with the model using [OpenAI-compatible APIs](#what-api-endpoints-are-available).

 > [!TIP]
 >
-> Using Testcontainers or Docker Compose? [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/) and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and [Docker Compose](/manuals/compose/how-tos/model-runner.md) now support Docker Model Runner.
+> Using Testcontainers or Docker Compose?
+> [Testcontainers for Java](https://java.testcontainers.org/modules/docker_model_runner/)
+> and [Go](https://golang.testcontainers.org/modules/dockermodelrunner/), and
+> [Docker Compose](/manuals/compose/how-tos/model-runner.md) now support Docker Model Runner.

 ## Enable Docker Model Runner

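The "How it works" paragraph kept by this hunk says models are exercised through OpenAI-compatible APIs. As a minimal sketch of what such a request body looks like (the payload shape is the standard OpenAI chat-completions format, and `ai/smollm2` is the example model used elsewhere on this page, not anything introduced by this commit):

```python
import json

def build_chat_request(model: str, prompt: str) -> str:
    """Build an OpenAI-style chat-completions request body as JSON."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(payload)

# Example: the same one-shot prompt the removed CLI docs used.
body = build_chat_request("ai/smollm2", "Hi")
print(body)
```

This body would be POSTed to the chat-completions endpoint documented in the FAQ section of the page; the sketch only constructs the payload, so it runs without Docker installed.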
@@ -42,192 +45,60 @@ Models are pulled from Docker Hub the first time they're used and stored locally

 You can now use the `docker model` command in the CLI and view and interact with your local models in the **Models** tab in the Docker Desktop Dashboard.

-## Available commands
-
-### Model runner status
-
-Check whether the Docker Model Runner is active and displays the current inference engine:
-
-```console
-$ docker model status
-```
-
-### View all commands
-
-Displays help information and a list of available subcommands.
-
-```console
-$ docker model help
-```
-
-Output:
-
-```text
-Usage: docker model COMMAND
-
-Commands:
-  list      List models available locally
-  pull      Download a model from Docker Hub
-  rm        Remove a downloaded model
-  run       Run a model interactively or with a prompt
-  status    Check if the model runner is running
-  version   Show the current version
-```
-
-### Pull a model
-
-Pulls a model from Docker Hub to your local environment.
-
-```console
-$ docker model pull <model>
-```
-
-Example:
-
-```console
-$ docker model pull ai/smollm2
-```
-
-Output:
-
-```text
-Downloaded: 257.71 MB
-Model ai/smollm2 pulled successfully
-```
-
-The models also display in the Docker Desktop Dashboard.
-
-#### Pull from Hugging Face
-
-You can also pull GGUF models directly from [Hugging Face](https://huggingface.co/models?library=gguf).
-
-```console
-$ docker model pull hf.co/<model-you-want-to-pull>
-```
-
-For example:
-
-```console
-$ docker model pull hf.co/bartowski/Llama-3.2-1B-Instruct-GGUF
-```
-
-Pulls the [bartowski/Llama-3.2-1B-Instruct-GGUF](https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF).
-
-### List available models
-
-Lists all models currently pulled to your local environment.
-
-```console
-$ docker model list
-```
-
-You will see something similar to:
+## Install a model

-```text
-MODEL       PARAMETERS  QUANTIZATION    ARCHITECTURE  MODEL ID      CREATED     SIZE
-ai/smollm2  361.82 M    IQ2_XXS/Q4_K_M  llama         354bf30d0aa3  3 days ago  256.35 MiB
-```
+Models are installed locally.

-### Run a model
+{{< tabs >}}
+{{< tab name="From Docker Desktop">}}

-Run a model and interact with it using a submitted prompt or in chat mode. When you run a model, Docker
-calls an Inference Server API endpoint hosted by the Model Runner through Docker Desktop. The model
-stays in memory until another model is requested, or until a pre-defined inactivity timeout is reached (currently 5 minutes).
+1. Select **Models** and select the **Docker Hub** tab.
+2. Find the model of your choice and select **Pull**.

-You do not have to use `Docker model run` before interacting with a specific model from a
-host process or from within a container. Model Runner transparently loads the requested model on-demand, assuming it has been
-pulled beforehand and is locally available.
+{{< /tab >}}
+{{< tab name="From the Docker CLI">}}

-#### One-time prompt
+Use the [`docker model pull` command](/reference/cli/docker/).

-```console
-$ docker model run ai/smollm2 "Hi"
-```
+{{< /tab >}}
+{{< /tabs >}}

-Output:
+## Run a model

-```text
-Hello! How can I assist you today?
-```
-
-#### Interactive chat
-
-```console
-$ docker model run ai/smollm2
-```
+Models are installed locally.

-Output:
+{{< tabs >}}
+{{< tab name="From Docker Desktop">}}

-```text
-Interactive chat mode started. Type '/bye' to exit.
-> Hi
-Hi there! It's SmolLM, AI assistant. How can I help you today?
-> /bye
-Chat session ended.
-```
+Select **Models** and select the **Local** tab and click the play button.
+The interactive chat screen opens.

-> [!TIP]
->
-> You can also use chat mode in the Docker Desktop Dashboard when you select the model in the **Models** tab.
+{{< /tab >}}
+{{< tab name="From the Docker CLI">}}

-### Push a model to Docker Hub
+Use the [`docker model run` command](/reference/cli/docker/).

-To push your model to Docker Hub:
+{{< /tab >}}
+{{< /tabs >}}

-```console
-$ docker model push <namespace>/<model>
-```
+## Troubleshooting

-### Tag a model
+To troubleshoot potential issues, display the logs:

-To specify a particular version or variant of the model:
+{{< tabs >}}
+{{< tab name="From Docker Desktop">}}

-```console
-$ docker model tag
-```
+Select **Models** and select the **Logs** tab.

-If no tag is provided, Docker defaults to `latest`.
+{{< /tab >}}
+{{< tab name="From the Docker CLI">}}

-### View the logs
+Use the [`docker model log` command](/reference/cli/docker/).

-Fetch logs from Docker Model Runner to monitor activity or debug issues.
+{{< /tab >}}
+{{< /tabs >}}

-```console
-$ docker model logs
-```
-
-The following flags are accepted:
-
-- `-f`/`--follow`: View logs with real-time streaming
-- `--no-engines`: Exclude inference engine logs from the output
-
-### Remove a model
-
-Removes a downloaded model from your system.
-
-```console
-$ docker model rm <model>
-```
-
-Output:
-
-```text
-Model <model> removed successfully
-```
-
-### Package a model
-
-Packages a GGUF file into a Docker model OCI artifact, with optional licenses, and pushes it to the specified registry.
-
-```console
-$ docker model package \
-  --gguf ./model.gguf \
-  --licenses license1.txt \
-  --licenses license2.txt \
-  --push registry.example.com/ai/custom-model
-```
-
-## Integrate the Docker Model Runner into your software development lifecycle
+## Example: Integrate Docker Model Runner into your software development lifecycle

 You can now start building your Generative AI application powered by the Docker Model Runner.

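The rewritten page above keeps `docker model pull` and `docker model run` as the two CLI entry points. A short sketch of scripting them from Python (the helper only assembles the argument vector, so it can be inspected without Docker Desktop; actually invoking it requires Model Runner to be enabled):

```python
import subprocess

def model_cmd(action: str, model: str, *extra: str) -> list[str]:
    """Assemble a `docker model` command line for the two actions this page documents."""
    if action not in {"pull", "run"}:
        raise ValueError(f"unsupported action: {action}")
    return ["docker", "model", action, model, *extra]

def run_prompt(model: str, prompt: str) -> str:
    # Requires Docker Desktop with Docker Model Runner enabled.
    result = subprocess.run(
        model_cmd("run", model, prompt),
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Inspect the command that would be executed, without running Docker.
print(model_cmd("pull", "ai/smollm2"))
```

Passing the prompt as a trailing argument mirrors the one-time-prompt form (`docker model run ai/smollm2 "Hi"`) shown in the CLI docs this commit removes.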
@@ -287,7 +158,6 @@ with `/exp/vDD4.40`.
 > [!NOTE]
 > You can omit `llama.cpp` from the path. For example: `POST /engines/v1/chat/completions`.
-
 ### How do I interact through the OpenAI API?

 #### From within a container
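The hunk above names the OpenAI-compatible endpoint `POST /engines/v1/chat/completions`. A sketch of building such a request from within a container, using only the standard library; the host name `model-runner.docker.internal` is an assumption about Docker Desktop's internal DNS for containers, so adjust it to whatever the from-within-a-container section of the full page specifies:

```python
import json
import urllib.request

# Assumed base URL reachable from inside a container; not part of this diff.
BASE = "http://model-runner.docker.internal"

def chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for the chat-completions endpoint named above."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE}/engines/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("ai/smollm2", "Hi")
print(req.full_url)
# To actually send it (inside a container with Model Runner enabled):
#   resp = urllib.request.urlopen(req)
```

Only the request object is built here, so the sketch runs without a live Model Runner; sending it requires the feature to be enabled as described under "Enable Docker Model Runner".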
@@ -399,12 +269,3 @@ The Docker Model CLI currently lacks consistent support for specifying models by

 ## Share feedback

 Thanks for trying out Docker Model Runner. Give feedback or report any bugs you may find through the **Give feedback** link next to the **Enable Docker Model Runner** setting.
-
-## Disable the feature
-
-To disable Docker Model Runner:
-
-1. Open the **Settings** view in Docker Desktop.
-2. Navigate to the **Beta** tab in **Features in development**.
-3. Clear the **Enable Docker Model Runner** checkbox.
-4. Select **Apply & restart**.
