12 changes: 12 additions & 0 deletions _ml-commons-plugin/gpu-acceleration.md
@@ -67,6 +67,18 @@
fi
```

If you run OpenSearch natively (without Docker) using the packaged version of OpenSearch, `systemd` may block OpenSearch from accessing your GPU. To accelerate models, you need a working [CUDA Toolkit](https://developer.nvidia.com/cuda-toolkit) installation and access to the NVIDIA device under `/dev`.
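Before editing the service, you can confirm that the driver and device nodes are actually visible on the host. The following is a minimal sketch assuming a typical Linux install; `nvidia-smi` ships with the NVIDIA driver, and the device-node paths may differ on your system:

```bash
# Probe for the NVIDIA driver and device nodes before starting OpenSearch
if command -v nvidia-smi >/dev/null 2>&1; then
  GPU_STATUS="driver-present"
  nvidia-smi --query-gpu=name --format=csv,noheader || true
else
  GPU_STATUS="driver-missing"
  echo "nvidia-smi not found; install the NVIDIA driver and CUDA Toolkit first"
fi

# Device nodes that must be visible to the OpenSearch process
ls /dev/nvidia* 2>/dev/null || echo "no NVIDIA device nodes under /dev yet"
```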

To allow OpenSearch to use the GPU, create a `systemd` override for the service by running `systemctl edit opensearch.service` and adding the following configuration:

```ini
[Service]
DevicePolicy=auto
```

Then restart the service, for example with `sudo systemctl restart opensearch.service`, so the change takes effect.
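After saving the override, you can check that it landed on disk. This is a hedged sketch that assumes the default drop-in path written by `systemctl edit` (`/etc/systemd/system/opensearch.service.d/override.conf`); your systemd configuration may place it elsewhere:

```bash
# Confirm the drop-in written by 'systemctl edit' contains the override
OVERRIDE=/etc/systemd/system/opensearch.service.d/override.conf
if [ -f "$OVERRIDE" ] && grep -q '^DevicePolicy=auto' "$OVERRIDE"; then
  CHECK="override-present"
else
  CHECK="override-missing"
  echo "DevicePolicy override not found; re-run: sudo systemctl edit opensearch.service"
fi
```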


After verifying that `nvidia-uvm` exists under `/dev`, you can start the OpenSearch nodes in your cluster.
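The `nvidia-uvm` check above can be sketched as follows. Note that `/dev/nvidia-uvm` is often created lazily, so loading the kernel module explicitly is a reasonable fallback; `modprobe` requires root and is best-effort here:

```bash
# Verify /dev/nvidia-uvm exists, loading the module if it is absent
if [ -e /dev/nvidia-uvm ]; then
  UVM="present"
else
  UVM="absent"
  # Best-effort: needs root, and is a no-op on hosts without the NVIDIA driver
  sudo modprobe nvidia-uvm 2>/dev/null || \
    echo "/dev/nvidia-uvm missing; check the NVIDIA driver installation"
fi
```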

### Preparing an AWS Inferentia ML node