In this demo video (in Bahasa Indonesia): Running ChatGPT locally using Docker, OpenWebUI, and the phi3.5 model
System Specification
Processor : 12th Gen Intel(R) Core(TM) i5-12450H
CPU cores : 12 @ 2496.008 MHz
GPU : NVIDIA GeForce RTX 3050 Laptop GPU, compute capability 8.6
AES-NI : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM : 7.6 GiB
Swap : 2.0 GiB
Disk : 2.0 TiB
Distro : Ubuntu 24.04.1 LTS
Kernel : 5.15.153.1-microsoft-standard-WSL2
VM Type : WSL version: 2.2.4.0
Operating system: Windows 11 - 64 Bit
Docker Engine: Docker version 27.3.1, build ce12230 + Docker Compose version v2.29.2-desktop.2
If you have an NVIDIA GPU like me and want to leverage its power to enhance the performance of your ChatGPT model, follow these steps:
- Enable WSL: Open PowerShell as an administrator and run:
wsl --install
- Set WSL 2 as the default version:
wsl --set-default-version 2
- Install Ubuntu LTS: You can find it in the Microsoft Store. Once installed, open it to complete the setup.
- Download and install NVIDIA drivers: Ensure you have the latest NVIDIA drivers that support WSL. You can download them from the NVIDIA website.
- Install the CUDA Toolkit: Follow the instructions in the CUDA Toolkit Installation Guide to set it up within your WSL environment.
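Once the driver and toolkit are in place, it's worth confirming that the GPU is actually visible inside WSL before going further. A small sketch (the `gpu_count` helper is my own, not part of the guide) that counts devices in `nvidia-smi -L` style output:

```shell
#!/bin/sh
# Hypothetical sanity check: count GPUs listed by `nvidia-smi -L`.
# The output is passed in as an argument so the helper is easy to test.
gpu_count() {
  printf '%s\n' "$1" | grep -c '^GPU '
}

# Only query the real driver when nvidia-smi is on the PATH:
if command -v nvidia-smi >/dev/null 2>&1; then
  echo "GPUs visible in WSL: $(gpu_count "$(nvidia-smi -L)")"
fi
```

On the machine described above, this should report one GPU (the RTX 3050).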
- Install Docker: Run the official convenience script in your WSL terminal:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
- Start Docker:
sudo service docker start
- Add your user to the docker group (to avoid prefixing every Docker command with sudo):
sudo usermod -aG docker $USER
After running this command, log out and log back into your WSL terminal.
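To confirm the group change took effect after logging back in, you can check the current user's group list. A small sketch (the `in_group` helper is mine, not from the guide; it accepts the group list as an argument so it can be tested in isolation):

```shell
#!/bin/sh
# Hypothetical check: is a group name present in a space-separated
# group list? Defaults to the current user's groups from `id -nG`.
in_group() {
  groups_list=${2:-$(id -nG)}
  # Split on spaces, one group per line, then match the name exactly.
  printf '%s\n' $groups_list | grep -qxF "$1"
}

if in_group docker; then
  echo "docker group active -- no sudo needed"
else
  echo "not in docker group yet -- log out and back in, or run: newgrp docker"
fi
```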
- Set up the NVIDIA Container Toolkit: 1.1 Configure the production repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
  sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
  sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
1.2 Update the package list and install the NVIDIA Container Toolkit packages:
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
These commands follow the NVIDIA Container Toolkit installation guide, which allows Docker to use your NVIDIA GPU; refer to it for details.
- Configure the NVIDIA Container Toolkit runtime for Docker:
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
- Restart your system.
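Before launching the stack, a quick preflight check can save a failed `docker-compose up`. A minimal sketch (the `require` helper is an assumption of mine, not part of the guide) that verifies the tools the next step relies on are on the PATH:

```shell
#!/bin/sh
# Hypothetical preflight check: report whether each required tool exists.
require() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

require docker
require docker-compose
require nvidia-smi   # confirms the GPU driver is visible inside WSL
```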
Now, you can build and run your container, and it will utilize your NVIDIA GPU 💪
docker-compose up
Access the OpenWebUI as described in the previous steps, and you should now have a performance boost from your NVIDIA GPU.
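For reference, `docker-compose up` needs a compose file in the working directory. A minimal docker-compose.yml sketch for the common Ollama + Open WebUI pairing — the service names, port mapping, and volume names here are my assumptions, not the guide's exact file — might look like:

```yaml
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama

volumes:
  ollama:
  open-webui:
```

The `deploy.resources.reservations.devices` block is what hands the NVIDIA GPU to the Ollama container; without it the model runs on CPU only.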
Please note: the first time you open Open WebUI, you need to create a local account by clicking Sign Up.
- Privacy: Keeping your data local means it’s not shared with third-party servers.
- Control: You can modify and configure the model as needed.
- Performance: Utilizing a GPU can significantly enhance performance, especially for larger models.
Running a ChatGPT-style assistant locally using Docker and OpenWebUI is a straightforward process that provides numerous benefits. With the optional NVIDIA GPU setup on WSL 2 and Ubuntu, you can take your local AI capabilities to the next level.
Feel free to dive into the configuration files and experiment with different settings to optimize your local environment 😁