Open-source assistant-style large language models that run locally on your CPU
🦜️🔗 Official LangChain Backend
GPT4All is made possible by our compute partner Paperspace.
Run on an M1 macOS Device (not sped up!)
GPT4All is an ecosystem to train and deploy powerful and customized large language models that run locally on consumer-grade CPUs. Note that your CPU needs to support AVX or AVX2 instructions.
Learn more in the documentation.
The goal is simple - be the best instruction tuned assistant-style language model that any person or enterprise can freely use, distribute and build on.
A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software. Nomic AI supports and maintains this software ecosystem to enforce quality and security alongside spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.
Run any GPT4All model natively on your home desktop with the auto-updating desktop chat client. See GPT4All Website for a full list of open-source models you can run with this powerful desktop application.
Direct Installer Links:
Find the most up-to-date information on the GPT4All Website
- Follow the visual instructions on the chat client build_and_run page
- 🐍 Official Python Bindings
- 💻 Official Typescript Bindings
- 💻 Official GoLang Bindings
- 💻 Official C# Bindings
- 💻 Official Java Bindings
GPT4All (https://gpt4all.io/index.html) is an open-source project containing a number of pre-trained Large Language Models (LLMs) that you can run locally on consumer-grade CPUs. GPT4All contains a number of models that range from 3GB to 8GB. What's more exciting? It is free!
While the performance of GPT4All may not be on par with the current ChatGPT, contributions from the open-source community give it significant potential for further development and enhancement. It may eventually compete with commercial models such as OpenAI's ChatGPT.
- venv: C:\Users\zjc10\Desktop\Projects\envs\gpt_all\Scripts\activate.ps1
- cmake local: C:\Program Files\CMake\
- mingw64: c:\msys64\mingw64\bin
- project folder: C:\Users\zjc10\Desktop\Projects\code\gpt4_all
This package contains a set of Python bindings around the llmodel C-API.
- Package on PyPI: https://pypi.org/project/gpt4all/
- Documentation: https://docs.gpt4all.io/gpt4all_python.html
- Download the MSYS2 installer (which provides the MinGW64 toolchain):
  wget https://github.com/msys2/msys2-installer/releases/download/2023-05-26/msys2-x86_64-20230526.exe
- Run the installer and install MSYS2 to the default location (c:\msys64).
- When complete, ensure the Run MSYS2 now box is checked and select Finish. This will open an MSYS2 terminal window for you.
- In this terminal, install the MinGW-w64 toolchain by running the following command and enter Y when prompted to proceed:
pacman -S --needed base-devel mingw-w64-ucrt-x86_64-toolchain
- Add the path of your MinGW-w64 bin folder to the Windows PATH environment variable by following the steps below (https://code.visualstudio.com/docs/cpp/config-mingw):
- In the Windows search bar, type Settings to open your Windows Settings.
- Search for Edit environment variables for your account.
- In your User variables, select the Path variable and then select Edit.
- Select New and add the MinGW-w64 destination folder you recorded during the installation process to the list. If you used the default settings above, then this will be the path: C:\msys64\ucrt64\bin.
- Select OK to save the updated PATH. You will need to reopen any console windows for the new PATH location to be available.
- Verify that the updated PATH variable correctly resolves the MinGW utilities:
  gcc --version
  g++ --version
  gdb --version
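The same check can be scripted. Below is a minimal sketch using only Python's standard library to confirm the MinGW executables are reachable on PATH; the helper name `missing_tools` is illustrative, not part of any GPT4All tooling:

```python
import shutil

def missing_tools(names):
    """Return the subset of command names that cannot be found on PATH."""
    return [name for name in names if shutil.which(name) is None]

# On a correctly configured machine this should print an empty list.
print(missing_tools(["gcc", "g++", "gdb"]))
```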
1. Install cmake (https://github.com/Kitware/CMake/releases/download/v3.27.4/cmake-3.27.4-windows-x86_64.msi)
- Run the MSI installer.
- Select 'Add CMake to the system PATH for all users'.
- Install CMake to: C:\Program Files\CMake\
- Restart the terminal and test that the PATH variable was assigned correctly:
  >cmake
NOTE: If you are doing this on a Windows machine, you must build the GPT4All backend using MinGW64 compiler.
pip install gpt4all
git clone --recurse-submodules git@github.com:nomic-ai/gpt4all.git
cd gpt4all/gpt4all-backend/
mkdir build
cd build
cmake ..
cmake --build . --parallel # optionally append: --config Release
WARNING: Confirm that libllmodel.* exists in gpt4all-backend/build before proceeding.
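That check can also be scripted. This is a small sketch (the function name is illustrative) that lists any libllmodel build artifacts in a given directory, so a setup script can abort early if the build failed:

```python
from pathlib import Path

def llmodel_artifacts(build_dir):
    """Return any libllmodel.* files found in the given build directory."""
    return sorted(Path(build_dir).glob("libllmodel.*"))

# Example usage:
# if not llmodel_artifacts("gpt4all-backend/build"):
#     raise SystemExit("libllmodel.* not found -- re-run the cmake build")
```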
cd ../../gpt4all-bindings/python
pip3 install -e .
Test it out! In a Python script or console:
from gpt4all import GPT4All
model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")
output = model.generate("The capital of France is ", max_tokens=3)
print(output)
GPU Usage
from gpt4all import GPT4All
model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu') # device='amd', device='intel'
output = model.generate("The capital of France is ", max_tokens=3)
print(output)
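GPU initialization can fail on machines without a supported device, so a simple fallback pattern keeps a script usable either way. This is a sketch under that assumption; the `load_with_fallback` helper is hypothetical and not part of the gpt4all API:

```python
def load_with_fallback(loader, devices=("gpu", "cpu")):
    """Try each device in order, returning the first model that loads."""
    last_error = None
    for device in devices:
        try:
            return loader(device)
        except Exception as exc:  # a failed GPU init falls through to the next device
            last_error = exc
    raise last_error

# Usage with the gpt4all bindings:
# from gpt4all import GPT4All
# model = load_with_fallback(lambda d: GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device=d))
```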
- If you're on Windows and have compiled with a MinGW toolchain, you might run into an error like:
  FileNotFoundError: Could not find module '<...>\gpt4all-bindings\python\gpt4all\llmodel_DO_NOT_MODIFY\build\libllmodel.dll' (or one of its dependencies). Try using the full path with constructor syntax.
  The key phrase in this case is "or one of its dependencies". The Python interpreter you're using probably doesn't see the MinGW runtime dependencies. At the moment, the following three are required: libgcc_s_seh-1.dll, libstdc++-6.dll, and libwinpthread-1.dll. You should copy them from MinGW into a folder where Python will see them, preferably next to libllmodel.dll.
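Copying those runtime DLLs can be scripted. This is a minimal sketch; the helper name is illustrative, and the MinGW bin directory must be adjusted to match your installation (e.g. C:\msys64\ucrt64\bin if you used the defaults above):

```python
import shutil
from pathlib import Path

MINGW_DLLS = ["libgcc_s_seh-1.dll", "libstdc++-6.dll", "libwinpthread-1.dll"]

def copy_runtime_dlls(mingw_bin, dest_dir):
    """Copy the MinGW runtime DLLs next to libllmodel.dll so Python can load it."""
    copied = []
    for name in MINGW_DLLS:
        src = Path(mingw_bin) / name
        if src.exists():
            shutil.copy2(src, Path(dest_dir) / name)
            copied.append(name)
    return copied
```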
- Note regarding the Microsoft toolchain: compiling with MSVC is possible, but not the official way to go about it at the moment. MSVC doesn't produce DLLs with a lib prefix, which the bindings expect. You'd have to amend that yourself.
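One way to amend it is simply to rename the MSVC output so it carries the expected lib prefix. This is a sketch of that workaround; the helper name is illustrative and assumes the rename alone is sufficient for the bindings to find the DLL:

```python
from pathlib import Path

def ensure_lib_prefix(dll_path):
    """Rename e.g. llmodel.dll to libllmodel.dll; return the (possibly new) path."""
    path = Path(dll_path)
    if path.name.startswith("lib"):
        return path
    target = path.with_name("lib" + path.name)
    path.rename(target)
    return target
```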