A Docker Compose setup to run a local ChatGPT-like application using Ollama, Ollama Web UI, Mistral NeMo, and DeepSeek R1.
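Once a stack like this is up, the chat endpoint can be exercised from Python. The sketch below is illustrative only and is not taken from the repo: it assumes the Compose stack exposes Ollama on its default port (11434) and that a tag such as `mistral-nemo` has already been pulled (e.g. `ollama pull mistral-nemo` inside the container).

```python
import json
import urllib.request

def chat(prompt: str, model: str = "mistral-nemo", host: str = "http://localhost:11434") -> str:
    """Send a single chat turn to Ollama's /api/chat endpoint and return the reply."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # request one complete JSON response instead of a stream
    }
    req = urllib.request.Request(
        f"{host}/api/chat",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["message"]["content"]

if __name__ == "__main__":
    print(chat("Summarize what Mistral NeMo is in one sentence."))
```

The same request shape works for any other pulled tag, for example `deepseek-r1`.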
A versatile CLI and Python wrapper for Mistral AI's 'Mixtral', 'Mistral', and 'NeMo' large language models. It streamlines chatbot creation and dynamic text generation.
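As an illustration of the wrapper-plus-CLI idea (not the project's actual interface), a minimal version might call Mistral's hosted chat-completions endpoint directly; the model identifier `open-mistral-nemo` and the `MISTRAL_API_KEY` environment variable are assumptions.

```python
import argparse
import json
import os
import urllib.request

API_URL = "https://api.mistral.ai/v1/chat/completions"

def complete(prompt: str, model: str = "open-mistral-nemo") -> str:
    """Send one user message to the Mistral chat API and return the assistant's reply."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Tiny Mistral chat CLI")
    parser.add_argument("prompt", help="user message to send")
    parser.add_argument("--model", default="open-mistral-nemo")
    args = parser.parse_args()
    print(complete(args.prompt, args.model))
```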
Experiments in running LLMs locally and offline from Python and Rust using Ollama and llama.cpp.
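On the llama.cpp side, a minimal offline sketch (again, not the repo's code) could use the llama-cpp-python bindings with a locally downloaded GGUF file; the file name below is a placeholder for whatever quantization is on disk.

```python
from llama_cpp import Llama

# Load the quantized model entirely offline; n_ctx caps the context window.
llm = Llama(
    model_path="./mistral-nemo-instruct-2407-q4_k_m.gguf",  # placeholder path
    n_ctx=4096,
    verbose=False,
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain quantization in one paragraph."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```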
Psych_RAG is a Retrieval Augmented Generation chatbot that answers psychology-related questions using Mistral Nemo Instruct 2407, Pinecone for vector search, LangChain for integration, FastAPI for the backend, and Gradio for the interface.
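The retrieve-then-generate flow behind such a bot can be shown without any of those dependencies; the toy keyword retriever and the `answer_question` helper below are hypothetical stand-ins for Pinecone and LangChain, not Psych_RAG's API.

```python
def retrieve(question: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question and keep the top k."""
    terms = set(question.lower().split())
    scored = sorted(documents, key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return scored[:k]

def answer_question(question: str, documents: list[str]) -> str:
    """Build a grounded prompt from the retrieved passages for the LLM to answer."""
    context = "\n".join(retrieve(question, documents))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

docs = [
    "Cognitive behavioral therapy targets unhelpful thought patterns.",
    "The hippocampus plays a central role in memory consolidation.",
]
prompt = answer_question("What brain region supports memory consolidation?", docs)
# `prompt` would then be sent to Mistral Nemo Instruct 2407, e.g. via one of the calls above.
print(prompt)
```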
The Mistral-Nemo-12B model fine-tuned for text generation using the Unsloth optimization framework, which roughly halves fine-tuning time compared to a conventional training setup.
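A heavily hedged sketch of that workflow, not the author's training script, might look like the following; the model name and the LoRA hyperparameters are assumptions chosen for illustration.

```python
from unsloth import FastLanguageModel

# Load the 12B model in 4-bit so it fits on a consumer GPU; Unsloth patches it for speed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Mistral-Nemo-Instruct-2407",  # assumed model id
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of extra weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Training would then proceed with a standard supervised fine-tuning loop
# (e.g. trl's SFTTrainer over a text dataset) before saving or merging the adapters.
```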
A custom Godot Engine 4.3 build that runs Mistral-Nemo-Instruct-2407 and SDXL fine-tunes on Windows 10 and later.