Ollama4Truth is an open-source framework for misinformation detection and fact-checking using Large Language Models (LLMs) through the Ollama ecosystem.
This project aims to explore, evaluate, and democratize the use of open-source LLMs for identifying and mitigating online disinformation — particularly in Portuguese (PT-BR) and English contexts.
- Develop a modular and reproducible pipeline for fact-checking and misinformation identification.
- Evaluate open LLMs’ capabilities in detecting false or misleading content.
- Foster open collaboration and transparent evaluation within the AI research community.
- Install Ollama and pull a model (the examples below use `gemma3:1b`):

  ```bash
  curl -fsSL https://ollama.com/install.sh | sh
  ollama run gemma3:1b
  ```
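  To confirm the model is reachable before wiring up the pipeline, you can query Ollama's local REST API (it listens on port 11434 by default). A minimal sketch, assuming the `requests` package is installed; the prompt is only an illustration:

  ```python
  import requests

  # Ask the locally running Ollama server for a single, non-streamed completion.
  resp = requests.post(
      "http://localhost:11434/api/generate",
      json={
          "model": "gemma3:1b",
          "prompt": "Reply with the single word: ready",
          "stream": False,  # return one JSON object instead of a token stream
      },
      timeout=60,
  )
  resp.raise_for_status()
  print(resp.json()["response"])
  ```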
- Install the Python dependencies (Python 3.10+):

  ```bash
  pip install -r requirements.txt
  ```
- Create a `.env` file:

  ```env
  OLLAMA_MODEL=gemma3:1b
  GOOGLE_API_KEY=YOUR_GOOGLE_API_KEY
  GOOGLE_CSE_ID=YOUR_CSE_ID
  ```
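  The pipeline can load these values with `python-dotenv`, a common pattern in FastAPI projects. A minimal sketch, assuming `python-dotenv` is among the requirements:

  ```python
  import os

  from dotenv import load_dotenv

  # Read key/value pairs from .env into the process environment.
  load_dotenv()

  OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "gemma3:1b")  # fall back to the default model
  GOOGLE_API_KEY = os.environ["GOOGLE_API_KEY"]  # fail fast if credentials are missing
  GOOGLE_CSE_ID = os.environ["GOOGLE_CSE_ID"]
  ```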
- Start the API server:

  ```bash
  uvicorn api:app --reload
  ```
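  The repository's `api.py` is the source of truth for this app. As a rough sketch only, a FastAPI app exposing such a route could look like the following; `check_claim` here is a hypothetical stub standing in for the actual fact-checking pipeline:

  ```python
  from fastapi import FastAPI
  from pydantic import BaseModel

  app = FastAPI()

  class AnalyzeRequest(BaseModel):
      claim: str  # the statement to fact-check

  def check_claim(claim: str) -> str:
      # Hypothetical placeholder: the real pipeline would retrieve evidence,
      # rank it, and prompt the Ollama model for a verdict.
      return "unverified"

  @app.post("/analyze")
  def analyze(req: AnalyzeRequest) -> dict:
      return {"claim": req.claim, "verdict": check_claim(req.claim)}
  ```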
- Send a POST request to http://localhost:8000/analyze with the claim to verify (the PT-BR example below translates to "Coffee helps improve long-term memory."):

  ```json
  {
    "claim": "O café ajuda a melhorar a memória de longo prazo."
  }
  ```
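  Equivalently, from Python (assuming `requests` is installed; the exact response schema depends on the server implementation):

  ```python
  import requests

  # Submit a claim to the running Ollama4Truth server for analysis.
  resp = requests.post(
      "http://localhost:8000/analyze",
      json={"claim": "O café ajuda a melhorar a memória de longo prazo."},
      timeout=120,  # evidence retrieval plus LLM inference can take a while
  )
  resp.raise_for_status()
  print(resp.json())
  ```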
- 🦙 Ollama — local LLM inference
- 🔍 Google Search API — open evidence retrieval
- 🤗 Transformers — tokenization and model loading
- 🧮 PyTorch — inference backend
- 📄 BM25 / FAISS — ranking and document retrieval (see the sketch after this list)
- 🧰 Python (3.10+)
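As a rough illustration of the BM25 half of retrieval, here is a minimal sketch assuming the `rank_bm25` package; the repository may use a different implementation, and the corpus below is toy data:

```python
from rank_bm25 import BM25Okapi

# Toy evidence corpus; in the pipeline these would be retrieved search snippets.
corpus = [
    "Caffeine may improve short-term alertness and attention.",
    "Studies on coffee and long-term memory report mixed results.",
    "FAISS indexes dense vectors for fast similarity search.",
]
tokenized_corpus = [doc.lower().split() for doc in corpus]

bm25 = BM25Okapi(tokenized_corpus)

# Rank the corpus against a claim-derived query and keep the best matches.
query = "does coffee improve long-term memory".lower().split()
top_docs = bm25.get_top_n(query, corpus, n=2)
print(top_docs)
```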
Released under the MIT License — free for research and open-source use.