🧠 Ollama4Truth

Ollama4Truth is an open-source framework for misinformation detection and fact-checking using Large Language Models (LLMs) through the Ollama ecosystem.

This project aims to explore, evaluate, and democratize the use of open-source LLMs for identifying and mitigating online disinformation, particularly in Portuguese (PT-BR) and English contexts.


🎯 Objectives

  • Develop a modular and reproducible pipeline for fact-checking and misinformation identification.
  • Evaluate open LLMs’ capabilities in detecting false or misleading content.
  • Foster open collaboration and transparent evaluation within the AI research community.

⚙️ Configuration

  1. Install Ollama and download a model:

     ```bash
     curl -fsSL https://ollama.com/install.sh | sh
     ollama run gemma3:1b
     ```

  2. Install the Python dependencies (Python 3.10+):

     ```bash
     pip install -r requirements.txt
     ```

  3. Create a `.env` file with the model name and your Google Custom Search credentials:

     ```env
     OLLAMA_MODEL=gemma3:1b
     GOOGLE_API_KEY=YOUR_GOOGLE_API_KEY
     GOOGLE_CSE_ID=YOUR_CSE_ID
     ```
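How the framework loads these variables is not shown here (a library such as python-dotenv is the usual choice); as a dependency-free sketch of the same idea, the hypothetical `load_env` helper below reads `KEY=VALUE` lines into the process environment:

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: put KEY=VALUE lines into os.environ.

    Existing environment variables take precedence (setdefault),
    so shell-level overrides still work. Illustrative only; not
    part of the Ollama4Truth codebase.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            # skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())
```

After calling `load_env()`, the model name is available as `os.environ["OLLAMA_MODEL"]`.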

🚀 How to Run

  1. Start the API server:

     ```bash
     uvicorn api:app --reload
     ```

  2. Send a POST request to `http://localhost:8000/analyze` with the claim to verify:

     ```json
     {
       "claim": "O café ajuda a melhorar a memória de longo prazo."
     }
     ```

     (The example claim is Portuguese for "Coffee helps improve long-term memory.")
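A claim can also be submitted from Python using only the standard library. The response schema depends on the server, so this sketch simply prints whatever JSON comes back; `build_request` is an illustrative helper, not part of the project:

```python
import json
import urllib.request

API_URL = "http://localhost:8000/analyze"  # the local uvicorn server

def build_request(claim: str) -> urllib.request.Request:
    """Build a JSON POST request for the /analyze endpoint."""
    payload = json.dumps({"claim": claim}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("O café ajuda a melhorar a memória de longo prazo.")
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read()))
```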

🖥️ Technologies

  • 🦙 Ollama — local LLM inference
  • 🔍 Google Search API — open evidence retrieval
  • 🤗 Transformers — tokenization and model loading
  • 🧮 PyTorch — inference backend
  • 📄 BM25 / FAISS — ranking and document retrieval
  • 🧰 Python (3.10+)
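Of the retrieval components listed above, BM25 is simple enough to sketch in a few lines. This is a generic implementation of the standard BM25 scoring formula, not the project's actual retrieval code; evidence snippets returned by the search step could be tokenized and ranked this way before being passed to the LLM:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each tokenized document in `docs` against a tokenized query.

    Uses the standard BM25 formula with the common non-negative IDF
    variant: idf = log(1 + (n - df + 0.5) / (df + 0.5)).
    """
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # document frequency: in how many docs does each term appear?
    df = Counter()
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        dl = len(d)
        score = 0.0
        for term in query:
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            f = tf[term]
            score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * dl / avgdl))
        scores.append(score)
    return scores

if __name__ == "__main__":
    evidence = [
        ["café", "melhora", "a", "memória"],
        ["previsão", "de", "chuva"],
        ["café", "é", "uma", "bebida"],
    ]
    print(bm25_scores(["café", "memória"], evidence))
```

Documents sharing more query terms score higher; FAISS would play a similar role for dense (embedding-based) retrieval.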

🪶 License

Released under the MIT License — free for research and open-source use.
