nagrom is a self-hostable discord bot designed for rigorous fact-checking against a tiered hierarchy of trusted sources.
```
# clone the repo
git clone https://github.com/microck/nagrom.git
cd nagrom

# setup venv if you want
python -m venv .venv
source .venv/bin/activate  # or .venv\Scripts\activate on windows

# install dependencies
pip install -r requirements.txt

# minimal config setup
cp config/examples/minimal.yaml config/bot.yaml
# you need to edit bot.yaml with your keys now. don't skip this.

# run it
python -m src
```

nagrom isn't a wrapper around an llm. it enforces a specific logic loop to verify facts.
- bring your own key (BYOK): supports openrouter, openai, anthropic, or generic openai-compatible endpoints.
- strict verification: uses a tiered source hierarchy. snopes ranks higher than quora, for obvious reasons.
- async architecture: built on discord.py 2.4+ and aiohttp. no blocking calls allowed here.
- structured output: the llm is forced to output json, which we parse into pretty embeds (see the sketch after this list).
- rate limiting: built-in token buckets and cooldowns so your server doesn't bankrupt you.
- flexible triggers: supports slash commands, replies, mentions, and context menus.
- database backed: keeps a history of checks in sqlite using sqlalchemy.
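
to give a feel for the structured-output step, here is a minimal sketch of turning a parsed verdict into an embed. the field names (`verdict`, `summary`, `sources`) and the colour mapping are assumptions for illustration, not nagrom's actual schema:

```python
# illustrative sketch: field names and colours are assumptions, not nagrom's schema
import json
import discord

VERDICT_COLOURS = {
    "true": discord.Colour.green(),
    "false": discord.Colour.red(),
    "mixed": discord.Colour.gold(),
    "unverifiable": discord.Colour.light_grey(),
}

def verdict_to_embed(raw: str) -> discord.Embed:
    """parse the model's json reply and render it as a discord embed."""
    data = json.loads(raw)  # raises if the model ignored the json-only instruction
    embed = discord.Embed(
        title=f"verdict: {data['verdict']}",
        description=data["summary"],
        colour=VERDICT_COLOURS.get(data["verdict"], discord.Colour.blurple()),
    )
    for source in data.get("sources", []):
        embed.add_field(name=source["name"], value=source["url"], inline=False)
    return embed
```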
nagrom acts as a logic engine. when you ask it to verify something, it goes through a pipeline:
- intent classification: figures out if you are asking for a fact check or just trying to prompt inject.
- extraction: pulls out the claims, dates, and entities.
- retrieval: looks for sources based on a trust tier (tier 1 is reuters/snopes, tier 4 is twitter). a ranking sketch follows this list.
- synthesis: compares sources against internal knowledge. external evidence wins.
- response: formats the verdict as true, false, mixed, or unverifiable.
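
to make the retrieval step concrete, here is a small, self-contained sketch of tier-based ranking. the domains and tier numbers are examples only, not nagrom's actual hierarchy:

```python
# illustrative only: example domains and tier numbers, not nagrom's real hierarchy
from urllib.parse import urlparse

TRUST_TIERS = {
    "reuters.com": 1,
    "snopes.com": 1,
    "quora.com": 3,
    "twitter.com": 4,
}
DEFAULT_TIER = 3  # unknown domains land in the middle

def rank_sources(urls: list[str]) -> list[str]:
    """sort candidate sources so the most trusted tiers get read first."""
    def tier(url: str) -> int:
        domain = urlparse(url).netloc.removeprefix("www.")
        return TRUST_TIERS.get(domain, DEFAULT_TIER)
    return sorted(urls, key=tier)

# usage: tier 1 sources come out ahead of tier 4
print(rank_sources([
    "https://twitter.com/someone/status/123",
    "https://www.snopes.com/fact-check/some-claim/",
]))
```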
note: fact-checking against live internet sources requires an llm capable of tool use or browsing. otherwise the bot relies on whatever the model knew before its training data cutoff.
this assumes you have python 3.11 or higher installed. docker instructions are further down if you prefer containers.
```
git clone https://github.com/microck/nagrom.git
cd nagrom
mkdir data
```

always use a virtual environment. installing global packages is a bad habit.
windows (powershell):

```
py -3.11 -m venv .venv
.\.venv\Scripts\Activate.ps1
```

linux / macos:

```
python3.11 -m venv .venv
source .venv/bin/activate
```

then install the dependencies:

```
pip install -r requirements.txt
```

configuration is split between config/bot.yaml for settings and config/system_prompt.txt for the brain.
create config/bot.yaml. here is a sane default configuration:
```yaml
discord_token: "${DISCORD_TOKEN}"  # loads from env var
database_url: "sqlite+aiosqlite:///./data/nagrom.db"

llm:
  default_provider: "openrouter"
  providers:
    openrouter:
      enabled: true
      api_key: "${OPENROUTER_KEY}"
      model: "google/gemini-2.5-flash-preview"
      max_tokens: 4000
      temperature: 0.0  # keep this low for facts

rate_limits:
  user_cooldown_seconds: 30
  guild_daily_limit: 100

features:
  enable_reply_detection: true
  enable_context_menu: true
```

nagrom relies on a very specific system prompt to force the llm to output json. if you mess this up, the bot will crash trying to parse the response.
ensure config/system_prompt.txt exists and contains the verification logic. see the example in config/examples/ if you lost it.
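if you're swapping models or editing the prompt, a quick offline sanity check of a raw completion can save you from that json parse crash. this helper is not part of nagrom; it's just an illustrative way to confirm the model is actually returning parseable json (some models wrap their output in a markdown fence despite the instructions):

```python
# standalone sanity check, not part of nagrom: paste a raw model reply into parse_reply()
import json

def parse_reply(raw: str) -> dict:
    cleaned = raw.strip()
    # strip a markdown code fence if the model added one anyway
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`").removeprefix("json").strip()
    return json.loads(cleaned)  # raises json.JSONDecodeError if the prompt isn't being followed

print(parse_reply('```json\n{"verdict": "true", "summary": "example"}\n```'))
```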
you can set keys directly in the yaml if you don't care about security, but using environment variables is the recommended way.
```
export DISCORD_TOKEN="your_token_here"
export OPENROUTER_KEY="your_key_here"
```

once the bot is running and invited to your server, you have four ways to fact-check.
someone posts something wrong. you reply to their message and tag the bot.
user a: google doesn't steal anyone's data without their permission.
you (replying to a): @nagrom check this.
good for settling bets in real time.
/check statement: the us gdp grew by 2.5% in 2023
just ping the bot with a statement.
@nagrom is it true that the beef industry and fashion/textiles industries use ~90x more water than data centers used for AI?
right-click a message, go to apps, and select check facts. yes, i'm too lazy to type too.
things go wrong. here is how to fix them.
| problem | likely cause | fix |
|---|---|---|
| bot ignores commands | missing scope | re-invite bot with applications.commands scope selected. |
| "interaction failed" | timeout | the llm is taking too long. try a faster model like gemini flash. |
| json parse error | bad model | your model is ignoring the system prompt. switch to a smarter model (gpt-4o, claude 3.5). |
| rate limited immediately | clock drift | check your server time. or you set the limit to 1 request per day. |
warning: do not use small local models (like 7b params) for this. they are terrible at following the strict json schema required for the verification result and will likely hallucinate the format.
o'saasy license