Think-on-Graph 3.0 + MACER Framework
Meta-cognitive Adaptive Chain-of-thought with Evidence-based Reasoning
Built with ❤️ by Kit4Some & sapiens.team
Ontology Reasoning System is a next-generation knowledge graph reasoning engine that goes far beyond traditional RAG (Retrieval-Augmented Generation). It implements Think-on-Graph (ToG) 3.0 with the MACER framework: a meta-cognitive reasoning pipeline that adaptively explores, validates, and synthesizes evidence from structured knowledge graphs.
| Aspect | Traditional RAG | Ontology Reasoning |
|---|---|---|
| Reasoning | Vector similarity + LLM | Meta-cognitive 4-stage pipeline |
| Query Handling | Static, single-pass | Adaptive refinement & decomposition |
| Evidence Validation | Basic relevance | 5-component scoring + contradiction detection |
| Multi-hop Questions | LLM-dependent hallucination | Explicit path tracking & bridge entity detection |
| Temporal Reasoning | Ignored | Native temporal alignment & event sequencing |
| Failure Transparency | "I don't know" | Detailed confidence classification & gap analysis |
```
┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│ Constructor │ -> │  Retriever  │ -> │  Reflector  │ -> │  Responser  │
│             │    │             │    │   (loop)    │    │             │
│   Entity    │    │ 5 Evidence  │    │ Sufficiency │    │  Synthesis  │
│ Extraction  │    │ Strategies  │    │ Assessment  │    │  & Answer   │
└─────────────┘    └─────────────┘    └──────┬──────┘    └─────────────┘
                                             │
                           EXPLORE / FOCUS / REFINE / BACKTRACK
```
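The Reflector's EXPLORE / FOCUS / REFINE / BACKTRACK choice can be sketched as a threshold policy over the sufficiency score. This is illustrative only: the thresholds, function name, and decision order here are assumptions, not the project's actual logic.

```python
# Illustrative sketch of a MACER-style action policy.
# Thresholds and ordering are assumptions for demonstration;
# the real Reflector's logic may differ.

def decide_action(sufficiency: float, iteration: int, max_iterations: int = 5) -> str:
    """Map a sufficiency score (0.0-1.0) to a control action."""
    if sufficiency >= 0.7:            # enough evidence: stop and answer
        return "CONCLUDE"
    if iteration >= max_iterations:   # budget exhausted: answer with caveats
        return "CONCLUDE"
    if sufficiency >= 0.5:            # close: narrow in on the best entities
        return "FOCUS"
    if sufficiency >= 0.3:            # partial: rephrase/decompose the query
        return "REFINE"
    if iteration > 1:                 # stuck on a weak branch: step back
        return "BACKTRACK"
    return "EXPLORE"                  # cold start: widen the search

print(decide_action(0.8, 1))  # CONCLUDE
print(decide_action(0.4, 2))  # REFINE
```

The key property the real system shares with this sketch is that every iteration terminates in a concrete action, so the loop can never stall without a decision.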
- Vector Search: Semantic similarity on entity/chunk embeddings
- Graph Traversal: Multi-hop structural exploration
- Community Summaries: High-level contextual retrieval
- Text2Cypher: Natural language to Cypher with self-healing
- Hybrid Mode: Intelligent combination of all strategies
- Entity Overlap (35%): Jaccard similarity matching
- Relationship Match (25%): Graph structure alignment
- Temporal Alignment (20%): Date/time context validation
- Answer Presence (10%): Direct answer detection
- Negative Evidence (10%): Contradiction & negation detection
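A minimal sketch of how these five weighted components could combine into a single evidence score. The weights come from the list above; the component scorers here (including the Jaccard helper) are simplified stand-ins, not the project's actual implementations.

```python
# Hypothetical sketch of the 5-component evidence score.
# Weights mirror the percentages listed above; component values
# are assumed to already be normalized to [0, 1].

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity, as used for entity overlap."""
    return len(a & b) / len(a | b) if a | b else 0.0

WEIGHTS = {
    "entity_overlap": 0.35,
    "relationship_match": 0.25,
    "temporal_alignment": 0.20,
    "answer_presence": 0.10,
    "negative_evidence": 0.10,
}

def score_evidence(components: dict) -> float:
    """Weighted sum of the five component scores."""
    return sum(WEIGHTS[name] * components.get(name, 0.0) for name in WEIGHTS)

components = {
    "entity_overlap": jaccard({"acme", "seoul"}, {"acme", "busan"}),  # 1/3
    "relationship_match": 0.5,
    "temporal_alignment": 1.0,
    "answer_presence": 1.0,
    "negative_evidence": 0.0,  # no contradiction detected
}
print(round(score_evidence(components), 4))  # 0.5417
```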
- Multilingual: Full Korean/English support with optimized fuzzy matching
- LLM Failover: Automatic cascade (OpenAI → Anthropic → Azure → Ollama)
- Incremental Updates: Delta-based graph modifications with change tracking
- Ontology Schema: Entity type inheritance, predicate cardinality, domain profiles
- SSE Streaming: Real-time progress for long-running operations
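The failover cascade can be sketched as a simple ordered fallback. This is illustrative: the provider callables below are placeholders, and the real client wires in actual SDK clients with retry and timeout handling.

```python
# Minimal sketch of an ordered provider failover cascade
# (e.g. OpenAI -> Anthropic -> Azure -> Ollama). Providers are
# modeled as plain callables for illustration.

from typing import Callable

def call_with_failover(
    prompt: str,
    providers: list[tuple[str, Callable[[str], str]]],
) -> tuple[str, str]:
    """Try each provider in order; return (provider_name, response)."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # on any failure, fall through to the next
            errors.append(f"{name}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

def flaky(prompt: str) -> str:
    raise TimeoutError("rate limited")

def stable(prompt: str) -> str:
    return f"answer to: {prompt}"

print(call_with_failover("hi", [("openai", flaky), ("anthropic", stable)]))
```

Catching a broad `Exception` is deliberate in a cascade like this: any provider error, not just timeouts, should trigger the next provider rather than surface to the caller.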
```
User Query
    │
    ▼
┌────────────────────────────────────────────────────────────────┐
│                     INTENT CLASSIFICATION                      │
│          (KNOWLEDGE | GREETING | SMALL_TALK | SYSTEM)          │
└────────────────────────────────────────────────────────────────┘
    │ KNOWLEDGE
    ▼
┌────────────────────────────────────────────────────────────────┐
│                          CONSTRUCTOR                           │
│ • Extract topic entities (multilingual NLP)                    │
│ • Vector + Full-text entity retrieval                          │
│ • Build seed subgraph with 1-3 hop neighbors                   │
│ • Detect bridge entities for multi-hop questions               │
└────────────────────────────────────────────────────────────────┘
    │
    ▼
┌────────────────────────────────────────────────────────────────┐
│                           RETRIEVER                            │
│ • Execute 5 evidence collection strategies                     │
│ • Rank evidence with 5-component scoring                       │
│ • Track evidence chains for provenance                         │
└────────────────────────────────────────────────────────────────┘
    │
    ▼
┌────────────────────────────────────────────────────────────────┐
│        REFLECTOR (Meta-cognitive Core)  ◄──────────────────┐   │
│ • Assess sufficiency (0.0 - 1.0)                           │   │
│ • Evaluate: Completeness, Coverage, Consistency            │   │
│ • Decide: EXPLORE | FOCUS | REFINE | BACKTRACK | CONCLUDE  │   │
│ • Evolve query if needed ──────────────────────────────────┘   │
└────────────────────────────────────────────────────────────────┘
    │ CONCLUDE
    ▼
┌────────────────────────────────────────────────────────────────┐
│                           RESPONSER                            │
│ • Synthesize evidence into facts/inferences                    │
│ • Generate natural language answer                             │
│ • Provide confidence: CONFIDENT | PROBABLE | UNCERTAIN         │
│ • Include reasoning explanation & evidence attribution         │
└────────────────────────────────────────────────────────────────┘
    │
    ▼
Final Answer with Confidence + Explanation + Sources
```
```
Documents (JSON, PDF, MD, CSV, XML, YAML, HTML, DOCX)
    │
    ▼
┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐
│  Ingest  │ → │  Chunk   │ → │ Extract  │ → │  Embed   │ → │   Load   │
│          │   │          │   │ Entities │   │          │   │ to Neo4j │
│ Loaders  │   │  Smart   │   │ Relations│   │ 1536-dim │   │   Bulk   │
│ Encoding │   │ Overlap  │   │ LLM-based│   │ Vectors  │   │  Upsert  │
└──────────┘   └──────────┘   └──────────┘   └──────────┘   └──────────┘
```
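The chunking stage's overlap behavior can be sketched as a sliding window over tokens. This is a simplified illustration; the project's actual chunker is described as smarter (sentence-aware, encoding-detecting), and the function name and defaults here are assumptions.

```python
# Illustrative sliding-window chunker with overlap, roughly matching
# the "Smart Overlap" stage above. Operates on pre-tokenized words.

def chunk_text(words: list[str], size: int = 200, overlap: int = 50) -> list[list[str]]:
    """Split a word list into windows of `size` that overlap by `overlap`."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap  # how far each window advances
    return [words[i:i + size] for i in range(0, max(len(words) - overlap, 1), step)]

words = [f"w{i}" for i in range(10)]
print(chunk_text(words, size=4, overlap=2))
# 4 chunks, each sharing its last 2 words with the next chunk
```

Overlap matters here because an entity mention or relation that straddles a chunk boundary would otherwise be invisible to the LLM-based extractor, which only ever sees one chunk at a time.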
```bash
# Clone repository
git clone https://github.com/your-org/ontology-reasoning.git
cd ontology-reasoning

# Configure environment
cp .env.example .env
# Edit .env with your API keys

# Start all services
docker-compose up -d

# Verify
curl http://localhost:8000/api/health
```

Access Points:
- API: http://localhost:8000
- API Docs: http://localhost:8000/docs
- Neo4j Browser: http://localhost:7474
- LangGraph Studio: http://localhost:8123
```bash
# Python 3.11+ required
pip install -e ".[dev]"

# Start Neo4j separately (Docker or native)
docker run -d --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/password123 \
  neo4j:5.15

# Configure and run
cp .env.example .env
uvicorn src.api.main:app --reload
```

```bash
cd desktop
npm install
npm run dev     # Development mode
npm run build   # Production build
```

Features:
- Session-based chat history with auto-save
- Expandable reasoning process view
- Real-time streaming with step-by-step updates
- Multi-session management (create, switch, delete)
- Dark/Light theme support
```
# Synchronous reasoning query
POST /api/query
{
  "query": "What is the relationship between Entity A and Entity B?",
  "max_iterations": 5
}

# SSE streaming with step-by-step updates
POST /api/query/stream
```

```
# Upload files (up to 1GB)
POST /api/ingest
Content-Type: multipart/form-data

# Stream ingestion progress
GET /api/ingest/{job_id}/stream
```

```
# Natural language to Cypher
POST /api/text2cypher
{
  "query": "Find all employees who work in Seoul",
  "execute": true
}

# Raw Cypher execution
POST /api/cypher
{
  "query": "MATCH (n:Entity) RETURN n LIMIT 10"
}
```

```
GET /api/health     # Health check
GET /api/stats      # Graph statistics
GET /api/schema     # Neo4j schema
GET /api/ontology   # Export ontology (JSON-LD, Turtle, JSON)
```

Full API documentation is available at /docs (Swagger UI).
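As a quick client-side illustration, the POST /api/query body shown above can be built and sent with the Python standard library. This is a sketch, not an official client; the base URL matches the local quick-start setup.

```python
# Sketch of a request to POST /api/query, matching the body shown
# above. Uses only the stdlib; swap in your HTTP client of choice.

import json
import urllib.request

def build_query_request(base_url: str, query: str, max_iterations: int = 5) -> urllib.request.Request:
    """Construct a JSON POST request for the reasoning endpoint."""
    payload = {"query": query, "max_iterations": max_iterations}
    return urllib.request.Request(
        url=f"{base_url}/api/query",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request(
    "http://localhost:8000",
    "What is the relationship between Entity A and Entity B?",
)
print(req.full_url, req.get_method())
# To actually send it (requires the API to be running):
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```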
Create a `.env` file from `.env.example`:
```bash
# Neo4j Database
NEO4J_URI=bolt://localhost:7687
NEO4J_USERNAME=neo4j
NEO4J_PASSWORD=your-password

# LLM Provider (openai | anthropic | azure | local)
LLM_PROVIDER=openai
LLM_OPENAI_API_KEY=sk-...
LLM_ANTHROPIC_API_KEY=sk-ant-...   # Optional fallback

# Model Selection
LLM_REASONING_MODEL=gpt-4o-mini
LLM_EMBEDDING_MODEL=text-embedding-3-small

# Deterministic Response Settings
LLM_TEMPERATURE=0.0
LLM_SEED=42
LLM_TOP_P=1.0

# Reasoning Parameters
TOG_MAX_REASONING_DEPTH=5
TOG_CONFIDENCE_THRESHOLD=0.7
```

```bash
# Install with dev dependencies
make dev

# Run tests
make test          # All tests
make test-unit     # Unit tests only
make test-cov      # With coverage report

# Code quality
make lint-fix      # Lint with auto-fix
make format        # Format code
make typecheck     # Type checking
make check         # All checks

# Docker operations
make docker-up     # Start services
make docker-down   # Stop services
make db-setup      # Initialize Neo4j schema
make health        # Health check
```

```
src/
├── api/                      # FastAPI endpoints
├── config/                   # Pydantic settings
├── graph/                    # Neo4j client & operations
├── llm/                      # LLM provider with failover
├── sddi/                     # Data ingestion pipeline
│   ├── document_loaders/
│   ├── extractors/
│   └── loaders/
├── tog/                      # MACER reasoning agents
│   ├── agents/               # Constructor, Retriever, Reflector, Responser
│   ├── temporal_reasoning.py
│   └── negative_evidence.py
├── text2cypher/              # NL to Cypher generation
├── validation/               # Pipeline validation framework
└── workflow/                 # LangGraph orchestration
desktop/                      # Electron desktop app
tests/                        # Unit & integration tests
```
```python
from src.tog.temporal_reasoning import compute_enhanced_temporal_alignment

result = compute_enhanced_temporal_alignment(
    query="What happened before 2023?",
    evidence_text="The event occurred in January 2022...",
)
# Returns: {score, alignment_type, temporal_match, temporal_consistency}
```

```python
from src.tog.negative_evidence import analyze_evidence_polarity

polarity = analyze_evidence_polarity(
    evidence="The company did NOT acquire the startup.",
    query="Did the company acquire the startup?",
)
# Returns: NEGATIVE with contradiction score
```

```python
from src.sddi.pipeline import SDDIPipeline

pipeline = SDDIPipeline(
    llm=llm,
    embeddings=embeddings,
    use_incremental_loading=True,
)

# Get change report after ingestion
delta = pipeline.get_last_delta_report()
# DeltaReport: new_entities, modified_entities, unchanged, deleted
```

We welcome contributions! Please see our contributing guidelines:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit changes (`git commit -m 'Add amazing feature'`)
4. Push to branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
- Python 3.11+ with type hints
- Ruff for linting and formatting
- MyPy for type checking
- Pytest for testing
**Created by Kit4Some**
in collaboration with sapiens.team
Building the future of intelligent systems
We believe in the power of open source to accelerate innovation. Ontology Reasoning System is our contribution to the AI community: a production-ready framework for building knowledge-intensive applications that reason, not just retrieve.
- Transparency: Every reasoning step is traceable
- Reliability: Confidence scores you can trust
- Extensibility: Modular architecture for customization
- Community: Built together, better together
- Website: https://sapiens.team
- Email: gkemqk7@gmail.com
- Discussions: GitHub Issues & Discussions
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
Star ⭐ this repository if you find it useful!
Made with 🧠 by Kit4Some & sapiens.team