An AI-powered research platform that transforms how you conduct research
Features • Quick Start • Documentation • Contributing
Introlix is an intelligent research platform that combines the power of AI agents with advanced search capabilities to streamline your research workflow. Whether you're conducting academic research, market analysis, or deep investigations, Introlix provides a comprehensive suite of tools to help you gather, organize, and synthesize information efficiently.
- AI-Powered Research Desk: Multi-stage AI-guided research workflow with context gathering, planning, and exploration
- Intelligent Chat Interface: Conversational AI with internet search integration for real-time information
- AI Document Editor: Edit and enhance your research documents with AI assistance
- Advanced Search Integration: Powered by SearXNG for privacy-focused web searches
- Knowledge Management: Vector-based storage with Pinecone for semantic search
- Multi-Agent System: Specialized agents for different research tasks (Context, Planner, Explorer, Editor, Writer)
The Research Desk guides you through a comprehensive research process:
- Initial Setup: Create a research desk with your topic
- Context Agent: AI asks clarifying questions to understand your research scope
- Planner Agent: Generates a structured research plan with topics and keywords
- Explorer Agent: Automatically searches the internet and gathers relevant information
- Document Editing: AI-assisted writing and editing of your research document
- Interactive Chat: Ask questions about your research and get AI-powered answers
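The staged workflow above can be sketched as a simple pipeline. This is a hypothetical illustration of the stage ordering only; the `Desk` class and `run_desk` function are not Introlix's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Desk:
    """Minimal stand-in for a research desk's state (illustrative only)."""
    topic: str
    context: list[str] = field(default_factory=list)
    plan: list[str] = field(default_factory=list)
    findings: list[str] = field(default_factory=list)

def run_desk(topic: str, answers: list[str]) -> Desk:
    """Run the stages in order: context -> plan -> explore."""
    desk = Desk(topic)
    # Context stage: clarifying answers narrow the research scope.
    desk.context = answers
    # Planner stage: derive search topics/keywords from topic + context.
    desk.plan = [f"{topic}: {a}" for a in answers]
    # Explorer stage: each plan item becomes a (stubbed) search result.
    desk.findings = [f"results for '{p}'" for p in desk.plan]
    return desk

desk = run_desk("solar batteries", ["cost trends", "grid storage"])
print(desk.plan)  # one plan item per clarifying answer
```

In the real system each stage is handled by a dedicated agent and the Explorer performs live web searches; the sketch only shows how each stage's output feeds the next.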
- Real-time conversational AI with streaming responses
- Internet search integration for up-to-date information
- Conversation history persistence
- Support for multiple LLM providers (OpenRouter, Google AI Studio)
- Rich text editor powered by Lexical
- AI-assisted editing and content generation
- Workspace organization for multiple projects
- Auto-save and version tracking
- Document Formatting: Export research as blog posts, research papers, or custom formats
- Reference Management: Automatic citation generation with inline references [1], [2], etc.
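Inline numbering of this kind can be reproduced with a small helper. This is a sketch, not Introlix's implementation; `make_citer` and its return values are hypothetical names:

```python
def make_citer():
    """Return a cite() function that assigns stable [n] numbers per source URL."""
    sources: dict[str, int] = {}

    def cite(url: str) -> str:
        # First mention gets the next number; repeat mentions reuse it.
        n = sources.setdefault(url, len(sources) + 1)
        return f"[{n}]"

    return cite, sources

cite, sources = make_citer()
text = (
    f"Capacity doubled {cite('https://a.example')} while costs fell "
    f"{cite('https://b.example')}; see also {cite('https://a.example')}."
)
print(text)     # repeated citations of the same URL reuse [1]
print(sources)  # url -> number map, usable to render a reference list
```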
- Python: 3.11 or higher
- Node.js: 18 or higher
- pnpm: Package manager for frontend
- MongoDB: Database for storing workspaces and research data
- Pinecone: Vector database for semantic search
- SearXNG: Self-hosted search engine (see SearXNG Setup)
- Clone the repository

```bash
git clone https://github.com/introlix/introlix.git
cd introlix
```

- Set up environment variables

```bash
cp .env.example .env
```

Edit `.env` and add your API keys:
```bash
# Required: Choose one LLM provider
OPEN_ROUTER_KEY=your_openrouter_api_key_here
# OR
GEMINI_API_KEY=your_gemini_api_key_here  # From Google AI Studio

# Required: Search engine
SEARCHXNG_HOST=http://localhost:8080/search

# Required: Vector database
PINECONE_KEY=your_pinecone_api_key_here

# Required: Database
MONGO_URI=mongodb://localhost:27017/introlix
```

- Install Python dependencies
```bash
# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -e .
```

- Authenticate with Hugging Face (required for LLM model downloads)
```bash
pip install huggingface_hub

# Login to Hugging Face
hf auth login

# Or set token directly
export HUGGING_FACE_HUB_TOKEN=your_hf_token_here
```

Note: Get your Hugging Face token from https://huggingface.co/settings/tokens
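Once `.env` is filled in, a quick sanity check can catch missing keys before starting the backend. This is a sketch under the variable names listed above; `check_env` is not part of Introlix:

```python
import os

REQUIRED = ["SEARCHXNG_HOST", "PINECONE_KEY", "MONGO_URI"]
LLM_KEYS = ["OPEN_ROUTER_KEY", "GEMINI_API_KEY"]  # at least one is needed

def check_env(env: dict[str, str]) -> list[str]:
    """Return a list of problems; an empty list means the config looks complete."""
    problems = [f"missing {key}" for key in REQUIRED if not env.get(key)]
    if not any(env.get(key) for key in LLM_KEYS):
        problems.append("set OPEN_ROUTER_KEY or GEMINI_API_KEY")
    return problems

print(check_env(dict(os.environ)))  # [] when everything is set
```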
- Install frontend dependencies

```bash
cd web
pnpm install
```

- Start the services
Terminal 1 - Backend:

```bash
# From project root
source .venv/bin/activate
uvicorn app:app --reload --port 8000
```

Terminal 2 - Frontend:

```bash
cd web
pnpm dev
```

- Access the application
- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
Edit `introlix/config.py` to choose your LLM provider:

```python
# Choose: "openrouter" or "google_ai_studio"
CLOUD_PROVIDER = "google_ai_studio"

# Set default model
if CLOUD_PROVIDER == "openrouter":
    AUTO_MODEL = "qwen/qwen3-235b-a22b:free"
elif CLOUD_PROVIDER == "google_ai_studio":
    AUTO_MODEL = "gemini-2.5-flash"
```

SearXNG is a privacy-respecting metasearch engine. For Introlix to work properly, you need to configure it to return JSON results.
To install SearXNG, see Installation guide.
Modify `searxng/settings.yml`:

```yaml
# SearXNG settings
general:
  instance_name: "SearXNG"

search:
  safe_search: 0
  autocomplete: ""
  formats:
    - html
    - json  # Important: enable JSON format

server:
  port: 8888
  bind_address: "127.0.0.1"
```

Note: The snippet above is only an example. Do not replace your entire `settings.yml`; modify it only to enable the JSON format.

For the full template, see searxng/blob/main/searx/settings.yml.
- Verify JSON output

Test that JSON format works:

```bash
curl "http://localhost:8888/search?q=test&format=json"
```

You should receive a JSON response with search results.
Important: Make sure to enable JSON format in your SearXNG settings as shown above. Introlix requires JSON responses from SearXNG to function properly.
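The JSON response can then be consumed programmatically. Below is a sketch of extracting titles and URLs, assuming the standard SearXNG shape where hits live in a top-level `results` list with `title`/`url` fields (the trimmed payload is illustrative, not real output):

```python
import json

# Trimmed example payload in the shape SearXNG returns with format=json.
payload = json.loads("""
{"query": "test",
 "results": [
   {"title": "Example Domain", "url": "https://example.org", "content": "snippet"},
   {"title": "Another Hit", "url": "https://example.com", "content": "snippet"}
 ]}
""")

def extract_hits(data: dict) -> list[tuple[str, str]]:
    """Pull (title, url) pairs out of a SearXNG JSON response."""
    return [(r["title"], r["url"]) for r in data.get("results", [])]

for title, url in extract_hits(payload):
    print(f"{title} -> {url}")
```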
- API Documentation - REST API reference
- Architecture - System design and components
- Development Guide - Contributing and development setup
- SearXNG Setup - Detailed search engine configuration
- Quick Reference - Common commands and tips
Introlix is built with a modern, scalable architecture:
- FastAPI: High-performance async web framework
- Multi-Agent System: Specialized AI agents for different tasks
  - ChatAgent: Conversational interface with search
  - ContextAgent: Gathers research context through questions
  - PlannerAgent: Creates structured research plans
  - ExplorerAgent: Searches and gathers information
  - EditAgent: AI-assisted document editing
  - WriterAgent: Content generation and synthesis
- Vector Storage: Pinecone for semantic search
- Database: MongoDB for data persistence
- Next.js 15: React framework with App Router
- Lexical: Rich text editor
- TanStack Query: Data fetching and caching
- Radix UI: Accessible component primitives
- Tailwind CSS: Utility-first styling
- LLM Providers: OpenRouter or Google AI Studio
- Search: SearXNG (self-hosted)
- Vector DB: Pinecone
- Database: MongoDB
We welcome contributions! Please see our Contributing Guide for details on:
- Setting up your development environment
- Code style and standards
- Submitting pull requests
- Reporting bugs and requesting features
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Built with FastAPI
- Frontend powered by Next.js
- Rich text editing with Lexical
- Search powered by SearXNG
- Vector storage by Pinecone
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Community: Join our Discord
Made with ❤️ by the Introlix Team