
Introlix

An AI-powered research platform that transforms how you conduct research

License Python Next.js FastAPI

Features • Quick Start • Documentation • Contributing


🌟 Overview

Introlix is an intelligent research platform that combines the power of AI agents with advanced search capabilities to streamline your research workflow. Whether you're conducting academic research, market analysis, or deep investigations, Introlix provides a comprehensive suite of tools to help you gather, organize, and synthesize information efficiently.

Watch the Demo

Key Capabilities

  • AI-Powered Research Desk: Multi-stage AI-guided research workflow with context gathering, planning, and exploration
  • Intelligent Chat Interface: Conversational AI with internet search integration for real-time information
  • AI Document Editor: Edit and enhance your research documents with AI assistance
  • Advanced Search Integration: Powered by SearXNG for privacy-focused web searches
  • Knowledge Management: Vector-based storage with Pinecone for semantic search
  • Multi-Agent System: Specialized agents for different research tasks (Context, Planner, Explorer, Editor, Writer)

✨ Features

Research Desk Workflow

The Research Desk guides you through a comprehensive research process:

  1. Initial Setup: Create a research desk with your topic
  2. Context Agent: AI asks clarifying questions to understand your research scope
  3. Planner Agent: Generates a structured research plan with topics and keywords
  4. Explorer Agent: Automatically searches the internet and gathers relevant information
  5. Document Editing: AI-assisted writing and editing of your research document
  6. Interactive Chat: Ask questions about your research and get AI-powered answers

Chat Interface

  • Real-time conversational AI with streaming responses
  • Internet search integration for up-to-date information
  • Conversation history persistence
  • Support for multiple LLM providers (OpenRouter, Google AI Studio)
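"Streaming responses" means the client receives the answer incrementally instead of waiting for the full text. A minimal sketch of the idea, where `stream_reply` is a hypothetical stand-in for the model call:

```python
from typing import Iterator

def stream_reply(reply: str) -> Iterator[str]:
    # Yield the reply word by word, the way a streaming endpoint
    # forwards tokens to the client as they are generated.
    for word in reply.split():
        yield word + " "

chunks = list(stream_reply("Introlix streams answers token by token"))
full = "".join(chunks)
```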

Document Management

  • Rich text editor powered by Lexical
  • AI-assisted editing and content generation
  • Workspace organization for multiple projects
  • Auto-save and version tracking

Coming Soon (Beta Features)

  • Document Formatting: Export research as blog posts, research papers, or custom formats
  • Reference Management: Automatic citation generation with inline references [1], [2], etc.

🚀 Quick Start

Prerequisites

  • Python: 3.11 or higher
  • Node.js: 18 or higher
  • pnpm: Package manager for frontend
  • MongoDB: Database for storing workspaces and research data
  • Pinecone: Vector database for semantic search
  • SearXNG: Self-hosted search engine (see SearXNG Setup)

Installation

  1. Clone the repository
git clone https://github.com/introlix/introlix.git
cd introlix
  2. Set up environment variables
cp .env.example .env

Edit .env and add your API keys:

# Required: Choose one LLM provider
OPEN_ROUTER_KEY=your_openrouter_api_key_here
# OR
GEMINI_API_KEY=your_gemini_api_key_here # From Google AI Studio

# Required: Search engine (the port must match your SearXNG instance)
SEARCHXNG_HOST=http://localhost:8080/search

# Required: Vector database
PINECONE_KEY=your_pinecone_api_key_here

# Required: Database
MONGO_URI=mongodb://localhost:27017/introlix
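A startup check for these variables lets the app fail fast with a clear message instead of crashing mid-request. A minimal sketch using only the standard library (`check_env` is hypothetical, not part of Introlix):

```python
REQUIRED = ["SEARCHXNG_HOST", "PINECONE_KEY", "MONGO_URI"]
LLM_KEYS = ["OPEN_ROUTER_KEY", "GEMINI_API_KEY"]  # at least one must be set

def check_env(env) -> list[str]:
    # Return the names of missing settings so startup can report them all at once.
    missing = [name for name in REQUIRED if not env.get(name)]
    if not any(env.get(name) for name in LLM_KEYS):
        missing.append("OPEN_ROUTER_KEY or GEMINI_API_KEY")
    return missing

# Example: a partially configured environment
missing = check_env({"MONGO_URI": "mongodb://localhost:27017/introlix",
                     "GEMINI_API_KEY": "abc"})
```

In a real app you would call `check_env(os.environ)` once at import time and raise if the returned list is non-empty.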
  3. Install Python dependencies
# Create virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -e .
  4. Authenticate with Hugging Face (required for LLM model downloads)
pip install huggingface_hub
# Login to Hugging Face
hf auth login

# Or set token directly
export HUGGING_FACE_HUB_TOKEN=your_hf_token_here

Note: Get your Hugging Face token from https://huggingface.co/settings/tokens

  5. Install frontend dependencies
cd web
pnpm install
  6. Start the services

Terminal 1 - Backend:

# From project root
source .venv/bin/activate
uvicorn app:app --reload --port 8000

Terminal 2 - Frontend:

cd web
pnpm dev
  7. Access the application

Open http://localhost:3000 in your browser (the Next.js dev server's default port); the FastAPI backend listens on http://localhost:8000.

🔧 Configuration

LLM Provider Selection

Edit introlix/config.py to choose your LLM provider:

# Choose: "openrouter" or "google_ai_studio"
CLOUD_PROVIDER = "google_ai_studio"

# Set default model
if CLOUD_PROVIDER == "openrouter":
    AUTO_MODEL = "qwen/qwen3-235b-a22b:free"
elif CLOUD_PROVIDER == "google_ai_studio":
    AUTO_MODEL = "gemini-2.5-flash"
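The if/elif above amounts to a lookup table mapping a provider to its default model; an equivalent sketch (`DEFAULT_MODELS` and `auto_model` are illustrative names, not Introlix's config API):

```python
# Default model per provider, mirroring the selection logic above.
DEFAULT_MODELS = {
    "openrouter": "qwen/qwen3-235b-a22b:free",
    "google_ai_studio": "gemini-2.5-flash",
}

def auto_model(cloud_provider: str) -> str:
    # Resolve the default model, rejecting unknown providers early.
    try:
        return DEFAULT_MODELS[cloud_provider]
    except KeyError:
        raise ValueError(f"Unknown CLOUD_PROVIDER: {cloud_provider!r}")

model = auto_model("google_ai_studio")
```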

SearXNG Setup

SearXNG is a privacy-respecting metasearch engine. For Introlix to work properly, you need to configure it to return JSON results.

To install SearXNG, see the official installation guide.

Modify searxng/settings.yml:

# SearXNG settings

general:
  instance_name: "SearXNG"

search:
  safe_search: 0
  autocomplete: ""
  formats:
    - html
    - json   # Important: enable the JSON format

server:
  port: 8888
  bind_address: "127.0.0.1"

Note: The snippet above is only an example. Don't replace your entire settings.yml file; modify it just enough to enable the json format.

For the full template, see searxng/blob/main/searx/settings.yml.

Verify JSON output

Test that JSON format works:

curl "http://localhost:8888/search?q=test&format=json"

You should receive a JSON response with search results.

Important: Make sure to enable JSON format in your SearXNG settings as shown above. Introlix requires JSON responses from SearXNG to function properly.
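The same request can be issued from Python. A sketch of building the query URL and reading the response; the helper names are hypothetical, while the "results" key with "title", "url", and "content" fields is SearXNG's standard JSON shape:

```python
import json
from urllib.parse import urlencode

def searxng_url(host: str, query: str) -> str:
    # Build the JSON search URL, e.g. http://localhost:8888/search?q=test&format=json
    return f"{host}?{urlencode({'q': query, 'format': 'json'})}"

def parse_results(payload: str) -> list[dict]:
    # SearXNG's JSON output lists hits under the "results" key.
    return json.loads(payload).get("results", [])

url = searxng_url("http://localhost:8888/search", "test")
# A trimmed-down sample payload standing in for a live response:
sample = json.dumps({"results": [{"title": "t", "url": "http://x", "content": "c"}]})
hits = parse_results(sample)
```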


📚 Documentation


πŸ—οΈ Architecture

Introlix is built with a modern, scalable architecture:

Backend (Python/FastAPI)

  • FastAPI: High-performance async web framework
  • Multi-Agent System: Specialized AI agents for different tasks
    • ChatAgent: Conversational interface with search
    • ContextAgent: Gathers research context through questions
    • PlannerAgent: Creates structured research plans
    • ExplorerAgent: Searches and gathers information
    • EditAgent: AI-assisted document editing
    • WriterAgent: Content generation and synthesis
  • Vector Storage: Pinecone for semantic search
  • Database: MongoDB for data persistence
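One way to picture the multi-agent routing: a registry maps a task kind to the agent that handles it. The callables below are stand-ins for the real agent classes listed above:

```python
# Hypothetical registry; Introlix's agents are classes, but the routing idea is the same.
AGENTS = {
    "chat": lambda task: f"ChatAgent handled: {task}",
    "context": lambda task: f"ContextAgent handled: {task}",
    "plan": lambda task: f"PlannerAgent handled: {task}",
    "explore": lambda task: f"ExplorerAgent handled: {task}",
    "edit": lambda task: f"EditAgent handled: {task}",
    "write": lambda task: f"WriterAgent handled: {task}",
}

def dispatch(kind: str, task: str) -> str:
    # Route a task to its specialized agent, rejecting unknown kinds.
    agent = AGENTS.get(kind)
    if agent is None:
        raise ValueError(f"No agent registered for {kind!r}")
    return agent(task)

out = dispatch("plan", "solar power overview")
```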

Frontend (Next.js/React)

  • Next.js 15: React framework with App Router
  • Lexical: Rich text editor
  • TanStack Query: Data fetching and caching
  • Radix UI: Accessible component primitives
  • Tailwind CSS: Utility-first styling

External Services

  • LLM Providers: OpenRouter or Google AI Studio
  • Search: SearXNG (self-hosted)
  • Vector DB: Pinecone
  • Database: MongoDB

🤝 Contributing

We welcome contributions! Please see our Contributing Guide for details on:

  • Setting up your development environment
  • Code style and standards
  • Submitting pull requests
  • Reporting bugs and requesting features

πŸ“ License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


πŸ™ Acknowledgments


📧 Contact & Support


⬆ back to top

Made with ❀️ by the Introlix Team
