A Retrieval-Augmented Generation (RAG) system built with Node.js and Langbase that provides AI-powered support for answering questions based on uploaded documentation.
This project implements a RAG system that combines document retrieval with AI-generated responses. It uses Langbase as the backend service for memory management and AI pipe execution. The system is designed to answer user queries by retrieving relevant information from uploaded documents and generating contextual responses.
The project consists of several TypeScript modules that work together to create a complete RAG pipeline:
- Memory Management (`create-memory.ts`) - Creates and configures the knowledge base
- Document Upload (`upload-docs.ts`) - Uploads documents to the knowledge base
- AI Agent Setup (`create-pipe.ts`) - Creates the AI support agent pipeline
- RAG Pipeline (`agents.ts`) - Implements the retrieval and generation logic
- Main Application (`index.ts`) - Orchestrates the complete RAG workflow
- Document Storage: Upload and store documents in a vectorized knowledge base
- Semantic Search: Retrieve relevant document chunks based on user queries
- AI-Powered Responses: Generate contextual answers using retrieved information
- Source Citation: Automatically cite sources in responses with proper formatting
- Metadata Support: Add metadata to documents for better organization
- Node.js (v14 or higher)
- TypeScript
- Langbase API key
- Clone the repository:

```bash
git clone <repository-url>
cd rag-nodejs-langbase
```

- Install dependencies:

```bash
npm install
```

- Create a `.env` file in the root directory and add your Langbase API key:

```
LANGBASE_API_KEY=your_langbase_api_key_here
```
Before using the RAG system, you need to set up the infrastructure:
```bash
npx tsx create-memory.ts
```

This creates a new memory called "knowledge-base" with OpenAI's text-embedding-3-large model.
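A minimal sketch of what `create-memory.ts` does with the Langbase SDK (the description string is illustrative, not the project's actual text):

```typescript
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({
  apiKey: process.env.LANGBASE_API_KEY!,
});

async function main() {
  // Create a memory named "knowledge-base" backed by OpenAI embeddings.
  const memory = await langbase.memories.create({
    name: 'knowledge-base',
    description: 'Knowledge base for the AI support agent', // illustrative
    embedding_model: 'openai:text-embedding-3-large',
  });
  console.log('Memory created:', memory);
}

main();
```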
```bash
npx tsx upload-docs.ts
```

This uploads the FAQ document from `docs/langbase-faq.txt` to the knowledge base with metadata.
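A sketch of the upload step, assuming the Langbase SDK's document upload call; the metadata values shown are examples rather than the project's actual ones:

```typescript
import 'dotenv/config';
import { readFile } from 'fs/promises';
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function main() {
  // Read the FAQ document from disk and upload it to the memory.
  await langbase.memories.documents.upload({
    memoryName: 'knowledge-base',
    documentName: 'langbase-faq.txt',
    contentType: 'text/plain',
    document: await readFile('docs/langbase-faq.txt'),
    // Example metadata; the project's actual category/topic values may differ.
    meta: { category: 'support', topic: 'faq' },
  });
  console.log('Document uploaded');
}

main();
```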
```bash
npx tsx create-pipe.ts
```

This creates the "ai-support-agent" pipeline that will be used for generating responses.
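A sketch of pipe creation with the Langbase SDK; the system prompt here is a placeholder, not the project's actual prompt:

```typescript
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

async function main() {
  // Create the "ai-support-agent" pipe with a basic system prompt.
  const pipe = await langbase.pipes.create({
    name: 'ai-support-agent',
    messages: [
      {
        role: 'system',
        // Illustrative prompt; the project's actual prompt may differ.
        content: 'You are a helpful AI support assistant.',
      },
    ],
  });
  console.log('Pipe created:', pipe.name);
}

main();
```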
```bash
npx tsx index.ts
```

This executes the main RAG pipeline (sketched after the list below), which:
- Takes a user query ("How do I upgrade individual plan?")
- Retrieves relevant chunks from the knowledge base
- Generates a contextual response using the AI support agent
- Displays the completion with proper source citations
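Assuming `agents.ts` exports the two helpers described further below, `index.ts` likely boils down to a retrieve-then-generate round trip along these lines (the exact function signatures are assumptions):

```typescript
import 'dotenv/config';
import { runMemoryAgent, runAiSupportAgent } from './agents';

async function main() {
  const query = 'How do I upgrade individual plan?';

  // Step 1: retrieve the most relevant chunks from the knowledge base.
  const chunks = await runMemoryAgent(query);

  // Step 2: generate an answer grounded in those chunks, with citations.
  const completion = await runAiSupportAgent({ chunks, query });

  console.log('Completion:', completion);
}

main();
```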
`create-memory.ts` - Creates a new memory instance in Langbase for storing and retrieving document chunks. Uses OpenAI's text-embedding-3-large model for vectorization.
`upload-docs.ts` - Uploads documents to the knowledge base. Currently configured to upload the FAQ document with metadata including category and topic information.
`create-pipe.ts` - Creates an AI pipeline (pipe) in Langbase that will be used for generating responses. The pipe is configured with a system prompt for helpful assistance.
`agents.ts` - Contains the core RAG logic (a sketch follows the list):
- `runMemoryAgent()`: Retrieves relevant document chunks based on a query
- `runAiSupportAgent()`: Generates AI responses using retrieved chunks
- `getSystemPrompt()`: Creates a system prompt that includes retrieved context and citation instructions
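A sketch of how these three functions can be implemented with the Langbase SDK; the signatures, chunk type, and prompt wording are assumptions rather than verbatim project code:

```typescript
import 'dotenv/config';
import { Langbase } from 'langbase';

const langbase = new Langbase({ apiKey: process.env.LANGBASE_API_KEY! });

// Minimal shape of a retrieved chunk (assumed; the SDK returns more fields).
type Chunk = { text: string };

// Retrieve the top 4 most relevant chunks from the "knowledge-base" memory.
export async function runMemoryAgent(query: string) {
  return langbase.memories.retrieve({
    query,
    topK: 4,
    memory: [{ name: 'knowledge-base' }],
  });
}

// Build a system prompt that embeds the retrieved context and asks the
// model to cite its sources.
export function getSystemPrompt(chunks: Chunk[]) {
  const context = chunks.map(chunk => chunk.text).join('\n\n');
  return `You are a helpful AI support assistant.
Answer the user's question using only the context below, and cite the sources you use.

CONTEXT:
${context}`;
}

// Run the "ai-support-agent" pipe with the contextual system prompt.
export async function runAiSupportAgent({
  chunks,
  query,
}: {
  chunks: Chunk[];
  query: string;
}) {
  const { completion } = await langbase.pipes.run({
    name: 'ai-support-agent',
    stream: false,
    messages: [
      { role: 'system', content: getSystemPrompt(chunks) },
      { role: 'user', content: query },
    ],
  });
  return completion;
}
```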
`index.ts` - Main application file that demonstrates the complete RAG workflow by running a sample query through the system.
The system is configured to:
- Use "knowledge-base" as the memory name
- Retrieve top 4 most relevant chunks (topK: 4)
- Use "ai-support-agent" as the pipe name
- Include source citations in responses with proper formatting
- `langbase`: Official Langbase SDK for Node.js
- `dotenv`: Environment variable management
- `fs/promises`: File system operations for document reading (Node.js built-in)
Małgorzata Krawczuk