
LMeterX

📋 Project Overview

LMeterX is a professional large language model performance testing platform that supports comprehensive load testing for any LLM service. Through an intuitive Web interface, users can easily create and manage test tasks, monitor testing processes in real-time, and obtain detailed performance analysis reports, providing reliable data support for model deployment and performance optimization.

LMeterX Demo

✨ Core Features

  • Full Model Compatibility - Supports mainstream LLMs like GPT, Claude, and Llama with one-click stress testing
  • High-Load Stress Testing - Simulates high-concurrency requests to accurately detect model performance limits
  • Multi-Scenario Coverage - Supports streaming/non-streaming requests and text/multimodal/custom datasets (NEW)
  • Professional Metrics - Core performance metrics including first token latency, throughput (RPS, TPS), and success rate
  • AI Smart Reports - AI-powered performance analysis (NEW), multi-dimensional model comparison and visualization
  • Web Console - One-stop management for task creation, stopping, status tracking, and full-chain log monitoring
  • Enterprise-level Deployment - Docker containerization with elastic scaling and distributed deployment support
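To make the metrics above concrete, here is a small, simplified calculation. This is not LMeterX's code; it only sketches, under the assumption of per-request timing records, how first token latency, RPS, TPS, and success rate could be derived:

```python
from dataclasses import dataclass

@dataclass
class RequestRecord:
    start: float            # request start time (seconds)
    first_token_at: float   # time the first token arrived
    end: float              # time the response completed
    tokens: int             # completion tokens produced
    ok: bool                # whether the request succeeded

def summarize(records: list[RequestRecord]) -> dict:
    """Aggregate simplified load-test metrics from per-request records."""
    wall = max(r.end for r in records) - min(r.start for r in records)
    succeeded = [r for r in records if r.ok]
    return {
        # mean time to first token over successful requests
        "avg_first_token_latency": sum(r.first_token_at - r.start
                                       for r in succeeded) / len(succeeded),
        "rps": len(records) / wall,                      # requests per second
        "tps": sum(r.tokens for r in succeeded) / wall,  # tokens per second
        "success_rate": len(succeeded) / len(records),
    }

# Two successful requests over a 2-second window
records = [
    RequestRecord(0.0, 0.5, 1.0, 50, True),
    RequestRecord(1.0, 1.5, 2.0, 50, True),
]
print(summarize(records))
```

A real load test additionally needs percentile latencies and error categorization, which the platform's reports cover.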

🏗️ System Architecture

LMeterX adopts a microservices architecture design, consisting of four core components:

  1. Backend Service: FastAPI-based REST API service responsible for task management and result storage
  2. Load Testing Engine: Locust-based load testing engine that executes actual performance testing tasks
  3. Frontend Interface: Modern Web interface based on React + TypeScript + Ant Design
  4. MySQL Database: Stores test tasks, result data, and configuration information
LMeterX tech arch
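As a rough illustration of how these four components fit together (this is a hypothetical sketch, not the project's actual docker-compose.yml; image names and wiring are assumptions):

```yaml
# Hypothetical four-service layout; see the repository's
# docker-compose.yml for the real configuration.
services:
  mysql:
    image: mysql:8
    environment:
      MYSQL_DATABASE: lmeterx
  backend:          # FastAPI REST API for task management and result storage
    build: ./backend
    depends_on: [mysql]
  st_engine:        # Locust-based load testing engine
    build: ./st_engine
    depends_on: [backend]
  frontend:         # React + TypeScript + Ant Design web UI
    build: ./frontend
    ports:
      - "8080:80"   # the web console is served on port 8080
```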

🚀 Quick Start

Environment Requirements

  • Docker 20.10.0+
  • Docker Compose 2.0.0+
  • At least 4GB available memory
  • At least 5GB available disk space

One-Click Deployment (Recommended)

For detailed instructions on all deployment methods, see the Complete Deployment Guide.

Use pre-built Docker images to start all services with one click:

# Download and run one-click deployment script
curl -fsSL https://raw.githubusercontent.com/MigoXLab/LMeterX/main/quick-start.sh | bash

Usage Guide

  1. Access Web Interface: http://localhost:8080
  2. Create Test Task:
    • Configure target API address and model parameters
    • Select test type (text conversation/image-text conversation)
    • Set concurrent user count and test duration
    • Configure other advanced parameters (optional)
  3. Monitor Test Process: Real-time view of test logs and performance metrics
  4. View and Export Test Results: Inspect detailed performance results and export reports
  5. AI Summary: After configuring the AI service on the System Configuration page, you can perform AI-powered evaluation and summary of performance results on the Task Results page.
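Task creation can also be scripted against the backend's REST API. The snippet below is purely illustrative: the endpoint path and field names are assumptions, not LMeterX's documented schema — consult the backend's OpenAPI docs for the real one:

```python
import json
from urllib import request

def build_task_payload(api_url: str, model: str, users: int, duration_s: int) -> dict:
    """Assemble a hypothetical task-creation payload (field names are assumed)."""
    return {
        "target_url": api_url,        # target LLM API address
        "model": model,
        "concurrent_users": users,    # concurrent user count
        "duration": duration_s,       # test duration in seconds
        "stream": True,               # streaming test scenario
    }

payload = build_task_payload("https://api.example.com/v1/chat/completions",
                             "my-model", users=10, duration_s=300)

# Hypothetical call; the actual route is defined by the backend service.
req = request.Request(
    "http://localhost:8080/api/tasks",   # assumed path under the /api base
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req)  # uncomment once the platform is running
```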

🔧 Configuration

Environment Variable Configuration

General Configuration

SECRET_KEY=your_secret_key_here        # Application security key
FLASK_DEBUG=false                      # Debug mode switch

Database Configuration

DB_HOST=mysql                          # Database host address
DB_PORT=3306                           # Database port
DB_USER=lmeterx                        # Database username
DB_PASSWORD=lmeterx_password           # Database password
DB_NAME=lmeterx                        # Database name

Frontend Configuration

VITE_API_BASE_URL=/api                # API base path
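On the backend side, the DB_* variables above are typically combined into a single connection URL. A minimal sketch, assuming a SQLAlchemy-style MySQL URL (the exact wiring inside LMeterX may differ):

```python
import os

def database_url() -> str:
    """Build a MySQL connection URL from the DB_* environment variables,
    falling back to the defaults documented above."""
    host = os.getenv("DB_HOST", "mysql")
    port = os.getenv("DB_PORT", "3306")
    user = os.getenv("DB_USER", "lmeterx")
    password = os.getenv("DB_PASSWORD", "lmeterx_password")
    name = os.getenv("DB_NAME", "lmeterx")
    return f"mysql+pymysql://{user}:{password}@{host}:{port}/{name}"

print(database_url())
```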

🤝 Development Guide

We welcome all forms of contributions! Please read our Contributing Guide for details.

Technology Stack

LMeterX adopts a modern technology stack to ensure system reliability and maintainability:

  • Backend Service: Python + FastAPI + SQLAlchemy + MySQL
  • Load Testing Engine: Python + Locust + Custom Extensions
  • Frontend Interface: React + TypeScript + Ant Design + Vite
  • Deployment & Operations: Docker + Docker Compose + Nginx
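To give a flavor of what the load testing engine has to handle: streaming LLM APIs commonly return server-sent events in the OpenAI chunk format, and first token latency comes from timestamping the first chunk that carries content. A simplified, framework-free sketch (not the engine's actual code; the chunk layout is assumed to follow the common `choices[0].delta.content` convention):

```python
import json
import time

def first_token_latency(lines, start: float, now=time.monotonic):
    """Scan SSE lines from a streaming completion and return the elapsed
    time from `start` to the first chunk carrying content, or None."""
    for raw in lines:
        if not raw.startswith("data: ") or raw == "data: [DONE]":
            continue  # skip non-data lines and the end-of-stream marker
        chunk = json.loads(raw[len("data: "):])
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            return now() - start
    return None
```

The `now` parameter is injected so the function can be tested deterministically; in production it would default to a monotonic clock as shown.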

Project Structure

LMeterX/
├── backend/                  # Backend service
├── st_engine/                # Load testing engine service
├── frontend/                 # Frontend service
├── docs/                     # Documentation directory
├── docker-compose.yml        # Docker Compose configuration
├── Makefile                  # Run complete code checks
├── README.md                 # English README

Development Environment Setup

  1. Fork the Project to your GitHub account
  2. Clone Your Fork and create a development branch
  3. Follow Code Standards: use clear commit messages that follow conventional commit standards
  4. Run Code Checks: before submitting a PR, ensure code checks, formatting, and tests all pass; run make all to do this in one step
  5. Write Clear Documentation: document new features and changes
  6. Participate in Review: respond promptly to feedback during the review process

🗺️ Development Roadmap

In Development

  • Support for client resource monitoring

Planned

  • CLI command-line tool

📚 Related Documentation

👥 Contributors

Thanks to all developers who have contributed to the LMeterX project.

📄 Open Source License

This project is licensed under the Apache 2.0 License.


**⭐ If this project helps you, please give us a Star! Your support is our motivation for continuous improvement.**
