Performance Testing Framework


Load testing tools for finding out how your API actually performs under stress

I built this to answer questions that always come up before launch: "Can our API handle 100 simultaneous users?" "What happens during a traffic spike?" "Will it stay stable under sustained load?"

This framework makes it easy to find out.


What's This For?

Performance testing tells you things functional tests can't:

  • How many concurrent users can your API handle before things slow down?
  • Where are the bottlenecks in your system?
  • Does performance degrade over time? (memory leaks, connection leaks, etc.)
  • What happens when traffic suddenly spikes?

I built this to test the FastAPI application from my other project, but it works against any HTTP API.


Quick Start

Install

# Clone and setup
git clone https://github.com/JasonTeixeira/Performance-Testing-Framework.git
cd Performance-Testing-Framework

# Virtual environment
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

Run Your First Test

The easiest way is using the CLI script:

# See available test scenarios
python run_load_test.py list

# Run a basic load test
python run_load_test.py run basic --host http://localhost:8000

# Quick smoke test (30 seconds)
python run_load_test.py quick http://localhost:8000

Or run Locust directly:

locust -f locustfiles/api_load_test.py --host http://localhost:8000

Then open http://localhost:8089 to see the web UI.


Test Scenarios

Basic Load Test

10 users, 2 minutes

Models realistic user behavior - logging in, browsing, updating profiles. Good for baseline performance.

python run_load_test.py run basic

Spike Test

100 users, 1 minute, rapid spawn

Sudden traffic burst. Tests how your API copes when traffic jumps suddenly (like getting featured on Reddit).

python run_load_test.py run spike

Endurance Test

20 users, 10 minutes

Sustained load over time. Catches memory leaks, connection pool issues, and performance degradation.

python run_load_test.py run endurance

Stress Test

500 users, 5 minutes

Find the breaking point. Keeps ramping up until something gives.

python run_load_test.py run stress

How It Works

Realistic User Behavior

The tests don't just hammer one endpoint. They model actual user behavior:

  1. User logs in (gets JWT token)
  2. Browses around (views profile, lists users)
  3. Updates things (profile changes)
  4. Health checks (like monitoring tools do)

Each action has a weight based on how often it happens in real usage. Profile views are more common than updates.

Wait Times

Users don't make requests instantly. The framework adds realistic "think time" between actions - usually 1-3 seconds, like a real person navigating.

For spike tests, wait time is much shorter (0.1-0.5s) to simulate frantic clicking.

Token Management

The test automatically handles JWT authentication:

  • Logs in once at the start
  • Uses the token for all subsequent requests
  • Handles expiration gracefully

This is important because authentication adds overhead to every request.


Understanding Results

Response Time

Locust shows several percentiles:

  • p50 (median): Half of requests are faster than this
  • p95: 95% of requests are faster (captures slow outliers)
  • p99: 99% are faster (the really slow ones)

Don't just look at the average - the p95 and p99 times matter more for user experience.
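For intuition, here is a hand-rolled nearest-rank percentile over a made-up latency sample. Notice how one slow outlier barely moves the median but dominates p95 (and would wreck the average):

```python
import math


def percentile(samples, pct):
    """Nearest-rank percentile: the value that pct% of samples fall at or below."""
    ordered = sorted(samples)
    rank = math.ceil(pct / 100 * len(ordered))
    return ordered[max(rank - 1, 0)]


latencies_ms = [12, 15, 14, 200, 16, 13, 18, 17, 15, 950]

p50 = percentile(latencies_ms, 50)   # 15 ms - typical user experience
p95 = percentile(latencies_ms, 95)   # 950 ms - the slow tail
mean = sum(latencies_ms) / len(latencies_ms)  # 127.0 ms - misleadingly high
```

The mean (127 ms) describes nobody's actual experience here: most users saw ~15 ms, while a few saw nearly a second.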

Failure Rate

Any non-200 response counts as a failure. Even a 1-2% failure rate means users are seeing errors.
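As a quick sanity check, failure rate is just the share of non-200 responses. Locust tracks this for you; the sketch below only illustrates the arithmetic:

```python
def failure_rate(status_codes):
    """Fraction of responses that were not HTTP 200."""
    failures = sum(1 for code in status_codes if code != 200)
    return failures / len(status_codes)


# 98 successes plus two server errors: a "mere" 2% failure rate
codes = [200] * 98 + [500, 503]
rate = failure_rate(codes)  # 0.02 - two out of every hundred users saw an error
```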

Requests/Second (RPS)

How much throughput your API can handle. This should stay consistent if performance is stable.

If RPS drops over time with the same number of users, something's degrading (probably memory or connections).
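One way to spot that kind of degradation offline is to bucket request timestamps into fixed windows and compare throughput across them. This is a rough sketch (Locust's live charts show the same trend in real time):

```python
def rps_per_window(timestamps, window=10.0):
    """Bucket request timestamps (in seconds) into fixed windows; return RPS per window."""
    if not timestamps:
        return []
    start = min(timestamps)
    buckets = {}
    for t in timestamps:
        idx = int((t - start) // window)
        buckets[idx] = buckets.get(idx, 0) + 1
    n = max(buckets) + 1
    return [buckets.get(i, 0) / window for i in range(n)]


# Steady load: one request every 0.1s for 20 seconds
timestamps = [i * 0.1 for i in range(200)]
print(rps_per_window(timestamps))  # [10.0, 10.0] - stable throughput
```

If a later window shows meaningfully lower RPS than the first with the same user count, something on the server side is degrading.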


Project Structure

Performance-Testing-Framework/
├── locustfiles/           # Test scenarios
│   └── api_load_test.py   # API load tests
├── scenarios/             # Additional test scenarios
├── utils/                 # Helper utilities
├── config/                # Configuration files
│   └── test_config.yaml   # Test settings
├── reports/               # Test results (generated)
├── run_load_test.py       # CLI tool for easy execution
└── requirements.txt       # Python dependencies

What I Learned

Building this taught me:

About Load Testing:

  • The difference between load, stress, spike, and endurance testing
  • Why you can't just use functional tests for performance
  • How to model realistic user behavior instead of just hitting endpoints
  • The importance of percentiles over averages

About Performance:

  • Authentication overhead matters at scale
  • Database connection pools are critical
  • Memory leaks show up in long-running tests
  • Even small inefficiencies multiply under load

About APIs:

  • Rate limiting needs careful tuning
  • Keep-alive connections make a huge difference
  • JWT token validation is surprisingly expensive
  • Database queries that seem fast become bottlenecks at scale

Testing Your Own API

To test a different API:

  1. Create a new locustfile (copy api_load_test.py as a template)
  2. Model your user flow (login, browse, checkout, etc.)
  3. Add realistic wait times between actions
  4. Set task weights based on actual usage patterns
  5. Start small (5-10 users) and ramp up

Don't start by testing with 1000 users - you'll just break everything and learn nothing. Start low, find issues, fix them, then increase.


Common Issues

"Connection refused"

  • Is your API actually running?
  • Check the host URL (http vs https)

"Too many failures"

  • Your API might not be able to handle the load
  • Reduce user count and try again
  • Check API logs for errors

"Tests run but no requests"

  • Authentication might be failing
  • Check credentials in the locustfile

"Response times increasing over time"

  • Memory leak or connection pool exhaustion
  • This is what endurance tests catch - it's valuable data!

Tips for Good Load Tests

Do:

✅ Start with low user counts and ramp up
✅ Model realistic user behavior
✅ Run tests for several minutes (not just seconds)
✅ Test on an environment similar to production
✅ Monitor your API server during tests (CPU, memory, etc.)

Don't:

❌ Test directly against production (use staging!)
❌ Just hit one endpoint repeatedly
❌ Ignore authentication in your tests
❌ Only look at average response times


Contributing

Found a bug or have a scenario to add? Open an issue or PR!


Author

Jason Teixeira


License

MIT License - use it however you want.


Why Locust?

I chose Locust because:

  • Python-based - Easy to extend with custom logic
  • Distributed testing - Can run across multiple machines
  • Real-time web UI - Watch results as tests run
  • Realistic simulation - Models actual user behavior, not just request spam
  • Open source - No licensing costs

Other tools are good too (JMeter, k6, Gatling), but Locust's Python API makes it very flexible for complex scenarios.
