Use Sanic's own server to run benchmarks and adjust the Litestar and FastAPI servers for a level playing field 🚀 #3


Sanic

Since you're giving Robyn the opportunity to run on its own server, you should do the same for Sanic.
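
As a reference point, here's a minimal sketch of an /echo app served by Sanic's own server (module, app, and handler names are my assumptions, not the repo's actual code):

from sanic import Sanic, response
from sanic.request import Request

app = Sanic("bench")

@app.post("/echo")
async def echo(request: Request):
    # Echo the JSON request body straight back to the client
    return response.json(request.json)

if __name__ == "__main__":
    # Sanic's built-in server; the same `app` object is also a valid
    # ASGI app, which is how the uvicorn run below serves it.
    app.run(host="0.0.0.0", port=8000, access_log=False)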

Here are the results of Sanic with uvicorn:

wrk -t12 -c400 -d10s -s wrk_script.lua http://localhost:8000/echo
Running 10s test @ http://localhost:8000/echo
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    23.96ms   23.94ms 205.95ms   85.51%
    Req/Sec     1.71k     1.33k    6.51k    85.60%
  204405 requests in 10.10s, 33.33MB read
Requests/sec:  20241.36
Transfer/sec:      3.30MB

And this is what I get when I run Sanic with its own server:

wrk -t12 -c400 -d10s -s wrk_script.lua http://localhost:8000/echo
Running 10s test @ http://localhost:8000/echo
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.50ms    6.17ms  68.92ms   75.49%
    Req/Sec     6.16k     3.23k   38.93k    75.60%
  736312 requests in 10.10s, 85.67MB read
Requests/sec:  72901.33
Transfer/sec:      8.48MB

This would actually make it the fastest of the tested frameworks.

Litestar

The Litestar app is set up to respond on /, while the other apps, and the benchmark itself, run against /echo, meaning all the results you're seeing are just 404 responses. This is actually visible in the output:

433571 requests in 10.10s, 71.12MB read
Socket errors: connect 0, read 306, write 0, timeout 0
Non-2xx or 3xx responses: 433571

I've also noticed that you enabled much stricter data validation for Litestar than for FastAPI: dict[str, str] for both incoming and outgoing data in Litestar, versus just dict for incoming data in FastAPI with no validation of outgoing data. To make this a useful comparison, those should probably be equivalent (=
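
To illustrate, here's a rough sketch of what equivalent handlers could look like (handler and app names are my assumptions, not the repo's actual code):

from fastapi import FastAPI
from litestar import Litestar, post

# Litestar: mounted on /echo so the benchmark no longer hits a 404, and
# typed as plain dict in and out to match FastAPI's looser validation.
@post("/echo")
async def litestar_echo(data: dict) -> dict:
    return data

litestar_app = Litestar(route_handlers=[litestar_echo])

# FastAPI: the same shape, shown for comparison.
fastapi_app = FastAPI()

@fastapi_app.post("/echo")
async def fastapi_echo(data: dict) -> dict:
    return data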


Litestar before the adjustments:

wrk -t12 -c400 -d10s -s wrk_script.lua http://localhost:8000/echo
Running 10s test @ http://localhost:8000/echo
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    11.27ms    9.04ms 111.02ms   64.85%
    Req/Sec     3.17k     1.64k   20.26k    77.37%
  379577 requests in 10.08s, 62.26MB read
  Non-2xx or 3xx responses: 379577
Requests/sec:  37667.42
Transfer/sec:      6.18MB

Litestar after the adjustments:

wrk -t12 -c400 -d10s -s wrk_script.lua http://localhost:8000/echo
Running 10s test @ http://localhost:8000/echo
  12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.42ms    6.14ms 106.89ms   73.79%
    Req/Sec     4.89k     1.64k   28.00k    78.47%
  585402 requests in 10.10s, 87.65MB read
Requests/sec:  57968.14
Transfer/sec:      8.68MB

Adjusted results and rankings

I've also run Starlette and FastAPI for comparison and compiled a table with the results of the adjusted tests:

Framework   RPS
Sanic       72901
Starlette   68016
Litestar    57968
FastAPI     38225

This gives a very different picture than your original run.
