Description
To ensure that performance does not regress between rororo releases.
As of now I'm thinking about using `pyperf` as the benchmark runner and benchmarking:

- In the `todobackend` example:
  - `setup_openapi` performance
  - `validate_request` performance by creating 100+ todos via the `create_todo` operation (without the Redis layer)
  - `validate_response` performance by emulating the `list_todos` operation with 100+ todos (without the Redis layer)
- In the `hobotnica` example:
  - `validate_security` performance for basic auth
  - `validate_security` performance for HTTP auth
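A minimal sketch of what one such benchmark could look like with `pyperf`. The `make_todos` and `validate_todos` helpers are hypothetical stand-ins for rororo's actual `validate_response` machinery, used only to show the `pyperf.Runner` wiring:

```python
def make_todos(count=100):
    """Build a payload emulating a list_todos response (no Redis layer)."""
    return [
        {"title": f"Todo #{i}", "order": i, "completed": False}
        for i in range(count)
    ]


def validate_todos(todos):
    """Stand-in validator: check that each todo has the required keys."""
    required = {"title", "order", "completed"}
    return all(required <= todo.keys() for todo in todos)


def bench_validate():
    # The workload pyperf times: validate 100+ todos per iteration.
    validate_todos(make_todos(100))


def run_benchmarks():
    # pyperf is only needed when actually running the benchmark suite;
    # call this from a __main__ guard in the benchmark script.
    import pyperf

    runner = pyperf.Runner()
    runner.bench_func("validate_response:list_todos", bench_validate)
```

`pyperf.Runner` handles warmups, process spawning, and statistics itself, so the benchmark function only needs to express the workload.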
To complete this task, benchmark results need to be stored somewhere (the `gh-pages` branch or a release artifact), and there needs to be a way to compare benchmark results between releases.
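For the comparison part, `pyperf` already ships a `compare_to` command that works on its JSON output, so storing one JSON file per release may be enough. A sketch, with hypothetical file and script names:

```shell
# Run the benchmark script and dump results as JSON (one file per release)
python benchmarks/bench_openapi.py -o results-2.0.0.json

# ... after cutting the next release ...
python benchmarks/bench_openapi.py -o results-2.1.0.json

# Compare the two runs; pyperf reports per-benchmark speedups/slowdowns
python -m pyperf compare_to results-2.0.0.json results-2.1.0.json
```

The JSON files are what would be committed to the `gh-pages` branch or attached as release artifacts.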