
Provide set of meaningful benchmarks #75

Open
@playpauseandstop

Description

To ensure that performance does not regress between rororo releases.

As of now I'm thinking about using pyperf as the benchmark runner, and benchmarking (see the sketch after this list):

  • In the todobackend example:
    • setup_openapi performance
    • validate_request performance, by creating 100+ todos via the create_todo operation (without the Redis layer)
    • validate_response performance, by emulating the list_todos operation with 100+ todos (without the Redis layer)
  • In the hobotnica example:
    • validate_security performance for basic auth
    • validate_security performance for HTTP auth
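
A minimal pyperf sketch for the setup_openapi benchmark could look like the following. The schema path and the empty OperationTableDef are placeholders; a real benchmark would import the todobackend example's actual schema and operation table.

```python
import pyperf
from aiohttp import web
from rororo import OperationTableDef, setup_openapi

# Placeholder path and empty operation table; a real benchmark would
# reuse the todobackend example's schema and registered operations.
SCHEMA_PATH = "examples/todobackend/openapi.yaml"
operations = OperationTableDef()


def bench_setup_openapi() -> None:
    # Measures parsing the OpenAPI schema plus wiring up the aiohttp app.
    app = web.Application()
    setup_openapi(app, SCHEMA_PATH, operations)


if __name__ == "__main__":
    runner = pyperf.Runner()
    runner.bench_func("setup_openapi", bench_setup_openapi)
```

Running the script with pyperf's standard `-o results.json` option would write the results to JSON for later comparison.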

To complete this task, the benchmark results need to be stored somewhere (a gh-pages branch or a release artifact), and there needs to be a way to compare results between releases; a comparison sketch follows below.
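
pyperf already ships a comparison command (`python -m pyperf compare_to old.json new.json`) as well as a Python API, so this step mostly needs CI glue. A rough sketch of the API approach, with hypothetical file names for two releases:

```python
import pyperf

# Hypothetical file names: one JSON suite per rororo release, produced
# by running the benchmark script with pyperf's -o option.
old_suite = pyperf.BenchmarkSuite.load("rororo-2.0.0.json")
new_suite = pyperf.BenchmarkSuite.load("rororo-2.1.0.json")

old_by_name = {bench.get_name(): bench for bench in old_suite.get_benchmarks()}

for bench in new_suite.get_benchmarks():
    old_bench = old_by_name.get(bench.get_name())
    if old_bench is None:
        continue  # Benchmark added after the old release
    # A ratio above 1.0 means the new release is slower on this benchmark.
    ratio = bench.mean() / old_bench.mean()
    print(f"{bench.get_name()}: {ratio:.2f}x vs previous release")
```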

Metadata

Labels

ci: Changes to CI configuration files and scripts
perf: A code change that improves performance
