When running Flashlight tests via Maestro with `flashlight test <maestro test file>`, the results are nicely aggregated into a single result, which makes comparing different runs easy. Is it possible to do this when manually executing tests? Not sure if it doesn't exist, or if I've just missed it in the docs.
We have found that, due to the time Maestro spends letting the app go idle, our tests aren't quite as accurate as we'd like them to be. So instead we've been manually running tests via `flashlight measure`, running them ~5 times and taking the worst result. Is there a way to merge results like what is done when automating measures, but with manual tests? Happy to help with this if it doesn't exist, just point me in the right direction.
Thanks heaps, love the tool; it has massively changed our performance measuring game.
You're right, there's no way to do this out of the box with `flashlight measure` (except downloading each JSON of measures from the webapp individually and merging them via a script, but that'd be tedious).
Might make sense to restructure the page so that we have a start/stop button, plus:
a button to add a new result
a button to add a new iteration
If you're interested in doing this, happy to provide some more guidance!
Maestro being slow
We also noticed Maestro can be pretty slow at starting and stopping tests, is that also what you're experiencing?
We actually have a fork that can be used with npx @perf-profiler/maestro@rc but it's not ideal, we should hopefully soon open a PR to Maestro to remedy this
Thanks for the response @Almouro, I'll hopefully find some time to poke around and see what I can do. Super appreciate the help.
Yeah, that's a big part of the slowness we're seeing; interesting to hear others are experiencing it too. We've also noticed that Maestro seems to "wait" for arbitrary amounts of time to let the app go idle to some degree, which means the JS thread and FPS have time to recover beyond what a regular user would experience. I'll check out your branch and see if it helps at all. Thanks again!