Disabling and enabling the explicit profiler #296
Enough people are asking for this that I'm willing to consider it, but I also want to move slowly and carefully with changes to this library. You have a proof of concept, which is the first step. Let's try to brainstorm an API. My first thought is something like:

```python
from line_profiler import profile

@profile(warmup=1)
def foo():
    import torch
    return torch.rand(10)
```

where the "warmup" parameter indicates how many times the function will run before measurement starts. However, this might not provide a granular enough level of control; I'm also not sure whether that is needed. If it is, perhaps there could be an attribute assigned to the function that indicates whether it is in a measurable state and that could be controlled programmatically? There are other considerations that need to be made as well.

With this being said, if anyone wants to work on this, open a PR and we can continue development and discussion there.
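To make that idea concrete, here is a rough sketch of how a warmup-aware decorator with a programmatically controllable "measurable" attribute could behave. This is purely illustrative: the `warmup` parameter, the `enabled` attribute, and the `_record` helper are hypothetical names from this discussion, not part of line_profiler's current API, and a real implementation would hook into the line-by-line recorder rather than a print statement.

```python
import functools

def _record(func, *args, **kwargs):
    # Placeholder for the real line-by-line recording machinery.
    print(f"profiling call to {func.__name__}")
    return func(*args, **kwargs)

def profile(warmup=0):
    """Hypothetical decorator: skip the first `warmup` calls, then record."""
    def decorator(func):
        state = {"remaining": warmup}

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if state["remaining"] > 0:
                state["remaining"] -= 1
                return func(*args, **kwargs)       # warmup call, not recorded
            if not wrapper.enabled:
                return func(*args, **kwargs)       # programmatically disabled
            return _record(func, *args, **kwargs)  # normal call, recorded

        wrapper.enabled = True  # attribute marking the "measurable" state
        return wrapper
    return decorator

@profile(warmup=1)
def foo():
    return sum(range(10))

foo()                 # first call: warmup, not recorded
foo()                 # second call: recorded
foo.enabled = False
foo()                 # disabled from user code, not recorded
```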
That API would definitely work for my use case. I would personally be just as happy doing it by hand as a user, if that's easier to maintain.

Implementation-wise, it seems like if we make it so that enable_by_count() only increments when enable_count > 0 (like disable_by_count() does), then setting enable_count to -1 and then calling disable() would durably disable profiling until enable_count was set back to 0 and enable() was called. I don't fully understand the use case for enable_by_count(), so I'm not sure whether that proposal breaks existing code. If it does, an is_user_disabled flag could be added and checked in enable(). Specifying a warmup per function would mean keeping track of a per-function countdown, which is a bit more involved. Happy to put together a PR if you like that general direction.
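As a toy model of the bookkeeping being discussed: the class below mimics the enable-count style described above and adds the hypothetical is_user_disabled flag from this comment. It is not line_profiler's actual implementation, just a sketch of the idea.

```python
class ToyProfiler:
    """Toy model of the enable-count bookkeeping, not the real LineProfiler."""

    def __init__(self):
        self.enable_count = 0
        self.is_user_disabled = False  # hypothetical flag proposed above

    def enable(self):
        if self.is_user_disabled:
            return                     # user turned profiling off explicitly
        print("recording on")

    def disable(self):
        print("recording off")

    def enable_by_count(self):
        # Only start recording on the first (outermost) request.
        if self.enable_count == 0:
            self.enable()
        self.enable_count += 1

    def disable_by_count(self):
        # Only stop recording when the last (outermost) request is released.
        if self.enable_count > 0:
            self.enable_count -= 1
            if self.enable_count == 0:
                self.disable()


prof = ToyProfiler()
prof.enable_by_count()     # recording on
prof.disable_by_count()    # recording off
prof.is_user_disabled = True
prof.enable_by_count()     # enable() is skipped: stays off until the flag is cleared
```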
It would be nice if this could generalize beyond the explicit profiler, which is just one way to run line-profiler. I don't want to bake features into it that things like the "auto-profiler" or the "implicit-profiler" wouldn't be compatible with. I would be fine with something along those lines. In any case, the name can be search/replaced in the PR, so go ahead and start the development. I'm confident we can get this feature into a refined and mergeable state without too much effort.
Regarding the implementation: depending on how easy Cython makes it, and on how effective modern branch predictors are, it could be more efficient to use a C function pointer and swap the recording function for one that does nothing when recording is disabled, as opposed to checking a boolean in an if condition. I can try implementing both approaches if you want.
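For illustration only, here is the same contrast expressed in Python rather than C/Cython (the actual change would live in line_profiler's C extension); the names `record_line`, `noop`, and `trace_callback` are made up for this sketch.

```python
import time

def record_line(code, lineno):
    # Stand-in for the real per-line timing callback.
    print(f"hit {code}:{lineno} at {time.perf_counter():.6f}")

def noop(code, lineno):
    # Does nothing; swapped in when recording is disabled.
    pass

# Approach 1: keep one callback and branch on a flag for every event.
recording_enabled = True

def trace_with_flag(code, lineno):
    if recording_enabled:
        record_line(code, lineno)

# Approach 2: swap the callback itself, so the hot path has no branch.
trace_callback = record_line

def disable_recording():
    global trace_callback
    trace_callback = noop

def enable_recording():
    global trace_callback
    trace_callback = record_line

trace_with_flag("example.py", 1)   # goes through the flag check
trace_callback("example.py", 2)    # direct call through the swapped "pointer"
disable_recording()
trace_callback("example.py", 3)    # now a no-op
```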
I have a class with a function that is slow the first time it's called (common for ML code like TF and PyTorch). To get more consistent timing, the class does a warmup run of this function during __init__. I'd like to make sure I don't profile this warmup run when running line_profiler. This is related to #30.
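To make the scenario concrete, a minimal shape of such a class might look like the following; the names `Model` and `forward` and the dummy workload are illustrative, not from the original report.

```python
from line_profiler import profile

class Model:
    def __init__(self):
        # Warmup call: slow the first time (think lazy CUDA init / JIT),
        # and ideally excluded from the line profile.
        self.forward()

    @profile
    def forward(self):
        # Dummy workload standing in for the real (initially slow) computation.
        return sum(i * i for i in range(10_000))

model = Model()   # the warmup run happens here and currently gets profiled too
model.forward()   # only this call should show up in the line profile
```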
I tried to do this by modifying the explicit_profiler's internal _profile object but it didn't work. Here is a minimal example:
I think this is a generally useful feature, so I hope there can be a documented way to achieve it!
EDIT:
looks like this works:
but I think there should be a more obvious way to do it.