
silk_clear_request_log taking longer than 30 minutes #239

Closed
blairg23 opened this issue Dec 21, 2017 · 8 comments

@blairg23

Just installed Silk and ran through all the settings. Everything "works" perfectly (I can see the profiles and queries of the method I set the silk profile on just fine), but the query count seems extremely high for the 2 Django queries I'm making inside the method. I'll create a separate issue after this one to address that.

The main issue, however, is that when I tried to clear the logs to see if that's the problem, silk_clear_request_log takes an enormous amount of time. I let it sit for 30 minutes, went and cooked dinner, and when I came back it had finally finished. Since I had just installed Silk and run maybe 2 requests, this seemed a bit excessive. I'm unsure of a fix, except perhaps showing some kind of feedback, like a progress bar, so the user doesn't think the command failed?

@avelis
Collaborator

avelis commented Dec 29, 2017

@blairg23 That seems like rather bizarre behavior for the clearing feature. Hmm. Can you share some details about your specific setup?

@avelis
Collaborator

avelis commented Dec 29, 2017

Looking at the command itself, it actually clears 4 separate tables. What could likely be added is some progress reporting to let the user know what is about to take place. I'm also not sure the for loop used for removal is the most efficient approach. The command is 2-3 years old; it could use a little love to see how it can be improved.
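A minimal sketch of the per-table progress reporting suggested above. The `tables` list, `delete_fn`, and `report` callback are hypothetical stand-ins, not silk's actual models or API:

```python
def clear_with_progress(tables, delete_fn, report=print):
    """Clear each table in turn, announcing progress before each step.

    tables:    iterable of table/model names (illustrative).
    delete_fn: callable that actually deletes one table's rows.
    report:    callable for progress output (print by default).
    """
    total = len(tables)
    for i, name in enumerate(tables, start=1):
        report(f"[{i}/{total}] clearing {name}...")
        delete_fn(name)
    report("done")
```

Even without a real progress bar, per-table messages like this would reassure the user that the command hasn't hung.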

@blairg23
Author

blairg23 commented Jan 1, 2018

I agree; some feedback telling the user that it's still running, and what progress it has made, would be helpful. I found that the reason it took so long is that I had an inefficiency causing a query on every iteration, so I was issuing 39k queries every time I called that method. That buildup of queries probably led to the long wait on the clear Silk log command. It takes considerably less time now.
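For illustration, a toy model of the per-iteration query pattern described above. `FakeDB` and the data are hypothetical; in real Django code the fix would be a single batched query (e.g. `filter(id__in=...)` or `select_related`) instead of a lookup inside the loop:

```python
class FakeDB:
    """Counts lookups to simulate query traffic; purely illustrative."""

    def __init__(self):
        self.query_count = 0

    def get(self, table, key):
        # One "query" per call: this is what querying inside a loop costs.
        self.query_count += 1
        return table[key]

    def get_many(self, table, keys):
        # One "query" for the whole batch.
        self.query_count += 1
        return [table[k] for k in keys]


owners = {i: f"owner-{i}" for i in range(100)}

db1 = FakeDB()
for i in range(100):
    db1.get(owners, i)          # query-per-iteration: 100 queries
n_loop = db1.query_count

db2 = FakeDB()
db2.get_many(owners, list(range(100)))  # batched up front: 1 query
n_batched = db2.query_count
```

The same 100 rows, but two orders of magnitude fewer round trips; at 39k iterations the difference dominates runtime.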

@smcoll
Contributor

smcoll commented Jan 9, 2018

I had an enormous amount of data in the silk_* tables (I wish I had noted the row count before clearing it out), and it appears the command makes two queries for every 1000 records (aren't __in queries expensive?), so the management command took a very long time to execute. I understand we're trying to avoid an out-of-memory error (since Django selects before deletion: https://code.djangoproject.com/ticket/9519), but maybe there's another way to approach this. Apparently there's queryset._raw_delete(queryset.db), which could be leveraged, although that's a private API.
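To make the batching observation concrete, here is a sketch of the chunking that a collector-style delete effectively performs. The chunk size of 1000 matches the "two queries per 1000 records" observation above; Django's real collector internals differ in detail:

```python
def batched(ids, size=1000):
    """Yield successive chunks of ids.

    A collector-style delete issues roughly one SELECT plus one
    DELETE ... WHERE id IN (...) per chunk, which is why very large
    tables take so long. A single raw DELETE statement (what
    queryset._raw_delete issues, per the comment above) avoids the
    per-chunk round trips, at the cost of skipping cascade handling
    and signals.
    """
    for i in range(0, len(ids), size):
        yield ids[i:i + size]


# 2500 rows -> 3 chunks -> on the order of 6 queries under the
# chunked scheme, versus 1 statement for a raw delete.
chunks = list(batched(list(range(2500))))
```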

@MateuszBelczowski

I also had the same problem. At the very least, giving the user feedback about the progress would be really useful.

@avelis
Collaborator

avelis commented Jan 15, 2018

@smcoll I have no issue with using a private API call, though I'm not sure it's the best way to go.

@blairg23 @MateuszBelczowski If anyone here has time for a PR I can happily take a look and merge it in.

Another option is to run table truncation at the DB level. But that happens outside of Django entirely, so it's at your own risk, of course.

@siovene

siovene commented Feb 11, 2018

I'm having the same problem, and the silk UI shows 10,000 requests. Is there perhaps a workaround with some raw SQL I can execute to purge the data?

@AuHau

AuHau commented Feb 11, 2018

@siovene It depends on which DB flavor you're using, but as avelis already mentioned in a previous post, you can run the TRUNCATE SQL command yourself, which purges the logs very quickly.

IMO this task should be implemented with raw SQL, as the performance gain is very, very significant. I noticed that you support PostgreSQL, SQLite, and MySQL, and it shouldn't be hard to write support for all of them, especially since TRUNCATE is a pretty straightforward command. The only difference is SQLite, where TRUNCATE isn't supported and a bare DELETE FROM <table> (no WHERE clause) needs to be used instead...
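As a concrete check of the SQLite caveat: SQLite has no TRUNCATE, but a DELETE without a WHERE clause empties the table (and SQLite's internal "truncate optimization" skips row-by-row deletion). The table name here is illustrative, not silk's actual schema:

```python
import sqlite3

# In-memory database with a stand-in table (name is hypothetical).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE silk_request (id INTEGER PRIMARY KEY)")
conn.executemany(
    "INSERT INTO silk_request VALUES (?)", [(i,) for i in range(5)]
)

# SQLite's equivalent of TRUNCATE: DELETE with no WHERE clause.
conn.execute("DELETE FROM silk_request")

remaining = conn.execute(
    "SELECT COUNT(*) FROM silk_request"
).fetchone()[0]
```

On PostgreSQL and MySQL the same step would be `TRUNCATE TABLE silk_request` (with CASCADE on PostgreSQL if foreign keys reference it).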

If you agree with this approach, I'd be happy to submit a PR with the changes...
