
Slow query log for filters "blocked *" #3011

Closed
kpoman opened this issue Apr 27, 2021 · 10 comments
Comments

@kpoman

kpoman commented Apr 27, 2021

Issue Details

  • Version of AdGuard Home server:
    v0.106.0-b.1
  • How did you install AdGuard Home:
    Built from source
  • How did you setup DNS configuration:
    IoT complementing a home router
  • If it's a router or IoT, please write device model:
    Raspberry Pi 3 Model B
  • CPU architecture:
    ARMv7
  • Operating system and version:
    Raspbian latest

Expected Behavior

When changing query log filters, results should appear within a couple of seconds.

Actual Behavior

Some filters respond almost instantly with filtered results, while others take minutes and drive CPU usage to 100%, which makes me suspect a lack of indexing for specific query filters, specifically:

  • Blocked Services
  • Blocked Threats
  • Blocked Adult Websites
  • Rewritten
  • Safesearch

All of these bring CPU usage to 100% (even worse, when I tried them sequentially I got over 400% CPU usage). There is no timeout, so the box becomes unusable. For reference, the box holds a couple hundred thousand log entries. Filtering responded fine during the first days of AdGuard Home usage and became progressively unusable, which again points to missing indexing. The only way out of that state is rebooting the Raspberry Pi.

@ameshkov
Member

Could you please update to v0.106.0-b.4 and check again?

@kpoman
Author

kpoman commented Apr 27, 2021

Yes, sure! Is there a way to update on a running box? I just did a git pull in the folder where I compile and run it; can I just run make there? (I ask because it took quite a long time to compile the first time, lol.)

@ainar-g
Contributor

ainar-g commented Apr 27, 2021

@kpoman, we provide prebuilt binaries on the releases page.

@kpoman
Author

kpoman commented Apr 27, 2021

Ok, I was able to recompile in place without problems and restart the binary. So far so good, but it is still extremely slow. Do I need to recreate the local databases? I see two files, ./data/stats.db and ./data/sessions.db, which I guess are not migrated when I upgrade. Is there any way to migrate them, or should I delete them and have them somehow recreated?

@ainar-g
Contributor

ainar-g commented Apr 27, 2021

I see, thank you for testing. No, there are no local databases for query logs. Currently, the query log is just text files with no indexing. See #2290.

My current assumption is that the log contains far fewer entries with these statuses, and they are much more sparse than the other types, so the search has to scan many more records to find them. We won't be able to fix that until we find an appropriate database format for the logs.
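To illustrate why sparse statuses are expensive to search, here is a minimal sketch in Go (AdGuard Home's implementation language): with a plain JSON-lines log and no index, every record must be decoded and checked, so the cost is proportional to the total log size rather than the number of matches. The `entry` struct and the `QH`/`Reason` field names below are illustrative assumptions, not AdGuard Home's actual schema.

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// entry mirrors the rough shape of a query-log record; the field
// names (QH, Reason) are illustrative, not the real schema.
type entry struct {
	QH     string `json:"QH"`
	Reason string `json:"Reason"`
}

// filterByReason scans every line of a JSON-lines log and keeps the
// entries whose Reason matches. With no index, the cost is O(total
// records) no matter how few records actually match, which is why
// rare statuses such as blocked services take the longest.
func filterByReason(log string, reason string) (matched []entry) {
	sc := bufio.NewScanner(strings.NewReader(log))
	for sc.Scan() {
		var e entry
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip malformed lines
		}
		if e.Reason == reason {
			matched = append(matched, e)
		}
	}
	return matched
}

func main() {
	log := `{"QH":"example.com","Reason":"NotFiltered"}
{"QH":"ads.example","Reason":"FilteredBlockedService"}
{"QH":"example.org","Reason":"NotFiltered"}`
	fmt.Println(len(filterByReason(log, "FilteredBlockedService")))
}
```

A real fix would need an on-disk index keyed by status, which is exactly what the database-format discussion in #2290 is about.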

@kpoman
Author

kpoman commented Apr 27, 2021

Ok, thanks for the insights! My OS runs from an SD card (Raspbian). Would you like me to use some tool such as iostat to see where the lag comes from?

@kpoman
Author

kpoman commented Apr 27, 2021

[Screenshot from 2021-04-27 11-05-10: iostat output captured while querying the slow log filters]
iostat while querying the buggy logs. Apparently the slowdown comes from IO operations, which are pretty slow (~1 MB/s). I am going to check what performance I can get out of that mmcblk device.

@kpoman
Author

kpoman commented Apr 27, 2021

Ok, so for the SD card performance: it can write at about 15 MB/s and read at about 25 MB/s, see below:

pi@IQoT-000:~ $ dd if=/dev/zero of=~/test.tmp bs=500K count=1024
1024+0 records in
1024+0 records out
524288000 bytes (524 MB, 500 MiB) copied, 35.4707 s, 14.8 MB/s
pi@IQoT-000:~ $ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
3
pi@IQoT-000:~ $ dd if=~/test.tmp of=/dev/null bs=500K count=1024
1024+0 records in
1024+0 records out
524288000 bytes (524 MB, 500 MiB) copied, 22.7515 s, 23.0 MB/s
pi@IQoT-000:~ $ 

I don't know why iostat shows only ~1 MB/s while CPU usage is still 100%, so apparently something related to parsing that (JSON?) file with the complete logs is slow.
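One plausible explanation, sketched below under the assumption that each log line is JSON-decoded individually: on a slow ARM core, per-record JSON parsing throughput can fall well below the SD card's raw read speed, so the CPU saturates while iostat reports only ~1 MB/s of IO. The record layout used in this micro-benchmark is hypothetical.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// parseThroughput repeatedly JSON-decodes one log line and returns the
// achieved parse rate in bytes per second. If this rate is lower than
// the disk's read speed, parsing (CPU), not IO, is the bottleneck.
func parseThroughput(line []byte, n int) float64 {
	start := time.Now()
	var m map[string]interface{}
	for i := 0; i < n; i++ {
		_ = json.Unmarshal(line, &m)
	}
	elapsed := time.Since(start).Seconds()
	return float64(len(line)*n) / elapsed
}

func main() {
	// A made-up record, roughly the size of a query-log line.
	line := []byte(`{"QH":"example.com","QT":"A","Reason":"NotFiltered","Elapsed":123456}`)
	bps := parseThroughput(line, 100000)
	fmt.Printf("parsed ~%.1f MB/s\n", bps/1e6)
}
```

Running this on the Raspberry Pi itself would show whether the parse rate sits near the ~1 MB/s figure iostat reported.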

@ainar-g
Contributor

ainar-g commented Apr 28, 2021

Thanks for investigating. As I said, it's probably the inefficiency of the current search implementation. I doubt it can be solved other than through the issue I mentioned above.

@kpoman
Author

kpoman commented Apr 28, 2021

Thank you! I am closing this now and will wait for new releases!

@kpoman kpoman closed this as completed Apr 28, 2021