Improve indexing performance (`update.py`) #289
Answering this comment here:

> I do not believe this to be an issue. The database is in cache, lookups are really fast. Small benchmarks agree on this.

This, yes! I've avoided using
> I did not look into this, but I agree it might be useful. Batch insert could be useful. If you can provide a reference for your claim that it is not available from the Python wrapper, that would be nice.
I don't have complete proof that bsddb does not support bulk operations. Here are some examples of bulk operations in C, with claims that bulk operations improve performance. I didn't read too much into it, but it seems that you have to use special macros to construct a bulk buffer, which suggests that this would need special handling in the wrapper. `DB_MULTIPLE` is mentioned in the docs of the `put` method. However, I don't see any mention of bulk operations or related flags in the bsddb docs. The `put` method only works on strings or bytes, too. I couldn't really find anything related to bulk operations in the source code, but again, I didn't look into it much.

bsddb was replaced by berkeleydb (same author, it seems). Maybe something changed, although I still don't see any mention of bulk operations in the docs.
Well, the code link you provided says it all. As you said, it only accepts strings or bytes.
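For reference, here is a minimal sketch of the closest alternative the Python wrapper does seem to offer: batching many single `put` calls under one transaction, so the commit overhead is paid once per batch rather than per write. This assumes a transactional `DBEnv`, which is not how the current databases are opened, and the paths and file names are placeholders; it is untested.

```python
import berkeleydb.db as db

env = db.DBEnv()
env.open("/tmp/example-env",  # placeholder path, must exist
         db.DB_CREATE | db.DB_INIT_MPOOL | db.DB_INIT_LOCK |
         db.DB_INIT_LOG | db.DB_INIT_TXN)

database = db.DB(env)
database.open("example.db", dbtype=db.DB_BTREE,
              flags=db.DB_CREATE | db.DB_AUTO_COMMIT)

def put_batch(pairs):
    # One transaction per batch: each put is still a separate call
    # (no DB_MULTIPLE), but the commit cost is amortized over the batch.
    txn = env.txn_begin()
    try:
        for key, value in pairs:  # keys and values must be bytes
            database.put(key, value, txn=txn)
    except Exception:
        txn.abort()
        raise
    txn.commit()
```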
The current indexing feels much slower than it ought to be. I am creating this issue to track work on this topic.
See this message by Franek for his observations (I/O wait bottleneck, database caching, OOM issues).
I've got proof-of-concepts for the following topics:
- `update.py`. It is suboptimal. Having to take locks to write into the databases makes everything really slow and (I believe) explains most of the performance issues (caused by I/O wait). This can be confirmed by running the same commands without doing any processing on the output: it is much faster. I have a PoC for solving this: it does all database accesses in the main thread and uses `multiprocessing.Pool` to spawn sub-processes (see the sketch after this list).
- `script.sh`. Some commands are more wasteful than needed. The `sed(1)` call in `list-blobs` is a big bottleneck for no specific reason. This won't be a massive time saver, as we are talking about a second per tag. `find-file-doc-comments.pl` in `parse-docs` is really expensive. We could avoid calling it on files that we know cannot have any doc comments.

Those combined, for the first 5 Linux tags, I get wallclock/usr/sys of 126s/1017s/395s versus 1009s/1341s/490s. For the old `update.py`, I passed my CPU count (i.e. 20) as argument.
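A minimal sketch of that pattern, not the actual PoC: workers do the CPU-bound parsing, and only the main process touches the databases, so no database locks are needed. `expensive_parse` and the `blobs` iterable are hypothetical stand-ins for the real `update.py` logic.

```python
from multiprocessing import Pool

def expensive_parse(blob):
    # Hypothetical stand-in for the real CPU-bound work
    # (hashing, ctags, tokenizing, ...).
    return blob[::-1]

def parse_blob(blob):
    # Runs in a worker process; returns the result to the parent.
    return blob, expensive_parse(blob)

def run_update(blobs, database, nproc=20):
    with Pool(processes=nproc) as pool:
        # Results stream back as workers finish; every database write
        # happens here, in the main process, so no locking is required.
        for blob, result in pool.imap_unordered(parse_blob, blobs):
            database.put(blob, result)  # bsddb put: keys and values are bytes
```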
Those changes will require a way to compare databases; see this message for the reasoning behind that. Solutions are either a custom Python script or a shell script that uses `db_dump -p` and `diff`, as recommended here. A sketch of that idea, driven from Python, follows.
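This is a minimal sketch of the `db_dump -p` + `diff` approach; the database paths are placeholders.

```python
import subprocess
import difflib

def dump(path):
    # `db_dump -p` prints keys and values in a printable text format.
    out = subprocess.run(["db_dump", "-p", path],
                         check=True, capture_output=True, text=True)
    return out.stdout.splitlines(keepends=True)

def compare(old_path, new_path):
    # Unified diff of the two printable dumps; empty means identical.
    diff = difflib.unified_diff(dump(old_path), dump(new_path),
                                fromfile=old_path, tofile=new_path)
    return "".join(diff)

if __name__ == "__main__":
    # Placeholder file names for the two databases being compared.
    print(compare("old/definitions.db", "new/definitions.db")
          or "databases match")
```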
There could however be other topics to improve performance. Are they worth it? That is the question. Probably not.

- Making `script.sh` commands take multiple blobs.
- Dropping `script.sh` and calling `ctags` or tokenizing by ourselves.
- Database values compress well (e.g. with `zstd -1`), which means there is superfluous information. The value format could be optimized, possibly made binary (see the sketch below).
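To illustrate that last point, a hypothetical binary encoding. It assumes, purely for the example, that a value is a list of (blob id, line) pairs stored today as text such as `"123:45\n"`; the real value format differs.

```python
import struct

# Two little-endian uint32s per entry: blob id and line number.
ENTRY = struct.Struct("<II")

def encode(entries):
    # entries: iterable of (blob_id, line) pairs -> compact bytes.
    return b"".join(ENTRY.pack(blob_id, line) for blob_id, line in entries)

def decode(value):
    return [ENTRY.unpack_from(value, off)
            for off in range(0, len(value), ENTRY.size)]

assert decode(encode([(123, 45), (7, 8)])) == [(123, 45), (7, 8)]
```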