Replies: 3 comments
-
Yes, I am aware of the issue, and a few commits have been made to resolve it. We are in the process of moving from LU to unified-latex for parsing; eventually this issue will be resolved automatically.
-
By the way, would it be possible for you to share this project for my local debugging?
-
I thought it over during the night, and it should be fully possible to tweak the working example I gave in the bug report I mentioned earlier. The only difference would be changing the content of the subfile (before it is duplicated): instead of the file containing a single command to mock a file, populate it with actual content (like a full article from Wikipedia). From what I have observed, the delays really become noticeable with a combination of file count and file size (I recommend getting the subfile to 100 KB or more for a proper test). Otherwise, I have sent an invite to my private repo, so you can see exactly what mine is doing.
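A rough sketch of the repro recipe described above (this is not code from the bug report; the directory name, file count, and target size are made-up parameters): generate many plain-text LaTeX subfiles of roughly 100 KB each, plus a main file that \input{}s them all, so the extension's cacher has to parse every one.

```typescript
// Hypothetical repro generator, assuming the project layout described
// above: ~900 subfiles, mostly plain text, each around 100 KB.
import * as fs from 'fs';
import * as path from 'path';

function writeSubfiles(dir: string, count: number, targetBytes: number): string[] {
    fs.mkdirSync(dir, { recursive: true });
    // Filler paragraph repeated until the file reaches the target size
    // (ASCII only, so characters == bytes).
    const paragraph = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit. ';
    const body = paragraph.repeat(Math.ceil(targetBytes / paragraph.length));
    const files: string[] = [];
    for (let i = 0; i < count; i++) {
        const name = `section${i}.tex`;
        fs.writeFileSync(path.join(dir, name), body.slice(0, targetBytes));
        files.push(name);
    }
    return files;
}

// Main file that \input{}s every subfile, so the cacher parses them all.
function writeMain(dir: string, files: string[]): void {
    const inputs = files.map(f => `\\input{${f.replace(/\.tex$/, '')}}`).join('\n');
    fs.writeFileSync(
        path.join(dir, 'main.tex'),
        `\\documentclass{article}\n\\begin{document}\n${inputs}\n\\end{document}\n`
    );
}

// Small demo run; for a realistic test, use count = 900 and
// targetBytes = 100 * 1024 as suggested above.
const files = writeSubfiles('repro', 5, 4 * 1024);
writeMain('repro', files);
```

Opening the generated main.tex in VS Code with the extension active should then exercise the cacher against the whole tree.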
-
This is possibly a continuation of bug #3907. This is more of a discussion, to see whether what I am encountering is a related bug or something outside the scope of this project. I suspect I am hitting either a follow-up bug from the fix made in #3907, overloading of the cacher's memory/storage, or possibly both. If overloading the cacher really is an issue, it would make a working example impractical (due to the size and amount of work needed to build one).
Going by the extension output log (attached, but large), things start off about as expected: the cacher breezes through the files in minimal time, then hits a point where parsing a file takes noticeably longer. It hits such a point a couple more times, with parsing taking increasingly longer each time. Towards the end, a single parse takes over 1,000 ms (total parsing time is over 600,000 ms). The project is just under 900 files, with over 90% of the files below 100 KB in size (all text; no math/tables/graphics of any kind).
output.txt
Now, the part I am not sure about is the details of the cacher. What I think is happening is that the parser is filling the cache up and having to reallocate a larger cache multiple times. If so, that would explain the points where parsing time jumps: the cache/vector hits its allocated size and a new, larger allocation is made (typically ~1.5x-2x the size, as in other languages). If that is what is happening, I'd conclude it falls outside the scope of the project and is something I'll have to live with. Either way, the output log may be useful if it makes a different issue apparent.
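The geometric-growth behaviour speculated about above can be sketched in a few lines (this is a generic illustration of dynamic-array growth, not the extension's actual cacher; the function name and parameters are made up). It counts how often a factor-2 buffer reallocates while inserting entries, showing why insertion cost spikes at capacity boundaries yet stays O(n) amortized overall:

```typescript
// Hypothetical sketch: simulate a cache that grows geometrically
// (doubling) and count reallocations and element copies.
function simulateCacheGrowth(items: number, factor = 2): { reallocations: number; copies: number } {
    let capacity = 1;
    let size = 0;
    let reallocations = 0;
    let copies = 0;
    for (let i = 0; i < items; i++) {
        if (size === capacity) {
            capacity = Math.ceil(capacity * factor);
            reallocations++;
            copies += size; // every existing entry is copied on reallocation
        }
        size++;
    }
    return { reallocations, copies };
}

// For ~900 entries with factor 2 there are only 10 reallocations, and the
// total copy work (1023) stays below 2 * items, i.e. amortized O(n).
console.log(simulateCacheGrowth(900));
```

If the cacher behaves this way, the spikes would appear at roughly doubling file counts, which could be checked against the timestamps in the attached log.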
Extension version v9.11.5
VS Code:
Version: 1.79.2
Commit: 695af097c7bd098fbf017ce3ac85e09bbc5dda06
Date: 2023-06-14T08:59:55.818Z
Electron: 22.5.7
Chromium: 108.0.5359.215
Node.js: 16.17.1
V8: 10.8.168.25-electron.0
OS: Linux x64 6.1.0-9-amd64