Conversation

@muendlein (Contributor) commented May 10, 2025

Fixes issue: #60

before

1000 searches 100%: 34.013s
1000 searches 75%: 25.787s
1000 searches 50%: 16.775s
1000 searches 25%: 9.530s
1000 searches 10%: 3.225s
1000 searches 1%: 280.145ms
1000 searches 0.01%: 11.675ms

after

1000 searches 100%: 18.888s
1000 searches 75%: 13.866s
1000 searches 50%: 9.127s
1000 searches 25%: 5.771s
1000 searches 10%: 1.963s
1000 searches 1%: 208.407ms
1000 searches 0.01%: 15.793ms
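For reference, numbers like these typically come from a timing loop of roughly the following shape. This is an illustrative sketch only; `benchSearch`, the query format, and the `search` call shown here are assumptions, not the project's actual bench script.

```javascript
// Hypothetical benchmark sketch; `benchSearch` and the query layout are
// illustrative, not the project's actual bench code.
function benchSearch(index, queries, label) {
    const start = performance.now();
    let found = 0; // accumulate results so the engine cannot optimize the searches away
    for (const [minX, minY, maxX, maxY] of queries) {
        found += index.search(minX, minY, maxX, maxY).length;
    }
    const elapsed = performance.now() - start;
    console.log(`${queries.length} searches ${label}: ${elapsed.toFixed(3)}ms`);
    return found;
}
```

The percentage labels above refer to the query bbox size relative to the data extent, with one timing per batch of 1000 queries.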

@mourner (Owner) left a comment


Overall this is a really awesome improvement! Thanks for a great contribution. 👍

index.js Outdated
const nodeMinX = this._boxes[pos + 0];
const nodeMinY = this._boxes[pos + 1];
const nodeMaxX = this._boxes[pos + 2];
const nodeMaxY = this._boxes[pos + 3];
@mourner (Owner):

Just curious: would early rejects with if / continue after each of the four reads here make any difference, or is it not worth the added verbosity?
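For concreteness, the two variants under discussion look roughly like this (a sketch assuming the flat `[minX, minY, maxX, maxY, ...]` box layout; function names are illustrative):

```javascript
// Sketch of the two intersection-check variants over the flat box array.
// Combined check: read all four values, then test once.
function intersectsCombined(boxes, pos, minX, minY, maxX, maxY) {
    const nodeMinX = boxes[pos + 0];
    const nodeMinY = boxes[pos + 1];
    const nodeMaxX = boxes[pos + 2];
    const nodeMaxY = boxes[pos + 3];
    return !(maxX < nodeMinX || maxY < nodeMinY || minX > nodeMaxX || minY > nodeMaxY);
}

// Early-reject variant: bail out before performing the next array read.
function intersectsEarlyReject(boxes, pos, minX, minY, maxX, maxY) {
    if (maxX < boxes[pos + 0]) return false;
    if (maxY < boxes[pos + 1]) return false;
    if (minX > boxes[pos + 2]) return false;
    if (minY > boxes[pos + 3]) return false;
    return true;
}
```

Since `||` already short-circuits the comparisons, the only difference is whether all four array reads happen before the first test, which plausibly explains why no measurable difference was observed.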

@muendlein (Contributor, Author):

Does not seem to make a difference when trying that.

@mourner (Owner) commented May 10, 2025

Also wondering: is this increase for smaller queries a fluke, or is there real overhead?

before: 1000 searches 0.01%: 11.675ms
after: 1000 searches 0.01%: 15.793ms

@muendlein (Contributor, Author):

> Also wondering: is this increase for smaller queries a fluke, or is there real overhead?
>
> before: 1000 searches 0.01%: 11.675ms
> after: 1000 searches 0.01%: 15.793ms

I would say both are possible here. The current benchmark is not sophisticated enough to track this down. Overall, the timing for small queries scatters significantly from run to run.

@muendlein (Contributor, Author) commented May 13, 2025

> Also wondering: is this increase for smaller queries a fluke, or is there real overhead?
>
> before: 1000 searches 0.01%: 11.675ms
> after: 1000 searches 0.01%: 15.793ms
>
> I would say both are possible here. The current benchmark is not sophisticated enough to track this down. Overall, the timing for small queries scatters significantly from run to run.

After some more benchmarking, there seems to be an apparent overhead when the number of rectangles in the search area is low.

Edit: After some more debugging, the gap is also present when the relevant code is unreachable, which indicates that it is related to the JavaScript engine itself, or to JavaScript timers not being precise in the first place.

after						before
                                                
1000000 rectangles                              1000000 rectangles
Indexing: 174.82 +- 10.33 ms                    Indexing: 178.94 +- 6.13 ms
1000 searches 100%: 16915.16 +- 27.19 ms        1000 searches 100%: 29411.44 +- 114.
1000 searches 75%: 12091.25 +- 34.09 ms         1000 searches 75%: 20886.99 +- 28.87
1000 searches 50%: 8091.64 +- 16.49 ms          1000 searches 50%: 13845.36 +- 22.50
1000 searches 25%: 5114.00 +- 36.44 ms          1000 searches 25%: 8035.35 +- 30.98 
1000 searches 10%: 1700.65 +- 31.92 ms          1000 searches 10%: 2681.03 +- 30.18 
1000 searches 1%: 185.35 +- 3.87 ms             1000 searches 1%: 213.03 +- 7.76 ms
1000 searches 0.1%: 43.53 +- 2.34 ms            1000 searches 0.1%: 33.70 +- 0.48 ms
1000 searches 0.01%: 14.62 +- 1.44 ms           1000 searches 0.01%: 10.37 +- 0.18 m
                                                
500000 rectangles                               500000 rectangles
Indexing: 86.71 +- 3.26 ms                      Indexing: 83.50 +- 3.08 ms
1000 searches 100%: 7966.13 +- 285.32 ms        1000 searches 100%: 13432.76 +- 37.1
1000 searches 75%: 8280.05 +- 137.75 ms         1000 searches 75%: 12058.49 +- 60.04
1000 searches 50%: 5490.95 +- 115.95 ms         1000 searches 50%: 7963.56 +- 31.44 
1000 searches 25%: 2517.51 +- 85.76 ms          1000 searches 25%: 3677.59 +- 28.52 
1000 searches 10%: 1105.34 +- 16.96 ms          1000 searches 10%: 1454.78 +- 8.84 m
1000 searches 1%: 97.87 +- 4.05 ms              1000 searches 1%: 99.88 +- 2.78 ms
1000 searches 0.1%: 25.27 +- 1.06 ms            1000 searches 0.1%: 18.65 +- 0.73 ms
1000 searches 0.01%: 7.61 +- 0.35 ms            1000 searches 0.01%: 5.55 +- 0.21 ms
                                                
250000 rectangles                               250000 rectangles
Indexing: 38.60 +- 0.35 ms                      Indexing: 40.61 +- 1.07 ms
1000 searches 100%: 3429.81 +- 29.19 ms         1000 searches 100%: 6239.33 +- 13.67
1000 searches 75%: 3609.75 +- 23.50 ms          1000 searches 75%: 5551.44 +- 15.86 
1000 searches 50%: 2388.36 +- 27.00 ms          1000 searches 50%: 3659.61 +- 20.25 
1000 searches 25%: 1084.09 +- 6.67 ms           1000 searches 25%: 1435.32 +- 24.14 
1000 searches 10%: 454.77 +- 11.62 ms           1000 searches 10%: 569.01 +- 22.03 m
1000 searches 1%: 53.35 +- 3.08 ms              1000 searches 1%: 48.03 +- 3.20 ms
1000 searches 0.1%: 10.47 +- 0.23 ms            1000 searches 0.1%: 8.16 +- 0.62 ms
1000 searches 0.01%: 4.00 +- 0.21 ms            1000 searches 0.01%: 3.30 +- 0.75 ms
                                                
100000 rectangles                               100000 rectangles
Indexing: 14.93 +- 0.88 ms                      Indexing: 15.11 +- 1.01 ms
1000 searches 100%: 1415.36 +- 16.48 ms         1000 searches 100%: 2148.95 +- 16.61
1000 searches 75%: 1499.75 +- 20.38 ms          1000 searches 75%: 1881.47 +- 14.69 
1000 searches 50%: 1000.98 +- 13.85 ms          1000 searches 50%: 1141.08 +- 15.79 
1000 searches 25%: 423.34 +- 5.90 ms            1000 searches 25%: 458.26 +- 10.56 m
1000 searches 10%: 118.69 +- 1.89 ms            1000 searches 10%: 113.83 +- 1.30 ms
1000 searches 1%: 19.49 +- 0.55 ms              1000 searches 1%: 14.33 +- 0.14 ms
1000 searches 0.1%: 5.33 +- 0.21 ms             1000 searches 0.1%: 3.34 +- 0.05 ms
1000 searches 0.01%: 2.13 +- 0.03 ms            1000 searches 0.01%: 1.47 +- 0.02 ms
                                                
50000 rectangles                                50000 rectangles
Indexing: 6.94 +- 0.20 ms                       Indexing: 7.13 +- 0.13 ms
1000 searches 100%: 584.78 +- 10.95 ms          1000 searches 100%: 729.12 +- 12.03 
1000 searches 75%: 635.39 +- 15.14 ms           1000 searches 75%: 696.66 +- 7.82 ms
1000 searches 50%: 397.94 +- 16.80 ms           1000 searches 50%: 422.41 +- 8.55 ms
1000 searches 25%: 123.37 +- 2.38 ms            1000 searches 25%: 118.10 +- 1.51 ms
1000 searches 10%: 59.40 +- 1.41 ms             1000 searches 10%: 55.20 +- 2.80 ms
1000 searches 1%: 12.01 +- 0.54 ms              1000 searches 1%: 9.09 +- 0.65 ms
1000 searches 0.1%: 3.19 +- 0.03 ms             1000 searches 0.1%: 2.23 +- 0.10 ms
1000 searches 0.01%: 1.59 +- 0.11 ms            1000 searches 0.01%: 1.16 +- 0.12 ms
                                                
25000 rectangles                                25000 rectangles
Indexing: 3.82 +- 0.46 ms                       Indexing: 3.52 +- 0.08 ms
1000 searches 100%: 328.56 +- 11.57 ms          1000 searches 100%: 407.00 +- 9.08 m
1000 searches 75%: 225.00 +- 3.29 ms            1000 searches 75%: 238.60 +- 6.12 ms
1000 searches 50%: 114.57 +- 4.05 ms            1000 searches 50%: 117.15 +- 4.78 ms
1000 searches 25%: 64.82 +- 3.11 ms             1000 searches 25%: 62.83 +- 1.03 ms
1000 searches 10%: 31.45 +- 1.54 ms             1000 searches 10%: 27.40 +- 1.79 ms
1000 searches 1%: 6.86 +- 0.13 ms               1000 searches 1%: 4.83 +- 0.32 ms
1000 searches 0.1%: 2.13 +- 0.02 ms             1000 searches 0.1%: 1.40 +- 0.02 ms
1000 searches 0.01%: 1.17 +- 0.01 ms            1000 searches 0.01%: 0.82 +- 0.01 ms
                                                
10000 rectangles                                10000 rectangles
Indexing: 3.82 +- 3.65 ms                       Indexing: 3.98 +- 4.70 ms
1000 searches 100%: 62.94 +- 2.13 ms            1000 searches 100%: 98.41 +- 10.85 m
1000 searches 75%: 78.66 +- 3.50 ms             1000 searches 75%: 87.03 +- 9.78 ms
1000 searches 50%: 54.85 +- 0.89 ms             1000 searches 50%: 60.02 +- 6.40 ms
1000 searches 25%: 30.96 +- 0.52 ms             1000 searches 25%: 30.87 +- 2.78 ms
1000 searches 10%: 16.13 +- 0.88 ms             1000 searches 10%: 14.25 +- 1.59 ms
1000 searches 1%: 3.89 +- 0.45 ms               1000 searches 1%: 2.45 +- 0.20 ms
1000 searches 0.1%: 1.48 +- 0.15 ms             1000 searches 0.1%: 0.92 +- 0.01 ms
1000 searches 0.01%: 0.93 +- 0.10 ms            1000 searches 0.01%: 0.68 +- 0.11 ms

@mourner (Owner) commented May 18, 2025

@muendlein might be worth increasing the number of searches between timings for a possibly more reliable measurement. Does the last commit help? I'll likely land this anyway eventually since it's an obvious net positive, just wanted to explore whether we could minimize the hit on smaller queries.

@muendlein (Contributor, Author):

> @muendlein might be worth increasing the number of searches between timings for a possibly more reliable measurement. Does the last commit help? I'll likely land this anyway eventually since it's an obvious net positive, just wanted to explore whether we could minimize the hit on smaller queries.

@mourner The last commit brought only a minor improvement, which doesn't close the gap.
After some more playing around, I don't think increasing the number of searches will have any impact, as the results are already consistently repeatable. After observing that the gap still exists even when the code is never reached, I'm unfortunately out of ideas.

@mourner (Owner) commented May 18, 2025

> After observing that the gap still exists even when the code is never reached, I'm unfortunately out of ideas.

One idea is to set an empirical threshold on the query area relative to the data bounds: above it we apply the "all in query bounds" logic, below it we keep the existing logic. The overhead of calculating that area for a single query should be small.

@muendlein (Contributor, Author) commented May 18, 2025

> After observing that the gap still exists even when the code is never reached, I'm unfortunately out of ideas.
>
> One idea is to set an empirical threshold on the query area relative to the data bounds: above it we apply the "all in query bounds" logic, below it we keep the existing logic. The overhead of calculating that area for a single query should be small.

Edit: At least the issue with unreachable code paths can be solved with a separate function.

@mourner I already tried something similar; it does not have an impact. All my testing points towards JIT compiler issues.

To make this clearer, here are two unreachable-code examples.

Example 1:

This is exactly the old logic, apart from the additional if (10 < 5). Unfortunately it brings no improvement compared to the new logic of this PR.

Before:

100000 rectangles
Indexing: 14.65 +- 0.72 ms
1000 searches 10%: 125.01 +- 8.72 ms
1000 searches 1%: 15.09 +- 1.02 ms
1000 searches 0.1%: 3.71 +- 0.37 ms
1000 searches 0.01%: 1.51 +- 0.07 ms

This PR:

100000 rectangles
Indexing: 14.15 +- 0.13 ms
1000 searches 10%: 111.84 +- 1.59 ms
1000 searches 1%: 16.88 +- 0.28 ms
1000 searches 0.1%: 4.90 +- 0.10 ms
1000 searches 0.01%: 2.00 +- 0.03 ms

Unreachable example 1:

100000 rectangles
Indexing: 14.21 +- 0.21 ms
1000 searches 10%: 157.39 +- 5.32 ms
1000 searches 1%: 19.68 +- 0.53 ms
1000 searches 0.1%: 4.44 +- 0.08 ms
1000 searches 0.01%: 2.02 +- 0.14 ms
        if (this._pos !== this._boxes.length) {
            throw new Error('Data not yet indexed - call index.finish().');
        }

        /** @type number | undefined */
        let nodeIndex = this._boxes.length - 4;
        const queue = [];
        const results = [];

        while (nodeIndex !== undefined) {
            // find the end index of the node
            const end = Math.min(nodeIndex + this.nodeSize * 4, upperBound(nodeIndex, this._levelBounds));

            // search through child nodes
            for (let /** @type number */ pos = nodeIndex; pos < end; pos += 4) {
                const nodeMinX = this._boxes[pos + 0];
                const nodeMinY = this._boxes[pos + 1];
                const nodeMaxX = this._boxes[pos + 2];
                const nodeMaxY = this._boxes[pos + 3];

                // check if node bbox intersects with query bbox
                if (maxX < nodeMinX || maxY < nodeMinY || minX > nodeMaxX || minY > nodeMaxY) {
                    continue;
                }

                const index = this._indices[pos >> 2] | 0;

                if (nodeIndex >= this.numItems * 4) {
                    if (10 < 5) {
                        // check if node bbox is completely inside query bbox
                        if (minX <= nodeMinX && minY <= nodeMinY && maxX >= nodeMaxX && maxY >= nodeMaxY) {
                            let posStart = pos;
                            let posEnd = pos;

                            // depth search while not leaf
                            while (posStart >= this.numItems * 4) {
                                posStart = this._indices[posStart >> 2] | 0;
                                const posEndStart = this._indices[posEnd >> 2] | 0;
                                posEnd = Math.min(posEndStart + this.nodeSize * 4, upperBound(posEndStart, this._levelBounds)) - 4;
                            }

                            for (let /** @type number */ leafPos = posStart; leafPos <= posEnd; leafPos += 4) {
                                const leafIndex = this._indices[leafPos >> 2];
                                if (filterFn === undefined || filterFn(leafIndex)) {
                                    results.push(leafIndex); // leaf item
                                }
                            }
                        }
                    } else {
                        queue.push(index); // node; add it to the search queue
                    }
                } else if (filterFn === undefined || filterFn(index)) {
                    results.push(index); // leaf item
                }
            }

            nodeIndex = queue.pop();
        }

        return results;

Example 2:

Same as example 1, but with most of the logic inside the unreachable if statement removed. Now the performance of this example matches the "before" numbers. Right now I don't have a good explanation for this other than certain compiler optimizations.

Before:

100000 rectangles
Indexing: 14.35 +- 0.36 ms
1000 searches 10%: 115.16 +- 2.56 ms
1000 searches 1%: 14.23 +- 0.13 ms
1000 searches 0.1%: 3.41 +- 0.10 ms
1000 searches 0.01%: 1.44 +- 0.01 ms

This PR:

100000 rectangles
Indexing: 14.55 +- 0.16 ms
1000 searches 10%: 112.36 +- 2.86 ms
1000 searches 1%: 18.17 +- 2.15 ms
1000 searches 0.1%: 4.64 +- 0.13 ms
1000 searches 0.01%: 2.06 +- 0.08 ms

Unreachable example 2:

100000 rectangles
Indexing: 14.23 +- 0.15 ms
1000 searches 10%: 114.44 +- 1.03 ms
1000 searches 1%: 14.30 +- 0.18 ms
1000 searches 0.1%: 3.35 +- 0.05 ms
1000 searches 0.01%: 1.48 +- 0.03 ms
        if (this._pos !== this._boxes.length) {
            throw new Error('Data not yet indexed - call index.finish().');
        }

        /** @type number | undefined */
        let nodeIndex = this._boxes.length - 4;
        const queue = [];
        const results = [];

        while (nodeIndex !== undefined) {
            // find the end index of the node
            const end = Math.min(nodeIndex + this.nodeSize * 4, upperBound(nodeIndex, this._levelBounds));

            // search through child nodes
            for (let /** @type number */ pos = nodeIndex; pos < end; pos += 4) {
                const nodeMinX = this._boxes[pos + 0];
                const nodeMinY = this._boxes[pos + 1];
                const nodeMaxX = this._boxes[pos + 2];
                const nodeMaxY = this._boxes[pos + 3];

                // check if node bbox intersects with query bbox
                if (maxX < nodeMinX || maxY < nodeMinY || minX > nodeMaxX || minY > nodeMaxY) {
                    continue;
                }

                const index = this._indices[pos >> 2] | 0;

                if (nodeIndex >= this.numItems * 4) {
                    if (10 < 5) {
                        // check if node bbox is completely inside query bbox
                        if (minX <= nodeMinX && minY <= nodeMinY && maxX >= nodeMaxX && maxY >= nodeMaxY) {
                            let posStart = pos;
                            let posEnd = pos;
                        }
                    } else {
                        queue.push(index); // node; add it to the search queue
                    }
                } else if (filterFn === undefined || filterFn(index)) {
                    results.push(index); // leaf item
                }
            }

            nodeIndex = queue.pop();
        }

        return results;

@mourner (Owner) commented May 18, 2025

@muendlein I've seen similar behavior before, and my guess is that it's because of v8 inlining. Over a certain threshold of complexity or size, v8 stops inlining the function, which makes it slower for small payloads. Might be worth experimenting with cutting out some of the logic into a separate function so that most of the hot path code in search remains inlineable.
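The suggested refactor can be sketched as pulling the rarely-taken contained-node branch out of the hot loop into its own function, so the hot loop stays small enough for the engine to inline. Names here are illustrative, not the PR's actual code.

```javascript
// Cold path extracted into its own function: dumping a contiguous run of
// leaves. `indices` mirrors the index's flat _indices array.
function collectLeaves(indices, posStart, posEnd, results) {
    for (let pos = posStart; pos <= posEnd; pos += 4) {
        results.push(indices[pos >> 2]);
    }
}

// Hot loop stays tiny: per candidate, either take the rare contained-node
// path (out of line) or the common single-item path (inline).
function searchHotLoop(indices, candidates, results) {
    for (const [contained, posStart, posEnd, index] of candidates) {
        if (contained) {
            collectLeaves(indices, posStart, posEnd, results); // cold, out of line
        } else {
            results.push(index); // hot path remains inlineable
        }
    }
}
```

The point is not the candidate bookkeeping (which the real tree traversal supplies) but the split itself: the body of `searchHotLoop` no longer carries the size of the contained-node logic.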

@muendlein (Contributor, Author) commented May 18, 2025

> @muendlein I've seen similar behavior before, and my guess is that it's because of v8 inlining. Over a certain threshold of complexity or size, v8 stops inlining the function, which makes it slower for small payloads. Might be worth experimenting with cutting out some of the logic into a separate function so that most of the hot path code in search remains inlineable.

@mourner I just tested this (see my edit above); at least it fixes the issue with unreachable code paths. For now I only see small improvements in the "real" scenario, but at least I have a good starting point for more experiments.

@muendlein (Contributor, Author):

With the latest commit the gap has been reduced but is still there.

after last commit              			 before this PR

1000000 rectangles				 1000000 rectangles
Indexing: 182.94 +- 11.81 ms                     Indexing: 178.94 +- 6.13 ms
1000 searches 100%: 17005.09 +- 410.28 ms        1000 searches 100%: 29411.44 +- 114.
1000 searches 75%: 12021.44 +- 68.88 ms          1000 searches 75%: 20886.99 +- 28.87
1000 searches 50%: 8026.82 +- 22.21 ms           1000 searches 50%: 13845.36 +- 22.50
1000 searches 25%: 5073.42 +- 44.18 ms           1000 searches 25%: 8035.35 +- 30.98 
1000 searches 10%: 1687.99 +- 14.58 ms           1000 searches 10%: 2681.03 +- 30.18 
1000 searches 1%: 155.26 +- 1.00 ms              1000 searches 1%: 213.03 +- 7.76 ms
1000 searches 0.1%: 35.50 +- 0.16 ms             1000 searches 0.1%: 33.70 +- 0.48 ms
1000 searches 0.01%: 12.07 +- 0.15 ms            1000 searches 0.01%: 10.37 +- 0.18 m
                                                 
500000 rectangles                                500000 rectangles
Indexing: 79.79 +- 0.90 ms                       Indexing: 83.50 +- 3.08 ms
1000 searches 100%: 7501.64 +- 382.36 ms         1000 searches 100%: 13432.76 +- 37.1
1000 searches 75%: 7513.90 +- 37.22 ms           1000 searches 75%: 12058.49 +- 60.04
1000 searches 50%: 5002.77 +- 24.10 ms           1000 searches 50%: 7963.56 +- 31.44 
1000 searches 25%: 2303.79 +- 10.96 ms           1000 searches 25%: 3677.59 +- 28.52 
1000 searches 10%: 1013.43 +- 4.51 ms            1000 searches 10%: 1454.78 +- 8.84 m
1000 searches 1%: 82.99 +- 5.88 ms               1000 searches 1%: 99.88 +- 2.78 ms
1000 searches 0.1%: 20.10 +- 0.60 ms             1000 searches 0.1%: 18.65 +- 0.73 ms
1000 searches 0.01%: 6.54 +- 0.18 ms             1000 searches 0.01%: 5.55 +- 0.21 ms
                                                 
250000 rectangles                                250000 rectangles
Indexing: 38.87 +- 0.43 ms                       Indexing: 40.61 +- 1.07 ms
1000 searches 100%: 3236.10 +- 14.80 ms          1000 searches 100%: 6239.33 +- 13.67
1000 searches 75%: 3418.34 +- 39.25 ms           1000 searches 75%: 5551.44 +- 15.86 
1000 searches 50%: 2260.82 +- 18.72 ms           1000 searches 50%: 3659.61 +- 20.25 
1000 searches 25%: 1037.22 +- 11.76 ms           1000 searches 25%: 1435.32 +- 24.14 
1000 searches 10%: 416.20 +- 9.52 ms             1000 searches 10%: 569.01 +- 22.03 m
1000 searches 1%: 48.00 +- 2.87 ms               1000 searches 1%: 48.03 +- 3.20 ms
1000 searches 0.1%: 9.34 +- 0.53 ms              1000 searches 0.1%: 8.16 +- 0.62 ms
1000 searches 0.01%: 3.71 +- 0.21 ms             1000 searches 0.01%: 3.30 +- 0.75 ms
                                                 
100000 rectangles                                100000 rectangles
Indexing: 14.42 +- 0.52 ms                       Indexing: 15.11 +- 1.01 ms
1000 searches 100%: 1343.67 +- 19.48 ms          1000 searches 100%: 2148.95 +- 16.61
1000 searches 75%: 1414.79 +- 16.05 ms           1000 searches 75%: 1881.47 +- 14.69 
1000 searches 50%: 934.25 +- 5.47 ms             1000 searches 50%: 1141.08 +- 15.79 
1000 searches 25%: 396.69 +- 12.29 ms            1000 searches 25%: 458.26 +- 10.56 m
1000 searches 10%: 108.51 +- 3.56 ms             1000 searches 10%: 113.83 +- 1.30 ms
1000 searches 1%: 15.83 +- 0.08 ms               1000 searches 1%: 14.33 +- 0.14 ms
1000 searches 0.1%: 4.31 +- 0.14 ms              1000 searches 0.1%: 3.34 +- 0.05 ms
1000 searches 0.01%: 1.88 +- 0.02 ms             1000 searches 0.01%: 1.47 +- 0.02 ms
                                                 
50000 rectangles                                 50000 rectangles
Indexing: 7.12 +- 0.41 ms                        Indexing: 7.13 +- 0.13 ms
1000 searches 100%: 560.33 +- 14.39 ms           1000 searches 100%: 729.12 +- 12.03 
1000 searches 75%: 590.22 +- 12.87 ms            1000 searches 75%: 696.66 +- 7.82 ms
1000 searches 50%: 378.47 +- 20.60 ms            1000 searches 50%: 422.41 +- 8.55 ms
1000 searches 25%: 108.08 +- 1.83 ms             1000 searches 25%: 118.10 +- 1.51 ms
1000 searches 10%: 51.08 +- 2.27 ms              1000 searches 10%: 55.20 +- 2.80 ms
1000 searches 1%: 9.81 +- 0.47 ms                1000 searches 1%: 9.09 +- 0.65 ms
1000 searches 0.1%: 2.72 +- 0.14 ms              1000 searches 0.1%: 2.23 +- 0.10 ms
1000 searches 0.01%: 1.38 +- 0.03 ms             1000 searches 0.01%: 1.16 +- 0.12 ms
                                                 
25000 rectangles                                 25000 rectangles
Indexing: 3.95 +- 1.02 ms                        Indexing: 3.52 +- 0.08 ms
1000 searches 100%: 322.03 +- 8.93 ms            1000 searches 100%: 407.00 +- 9.08 m
1000 searches 75%: 208.44 +- 5.53 ms             1000 searches 75%: 238.60 +- 6.12 ms
1000 searches 50%: 101.36 +- 1.83 ms             1000 searches 50%: 117.15 +- 4.78 ms
1000 searches 25%: 54.99 +- 0.97 ms              1000 searches 25%: 62.83 +- 1.03 ms
1000 searches 10%: 25.82 +- 0.21 ms              1000 searches 10%: 27.40 +- 1.79 ms
1000 searches 1%: 5.50 +- 0.16 ms                1000 searches 1%: 4.83 +- 0.32 ms
1000 searches 0.1%: 1.80 +- 0.02 ms              1000 searches 0.1%: 1.40 +- 0.02 ms
1000 searches 0.01%: 1.06 +- 0.01 ms             1000 searches 0.01%: 0.82 +- 0.01 ms
                                                 
10000 rectangles                                 10000 rectangles
Indexing: 4.95 +- 5.92 ms                        Indexing: 3.98 +- 4.70 ms
1000 searches 100%: 59.80 +- 1.96 ms             1000 searches 100%: 98.41 +- 10.85 m
1000 searches 75%: 67.49 +- 1.39 ms              1000 searches 75%: 87.03 +- 9.78 ms
1000 searches 50%: 48.07 +- 0.82 ms              1000 searches 50%: 60.02 +- 6.40 ms
1000 searches 25%: 26.20 +- 0.77 ms              1000 searches 25%: 30.87 +- 2.78 ms
1000 searches 10%: 13.59 +- 0.34 ms              1000 searches 10%: 14.25 +- 1.59 ms
1000 searches 1%: 3.06 +- 0.34 ms                1000 searches 1%: 2.45 +- 0.20 ms
1000 searches 0.1%: 1.17 +- 0.02 ms              1000 searches 0.1%: 0.92 +- 0.01 ms
1000 searches 0.01%: 0.80 +- 0.01 ms             1000 searches 0.01%: 0.68 +- 0.11 ms

@mourner (Owner) commented May 20, 2025

@muendlein this one looks much better!

  • I guess you can inline addLeafSegment for simplicity since it shouldn't affect inlining of the main search function, right?
  • Now that inlining happens again, perhaps let's try the heuristic again? (Don't do minX <= nodeMinX && minY <= nodeMinY && maxX >= nodeMaxX && maxY >= nodeMaxY check if the query area is small enough)

@muendlein (Contributor, Author):

@mourner Inlining addLeafSegment into addAllLeavesOfNode does not have a performance impact (I just pushed the commit).

What exactly is your idea about the heuristic approach to check the relative query size? Especially for unbalanced distributions I'm not sure if this is even feasible.

@mourner (Owner) commented May 20, 2025

@muendlein so, if I do fb78a2e and then put if (false && minX <= boxes[pos] ... to make that branch unreachable, small queries run as fast as before. So maybe we could do if (isSmallQuery && minX <= boxes[pos] ..., picking isSmallQuery based on the query bbox compared to the data bbox somewhat empirically. The worst that can happen if we pick badly is that the query will be slightly slower, but most of the time it should be as fast as before for small queries and much faster for larger queries (when the optimization kicks in).

@muendlein (Contributor, Author) commented May 21, 2025

> @muendlein so, if I do fb78a2e and then put if (false && minX <= boxes[pos] ... to make that branch unreachable, small queries run as fast as before. So maybe we could do if (isSmallQuery && minX <= boxes[pos] ..., picking isSmallQuery based on the query bbox compared to the data bbox somewhat empirically. The worst that can happen if we pick badly is that the query will be slightly slower, but most of the time it should be as fast as before for small queries and much faster for larger queries (when the optimization kicks in).

I assume you mean !isSmallQuery in the if statement?
In the worst case it can be significantly slower, not just slightly. That happens when the search area is large but we guess it is a small query: we then skip the optimized path and fall back to the performance before this PR. Since the greatest speedup comes exactly from such large search areas, the penalty for a wrong guess is pretty high.

This brings me to the next point, namely how to create an empirical yet reliable and fast isSmallQuery function. After playing around with possible ideas, I personally don't think this is feasible, but I'm open to any suggestions.

@mourner (Owner) commented May 21, 2025

> the penalty for a wrong guess is pretty high

It's high compared to the case when we guess right; however it's very small compared to not landing this PR. So we need to decide what's better: accept a guaranteed notable performance drop for small queries (arguably the more prevalent case in real world apps than big queries), or accept that we'll sometimes guess wrong and the performance will be as before.

I'd probably try the simplest metric possible, e.g. const isSmallQuery = queryArea / this._dataBBoxArea < 0.01 (or some other threshold we pick experimentally).
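As a standalone sketch of that metric (`dataBBoxArea` would be precomputed once in finish() from the data bounding box; the 0.01 threshold is the placeholder value from above, not a tuned one):

```javascript
// Sketch of the proposed heuristic. dataBBoxArea is assumed to be the area
// of the full data bounding box, computed once during indexing.
function isSmallQuery(minX, minY, maxX, maxY, dataBBoxArea, threshold = 0.01) {
    const queryArea = (maxX - minX) * (maxY - minY);
    return queryArea < threshold * dataBBoxArea;
}
```

The per-query cost is two subtractions, two multiplications, and a comparison, so the check itself is negligible next to the tree traversal.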

@muendlein (Contributor, Author):

@mourner I think you are asking the right question about what the better choice is here. Given the variety of datasets and query use cases, I don't think a one-size-fits-all solution exists. For example, your proposed simple metric can work great for well-distributed datasets but may yield undesired performance on strongly unbalanced datasets.
Personally, I think the best option is therefore to allow some user control from the outside, in combination with your proposal.
The simplest way to achieve this is to make the threshold an input parameter with an empirically derived default.

Threshold = 0: always run the optimized path (best performance for large queries)
Threshold between 0 and 1: best but non-deterministic performance
Threshold >= 1: always run the standard path (best performance for small queries)

This ensures that users themselves can optimize search performance for their respective dataset.
What do you think?
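The proposed knob could look roughly like this (a hypothetical API sketch; the class shape and the `largeQueryThreshold` name are illustrative, not the library's actual interface):

```javascript
// Hypothetical sketch of a user-tunable threshold; names are illustrative.
class IndexWithThreshold {
    constructor(dataArea, {largeQueryThreshold = 0.5} = {}) {
        this.dataArea = dataArea; // precomputed from the data bounds
        this.largeQueryThreshold = largeQueryThreshold;
    }

    // true => take the optimized "all in query bounds" path
    useOptimizedPath(minX, minY, maxX, maxY) {
        const queryArea = (maxX - minX) * (maxY - minY);
        return queryArea >= this.largeQueryThreshold * this.dataArea;
    }
}
```

A threshold of 0 always picks the optimized path, while values >= 1 pick it only for queries at least as large as the data bounds, matching the three regimes listed above.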

@leeoniya commented May 22, 2025

👍 to exposing an option if the regression is unavoidable. "mouse hover single data point in a scatterplot of 300k points" is something i would prefer not to regress.

@muendlein (Contributor, Author) commented May 22, 2025

> 👍 to exposing an option if the regression is unavoidable. "mouse hover single data point in a scatterplot of 300k points" is something i would prefer not to regress.

@leeoniya Even with the regression, I'm pretty sure you won't observe any difference, as it is relative, not absolute. When talking about hovering, there are much slower processes involved.
For context: at 300k points you are looking at <50 microseconds for 10 searches.

@mourner (Owner) commented May 22, 2025

I'm hesitant about introducing such an option, because I'd like to keep the library simple, minimal, and working perfectly out of the box, and this parameter is pretty confusing and difficult to explain. I think it's fine if there are some weird edge cases with heavily imbalanced datasets where the optimization doesn't kick in, as long as the library as a whole performs great most of the time. So I'd still try to explore the heuristic approach, if there are no other ideas on how to address the small-query regression.

@muendlein (Contributor, Author) commented May 22, 2025

@mourner I just tested the simplest heuristic approach, and it seems like we are back to fighting the compiler.
Approach (note: dataArea is already calculated during finish): const isLargeQuery = ((maxX - minX) * (maxY - minY)) > (0.5 * this._dataArea);

Basically, as soon as if (false && minX <= boxes[pos] ... becomes an expression instead of a constant, we get a performance penalty that puts it on par with the current state of this PR.
Additionally, some minor performance can be gained if the function never sees a large query.
Now I'm basically out of ideas.

before this PR

500000 rectangles
Indexing: 85.42 +- 2.83 ms
1000 searches 1%: 99.98 +- 6.24 ms
1000 searches 0.1%: 18.11 +- 0.43 ms
1000 searches 0.01%: 5.54 +- 0.40 ms
1000 searches 0.001%: 3.27 +- 0.24 ms
1000 searches 0.0001%: 2.52 +- 0.11 ms
1000 searches 0.00001%: 2.43 +- 0.11 ms

PR without any heuristics

500000 rectangles
Indexing: 85.83 +- 3.29 ms
1000 searches 1%: 86.02 +- 6.64 ms
1000 searches 0.1%: 20.96 +- 0.28 ms
1000 searches 0.01%: 6.95 +- 0.11 ms
1000 searches 0.001%: 4.02 +- 0.11 ms
1000 searches 0.0001%: 3.29 +- 0.22 ms
1000 searches 0.00001%: 3.23 +- 0.43 ms

PR with simple heuristics

500000 rectangles
Indexing: 86.27 +- 3.79 ms
1000 searches 1%: 123.81 +- 2.08 ms
1000 searches 0.1%: 22.27 +- 0.61 ms
1000 searches 0.01%: 6.97 +- 0.07 ms
1000 searches 0.001%: 4.13 +- 0.19 ms
1000 searches 0.0001%: 3.33 +- 0.14 ms
1000 searches 0.00001%: 3.04 +- 0.05 ms

PR with simple heuristics and warm up without large query during warmup.

500000 rectangles
Indexing: 89.04 +- 6.29 ms
1000 searches 1%: 116.74 +- 5.53 ms
1000 searches 0.1%: 21.70 +- 1.23 ms
1000 searches 0.01%: 6.70 +- 0.63 ms
1000 searches 0.001%: 3.72 +- 0.05 ms
1000 searches 0.0001%: 3.18 +- 0.03 ms
1000 searches 0.00001%: 2.83 +- 0.05 ms

@mourner (Owner) commented May 22, 2025

@muendlein all right, let this sit for a few days more, I'll try to play with it a bit... As a last resort, we could just add a duplicate method, e.g. searchLarge that has the optimization, and give the user a binary choice. But ideally we'd find a way to consolidate somehow...
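The "duplicate method" fallback could be sketched as two entry points that return the same results, with only the large-query variant carrying the contained-node shortcut. This is a hypothetical shape over a trivial flat list of boxes; `searchLarge` is the name floated above, everything else is illustrative (the real index would share its tree traversal between both methods).

```javascript
// Sketch of the two-method fallback; the real index operates on a packed
// tree, not a plain array of boxes.
class TwoPathIndex {
    constructor(boxes) { this.boxes = boxes; } // [[minX, minY, maxX, maxY], ...]

    // Small-query entry point: plain intersection test per box.
    search(minX, minY, maxX, maxY) {
        const results = [];
        this.boxes.forEach(([a, b, c, d], i) => {
            if (!(maxX < a || maxY < b || minX > c || minY > d)) results.push(i);
        });
        return results;
    }

    // Large-query entry point: same results, but checks containment first;
    // in the real tree, a fully contained node lets us dump whole subtrees
    // without further per-item intersection tests.
    searchLarge(minX, minY, maxX, maxY) {
        const results = [];
        this.boxes.forEach(([a, b, c, d], i) => {
            if (minX <= a && minY <= b && maxX >= c && maxY >= d) {
                results.push(i); // fully contained: no edge tests needed
            } else if (!(maxX < a || maxY < b || minX > c || minY > d)) {
                results.push(i); // partial overlap: fall back to intersection
            }
        });
        return results;
    }
}
```

The binary choice moves the "is this a large query?" decision from a heuristic inside the library to an explicit call-site decision by the user.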

@muendlein (Contributor, Author):

@mourner As some time has passed, I'm wondering whether you've had time to play around with it yet?
