Conversation

@dg-pb (Contributor) commented Sep 15, 2025

V1 Info (outdated)

Currently, the adaptivity is simple:

  1. Record the index at which the last value was insorted (last).
  2. Record the distance between the last two insertion points: diff = abs(new_idx - last).
  3. Start each new search with:
    1. Take the first midpoint to be last.
    2. Take the next midpoint to be last += diff.
    3. Repeat (2) once more.
  4. It always finishes off with a plain binarysort.

It is primarily targeted at data that is already sorted to a significant degree (e.g. stock price data). However, it happens to handle some other patterns as well.

For example, [-1, 1, -2, 2, -3, 3, ...]: diff will always be the full length of the sorted part, so the search jumps from one end to the other in a single step.
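
To make the steps concrete, here is a minimal pure-Python sketch of that schedule (my reading of the list above, not the PR's C code; the names guided_insertion_point and adaptive_binarysort are invented, and the real patch works in place inside binarysort rather than via insert/pop):

import random

def guided_insertion_point(a, hi, pivot, probes):
    """Insertion point for `pivot` in sorted a[:hi]: try the guessed
    indices in `probes` first, then fall back to plain bisection."""
    lo = 0
    for m in probes:                 # steps 3.1-3.3: guided midpoints
        if lo >= hi:
            break
        m = min(max(m, lo), hi - 1)  # clamp the guess into the live range
        if pivot < a[m]:
            hi = m
        else:
            lo = m + 1
    while lo < hi:                   # step 4: plain binary search
        m = (lo + hi) >> 1
        if pivot < a[m]:
            hi = m
        else:
            lo = m + 1
    return lo

def adaptive_binarysort(a):
    last = diff = 0
    for i in range(1, len(a)):
        pivot = a[i]
        idx = guided_insertion_point(
            a, i, pivot, (last, last + diff, last + 2 * diff))
        diff = abs(idx - last)       # step 2
        last = idx                   # step 1
        a.insert(idx, a.pop(i))      # shift-and-place, as the C code does
    return a

data = [random.random() for _ in range(1_000)]
assert adaptive_binarysort(data[:]) == sorted(data)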

Microbenchmarks

PYMAIN=/Users/Edu/local/code/cpython/main/python.exe
PYNEW=/Users/Edu/local/code/cpython/wt1/python.exe

S="
import random
import itertools as itl
RND = [random.random() for _ in range(100_000)]
RWK = [random.randint(-1, 3) for _ in range(100_000)]
RWK = list(itl.accumulate(RWK))

RNDW = [[i] for i in RND]
RWKW = [[i] for i in RWK]
"
# RAW SMALL
$PYMAIN -m timeit -s "$S" "sorted(RND[:30])"  # 0.72 µs
$PYNEW  -m timeit -s "$S" "sorted(RND[:30])"  # 0.85 µs
$PYMAIN -m timeit -s "$S" "sorted(RWK[:30])"  # 0.65 µs
$PYNEW  -m timeit -s "$S" "sorted(RWK[:30])"  # 0.58 µs

# WRAPPED SMALL
$PYMAIN -m timeit -s "$S" "sorted(RNDW[:30])" # 4.3 µs
$PYNEW  -m timeit -s "$S" "sorted(RNDW[:30])" # 4.6 µs
$PYMAIN -m timeit -s "$S" "sorted(RWKW[:30])" # 2.8 µs
$PYNEW  -m timeit -s "$S" "sorted(RWKW[:30])" # 1.6 µs

# RAW
$PYMAIN -m timeit -s "$S" "sorted(RND)"   # 16.0 ms
$PYNEW  -m timeit -s "$S" "sorted(RND)"   # 16.0 ms
$PYMAIN -m timeit -s "$S" "sorted(RWK)"   #  2.5 ms
$PYNEW  -m timeit -s "$S" "sorted(RWK)"   #  2.3 ms

# WRAPPED
$PYMAIN -m timeit -s "$S" "sorted(RNDW)"  # 104 ms
$PYNEW  -m timeit -s "$S" "sorted(RNDW)"  # 102 ms
$PYMAIN -m timeit -s "$S" "sorted(RWKW)"  #  14.5 ms
$PYNEW  -m timeit -s "$S" "sorted(RWKW)"  #   8.3 ms

For the optimised comparison functions this has little effect. As can be seen, the worst case is small random data. But just as small data feels the largest adverse effect, it also sees the largest benefit, since a greater (or the entire) portion of the data is sorted with binarysort alone.

For costly comparisons, however, the impact is non-trivial. list.__lt__ is probably the cheapest of the possible wrapper comparisons; for a user-implemented pure-Python __lt__, the impact would be greater.
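
As an illustration of such a costly comparison (my example, not part of the benchmark suite above), a wrapper like this forces every comparison through interpreted bytecode:

class Key:
    """Each __lt__ call runs Python code, so sort() spends its time
    in comparisons rather than in the sorting machinery itself."""
    __slots__ = ("value",)

    def __init__(self, value):
        self.value = value

    def __lt__(self, other):
        return self.value < other.value

# sorted(map(Key, RWK)) should then stress the comparison path even
# harder than the RWKW (list-wrapped) benchmark does.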

V3: Getting closer to the desired result.

Raw integers & floats (specialised comparison functions):

[benchmark chart: unwrapped]

The above wrapped into lists:

[benchmark chart: wrapped]

  1. Any tips for low-level optimisation are welcome.
  2. Any ideas on a better adaptivity strategy are welcome as well.

@tim-one self-assigned this Sep 17, 2025
@pochmann3 (Contributor) commented:

Since you asked for more ideas... Tim and I once talked about things like this here: #116939

@dg-pb (Contributor, Author) commented Sep 17, 2025

> And another idea was to use statistics and switch between strategies, similar to what you do in galloping-or-not. Like tracking the insertion point averages, and if they're usually in the middle, then use raw binary searches, but if they're usually towards the end, then use the optimistic or exponential variation, and if they're usually near the start, then do optimistic/exponential from the start. The strategy could be chosen either per new pivot element or just per binarysearch invocation.

This is pretty much what I have done to incorporate it without damaging the performance of non-target cases. In many ways it resembles the galloping approach: it switches on/off and grows a "time-off" parameter on failed attempts.
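
In case it helps, that on/off mechanism behaves roughly like this toy controller (the names, thresholds, and doubling schedule are my guesses at the idea, not the actual C logic in the patch):

class AdaptivityGate:
    """Stay in guided mode while guesses pay off; after a failed
    attempt, sit out for a while, backing off harder each time."""

    def __init__(self):
        self.off_for = 0     # searches left to run unguided
        self.penalty = 1     # how long the next time-off will be

    def should_guide(self):
        if self.off_for > 0:
            self.off_for -= 1
            return False
        return True

    def report(self, success):
        if success:
            self.penalty = 1             # good guess: reset the backoff
        else:
            self.off_for = self.penalty  # failed guess: switch off...
            self.penalty *= 2            # ...longer on each failure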

@AlanCristhian commented:
I have a simpler Python implementation of the adaptive algorithm. I made it look like the C implementation, kind of.

def adaptive_binary_insertion_sort(a, n=0, ok=0):
    # Signature mirrors the C binarysort: a[:ok+1] is assumed sorted,
    # a[ok+1:n] still needs to be inserted.
    n = n or len(a)
    last = 0
    for ok in range(ok + 1, n):
        pivot = a[ok]
        L = 0
        R = ok - 1   # Ensures that pivot will not compare with itself.

        # M is the index of the element that will be compared with
        # the pivot, so start from the previous insertion point.
        M = last
        while L <= R:
            if pivot < a[M]:
                R = M - 1
                last = M  # Candidate insertion point moves left.
            else:
                L = M + 1
                last = L  # Candidate insertion point moves right.
            M = (L + R) >> 1
        if last < ok:  # Don't move the element to its existing location.
            for M in range(ok, last, -1):
                a[M] = a[M - 1]
            a[last] = pivot  # Move pivot to its final position.

It's so simple that I think it can be implemented by modifying a few lines of the original binarysort. But I have zero real-life experience with C.
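
For what it's worth, a quick self-check (my addition, not from the comment above) confirms the function agrees with sorted() on random input:

import random

data = [random.randint(-5, 5) for _ in range(1_000)]
expected = sorted(data)
adaptive_binary_insertion_sort(data)   # sorts in place
assert data == expected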

@dg-pb (Contributor, Author) commented Sep 20, 2025

> I have a simpler Python implementation of the adaptive algorithm.

I used your idea of taking the expectation to simply be the last value; I was overcomplicating things a bit there. This also shaved off some operations, which is exactly what I was looking for.

The comparison count is up a bit, because my old expected-value calculation was adapting to some patterns that are not the target. Performance is slightly better, although this turned out not to be as impactful as I expected.

Results with this change:

Unwrapped (optimised types):

[benchmark chart: unwrapped]

Wrapped (list.__lt__):

[benchmark chart: wrapped]
