Just ran an experiment with 100 voters and about 2000 candidates, with fairly sparse ballots (usually 10–30 candidates mentioned per ballot). Obviously, this has some applicability to users rating entities (IMDb, Yelp, etc.).
Unfortunately, the current dictionary-based implementation handles such large datasets poorly: the internal dictionary of pairwise entries grows to roughly 2000^2 ≈ 4,000,000 keys. Eesh. It's a decent implementation when we're talking about 20 candidates, but I think making use of http://docs.scipy.org/doc/scipy/reference/sparse.html would yield significant optimizations in both SchulzeMethod and its derivative classes. I've never used scipy before, so this'll be new.
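For what it's worth, here's a rough sketch of what I have in mind, assuming a ballot format of `{candidate_index: rank}` (this is not the library's actual API, just an illustration). With a `dok_matrix`, only pairs that actually co-occur on some ballot take space, instead of all 2000^2 pairs:

```python
import numpy as np
from scipy.sparse import dok_matrix

def pairwise_preferences(ballots, num_candidates):
    # prefs[i, j] = number of voters ranking candidate i above candidate j.
    # DOK (dict-of-keys) is the sparse format meant for incremental
    # construction; storage is proportional to observed pairs only.
    prefs = dok_matrix((num_candidates, num_candidates), dtype=np.int32)
    for ranks in ballots:  # ranks: {candidate: rank}, lower rank = preferred
        for i, ri in ranks.items():
            for j, rj in ranks.items():
                if ri < rj:
                    prefs[i, j] += 1
    return prefs.tocsr()  # convert to CSR for fast arithmetic afterwards

# Two sparse ballots over a 2000-candidate space:
ballots = [{0: 1, 2: 2}, {2: 1, 0: 2, 5: 3}]
prefs = pairwise_preferences(ballots, num_candidates=2000)
print(prefs.nnz)  # → 4 stored entries, not 2000**2
```

The win is that storage scales with (mentioned candidates per ballot)^2 × number of ballots at worst, which for 10–30 mentions per ballot stays tiny compared to the full pairwise dictionary.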