- relax version requirements of dependencies to simplify packaging
- Do not include installations of jaro_winkler in wheels (regression from 2.0.7)
- Allow installation from system installed versions of rapidfuzz-cpp, jarowinkler-cpp and taskflow
- Added PyPy3.9 wheels on Linux
- Add missing Cython code in sdist
- consider float imprecision in score_cutoff (see #210)
- fix incorrect score_cutoff handling in token_set_ratio and token_ratio
- add longest common subsequence
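A hedged sketch of the new metric, assuming it is exposed as rapidfuzz.distance.LCSseq with similarity returning the length of the longest common subsequence:
>>> from rapidfuzz.distance import LCSseq
>>> LCSseq.similarity("abcde", "ace")
3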
- Do not include installations of jaro_winkler and taskflow in wheels
- fix incorrect population of sys.modules which led to submodules overshadowing other imports
- moved JaroWinkler and Jaro into a separate package
- fix signed integer overflow inside hashmap implementation
- fix binary size increase due to debug symbols
- fix segmentation fault in Levenshtein.editops
- Added fuzz.partial_ratio_alignment, which returns the result of fuzz.partial_ratio combined with the alignment this result stems from
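For illustration, a minimal sketch; the attribute names below assume the ScoreAlignment object of the published API:
>>> from rapidfuzz import fuzz
>>> ali = fuzz.partial_ratio_alignment("a certain string", "cetain")
>>> (ali.score, ali.src_start, ali.src_end, ali.dest_start, ali.dest_end)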
- Fix Indel distance returning an incorrect result when score_cutoff=1 is used and the strings are not equal. This affected other scorers like fuzz.WRatio, which use the Indel distance as well.
- fix type hints
- Add back transpiled cython files to the sdist to simplify builds in package builders like FreeBSD port build or conda-forge
- fix type hints
- Indel.normalized_similarity mistakenly used the implementation of Indel.normalized_distance
- added a C-API which can be used to extend RapidFuzz from other Python modules written in any programming language that supports C-APIs (C/C++/Rust)
- added new scorers in rapidfuzz.distance.*
- port existing distances to this new API
- add Indel distance along with the corresponding editops function
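A short usage sketch of the new API, assuming the module layout of rapidfuzz.distance as published:
>>> from rapidfuzz.distance import Levenshtein, Indel
>>> Levenshtein.distance("kitten", "sitting")
3
>>> Indel.editops("kitten", "sitting")  # insertions/deletions only, no substitutions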
- when the result of string_metric.levenshtein or string_metric.hamming exceeds max, they now return max + 1 instead of -1
- Build system moved from setuptools to scikit-build
- Stop including all modules in __init__.py, since they significantly slowed down import time
- remove the rapidfuzz.levenshtein module which was deprecated in v1.0.0 and scheduled for removal in v2.0.0
- dropped support for Python 2.7 and Python 3.5
- deprecate support to specify processor in form of a boolean (will be removed in v3.0.0)
- new functions will not get support for this in the first place
- deprecate rapidfuzz.string_metric (will be removed in v3.0.0). Similar scorers are available in rapidfuzz.distance.*
- process.cdist raised an exception when used with a pure Python scorer
- improve performance and memory usage of rapidfuzz.string_metric.levenshtein_editops
- memory usage is reduced by 33%
- performance is improved by around 10%-20%
- significantly improve performance of rapidfuzz.string_metric.levenshtein for max <= 31 using a banded implementation
- fix bug in new editops implementation, causing it to SegFault on some inputs (see qurator-spk/dinglehopper#64)
- Fix some issues in the type annotations (see #163)
- improve performance and memory usage of rapidfuzz.string_metric.levenshtein_editops
- memory usage is reduced by 10x
- performance is improved from O(N * M) to O([N / 64] * M)
- Added missing wheels for Python 3.6 on macOS and Windows (see #159)
- Add wheels for Python 3.10 on macOS
- Fix incorrect editops results (see #148)
- Add wheels for Python 3.10 on all platforms except macOS (see #141)
- Improve performance of string_metric.jaro_similarity and string_metric.jaro_winkler_similarity for strings with a length <= 64
- fixed incorrect results of fuzz.partial_ratio for long needles (see #138)
- Added typing for process.cdist
- Added multithreading support to process.cdist using the workers argument
- Add dtype argument to process.cdist to set the dtype of the result numpy array (see #132)
- Use a better hash collision strategy in the internal hashmap, which improves the worst case performance
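A hedged usage sketch for the process.cdist additions above (workers=-1 is assumed to use all CPU cores; dtype sets the element type of the resulting score matrix):
>>> import numpy as np
>>> from rapidfuzz import process, fuzz
>>> process.cdist(["parks", "apple"], ["spark", "apples"],
...               scorer=fuzz.ratio, dtype=np.uint8, workers=-1)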
- improved performance of fuzz.ratio
- only import process.cdist when numpy is available
- Add back wheels for Python 2.7
- fuzz.partial_ratio uses a new implementation for short needles (<= 64). This implementation is:
- more accurate than the previous implementation (it is guaranteed to find the optimal alignment)
- significantly faster
- Add process.cdist to compare all elements of two lists (see #51)
- Fix out of bounds access in levenshtein_editops
- all scorers now support similarity/distance calculations between any sequence of hashables, so it is possible to calculate e.g. the word error rate (WER) as:
>>> string_metric.levenshtein(["word1", "word2"], ["word1", "word3"])
1
- Added type stub files for all functions
- added Jaro similarity in string_metric.jaro_similarity
- added Jaro-Winkler similarity in string_metric.jaro_winkler_similarity
- added Levenshtein editops in string_metric.levenshtein_editops
- Fixed support for set objects in process.extract
- Fixed inconsistent handling of empty strings
- improved performance of result creation in process.extract
- fix Cython ABI stability issue (#95)
- fix missing decref in case of exceptions in process.extract
- added processor support to levenshtein and hamming
- added distance support to extract/extractOne/extract_iter
- fix incorrect results of normalized_hamming and normalized_levenshtein when used with utils.default_process as processor
- Fix a bug in the mbleven implementation of the uniform Levenshtein distance and cover it with fuzz tests
- some of the newly activated warnings caused build failures in the conda-forge build
- Fixed issue in LCS calculation for partial_ratio (see #90)
- Fixed incorrect results for normalized_hamming and normalized_levenshtein when the processor utils.default_process is used
- Fix many compiler warnings
- add wheels for a lot of new platforms
- drop support for Python 2.7
- use is instead of == to compare functions directly by address
- Fix another ref counting issue
- Fix some issues in the Levenshtein distance algorithm (see #92)
- further improve bitparallel implementation of uniform Levenshtein distance for strings with a length > 64 (in many cases more than 50% faster)
- add more benchmarks to documentation
- add bitparallel implementation to InDel Distance (Levenshtein with the weights 1,1,2) for strings with a length > 64
- improve bitparallel implementation of uniform Levenshtein distance for strings with a length > 64
- use the InDel Distance and uniform Levenshtein distance in more cases instead of the generic implementation
- Directly use the Levenshtein implementation in C++ instead of using it through Python in process.*
- Fix reference counting in process.extract (see #81)
- Fix result conversion in process.extract (see #79)
- string_metric.normalized_levenshtein now supports all weights
- when different weights are used for Insertion and Deletion, the strings are no longer swapped inside the Levenshtein implementation, so different weights for Insertion and Deletion are now supported
- replace C++ implementation with a Cython implementation. This has the following advantages:
- The implementation is less error prone, since a lot of the complex things are done by Cython
- slightly faster than the current implementation (up to 10% for some parts)
- about 33% smaller binary size
- reduced compile time
- Added **kwargs argument to process.extract/extractOne/extract_iter that is passed to the scorer
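For example, scorer-specific keyword arguments are forwarded unchanged; in this sketch the weights argument of string_metric.normalized_levenshtein is assumed to be passed through:
>>> from rapidfuzz import process, string_metric
>>> process.extractOne("abcd", ["abce", "xyz"],
...                    scorer=string_metric.normalized_levenshtein, weights=(1, 1, 2))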
- Add max argument to hamming distance
- Add support for whole Unicode range to utils.default_process
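A small sketch of the processor (it lowercases, replaces non-alphanumeric characters with whitespace and trims the result):
>>> from rapidfuzz import utils
>>> utils.default_process("Löwenstein!")
'löwenstein'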
- replaced Wagner-Fischer usage in the normal Levenshtein distance with a bitparallel implementation
- The bitparallel LCS algorithm in fuzz.partial_ratio did not find the longest common substring properly in some cases. The old algorithm is used again until this bug is fixed.
- string_metric.normalized_levenshtein now supports the weights (1, 1, N) with N >= 1
- The Levenshtein distance with weights (1, 1, >2) now uses the same implementation as the weights (1, 1, 2), since a Substitution cost greater than Insertion + Deletion has no effect
- fix uninitialized variable in bitparallel Levenshtein distance with the weight (1, 1, 1)
- all normalized string_metrics can now be used as scorer for process.extract/extractOne
- Implementation of the C++ Wrapper completely refactored to make it easier to add more scorers, processors and string matching algorithms in the future.
- increased test coverage, which already helped to fix some bugs and will help to prevent regressions in the future
- improved docstrings of functions
- Added bit-parallel implementation of the Levenshtein distance for the weights (1,1,1) and (1,1,2).
- Added specialized implementation of the Levenshtein distance for cases with a small maximum edit distance that is even faster than the bit-parallel implementation.
- Improved performance of fuzz.partial_ratio. Since fuzz.ratio and fuzz.partial_ratio are used in most scorers, this improves the overall performance.
- Improved performance of process.extract and process.extractOne
- the rapidfuzz.levenshtein module is now deprecated and will be removed in v2.0.0. Its functions are now placed in rapidfuzz.string_metric: distance, normalized_distance, weighted_distance and weighted_normalized_distance are combined into levenshtein and normalized_levenshtein.
- added normalized version of the Hamming distance in string_metric.normalized_hamming
- process.extract_iter as a generator that yields the similarity of all elements that have a similarity >= score_cutoff
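A hedged usage sketch; the yielded tuples are assumed to be (match, score, index) when the choices are a list, matching process.extract:
>>> from rapidfuzz import process
>>> for match, score, index in process.extract_iter("new yrok", ["new york", "boston"], score_cutoff=70):
...     print(match, score, index)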
- fixed multiple bugs in extractOne when used with a scorer that's not from RapidFuzz
- fixed bug in token_ratio
- fixed bug in result normalization causing zero division
- utf8 usage in the copyright header caused problems with python2.7 on some platforms (see #70)
- when a custom processor like lambda s: s was used with any of the methods inside fuzz.*, it always returned a score of 100. This release fixes this and adds better test coverage to prevent this bug in the future.
- added Hamming distance metric in the levenshtein module
- improved performance of default_process by using a lookup table
- Add missing virtual destructor that caused a segmentation fault on macOS
- C++11 Support
- manylinux wheels
- Levenshtein was not imported from __init__
- The reference count of a Python object inside process.extractOne was decremented too early
- process.extractOne exits early when a score of 100 is found. This way the other strings do not have to be preprocessed anymore.
- objects passed to scorers had to be strings even before preprocessing. This was changed, so they only have to be strings after preprocessing, similar to process.extract/process.extractOne
- process.extractOne is now implemented in C++ making it a lot faster
- When token_sort_ratio or partial_token_sort_ratio is used in process.extractOne, the words in the query are only sorted once to improve the runtime
- process.extractOne/process.extract now return the index of the match when the choices are a list.
- process.extractIndices got removed, since the indices are now already returned by process.extractOne/process.extract
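For the new return value, a sketch; the third tuple element is the list index of the match:
>>> from rapidfuzz import process
>>> match, score, index = process.extractOne("new york", ["boston", "new york city"])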
- fix documentation of process.extractOne (see #48)
- Added wheels for
- CPython 2.7 on Windows 64 bit
- CPython 2.7 on Windows 32 bit
- PyPy 2.7 on Windows 32 bit
- fix bug in partial_ratio (see #43)
- fix inconsistency with fuzzywuzzy in partial_ratio when using strings of equal length
- MSVC has a bug and therefore crashed on some of the templates used. This release simplifies the templates so compiling on MSVC works again
- partial_ratio is using the Levenshtein distance now, which is a lot faster. Since many of the other algorithms use partial_ratio, this helps to improve the overall performance
- fix partial_token_set_ratio returning 100 all the time
- added rapidfuzz.__author__, rapidfuzz.__license__ and rapidfuzz.__version__
- do not use autojunk when searching for the optimal alignment in partial_ratio
- support for Python 2.7 added (see #40)
- add wheels for Python 2.7 (both PyPy and CPython) on macOS and Linux
- added wheels for Python 3.9
- tuple scores in process.extractOne are now supported (see #39)