[7.17] [ML] Adjacency weighting fixes in categorization #2279
In #1903 we changed dictionary weighting in categorization to give a higher weighting when there are 3 or more adjacent dictionary words. This was the first time that the same token could have a different weight in different messages. Unfortunately, because the check for common tokens required equal weights, this interaction meant tokens could be bizarrely removed from categories. For example, we would put the following two messages in the same category yet report that "started" was not a common token:
This happens because "abcd" is not a dictionary word but "reaper" is, so "started" has weight 6 in the first message but weight 31 in the second. Considering "started" NOT to be a common token in this case is extremely bad, both intuitively and for the accuracy of drilldown searches.
Therefore this PR changes the categorization code to consider tokens equal if their token IDs are equal, even when their weights differ. Weights are now only used to compute the distance between different tokens.
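A minimal sketch of what the revised comparison might look like, assuming a token is represented by an ID plus a per-message weight; the type and cost function below are illustrative assumptions, not the actual code:

```cpp
#include <cstddef>

// A token as it appears in a tokenised message.
struct TokenAndWeight {
    std::size_t tokenId;
    std::size_t weight;

    // Equality now depends on the token ID only; the weight is ignored.
    bool operator==(const TokenAndWeight& other) const {
        return tokenId == other.tokenId;
    }
};

// Weights still matter when computing the weighted edit distance between two
// token sequences: one plausible cost for substituting one token for another.
std::size_t substitutionCost(const TokenAndWeight& lhs, const TokenAndWeight& rhs) {
    return (lhs == rhs) ? 0 : lhs.weight + rhs.weight;
}
```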
This necessitates another change. It is no longer as simple as it used to be to calculate the highest and lowest possible total weight of a message that might be considered similar to the current message. This calculation now needs to take account of possible adjacency weighting, either in the current message or in the messages being considered as matches. (A side effect is that we will perform more of the expensive Levenshtein distance calculations, as fewer potential matches are discarded early by the simple weight check.)
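A hedged sketch of how such a pre-filter on total weight might be widened to allow for adjacency weighting; the formula, parameter names and bounds here are assumptions for illustration, not the actual calculation in the PR:

```cpp
#include <algorithm>
#include <cstddef>

struct WeightBounds {
    std::size_t lowest;
    std::size_t highest;
};

// threshold is the fraction of weight that must match for two messages to be
// considered similar (e.g. 0.7); maxAdjacencyBoost is an assumed upper bound
// on how much adjacency weighting can inflate a message's total weight.
WeightBounds candidateWeightBounds(std::size_t currentTotalWeight,
                                   double threshold,
                                   double maxAdjacencyBoost) {
    // Without adjacency weighting the bounds would simply be
    // [threshold * W, W / threshold] for current total weight W. Allowing for
    // adjacency weighting in either message widens the interval.
    double lowest = threshold * static_cast<double>(currentTotalWeight) / maxAdjacencyBoost;
    double highest = static_cast<double>(currentTotalWeight) * maxAdjacencyBoost / threshold;
    return {static_cast<std::size_t>(std::max(0.0, lowest)),
            static_cast<std::size_t>(highest)};
}

// Candidates whose total weight falls outside the bounds cannot be similar
// enough, so only those inside go on to the expensive edit distance check.
bool couldMatch(std::size_t candidateTotalWeight, const WeightBounds& bounds) {
    return candidateTotalWeight >= bounds.lowest && candidateTotalWeight <= bounds.highest;
}
```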
Backport of #2277