
Conversation

@droberts195

In #1903 we changed dictionary weighting in categorization to give
higher weighting when there were 3 or more adjacent dictionary
words. This was the first time that we'd ever had the situation
where the same token could have a different weight in different
messages. Unfortunately, the way this interacted with our requirement
that weights be equal when checking for common tokens meant that
tokens could be bizarrely removed from categories. For example, with the
following two messages we'd put them in the same category but say
that "started" was not a common token:

  • Service abcd was started
  • Service reaper was started

This happens because "abcd" is not a dictionary word but "reaper"
is, so then "started" has weight 6 in the first message but weight
31 in the second. Considering "started" to NOT be a common token
in this case is extremely bad both intuitively and for the accuracy
of drilldown searches.
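
To make the failure mode concrete, here is a minimal, self-contained sketch (not the actual categorization code) of the old common-token check, which required the weights to match as well as the token IDs. The token IDs and all weights except the 6 and 31 quoted above for "started" are made up for the example:

```cpp
#include <cstdio>
#include <vector>

// Hypothetical token representation, for illustration only.
struct Token {
    int id;      // ID assigned by the tokenizer/dictionary
    int weight;  // per-message weight (can differ between messages)
};

int main() {
    // "Service abcd was started"  vs  "Service reaper was started"
    // Only the 6 vs 31 weights for "started" (id 4) come from the PR
    // description; the other ids and weights are invented.
    std::vector<Token> msg1{{1, 6}, {2, 1}, {3, 6}, {4, 6}};
    std::vector<Token> msg2{{1, 6}, {5, 31}, {3, 6}, {4, 31}};

    for (const Token& a : msg1) {
        for (const Token& b : msg2) {
            // Old-style check: a token only counts as common if both the
            // ID *and* the weight match, so "started" (6 vs 31) is dropped.
            if (a.id == b.id && a.weight == b.weight) {
                std::printf("common token id %d\n", a.id);
            }
        }
    }
}
```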

Therefore this PR changes the categorization code to consider
tokens equal if their token IDs are equal, even when their weights
differ. Weights are now only used to compute the distance between
different tokens.
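
A minimal sketch of the new comparison rule, under the assumption that tokens carry an ID and a per-message weight; the types, the equality operator and the cost formula are illustrative, not the real ml-cpp implementation:

```cpp
#include <algorithm>
#include <cstdio>

// Illustrative type only, not the actual ml-cpp class.
struct WeightedToken {
    int id;
    int weight;
};

// Tokens now compare equal purely on token ID; differing weights no
// longer stop a token from being counted as common to two messages.
inline bool operator==(const WeightedToken& lhs, const WeightedToken& rhs) {
    return lhs.id == rhs.id;
}

// Weights still matter when tokens differ, e.g. as a substitution cost
// in a weighted edit distance. The cost used here is a placeholder,
// not the real formula.
inline int tokenDistance(const WeightedToken& lhs, const WeightedToken& rhs) {
    return lhs == rhs ? 0 : std::max(lhs.weight, rhs.weight);
}

int main() {
    WeightedToken started1{4, 6};   // "started" in the first message
    WeightedToken started2{4, 31};  // "started" in the second message
    std::printf("equal: %d, distance: %d\n",
                started1 == started2, tokenDistance(started1, started2));
}
```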

This necessitates another change: it is no longer as simple as it
used to be to calculate the highest and lowest possible total weight
of a message that could be considered similar to the current message.
This calculation now needs to take account of possible adjacency
weighting, either in the current message or in the messages being
considered as matches. (This also has the side effect that we'll
perform more of the expensive Levenshtein distance calculations, as
fewer potential matches will be discarded early by the simple weight
check.)
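
To illustrate the shape of that change, here is a rough sketch of such a pre-filter; the constants, the widening factor and the function couldPossiblyMatch are all invented for illustration and do not reflect the real thresholds or formulas:

```cpp
#include <cstdio>

// Illustrative constants; the real threshold and boost handling live in
// the categorization code and differ in detail.
constexpr double SIMILARITY_THRESHOLD = 0.7; // fraction of weight that must match
constexpr double MAX_ADJACENCY_BOOST = 5.0;  // hypothetical maximum boost factor

// Could a stored category with total weight `candidateWeight` still be
// similar enough to a message with total weight `messageWeight`?
// Because adjacency weighting may have applied to either message but not
// the other, the acceptance window has to be widened in both directions.
bool couldPossiblyMatch(double messageWeight, double candidateWeight) {
    double lowest = messageWeight * SIMILARITY_THRESHOLD / MAX_ADJACENCY_BOOST;
    double highest = messageWeight * MAX_ADJACENCY_BOOST / SIMILARITY_THRESHOLD;
    return candidateWeight >= lowest && candidateWeight <= highest;
}

int main() {
    // A candidate that a narrower, boost-unaware window might have
    // rejected before the expensive Levenshtein comparison.
    std::printf("%s\n", couldPossiblyMatch(100.0, 30.0) ? "keep" : "discard");
}
```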

Backport of #2277

@droberts195
Author

retest

@droberts195 droberts195 merged commit a64eb8e into elastic:7.17 May 24, 2022
@droberts195 droberts195 deleted the adjacent_word_weighting_fixes_717 branch May 24, 2022 15:51