[ML] Adjacency weighting fixes in categorization #2277
Conversation
In elastic#1903 we changed dictionary weighting in categorization to give higher weighting when there were 3 or more adjacent dictionary words. This was the first time we'd ever had a situation where the same token could have a different weight in different messages. Unfortunately, the way this interacted with the requirement for equal weights when checking for common tokens meant tokens could be bizarrely removed from categories. For example, we'd put the following two messages in the same category but say that "started" was not a common token:

- Service abcd was started
- Service reaper was started

This happens because "abcd" is not a dictionary word but "reaper" is, so "started" has weight 6 in the first message but weight 31 in the second. Considering "started" to NOT be a common token in this case is extremely bad, both intuitively and for the accuracy of drilldown searches.

Therefore this PR changes the categorization code to consider tokens equal if their token IDs are equal, even when their weights differ. Weights are now only used to compute the distance between different tokens.

This necessitates another change. It is no longer as simple as it used to be to calculate the highest and lowest possible total weight of a message that might be considered similar to the current message. This calculation now needs to take account of possible adjacency weighting, either in the current message or in the messages being considered as matches. (This also has the side effect that we'll do a higher number of expensive Levenshtein distance calculations, as fewer potential matches will be discarded early by the simple weight check.)
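To make the new matching rule concrete, here is a minimal C++ sketch; the type and function names are invented for illustration and are not the actual ml-cpp classes. Token identity is decided by the token ID alone, while the per-message weights only contribute to the cost of differences between messages.

#include <cstddef>

// Hypothetical stand-in for one tokenized word in one message.
struct STokenAndWeight {
    std::size_t s_TokenId; // index into the categorizer's token dictionary
    std::size_t s_Weight;  // can differ per message due to adjacency boosting
};

// After this PR: two tokens are the same token if their IDs match,
// regardless of weight, so "started" with weight 6 and "started" with
// weight 31 still count as a common token.
bool sameToken(const STokenAndWeight& lhs, const STokenAndWeight& rhs) {
    return lhs.s_TokenId == rhs.s_TokenId;
}

// Weights still matter, but only when costing differences between
// messages, for example charging a substitution by the weights involved.
std::size_t substitutionCost(const STokenAndWeight& lhs, const STokenAndWeight& rhs) {
    return sameToken(lhs, rhs) ? 0 : lhs.s_Weight + rhs.s_Weight;
}

Under the old rule the equality check would also have compared the weights, which is what caused "started" to drop out of the example category above.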
More changes to sync with elastic/ml-cpp#2277
        500));
-   BOOST_REQUIRE_EQUAL(ml::model::CLocalCategoryId{5},
+   BOOST_REQUIRE_EQUAL(ml::model::CLocalCategoryId{4},
        categorizer.computeCategory(false,
-           " [1111529792] INFO session <45409105041220090733@192.168.251.123> - ----------------- PROXY "
-           "Session DESTROYED --------------------",
+           " [1111529792] INFO session <45409105041220090733@192.168.251.123> - ----------------- "
+           "PROXY Session DESTROYED --------------------",
            500));
-   BOOST_REQUIRE_EQUAL(ml::model::CLocalCategoryId{6},
+   BOOST_REQUIRE_EQUAL(ml::model::CLocalCategoryId{4},
        categorizer.computeCategory(false,
            " [1094662464] INFO session <ch6z1bho8xeprb3z4ty604iktl6c@dave.proxy.uk> - ----------------- "
            "PROXY Session DESTROYED --------------------",
This is an example of an improvement from these changes. These two messages are very similar and intuitively should go in the same category. Previously they weren't, because "PROXY" and "Session" were weighted differently in the two messages.
    BOOST_REQUIRE_EQUAL(ml::model::CLocalCategoryId{1},
        categorizer.computeCategory(false, "combo ftpd[7045]: connection from 84.232.2.50 () at Mon Jan 9 23:44:50 2006",
            76));

    BOOST_REQUIRE_EQUAL(ml::model::CLocalCategoryId{1},
        categorizer.computeCategory(false,
            "combo ftpd[6527]: connection from 60.45.101.89 "
            "(p15025-ipadfx01yosida.nagano.ocn.ne.jp) at Mon Jan 9 17:39:05 2006",
            115));
This is the example I saw on the Java side (while debugging elastic/elasticsearch#85872) that alerted me to the problem. Intuitively these two messages belong in the same category, but the old code put them in different categories due to the different weighting of the word "at".
    BOOST_REQUIRE_EQUAL(ml::model::CLocalCategoryId{2},
        categorizer.computeCategory(false, "<ml13-4608.1.p2ps: Info: > Source ML_SERVICE2 on 13122:867 has started.",
            500));
-   BOOST_REQUIRE_EQUAL(ml::model::CLocalCategoryId{3},
+   BOOST_REQUIRE_EQUAL(ml::model::CLocalCategoryId{2},
        categorizer.computeCategory(false, "<ml00-4201.1.p2ps: Info: > Service CUBE_CHIX, id of 132, has started.",
            500));
This is a case that's not as good following the changes. Previously it was nice that the extra weight on "Service" and "Source" meant we differentiated services starting from sources starting. After the adjacency weighting changes we only carried on doing that in this unit test because the token "has" was considered not to match between the two messages (it got the higher adjacency weighting in the second message but not in the first). Now that "has" is considered a match, that, plus the heavily weighted verb "started", puts the two messages in the same category.
It's not ideal, but I think the changes in this PR are more justifiable than what we had from #1903.
In the long term I think we should try to do the following:
1. Have the tokenizer do the weighting rather than the categorizer - this will facilitate 2 and 3.
2. Only give higher weighting to adjacent dictionary words if they were separated by whitespace in the original message, not by discarded tokens.
3. Having decided we have a sufficiently long run of adjacent dictionary words, give higher weighting to all of them, not just the 3rd onwards - this will be fairer when one of the first two words is important (like "Source" vs "Service" in this case); see the sketch below.
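As a rough illustration of point 3 (not the current ml-cpp implementation, and with illustrative weight values only, loosely based on the 6 vs 31 example in the description), the proposed scheme would boost every word in a qualifying run rather than only the words from the third position onwards:

#include <cstddef>
#include <vector>

// Given a per-token flag saying whether the token is a dictionary word,
// assign weights so that a run of 3 or more adjacent dictionary words
// boosts every word in the run.
std::vector<std::size_t> weightTokens(const std::vector<bool>& isDictionaryWord) {
    const std::size_t nonDictWeight{1};  // illustrative values only
    const std::size_t dictWeight{6};
    const std::size_t boostedWeight{31};
    std::vector<std::size_t> weights(isDictionaryWord.size(), nonDictWeight);
    for (std::size_t i = 0; i < isDictionaryWord.size(); /* advanced below */) {
        if (isDictionaryWord[i] == false) {
            ++i;
            continue;
        }
        // Find the end of this run of adjacent dictionary words.
        std::size_t runEnd{i};
        while (runEnd < isDictionaryWord.size() && isDictionaryWord[runEnd]) {
            ++runEnd;
        }
        // Proposed behaviour: a run of >= 3 boosts all of its words,
        // not just the 3rd word onwards.
        std::size_t weight{(runEnd - i >= 3) ? boostedWeight : dictWeight};
        for (std::size_t j = i; j < runEnd; ++j) {
            weights[j] = weight;
        }
        i = runEnd;
    }
    return weights;
}

With this rule, in "Service reaper was started" the word "Service" would get the same boosted weight as "started", so a distinguishing leading word like "Source" or "Service" would no longer be undervalued relative to the rest of the run.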
edsavage left a comment
LGTM
…85872) This replaces the implementation of the categorize_text aggregation with the new algorithm that was added in #80867. The new algorithm works in the same way as the ML C++ code used for categorization jobs (and now includes the fixes of elastic/ml-cpp#2277). The docs are updated to reflect the workings of the new implementation.
Windows.h can get included via Boost headers, so undefining min and max in our header isn't always enough. Luckily it turns out there's a NOMINMAX macro that can be defined globally to tame Windows.h.
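For illustration only (the actual fix defines the macro globally via the build rather than in a source file), this is the effect being relied on: defining NOMINMAX before Windows.h is first included prevents it from defining min and max macros that clash with std::min and std::max.

// Windows-only illustration: NOMINMAX must be defined before the first
// (possibly indirect, e.g. via Boost) inclusion of Windows.h.
#define NOMINMAX
#include <windows.h>
#include <algorithm>

int smallerDimension(int width, int height) {
    // Without NOMINMAX this could expand the min macro instead of
    // calling std::min and fail to compile.
    return std::min(width, height);
}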