I know that I can get the importance of each term via `term_importances`; however, this includes both single-feature terms and interaction terms. Is there a way to distribute the importance of the interaction terms back to their individual features?
Basically, what I want is a single vector of length N, where N is the number of input features, to use in variable reduction techniques or for model interpretation. It's hard to know a feature's true importance in a model when you can't easily attribute the importance it contributes through interactions.
It's my understanding that these interaction terms are essentially a tree structure? Perhaps that structure could be traversed to determine how the weighted average of bin weights should be attributed to each individual feature of an interaction.
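To make concrete what I mean, here is a naive sketch that just splits each interaction term's importance equally between its features (this is not an existing API; `term_importances` and `term_features` here stand in for the per-term importances and the feature-index tuples the model exposes, and the equal split is a simplification of the weighted attribution described above):

```python
import numpy as np

def per_feature_importances(term_importances, term_features, n_features):
    """Distribute each term's importance across its constituent features.

    Single-feature terms contribute their full importance to that feature;
    interaction terms split theirs equally among the features involved
    (a simplification -- a weighted split would need to inspect the
    term's bin weights).
    """
    out = np.zeros(n_features)
    for importance, features in zip(term_importances, term_features):
        share = importance / len(features)
        for f in features:
            out[f] += share
    return out

# Toy example: three main-effect terms plus one pairwise interaction.
term_importances = [0.5, 0.3, 0.1, 0.2]     # hypothetical values
term_features = [(0,), (1,), (2,), (0, 1)]  # last term is an interaction
print(per_feature_importances(term_importances, term_features, 3))
# feature 0 gets 0.5 + 0.1, feature 1 gets 0.3 + 0.1, feature 2 gets 0.1
```

Something along these lines would give me the single N-long vector, but ideally the split would reflect how much each feature actually drives the interaction, rather than an even 50/50.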
I may be missing a method, or misunderstanding how this is working as well :)