How did you obtain the word set, and how did you create the visualization that shows how closely each word is related to each prototype?
In the code, the vocabulary comes from the weights of the LLM's embedding layer, but I find it hard to interpret which combination of words makes up each prototype, and which prototype and time-series patch have a high attention score.
Thank you for reading my question.
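One common way to make such prototypes interpretable (not necessarily what the authors did) is to map each prototype back to its nearest vocabulary words by cosine similarity against the embedding matrix. Below is a minimal sketch of that idea with a toy vocabulary and random placeholder weights; the names `vocab`, `E`, and `top_words` are all hypothetical stand-ins, not the paper's code.

```python
import numpy as np

# Hypothetical toy setup: a tiny vocabulary and embedding dimension.
vocab = ["rise", "fall", "steady", "spike", "drop", "noise"]
V, D, P = len(vocab), 8, 2           # vocab size, embed dim, number of prototypes

rng = np.random.default_rng(0)
E = rng.normal(size=(V, D))          # stand-in for the LLM embedding weight (V x D)

# Prototypes formed as linear combinations of embedding rows, as the question
# describes; here the mixing weights are random placeholders.
W = rng.random(size=(P, V))
W /= W.sum(axis=1, keepdims=True)    # normalize so each prototype is a convex mix
prototypes = W @ E                   # (P x D)

def top_words(proto, k=3):
    """Return the k vocabulary words most cosine-similar to a prototype vector."""
    sims = (E @ proto) / (np.linalg.norm(E, axis=1) * np.linalg.norm(proto))
    return [vocab[i] for i in np.argsort(-sims)[:k]]

for p in range(P):
    print(f"prototype {p}: {top_words(prototypes[p])}")
```

Plotting the per-prototype similarity scores (e.g. as a heatmap over the top-k words) would give a visualization of how closely each word relates to each prototype.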
Hi there. I had the same confusion when I read their paper. Have you found any cues that resolve this question?