Description
Hello, your work is amazing. I have a question about the alpha parameter of the cross-attention and self-attention fusion module in the decoder. It was 0.5 in version one and in the paper, but it became 0.3 in version two. Does this change mean the network now pays more attention to the encoder features?
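For context, a common way such a fusion module combines the two branches is a convex combination weighted by alpha. This is only a sketch of that pattern (the repository's actual fusion may differ, and which branch alpha multiplies determines how the question should be read): if alpha scales the cross-attention output, then lowering it from 0.5 to 0.3 would give *less* weight to the encoder features, not more.

```python
import numpy as np

def fuse(cross_attn_out: np.ndarray, self_attn_out: np.ndarray, alpha: float) -> np.ndarray:
    """Hypothetical fusion: alpha weights the cross-attention branch
    (which attends to encoder features), and (1 - alpha) weights the
    self-attention branch (decoder features)."""
    return alpha * cross_attn_out + (1.0 - alpha) * self_attn_out

# Stand-in tensors so the relative contribution is easy to read off.
cross = np.ones((2, 4))    # pretend cross-attention output
self_ = np.zeros((2, 4))   # pretend self-attention output

fused_v1 = fuse(cross, self_, alpha=0.5)
fused_v2 = fuse(cross, self_, alpha=0.3)
print(fused_v1[0, 0])  # 0.5 -> half the signal from the cross-attention branch
print(fused_v2[0, 0])  # 0.3 -> only 30% from the cross-attention branch
```

Under this (assumed) convention, alpha = 0.3 would mean the decoder leans more on its own self-attention features; if the code instead applies alpha to the self-attention branch, the interpretation flips.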