Fusion module alpha parameter #43

Open
@LambdaLi

Description

Hello, your work is amazing. I have a question about the alpha parameter of the cross-attention/self-attention fusion module in the decoder. It was 0.5 in version one and in the paper, but it changed to 0.3 in version two. Does this mean the network now pays more attention to the encoder's features?
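For context, this kind of fusion is often a convex combination of the two attention branches. The sketch below is purely illustrative: the `fuse` function, the list-based tensors, and in particular the convention of which branch alpha scales are all assumptions, not the repository's actual code.

```python
def fuse(self_attn_out, cross_attn_out, alpha=0.3):
    """Convex combination of the two attention branches.

    Hypothetical sketch: here alpha scales the self-attention branch,
    so a smaller alpha puts more weight (1 - alpha) on the
    cross-attention branch, i.e. on encoder features. The repository
    may use the opposite convention, which would flip the
    interpretation of lowering alpha from 0.5 to 0.3.
    """
    return [alpha * s + (1 - alpha) * c
            for s, c in zip(self_attn_out, cross_attn_out)]

# Under this convention, alpha = 0.3 gives cross-attention
# (encoder features) a 0.7 share of the fused output.
fused = fuse([1.0, 1.0], [0.0, 0.0], alpha=0.3)
```

So whether alpha = 0.3 means *more* or *less* emphasis on encoder features depends entirely on which branch alpha multiplies in the actual implementation.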
