
can you give a compare with existing implementations? #1

Closed
@zsz00

Description

I appreciate your excellent work, especially the example https://juliamltools.github.io/shakespeare-gpt

There are already some existing implementations of MultiHeadAttention and Transformer:
FluxML/Flux.jl#2146
https://github.com/chengchingwen/NeuralAttentionlib.jl
https://github.com/chengchingwen/Transformers.jl

Can you give a comparison with these existing implementations?
Why did you want to implement this again?
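For context on what the comparison would be against: a minimal sketch of how the built-in MultiHeadAttention layer from FluxML/Flux.jl#2146 is typically used (assuming Flux ≥ 0.13.13; the dimensions here are arbitrary illustration values, not from any of the linked projects):

```julia
using Flux

# Illustrative sizes: embedding dim, sequence length, batch size
dim, seq_len, batch = 64, 10, 4

# Built-in layer from Flux.jl (PR #2146)
mha = MultiHeadAttention(dim; nheads = 8)

q = rand(Float32, dim, seq_len, batch)
k = rand(Float32, dim, seq_len, batch)
v = rand(Float32, dim, seq_len, batch)

# Returns the attended output and the attention scores
y, α = mha(q, k, v)
size(y)  # (dim, seq_len, batch)
```

NeuralAttentionlib.jl and Transformers.jl expose similar functionality with their own APIs, which is presumably what this issue is asking to have compared against the repository's implementation.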
