PositionEmbedding/TokenAndPositionEmbedding guide #283
Unanswered
DavidLandup0
asked this question in
Show and tell
Replies: 1 comment 2 replies
@DavidLandup0 thank you! Very much appreciate the help building resources for KerasNLP! This looks great to me, and I like that you give a peek into what's going on under the hood. One thing you could mention, though I know it isn't the focus of the article, is what tools are available for that.
Hey! I just wrote a quick guide to position and token-and-position embeddings with KerasNLP. I think the library really lowers the barrier to entry, and lets people build NLP models much more quickly and easily than before. As far as I can tell, KerasNLP is the first major library to actually let people build Transformers from scratch, rather than only providing prepackaged ones.
While there's a niche for both, you're doing amazing work :)
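For readers who haven't seen the guide yet, the core idea it covers can be sketched in a few lines: a learned token-embedding table and a learned position-embedding table, with the two lookups summed per position. This is a minimal NumPy sketch of that idea, not the KerasNLP implementation itself; the table sizes and random initialization here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the guide.
vocab_size, seq_len, embed_dim = 100, 8, 4

# Two learned lookup tables: one row per token id, one row per position.
token_table = rng.normal(size=(vocab_size, embed_dim))
pos_table = rng.normal(size=(seq_len, embed_dim))

def token_and_position_embed(token_ids):
    """Sum the token embedding and the position embedding for each slot."""
    tok = token_table[token_ids]                    # (len(ids), embed_dim)
    pos = pos_table[np.arange(len(token_ids))]      # (len(ids), embed_dim)
    return tok + pos

ids = np.array([5, 17, 3, 99, 0, 1, 2, 42])
out = token_and_position_embed(ids)
print(out.shape)  # (8, 4)
```

In KerasNLP the same pattern is packaged as a single layer, `keras_nlp.layers.TokenAndPositionEmbedding`, with `keras_nlp.layers.PositionEmbedding` available when you want the positional part on its own.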
I'll be doing a series of guides for KerasCV and KerasNLP, and this is one of the first. Once a larger library of guides is finished, I'll string them together and add links between them. Any feedback on this?
@mattdangerw @chenmoneygithub