
Adding the process capacity for the model #5

Open
OEG-Clark opened this issue Jan 9, 2024 · 0 comments
OEG-Clark commented Jan 9, 2024

I think it would be beneficial for users to have the model's maximum process capacity in the model card. I have found this information very helpful from time to time when starting to use a model.

The reason I call it "max process capacity" is that in the NLP domain it could be the maximum token length: for example, ChatGPT is 2048 tokens and Llama 2 is 4096. In the CV domain, I believe ViT (Vision Transformer) is 224 $\times$ 224 pixels, and MidJourney defaults to 1024 $\times$ 1024.

This property refers to the model's input, which is vastly helpful from time to time. :)
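To make the proposal concrete, here is a minimal sketch of how such a property could be represented in a model-card schema. The field names (`max_tokens`, `max_resolution`) and the `ProcessCapacity` structure are hypothetical, not part of any existing model-card standard; the numbers are the ones cited above.

```python
# Hypothetical sketch of a "max process capacity" field for a model card.
# The class and field names are assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class ProcessCapacity:
    """Maximum input a model can process in a single pass."""
    max_tokens: Optional[int] = None                   # NLP: context window, in tokens
    max_resolution: Optional[Tuple[int, int]] = None   # CV: width x height, in pixels


# Examples using the figures mentioned in this issue:
llama2 = ProcessCapacity(max_tokens=4096)
vit = ProcessCapacity(max_resolution=(224, 224))
```

A single optional structure like this lets one field cover both the NLP case (token length) and the CV case (input resolution) without forcing every model to fill in both.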
