I think it would be beneficial for users to have the max process capacity of the model in the model card. I have found this information very helpful when starting to use a model.

The reason I call it "max process capacity" is that it takes different forms across domains. In the NLP area it would be the maximum token length the model can process: for example, ChatGPT's is 2048 tokens and Llama 2's is 4096. In the CV area, ViT (Vision Transformer) takes 224 $\times$ 224 pixel inputs, I believe, and Midjourney defaults to 1024 $\times$ 1024.

This property refers to the input of the model, and knowing it up front is very helpful from time to time. :)
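To make the proposal concrete, here is a minimal sketch of how such a field might look in model-card metadata. The class and field names (`ModelCard`, `max_input_capacity`) are hypothetical, not part of any existing model-card schema; the idea is just that one field covers both a token count (NLP) and an image resolution (CV):

```python
from dataclasses import dataclass
from typing import Tuple, Union


@dataclass
class ModelCard:
    """Hypothetical model-card metadata with the proposed field."""
    name: str
    # Proposed "max process capacity": the largest input the model
    # accepts in one pass. An int means a token count (NLP models);
    # a (height, width) tuple means an image resolution in pixels.
    max_input_capacity: Union[int, Tuple[int, int]]


# Example values taken from the figures mentioned above.
llama2 = ModelCard(name="Llama-2", max_input_capacity=4096)
vit = ModelCard(name="ViT-Base", max_input_capacity=(224, 224))
```

A single field with a domain-dependent type keeps the card schema simple while still letting users see the input limit at a glance.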