Single Bit LLMs

Single-bit precision autoregressive somewhat-large language models, for research purposes. Maybe this is something we will be using in the near future, and future generations will ask why the fuck these people were using 16 bits of precision.
---
https://arxiv.org/abs/2402.17764
https://arxiv.org/pdf/2310.11453.pdf
---
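The b1.58 paper (arXiv:2402.17764) quantizes weights to the ternary set {-1, 0, +1} using an absmean scale. A minimal NumPy sketch of that rounding step, leaving out training-time machinery like the straight-through estimator:

```python
import numpy as np

def absmean_ternary(w: np.ndarray, eps: float = 1e-6):
    """Round weights to {-1, 0, +1} with the absmean scale
    from the BitNet b1.58 paper (RoundClip(w / gamma, -1, 1))."""
    scale = np.abs(w).mean() + eps           # gamma: mean absolute value
    q = np.clip(np.round(w / scale), -1, 1)  # RoundClip onto the ternary set
    return q.astype(np.int8), scale          # dequantize as q * scale

# toy example
w = np.array([[0.9, -0.04, -1.3], [0.2, 0.0, 0.6]])
q, s = absmean_ternary(w)
# q is [[1, 0, -1], [0, 0, 1]], s is mean(|w|) + eps
```

Small weights collapse to 0, which is what pushes the effective precision from 1 bit to log2(3) ≈ 1.58 bits.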
There are some graphs near the beginning of the paper that I wanted to verify, so here is a reproduction with Mistral 7B v0.1.

Turns out graphing MMLU (or any eval, for that matter) for quantized models is pretty slow; either I will make this fast now, or maybe after I'm done with the paper.

Anyways, here are all the quantized models: https://huggingface.co/shauray/mistral_GGUF_TEST01
to be done
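The storage win behind these quantized models comes from packing: a ternary weight needs only 2 bits instead of 16. GGUF uses its own block layouts, so this is just a back-of-the-envelope sketch with an illustrative mapping (-1 → 0b00, 0 → 0b01, +1 → 0b10):

```python
def pack_ternary(q):
    """Pack ternary values {-1, 0, +1} at 2 bits each, 4 per byte.
    Illustrative layout only, not the actual GGUF block format."""
    out = bytearray()
    for i in range(0, len(q), 4):
        byte = 0
        for j, v in enumerate(q[i:i + 4]):
            byte |= (v + 1) << (2 * j)  # map {-1,0,1} -> {0,1,2}
        out.append(byte)
    return bytes(out)

def unpack_ternary(data, n):
    """Invert pack_ternary, recovering the first n values."""
    vals = []
    for byte in data:
        for j in range(4):
            vals.append(((byte >> (2 * j)) & 0b11) - 1)
    return vals[:n]

weights = [1, -1, 0, 0, 1, 1, -1]
packed = pack_ternary(weights)  # 7 weights fit in 2 bytes (vs 14 bytes in fp16)
assert unpack_ternary(packed, len(weights)) == weights
```

Eight-fold smaller weights is also why the paper's throughput and memory graphs look the way they do.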

---
link to the blog 
---
