From 365955a9a2462b8715750e88203b8df00b0df361 Mon Sep 17 00:00:00 2001
From: Tal
Date: Sun, 15 Sep 2024 20:42:03 +0300
Subject: [PATCH] Fix missing link (#386)

First of all, kudos for this project. It's the only project I found that
properly supports modern models like llama-3.1 out of the box. The speed and
other factors also seem better.

This PR fixes a small bug: the previous link led to a missing page.
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index f59c8bfa..b0f0b53f 100644
--- a/README.md
+++ b/README.md
@@ -179,7 +179,7 @@ model = GPTQModel.from_quantized(quant_output_dir)
 print(tokenizer.decode(model.generate(**tokenizer("gptqmodel is", return_tensors="pt").to(model.device))[0]))
 ```

-For more advanced features of model quantization, please reference to [this script](examples/quantization/quant_with_alpaca.py)
+For more advanced features of model quantization, please reference to [this script](https://github.com/ModelCloud/GPTQModel/blob/main/examples/quantization/basic_usage_wikitext2.py)

 ### How to Add Support for a New Model