Local gptq support. #738

Merged
merged 2 commits into from
Jul 31, 2023
Merged
Changes from 1 commit
Commits
Local gptq support.
Narsil committed Jul 31, 2023
commit f29e3d7d347bcef7c2e64c2f7920dfcd23f5bdb8
7 changes: 6 additions & 1 deletion server/text_generation_server/utils/weights.py
@@ -1,3 +1,4 @@
+import os
 from pathlib import Path
 from typing import List, Dict, Optional, Tuple
 from safetensors import safe_open, SafetensorError
@@ -221,8 +222,12 @@ def _get_gptq_params(self) -> Tuple[int, int]:
         return bits, groupsize

     def _set_gptq_params(self, model_id):
+        filename = "quantize_config.json"
         try:
-            filename = hf_hub_download(model_id, filename="quantize_config.json")
+            if os.path.exists(os.path.join(model_id, filename)):
+                filename = os.path.join(model_id, filename)
+            else:
+                filename = hf_hub_download(model_id, filename=filename)
             with open(filename, "r") as f:
                 data = json.load(f)
             self.gptq_bits = data["bits"]
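The resolution logic in this diff (prefer a `quantize_config.json` sitting next to a local model directory, fall back to downloading it from the Hub) can be sketched in isolation. This is a minimal, self-contained illustration: `resolve_quantize_config` and the `download_fn` callback are hypothetical names introduced here, with `download_fn` standing in for `huggingface_hub.hf_hub_download` so the sketch runs without network access.

```python
import json
import os
import tempfile

def resolve_quantize_config(model_id, download_fn, filename="quantize_config.json"):
    """Return a path to the quantization config.

    If model_id is a local directory that already contains the file, use it
    directly; otherwise treat model_id as a Hub repo id and delegate to
    download_fn (e.g. huggingface_hub.hf_hub_download).
    """
    local_path = os.path.join(model_id, filename)
    if os.path.exists(local_path):
        return local_path
    return download_fn(model_id, filename=filename)

# Usage: with a local model directory containing the config,
# no download is attempted.
with tempfile.TemporaryDirectory() as model_dir:
    cfg = os.path.join(model_dir, "quantize_config.json")
    with open(cfg, "w") as f:
        json.dump({"bits": 4, "group_size": 128}, f)

    # The lambda would raise if it were ever called; it is not,
    # because the local file is found first.
    path = resolve_quantize_config(
        model_dir,
        download_fn=lambda *a, **kw: (_ for _ in ()).throw(RuntimeError),
    )
    with open(path) as f:
        data = json.load(f)
    print(data["bits"])  # 4
```

The key design point the PR makes is ordering: the local check must come first, otherwise passing a filesystem path as `model_id` would send an invalid repo id to the Hub client.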