
Commit 70fbee3

OAI: Fix model parameter placement

Accidentally edited the Model Card parameters instead of the model load request ones.

Signed-off-by: kingbri <bdashore3@proton.me>

1 parent 1d0bdfa

File tree: 1 file changed (+2, -2 lines)

OAI/types/model.py (2 additions, 2 deletions)

@@ -6,7 +6,7 @@
 class ModelCardParameters(BaseModel):
     max_seq_len: Optional[int] = 4096
     rope_scale: Optional[float] = 1.0
-    rope_alpha: Optional[float] = None
+    rope_alpha: Optional[float] = 1.0
     prompt_template: Optional[str] = None
     cache_mode: Optional[str] = "FP16"
     draft: Optional['ModelCard'] = None
@@ -35,7 +35,7 @@ class ModelLoadRequest(BaseModel):
     gpu_split_auto: Optional[bool] = True
     gpu_split: Optional[List[float]] = Field(default_factory=list)
     rope_scale: Optional[float] = 1.0
-    rope_alpha: Optional[float] = 1.0
+    rope_alpha: Optional[float] = None
     no_flash_attention: Optional[bool] = False
     # low_mem: Optional[bool] = False
     cache_mode: Optional[str] = "FP16"
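To illustrate the net effect of the swap, here is a minimal, self-contained sketch of the two pydantic models after this commit, reduced to the fields touched by the diff. The field names and defaults come from the diff above; the comments about intent are an assumption (a plausible reading is that a `None` in the load request lets the server pick a value, while the model card advertises a concrete default).

```python
from typing import Optional
from pydantic import BaseModel


class ModelCardParameters(BaseModel):
    """Parameters advertised on a model card (post-fix subset)."""
    max_seq_len: Optional[int] = 4096
    rope_scale: Optional[float] = 1.0
    # After the fix, the card reports a concrete default of 1.0.
    rope_alpha: Optional[float] = 1.0


class ModelLoadRequest(BaseModel):
    """Request body for loading a model (post-fix subset)."""
    rope_scale: Optional[float] = 1.0
    # After the fix, the request default is None again; presumably the
    # server treats an unset value as "choose for me" (assumption).
    rope_alpha: Optional[float] = None


card = ModelCardParameters()
req = ModelLoadRequest()
print(card.rope_alpha)  # 1.0
print(req.rope_alpha)   # None
```

In other words, the commit simply swaps the two `rope_alpha` defaults back to their intended classes; no other behavior changes.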

0 commit comments