colossalai/inference/config.py (32 additions & 21 deletions)
@@ -35,49 +35,60 @@ class InferenceConfig:
     """The inference configuration.
 
     Args:
-        micro_batch_size (int): The micro batch size, defaults to 1. Only useful when `pp_size` > 1.
-        micro_batch_buffer_size (int): The buffer size for micro batches. Normally, it should be the same as the number of pipeline stages.
         max_batch_size (int): Maximum batch size, defaults to 8.
         max_output_len (int): Maximum output length, defaults to 256.
         max_input_len (int): Maximum input length, defaults to 256.
-        block_size (int): The number of blocks in a logical block, defaults to 16.
         dtype (Union[str, torch.dtype]): The data type for weights and activations.
-        tp_size (int): Tensor parallel size, defaults to 1.
-        pp_size (int): Pipeline parallel size, defaults to 1.
+        prompt_template (Optional[str]): The prompt template for generation, defaults to None.
+        do_sample (bool): Whether to use sampling for generation, defaults to False.
         beam_width (int): The maximum beam width used to initialize the KV cache, defaults to 1.
             During generation, the beam width provided as a sampling parameter should be less than or equal to this value.
         prefill_ratio (Optional[float]): A controlling ratio for prefill and decoding in the running list, defaults to 1.2. We do a prefill step
             when the actual value exceeds this ratio.
         pad_input: Whether to pad all inputs to the max length.
-        quant_mode (Optional[str]): Quantization mode.
-        revision (Optional[str]): The specific version (a branch name, a commit id, or a tag name) of the model to use.
-        prompt_template (Optional[str]): The prompt template for formatting the input text. Some built-in templates include 'llama' and 'vicuna'. Otherwise, the template should contain '{input_text}' for formatting the input text.
+        early_stopping (Optional[bool]): Whether to stop the generation when all beam hypotheses have finished, defaults to False.
+        top_k (Optional[int]): The number of highest-probability vocabulary tokens to keep for top-k filtering, defaults to None.
+        top_p (Optional[float]): The cumulative probability threshold for retaining tokens with a total probability above it, defaults to None.
+        min_p (Optional[float]): The minimum probability to keep for top-p filtering, defaults to None.
+        block_size (int): The number of blocks in a logical block, defaults to 16.
+        tp_size (int): Tensor parallel size, defaults to 1.
+        pp_size (int): Pipeline parallel size, defaults to 1.
+        micro_batch_size (int): The micro batch size, defaults to 1. Only useful when `pp_size` > 1.
+        micro_batch_buffer_size (int): The buffer size for micro batches. Normally, it should be the same as the number of pipeline stages.
+
     """
 
-    micro_batch_size: int = 1
-    micro_batch_buffer_size: int = None
+    # NOTE: arrange configs according to their importance and frequency of usage
+
+    # runtime limit
     max_batch_size: int = 8
     max_output_len: int = 256
     max_input_len: int = 256
-    block_size: int = 16
+
+    # general configs
     dtype: Union[str, torch.dtype] = torch.float16  # use fp16 by default
 
-    tp_size: int = 1
-    pp_size: int = 1
-    # TODO: beam search is not supported for now
+    # generation configs
+    prompt_template: Optional[str] = None
     do_sample: bool = False
-    beam_width: int = 1
-    # the ratio of prefill sequences to decoding sequences; we do a prefill step once the actual value exceeds this ratio
-    prefill_ratio: Optional[float] = 1.2
+    beam_width: int = 1  # TODO: beam search is not supported for now
+    prefill_ratio: Optional[
+        float
+    ] = 1.2  # the ratio of prefill sequences to decoding sequences; we do a prefill step once the actual value exceeds this ratio
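
For reference, a minimal sketch of how the reorganized config might be constructed (illustrative only, not part of this PR; it assumes `InferenceConfig` is a dataclass importable from `colossalai.inference.config` that accepts the fields above as keyword arguments, and every value shown is a hypothetical choice):

```python
# Minimal usage sketch (assumption: InferenceConfig is a dataclass and any
# field left unspecified falls back to the defaults shown in the diff above).
import torch

from colossalai.inference.config import InferenceConfig

config = InferenceConfig(
    # runtime limits
    max_batch_size=8,
    max_input_len=256,
    max_output_len=256,
    # general configs
    dtype=torch.float16,  # the annotation Union[str, torch.dtype] suggests a string form is also accepted
    # generation configs
    do_sample=True,
    top_k=50,    # keep only the 50 highest-probability tokens
    top_p=0.9,   # cumulative-probability threshold for nucleus sampling
    # parallelism
    tp_size=1,
    pp_size=1,
)
```

Grouping the fields this way mirrors the NOTE added in the diff: callers reach for the runtime limits and generation knobs far more often than the pipeline-parallel micro-batch settings, which now sit last.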