
Commit 027aa10

[doc] updated inference readme (#5343)
1 parent e76acbb commit 027aa10

10 files changed: +83 −34 lines changed


colossalai/inference/README.md

Lines changed: 66 additions & 34 deletions
@@ -13,39 +13,49 @@
## 📌 Introduction
- ColossalAI-Inference is a library which offers acceleration to Transformers models, especially LLMs. In ColossalAI-Inference, we leverage high-performance kernels, KV cache, paged attention, continuous batching and other techniques to accelerate the inference of LLMs. We also provide a unified interface for users to easily use our library.
+ ColossalAI-Inference is a module which offers acceleration to the inference execution of Transformers models, especially LLMs. In ColossalAI-Inference, we leverage high-performance kernels, KV cache, paged attention, continuous batching and other techniques to accelerate the inference of LLMs. We also provide simple and unified APIs for the sake of user-friendliness.
## 🛠 Design and Implementation

### :book: Overview
- We build ColossalAI-Inference based on **Four** core components: `engine`, `request handler`, `cache manager` (block cache), and `hand-crafted modeling`. **Engine** controls the inference step: it receives `requests`, calls the `request handler` to schedule a decoding batch, runs `modeling` to perform an iteration, and returns finished `requests`. **Cache manager** is bound to the `request handler`; it updates cache blocks and logical block tables during scheduling.
- The interaction between the different components is shown below; you can also check out the detailed introduction below:
+ ColossalAI-Inference has **4** major components, namely `engine`, `request handler`, `cache manager`, and `modeling`.
+ - **Engine**: It orchestrates the inference step. During inference, it receives a request, calls the `request handler` to schedule a decoding batch, and executes the model forward pass to perform an iteration. It returns the inference results back to the user at the end.
+ - **Request Handler**: It manages requests and schedules a proper batch from existing requests.
+ - **Cache Manager**: It is bound to the `request handler`; it updates cache blocks and logical block tables as scheduled by the `request handler`.
+ - **Modeling**: We rewrite the model and layers of LLMs to simplify and optimize the forward pass for inference.
+ A high-level view of the inter-component interaction is given below. More details are introduced in the next few sections.
<p align="center">
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/inference/Structure/Introduction.png" width="600"/>
<br/>
</p>
- ### :mailbox_closed: Design of engine
- Engine is designed as the starter of the inference loop. Users can easily instantiate an inference engine with a config and execute requests. We provide the APIs below in the engine; you can refer to the source code for more information:
- - `generate`: main function, handles inputs and returns outputs
- - `add_request`: add a request to the waiting list
- - `step`: perform one decoding iteration
-   - first, the `request handler` schedules a batch to do prefill/decode
-   - then, invoke the model to generate a batch of tokens
-   - after that, do logit processing and sampling, then check and decode finished requests
- ### :game_die: Design of request_handler
- Request handler is responsible for managing requests and scheduling a proper batch from existing requests. According to existing work and experiments, we believe that it is beneficial to increase the length of decoding sequences. In our design, we partition requests into three priorities depending on their lengths; the longer sequences are considered first.
+ ### :mailbox_closed: Engine
+ The engine is designed as the entry point where the user kickstarts an inference loop. Users can easily instantiate an inference engine with the inference configuration and execute requests. The engine object exposes the following APIs for inference:
+ - `generate`: the main function, which handles inputs, performs inference and returns outputs
+ - `add_request`: add a request to the waiting list
+ - `step`: perform one decoding iteration. The `request handler` first schedules a batch to do prefill/decoding. Then, it invokes the model to generate a batch of tokens, and afterwards does logit processing and sampling, then checks and decodes finished requests.
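To make the division of labor among these three calls concrete, here is a minimal, self-contained sketch of the loop they imply. The `ToyEngine`/`ToyRequest` names and the fake token generation are purely illustrative stand-ins, not the actual ColossalAI classes.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToyRequest:
    prompt: str
    output: List[str] = field(default_factory=list)
    finished: bool = False

class ToyEngine:
    """Illustrative stand-in: add_request -> step loop -> generate returns outputs."""

    def __init__(self, max_new_tokens: int = 4):
        self.waiting: List[ToyRequest] = []
        self.max_new_tokens = max_new_tokens

    def add_request(self, prompt: str) -> None:
        # `add_request`: put the request on the waiting list.
        self.waiting.append(ToyRequest(prompt))

    def step(self) -> List[ToyRequest]:
        # `step`: schedule a batch, run one forward pass, then sample and
        # check for finished requests (all heavily simplified here).
        batch = [r for r in self.waiting if not r.finished]
        for req in batch:
            req.output.append("<token>")  # stands in for model forward + sampling
            req.finished = len(req.output) >= self.max_new_tokens
        return [r for r in batch if r.finished]

    def generate(self, prompt: str) -> str:
        # `generate`: drive `step` until every request has finished.
        self.add_request(prompt)
        while any(not r.finished for r in self.waiting):
            self.step()
        return " ".join(self.waiting[-1].output)

print(ToyEngine().generate("Hello"))  # -> "<token> <token> <token> <token>"
```

In the real engine the forward pass, sampling and cache management are of course far more involved; the sketch only mirrors the call structure.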
+ ### :game_die: Request Handler
+ The request handler is responsible for managing requests and scheduling a proper batch from existing requests. According to existing work and experiments, we believe that it is beneficial to increase the length of decoding sequences. In our design, we partition requests into three priorities depending on their lengths; the longer sequences are considered first.
<p align="center">
<img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/inference/Structure/Request_handler.svg" width="800"/>
<br/>
</p>
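The scheduling policy above (three length-based priority classes, with longer sequences preferred) can be pictured with a small stand-alone sketch; the thresholds and function below are invented for illustration and do not mirror the real `RequestHandler` internals.

```python
from typing import List

def schedule_batch(prompts: List[str], max_batch_size: int = 4,
                   long_len: int = 1024, medium_len: int = 256) -> List[str]:
    """Bucket prompts into three priorities by length and pick the longest first."""
    high = [p for p in prompts if len(p) >= long_len]
    medium = [p for p in prompts if medium_len <= len(p) < long_len]
    low = [p for p in prompts if len(p) < medium_len]

    batch: List[str] = []
    for bucket in (high, medium, low):
        # Within a bucket, longer sequences are still preferred.
        for prompt in sorted(bucket, key=len, reverse=True):
            if len(batch) == max_batch_size:
                return batch
            batch.append(prompt)
    return batch

requests = ["a" * 2000, "b" * 300, "c" * 10, "d" * 1500, "e" * 700]
print([len(p) for p in schedule_batch(requests)])  # [2000, 1500, 700, 300]
```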
- ### :radio: Design of KV cache and cache manager
- We design a unified blocked-type cache and cache manager to distribute memory. The physical memory is allocated before decoding and represented by a logical block table. During the decoding process, the cache manager administrates physical memory through the `block table`, and other components (i.e. the engine) can focus on the lightweight `block table`. Their details are introduced below.
- - `cache block` We group physical memory into different memory blocks. A typical cache block is shaped `(num_kv_heads, head_size, block_size)`. We decide the block number beforehand. The memory allocation and computation are executed with the granularity of a memory block.
- - `block table` Block table is the logical representation of cache blocks. Concretely, a block table of a single sequence is a 1D tensor, with each element holding the block ID of an allocated block, or `-1` for not allocated. Each iteration we pass a batch block table to the corresponding model. For more information, you can check out the source code.
+ ### :radio: KV cache and cache manager
+ We design a unified block cache and cache manager to allocate and manage memory. The physical memory is allocated before decoding and represented by a logical block table. During the decoding process, the cache manager administrates the physical memory through the `block table`, and other components (i.e. the engine) can focus on the lightweight `block table`. More details are given below.
+ - `cache block`: We group physical memory into different memory blocks. A typical cache block is shaped `(num_kv_heads, head_size, block_size)`. We determine the block number beforehand. The memory allocation and computation are executed at the granularity of a memory block.
+ - `block table`: Block table is the logical representation of cache blocks. Concretely, a block table of a single sequence is a 1D tensor, with each element holding a block ID. A block ID of `-1` means "Not Allocated". In each iteration, we pass a batch block table to the corresponding model.
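Below is a minimal sketch of this bookkeeping. The cache-block shape and the `-1` convention come from the text above; the sizes and the tiny allocator are assumptions made purely for illustration.

```python
import torch

# Illustrative sizes only.
num_blocks, num_kv_heads, head_size, block_size = 32, 8, 64, 16
max_blocks_per_seq = 8

# Physical memory: pre-allocated cache blocks of shape (num_kv_heads, head_size, block_size).
k_cache = torch.zeros(num_blocks, num_kv_heads, head_size, block_size)
free_blocks = list(range(num_blocks))

# Logical view: a 1D block table per sequence, where -1 means "not allocated".
block_table = torch.full((max_blocks_per_seq,), -1, dtype=torch.int64)

def allocate(seq_len: int) -> None:
    """Map enough physical blocks onto the table to hold `seq_len` tokens of KV cache."""
    needed = (seq_len + block_size - 1) // block_size
    for slot in range(needed):
        if block_table[slot] == -1:
            block_table[slot] = free_blocks.pop(0)

allocate(40)        # 40 tokens -> ceil(40 / 16) = 3 blocks
print(block_table)  # tensor([ 0,  1,  2, -1, -1, -1, -1, -1])
```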
<figure>
<p align="center">
@@ -57,48 +67,71 @@ We design a unified blocked type cache and cache manager to distribute memory. T
### :railway_car: Modeling
Modeling contains models and layers, which are hand-crafted for better performance and easier usage. Deeply integrated with `shardformer`, we also construct policies for our models. In order to minimize users' learning costs, our models are aligned with [Transformers](https://github.com/huggingface/transformers).
## 🕹 Usage

### :arrow_right: Quick Start
- You can enjoy your fast generation journey within three steps.
```python
- # First, create a model in "transformers" way, you can provide a model config or use the default one.
- model = transformers.LlamaForCausalLM(config).cuda()
- # Second, create an inference_config
+ import torch
+ import transformers
+ import colossalai
+ from colossalai.inference import InferenceEngine, InferenceConfig
+ from pprint import pprint

+ colossalai.launch_from_torch(config={})

+ # Step 1: create a model in "transformers" way
+ model_path = "lmsys/vicuna-7b-v1.3"
+ model = transformers.LlamaForCausalLM.from_pretrained(model_path).cuda()
+ tokenizer = transformers.LlamaTokenizer.from_pretrained(model_path)

+ # Step 2: create an inference_config
inference_config = InferenceConfig(
-     dtype=args.dtype,
-     max_batch_size=args.max_batch_size,
-     max_input_len=args.seq_len,
-     max_output_len=args.output_len,
+     dtype=torch.float16,
+     max_batch_size=4,
+     max_input_len=1024,
+     max_output_len=512,
)
- # Third, create an engine with model and config
- engine = InferenceEngine(model, tokenizer, inference_config, verbose=True)

- # Try fast inference now!
- prompts = {'Nice to meet you, Colossal-Inference!'}
- engine.generate(prompts)
+ # Step 3: create an engine with model and config
+ engine = InferenceEngine(model, tokenizer, inference_config, verbose=True)

+ # Step 4: try inference
+ generation_config = transformers.GenerationConfig(
+     pad_token_id=tokenizer.pad_token_id,
+     max_new_tokens=512,
+ )
+ prompts = ['Who is the best player in the history of NBA?']
+ engine.add_request(prompts=prompts)
+ response = engine.generate(generation_config)
+ pprint(response)
```
### :bookmark: Customize your inference engine
- Besides the basic fast-start inference, you can also customize your inference engine by modifying the config, or by uploading your own model or decoding components (logit processors or sampling strategies).
+ Besides the basic quick-start inference, you can also customize your inference engine by modifying the config, or by uploading your own model or decoding components (logit processors or sampling strategies).
#### Inference Config
Inference Config is a unified API for the generation process. You can define the values of args to control the generation, like `max_batch_size`, `max_output_len` and `dtype`, to decide how many sequences can be handled at a time and how many tokens to output. Refer to the source code for more detail.
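For instance, these knobs are set when constructing the config (the argument names below are the ones used in the Quick Start above; the values are arbitrary):

```python
import torch
from colossalai.inference import InferenceConfig

inference_config = InferenceConfig(
    dtype=torch.float16,   # precision of weights and KV cache
    max_batch_size=8,      # how many sequences can be handled at a time
    max_input_len=1024,    # longest accepted prompt, in tokens
    max_output_len=256,    # upper bound on generated tokens per sequence
)
```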
#### Generation Config
In colossal-inference, the Generation Config API is inherited from [Transformers](https://github.com/huggingface/transformers), and its usage is aligned. By default, it is automatically generated by our system and you don't need to construct one. If you have such a demand, you can also create your own and send it to your engine.
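For example, a hand-built config with sampling parameters can be passed to the engine created in the Quick Start above (the values here are illustrative):

```python
from transformers import GenerationConfig

generation_config = GenerationConfig(
    do_sample=True,      # switch from greedy decoding to sampling
    temperature=0.7,
    top_k=50,
    max_new_tokens=256,
)
response = engine.generate(generation_config)  # `engine` as created in the Quick Start snippet
```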
#### Logit Processors
- The Logit Processor receives logits and returns processed ones; take the following steps to make your own.
+ The `Logit Processor` receives logits and returns processed results. You can take the following steps to make your own.
```python
@register_logit_processor("name")
def xx_logit_processor(logits, args):
    logits = do_some_process(logits)
    return logits
```
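For instance, a temperature-scaling processor could be registered the same way; only the decorator comes from the snippet above, while the scaling logic is our own illustration:

```python
@register_logit_processor("temperature")
def temperature_logit_processor(logits, temperature):
    # Sharpen (temperature < 1) or flatten (temperature > 1) the distribution
    # before sampling by rescaling the raw logits.
    return logits / temperature
```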
#### Sampling Strategies
We offer 3 main sampling strategies now (i.e. `greedy sample`, `multinomial sample`, `beam_search sample`); you can refer to [sampler](/ColossalAI/colossalai/inference/sampler.py) for more details. We would strongly appreciate it if you contributed your own varieties.
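As a quick reminder of what the first two strategies do (a generic sketch, not the ColossalAI sampler itself):

```python
import torch

logits = torch.tensor([2.0, 1.0, 0.1])
probs = torch.softmax(logits, dim=-1)

greedy_token = torch.argmax(probs)                        # greedy: always the most likely token
sampled_token = torch.multinomial(probs, num_samples=1)   # multinomial: draw from the distribution
print(greedy_token.item(), sampled_token.item())
```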
## 🪅 Support Matrix
| Model | KV Cache | Paged Attention | Kernels | Tensor Parallelism | Speculative Decoding |
@@ -158,5 +191,4 @@ If you wish to cite relevant research papers, you can find the reference below.
}

# we do not find any research work related to lightllm
```

colossalai/inference/__init__.py

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
+ from .config import InferenceConfig
+ from .core import InferenceEngine

+ __all__ = ["InferenceConfig", "InferenceEngine"]
Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
+ from .engine import InferenceEngine
+ from .request_handler import RequestHandler

+ __all__ = ["InferenceEngine", "RequestHandler"]

colossalai/inference/core/engine.py

Lines changed: 2 additions & 0 deletions
@@ -17,6 +17,8 @@
from .request_handler import RequestHandler

+ __all__ = ["InferenceEngine"]

PP_AXIS, TP_AXIS = 0, 1

_supported_models = [

colossalai/inference/core/request_handler.py

Lines changed: 2 additions & 0 deletions
@@ -11,6 +11,8 @@
from colossalai.inference.struct import BatchInfo, RequestStatus, Sequence
from colossalai.logging import get_dist_logger

+ __all__ = ["RunningList", "RequestHandler"]

logger = get_dist_logger(__name__)

colossalai/inference/kv_cache/block_cache.py

Lines changed: 2 additions & 0 deletions
@@ -1,5 +1,7 @@
from typing import Any

+ __all__ = ["CacheBlock"]


class CacheBlock:
    """A simplified version of logical cache block used for Paged Attention."""

colossalai/inference/kv_cache/kvcache_manager.py

Lines changed: 2 additions & 0 deletions
@@ -10,6 +10,8 @@
from .block_cache import CacheBlock

+ __all__ = ["KVCacheManager"]

GIGABYTE = 1024**3

colossalai/inference/modeling/__init__.py

Whitespace-only changes.

colossalai/inference/modeling/layers/__init__.py

Whitespace-only changes.

requirements/requirements.txt

Lines changed: 1 addition & 0 deletions
@@ -16,3 +16,4 @@ ray
sentencepiece
google
protobuf
+ ordered-set
