
Commit c7c104c

[DOC] Update inference readme (#5280)
* add readme * add readme * 1 * update engine * finish readme * add readme

2 files changed: 79 additions, 3 deletions
colossalai/inference/README.md

Lines changed: 78 additions & 3 deletions
@@ -13,18 +13,92 @@
## 📌 Introduction
ColossalAI-Inference is a library that offers acceleration for Transformers models, especially LLMs. In ColossalAI-Inference, we leverage high-performance kernels, KV cache, paged attention, continuous batching, and other techniques to accelerate LLM inference. We also provide a unified interface so users can easily use the library.

## 🛠 Design and Implementation
### :book: Overview
ColossalAI-Inference is built on **four** core components: the `engine`, the `request handler`, the `cache manager` (block cache), and `hand-crafted modeling`. The **engine** controls the inference step: it receives `requests`, calls the `request handler` to schedule a decoding batch, runs `modeling` to perform one iteration, and returns the finished `requests`. The **cache manager** is bound to the `request handler`; it updates the cache blocks and logical block tables during scheduling.

The interaction between the different components is shown below; detailed introductions follow.
<p align="center">
   <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/inference/Structure/Introduction.png" width="600"/>
   <br/>
</p>
### :mailbox_closed: Design of engine
The engine is designed as the starter of the inference loop. Users can easily instantiate an inference engine with a config and then submit requests. We provide the APIs below in the engine; refer to the source code for more information, and see the sketch after this list for a minimal usage example:
- `generate`: the main entry point; handles inputs and returns outputs
- `add_request`: adds a request to the waiting list
- `step`: performs one decoding iteration
    - first, the `request handler` schedules a batch for prefill/decode
    - then, the model is invoked to generate a batch of tokens
    - after that, logits are processed and sampled, and finished requests are checked and decoded
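The sketch below drives this loop by hand instead of calling `generate`. It assumes `model`, `tokenizer`, and `inference_config` are built as in the Quick Start further down; the `add_request` signature and the unfinished-request check are assumptions here, so please confirm them against the source code.
```python
# Minimal sketch of the manual inference loop (not the definitive API).
engine = InferenceEngine(model, tokenizer, inference_config)

# Enqueue a request onto the waiting list (exact signature is assumed).
engine.add_request(prompts=["Nice to meet you, Colossal-Inference!"])

outputs = []
while engine.request_handler.check_unfinished_seqs():  # assumed helper; see source
    # One call to `step`: schedule a batch, run the model once, process and
    # sample logits, then return the texts of sequences that just finished.
    outputs.extend(engine.step())
print(outputs)
```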
### :game_die: Design of request_handler
The request handler is responsible for managing requests and scheduling a proper batch from the existing requests. Based on prior work and our experiments, we believe it is beneficial to prioritize longer decoding sequences. In our design, requests are partitioned into three priority buckets by length, and longer sequences are considered first (a toy sketch of this idea follows the figure).
<p align="center">
   <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/inference/Structure/Request_handler.svg" width="800"/>
   <br/>
</p>
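The snippet below is illustrative only: it shows the length-based bucketing idea, not the actual scheduler. The length thresholds are made up, and the real request handler must also respect cache-block availability and other batch constraints.
```python
# Toy sketch of length-based priority scheduling (illustrative only).
def schedule_batch(waiting_requests, max_batch_size, short_len=512, long_len=2048):
    long, medium, short = [], [], []
    for req in waiting_requests:
        n_tokens = len(req.input_token_id)
        if n_tokens >= long_len:
            long.append(req)
        elif n_tokens >= short_len:
            medium.append(req)
        else:
            short.append(req)

    # Longer sequences are considered first.
    batch = []
    for bucket in (long, medium, short):
        for req in sorted(bucket, key=lambda r: len(r.input_token_id), reverse=True):
            if len(batch) < max_batch_size:
                batch.append(req)
    return batch
```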
### :radio: Design of KV cache and cache manager
We design a unified block-based cache and a cache manager to distribute memory. The physical memory is allocated before decoding and is represented by a logical block table. During decoding, the cache manager administers the physical memory through the `block table`, so other components (e.g. the engine) only need to work with the lightweight `block table`. The details are introduced below.
- `cache block`: We group physical memory into memory blocks. A typical cache block has shape `(num_kv_heads, head_size, block_size)`. The number of blocks is decided beforehand, and memory allocation and computation are executed at the granularity of a memory block.
- `block table`: The block table is the logical representation of the cache blocks. Concretely, the block table of a single sequence is a 1D tensor in which each element holds the id of an allocated block, or `-1` if no block is allocated. Each iteration, we pass a batched block table to the corresponding model. For more information, you can check out the source code; a toy sketch follows the figure below.
<figure>
  <p align="center">
    <img src="https://raw.githubusercontent.com/hpcaitech/public_assets/main/colossalai/img/inference/Structure/BlockTable.svg"/>
    <br/>
    <figcaption>Example of a batched block table</figcaption>
  </p>
</figure>
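The toy snippet below builds a batched block table in the layout described above. The sequential allocation policy is a simplification for illustration, not the library's cache manager.
```python
import torch

# Toy illustration of a batched block table: one row per sequence, each entry
# holds a cache-block id, and -1 marks slots with no block allocated yet.
block_size = 16
seq_lens = [40, 10, 25]  # prompt + generated tokens per sequence
max_blocks = (max(seq_lens) + block_size - 1) // block_size

block_table = torch.full((len(seq_lens), max_blocks), -1, dtype=torch.int32)
next_free = 0
for i, seq_len in enumerate(seq_lens):
    n_blocks = (seq_len + block_size - 1) // block_size
    block_table[i, :n_blocks] = torch.arange(next_free, next_free + n_blocks)
    next_free += n_blocks

print(block_table)
# tensor([[ 0,  1,  2],
#         [ 3, -1, -1],
#         [ 4,  5, -1]], dtype=torch.int32)
```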
### :railway_car: Modeling
Modeling contains models and layers that are hand-crafted for better performance and easier usage. Deeply integrated with `shardformer`, we also construct policies for our models. To minimize users' learning costs, our models are aligned with [Transformers](https://github.com/huggingface/transformers).

## 🕹 Usage

### :arrow_right: Quick Start
You can enjoy your fast generation journey within three steps:
```python
import transformers

# Note: the import path below is an assumption; check the library source if it differs.
from colossalai.inference import InferenceConfig, InferenceEngine

# First, create a model in the "transformers" way. You can provide a model config
# or use the default one.
model = transformers.LlamaForCausalLM(config).cuda()

# Second, create an inference_config
inference_config = InferenceConfig(
    dtype=args.dtype,
    max_batch_size=args.max_batch_size,
    max_input_len=args.seq_len,
    max_output_len=args.output_len,
)

# Third, create an engine with the model and config
engine = InferenceEngine(model, tokenizer, inference_config, verbose=True)

# Try fast inference now!
prompts = ['Nice to meet you, Colossal-Inference!']
engine.generate(prompts)
```
### :bookmark: Customize your inference engine
Besides the basic quick-start inference, you can also customize your inference engine by modifying the inference config or by plugging in your own model or decoding components (logit processors or sampling strategies).
#### Inference Config
Inference Config is a unified API for the generation process. You can set its arguments, such as `max_batch_size`, `max_output_len`, and `dtype`, to control how many sequences are handled at a time, how many tokens are generated, and the compute precision. Refer to the source code for more detail.
#### Generation Config
In colossal-inference, the generation config API is inherited from [Transformers](https://github.com/huggingface/transformers) and its usage is aligned with it. By default, a generation config is created automatically by our system, so you do not need to construct one yourself. If you want to, you can also create your own and pass it to the engine, e.g. as sketched below.
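A sketch of building a Transformers `GenerationConfig` and handing it to the engine; whether `generate` accepts it under the `generation_config` keyword is an assumption here, so check the source for the real parameter name.
```python
from transformers import GenerationConfig

# Sketch: a Transformers-style generation config passed to the engine.
# The `generation_config=` keyword is assumed, not confirmed.
generation_config = GenerationConfig(
    max_new_tokens=128,
    do_sample=True,
    top_k=50,
    top_p=0.9,
)
engine.generate(prompts, generation_config=generation_config)
```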
#### Logit Processors
A logit processor receives the logits and returns the processed ones. Take the following steps to make your own:
```python
# Register your processor under a name; it takes the raw logits (plus any extra
# arguments), transforms them, and returns the processed logits.
@register_logit_processor("name")
def xx_logit_processor(logits, args):
    logits = do_some_process(logits)
    return logits
```
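For instance, a hypothetical temperature processor written in the same pattern; the registered name and the extra argument are illustrative, not necessarily part of the library.
```python
@register_logit_processor("temperature")
def temperature_logit_processor(logits, temperature):
    # Flatten the distribution for temperature > 1, sharpen it for < 1.
    return logits / max(temperature, 1e-5)
```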
#### Sampling Strategies
We currently offer three main sampling strategies (`greedy sample`, `multinomial sample`, and `beam_search sample`); you can refer to [sampler](/ColossalAI/colossalai/inference/sampler.py) for more details. We would strongly appreciate contributions of your own varieties. A minimal illustration of the two simplest strategies is sketched below.
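As an illustration only (not the library's sampler):
```python
import torch

# Illustrative only: greedy and multinomial sampling over processed logits
# of shape (batch_size, vocab_size).
logits = torch.randn(4, 32000)
probs = torch.softmax(logits, dim=-1)

greedy_tokens = torch.argmax(probs, dim=-1)        # greedy sample
multinomial_tokens = torch.multinomial(probs, 1)   # multinomial sample
```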
## 🪅 Support Matrix

| Model | KV Cache | Paged Attention | Kernels | Tensor Parallelism | Speculative Decoding |
@@ -44,6 +118,7 @@ Notations:
- [x] High-Performance Kernels
- [x] Llama Modelling
- [ ] Tensor Parallelism
- [ ] Beam Search
- [ ] Speculative Decoding
- [ ] Continuous Batching
- [ ] Online Inference

colossalai/inference/core/engine.py

Lines changed: 1 addition & 0 deletions
@@ -242,6 +242,7 @@ def step(self) -> List[str]:
        finished_sequences = self.request_handler.update()

        # Decode completed sentences.
        # TODO : update decoding step
        for seq in finished_sequences:
            output_str = self.tokenizer.decode(seq.input_token_id + seq.output_token_id, skip_special_tokens=True)
            output_list.append(output_str)
