Refactor the files in the example directory and add chunk size searching. Evaluate on 8 nodes of a SuperPod. Fix bugs in the multi-GPU memory tracer.
The system is successfully evaluated on a multi-node cluster. The benchmark scripts are integrated with memory-centric tiling borrowed from DeepSpeed. It trains an 18B-parameter model on WeChat Yard.
The system is evaluated on an A100 SuperPod. Further optimizations improve model scale and efficiency, including memory-saving communication (MSC) and allocation caching (CACHE). A severe bug caused by asynchronous chunk copying via streams is identified and fixed. It trains a 50B-parameter model on a single 8xA100 SuperPod node.
The system is upgraded with a better memory tracer, raising the maximum model scale beyond v0.3.0 (15B vs. 12B parameters) on the WeChat Yard Platform.
Our initial version significantly surpasses DeepSpeed in both model scale and computing efficiency.