Releases · Angel-ML/PyTorch-On-Angel
Release-0.4.0
PyTorch on Angel arms PyTorch with a powerful Parameter Server, enabling PyTorch to train very large models. We introduce the following new features in version 0.4.0:
- Upgrade Spark version to 3.3.1
- Upgrade Scala version to 2.12.15
- Upgrade Angel version to 3.3.0
- Add a new GNN model, GATNE, focused on embedding learning for attributed multiplex heterogeneous networks
- Decouple GAMLP into two independent modules, Aggregator and GAMLP: the Aggregator performs feature propagation and aggregation, while GAMLP handles model training by loading the features produced by the Aggregator, which greatly improves training efficiency (a rough sketch of this idea appears after this list)
- Improve the usability of PyTorch on Angel and reduce the cost of producing user models; the optimizations are as follows:
- Separate `examples` from `java` as an independent module
- Supports templated parameter configuration, such as using a YAML file to configure parameters
- Supports uploading user-defined Python models
- Integrates PythonRunner into PyTorch on Angel, so PT (TorchScript) model files can be generated from configuration-file parameters without installing a local environment; the system then loads the generated PT models directly for training (as sketched below)
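The release itself does not include code, but as a rough illustration of the PT-file workflow, a user-defined PyTorch model is compiled to a TorchScript .pt file along these lines (the model class, dimensions, and file name below are hypothetical):

```python
import torch

# Hypothetical user-defined model. PyTorch on Angel consumes models as
# TorchScript .pt files; PythonRunner now generates them from config
# parameters, and this sketch performs the same step by hand.
class TinyRanker(torch.nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        self.fc1 = torch.nn.Linear(input_dim, hidden_dim)
        self.fc2 = torch.nn.Linear(hidden_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc2(torch.relu(self.fc1(x)))

# Compile to TorchScript and save the .pt file the system loads for training.
scripted = torch.jit.script(TinyRanker(input_dim=148, hidden_dim=64))
scripted.save("tiny_ranker.pt")
```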
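And a minimal sketch of the Aggregator/GAMLP decoupling idea, assuming a dense normalized adjacency matrix; this shows the general precompute-then-train pattern, not the project's actual API (all names are hypothetical):

```python
import torch

def aggregate(a_hat: torch.Tensor, x: torch.Tensor, k: int) -> torch.Tensor:
    """Aggregator step: precompute k hops of feature propagation once."""
    hops = [x]
    for _ in range(k):
        hops.append(a_hat @ hops[-1])
    return torch.cat(hops, dim=1)  # concatenated hop-0..k features

# Toy inputs: 4 nodes with 8 features each.
a_hat = torch.eye(4)                 # stand-in for a normalized adjacency matrix
x = torch.randn(4, 8)

# Training step: a plain MLP consumes the cached, propagated features,
# so graph propagation is not repeated every epoch.
features = aggregate(a_hat, x, k=3)  # run once, reuse across epochs
mlp = torch.nn.Sequential(
    torch.nn.Linear(features.shape[1], 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 2),
)
logits = mlp(features)
```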
Bug fixes:
- Handle null values for GNNs without labels
- Recalculate the number of PS partitions for incremental training
0.1.0
PyTorch on Angel arms PyTorch with a powerful Parameter Server, enabling PyTorch to train very large models. We introduce the following new features:
- Support distributed PyTorch training through the Angel parameter server.
- Support a series of recommendation algorithms, including FM, DeepFM, AttentionFM, DeepAndWide, DCN, PNN, and xDeepFM.
- Provide the ability to run graph convolutional network algorithms; GraphSAGE and GCN are currently supported (see the sketch below).
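For reference, the core GCN propagation rule H' = σ(Â·H·W) can be sketched as follows; this is a generic illustration of the technique, not this project's implementation:

```python
import torch

class GCNLayer(torch.nn.Module):
    """One GCN propagation step: H' = relu(A_hat @ H @ W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.linear(a_hat @ h))

# Toy usage: 4 nodes, 8 input features, 3 output features per node.
layer = GCNLayer(8, 3)
out = layer(torch.eye(4), torch.randn(4, 8))  # identity adjacency as a stand-in
```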