
Releases: Angel-ML/PyTorch-On-Angel

Release-0.4.0

29 Sep 09:18
16f7dce


PyTorch On Angel arms PyTorch with a powerful parameter server, enabling PyTorch to train very large models. We introduce the following new features in version 0.4.0:

  1. Upgrade Spark version to 3.3.1
  2. Upgrade Scala version to 2.12.15
  3. Upgrade Angel version to 3.3.0
  4. Add a new GNN model: GATNE, focused on embedding learning for the Attributed Multiplex Heterogeneous Network
  5. Decouple GAMLP into two independent modules, Aggregator and GAMLP: the Aggregator performs feature propagation and aggregation, while GAMLP handles model training by loading the features produced by the Aggregator, which greatly improves training efficiency (see the first sketch after this list)
  6. Improve the usability of PyTorch on Angel and reduce the cost of producing user models. The optimizations are as follows:
  • Separate the examples from the Java sources into an independent module
  • Support templated parameter configuration, e.g., configuring parameters through a YAML file
  • Support uploading user-defined Python models
  • Integrate PythonRunner into PyTorch on Angel, so that PT files are generated from configuration-file parameters without installing a local environment and the resulting PT models are loaded directly for training (see the second sketch below)
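To make the decoupling in item 5 concrete, here is a minimal, self-contained sketch of the two-stage idea: the aggregation stage propagates features over the graph once and keeps one feature matrix per hop, and the training stage learns an attention-weighted combination of those precomputed matrices. All names, dimensions, and the dense adjacency matrix below are illustrative assumptions, not the actual PyTorch on Angel API.

```python
# Hypothetical sketch of the decoupled GAMLP design, for illustration only:
# stage 1 precomputes multi-hop aggregated features once; stage 2 trains on them.
import torch

def aggregate_features(adj: torch.Tensor, x: torch.Tensor, hops: int):
    """Stage 1 (Aggregator): propagate features over the (normalized)
    adjacency matrix and keep one feature matrix per hop."""
    feats = [x]
    for _ in range(hops):
        x = adj @ x          # one round of neighborhood aggregation
        feats.append(x)
    return feats             # persisted, then reloaded by the training stage

class GAMLPHead(torch.nn.Module):
    """Stage 2 (GAMLP): attention-weighted combination of the precomputed
    hop features, followed by an MLP classifier."""
    def __init__(self, in_dim: int, hops: int, n_classes: int):
        super().__init__()
        self.att = torch.nn.Parameter(torch.zeros(hops + 1))
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(in_dim, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, n_classes))

    def forward(self, feats):
        w = torch.softmax(self.att, dim=0)          # one weight per hop
        h = sum(wi * f for wi, f in zip(w, feats))  # combine hop features
        return self.mlp(h)
```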
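PyTorch on Angel trains from TorchScript PT files, so the PythonRunner integration above amounts to producing such a file from configuration parameters rather than from a locally written script. The sketch below, with a toy model named TinyModel (a made-up example, not a model shipped with the project), shows what generating a PT file looks like in plain PyTorch:

```python
# Illustration only: a PT file is a TorchScript module that the Angel side
# can load and train without a local Python environment.
import torch

class TinyModel(torch.nn.Module):
    def __init__(self, in_dim: int = 16):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.linear(x))

scripted = torch.jit.script(TinyModel())   # compile to TorchScript
scripted.save("tiny_model.pt")             # the PT file consumed for training
```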

Bug fixes:

  1. Handle null values for GNNs without labels
  2. Recalculate the number of PS partitions for incremental training

0.1.0

20 Aug 13:13


0.1.0 Pre-release

PyTorch On Angel arms PyTorch with a powerful parameter server, enabling PyTorch to train very large models. We introduce the following new features:

  • Support PyTorch distributed training through the Angel parameter server.
  • Support a series of recommendation algorithms, including FM, DeepFM, AttentionFM, DeepAndWide, DCN, PNN, and XDeepFM (a minimal FM sketch follows this list).
  • Provide the ability to run graph convolutional network algorithms; GraphSage and GCN are currently supported.
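As one example of the supported recommendation models, here is a minimal dense-input factorization machine in plain PyTorch. It is only a sketch of the FM scoring function using the standard O(nk) pairwise-interaction identity; the repository's implementations are sparse and backed by the Angel parameter server, and the class name and sizes here are assumptions.

```python
import torch

class FM(torch.nn.Module):
    """Minimal dense-input factorization machine: a linear term plus
    second-order pairwise interactions through latent factors."""
    def __init__(self, n_features: int, k: int = 8):
        super().__init__()
        self.bias = torch.nn.Parameter(torch.zeros(1))
        self.w = torch.nn.Linear(n_features, 1, bias=False)  # first-order weights
        self.v = torch.nn.Parameter(torch.randn(n_features, k) * 0.01)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pairwise term via the identity
        # sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * [ (sum_i v_i x_i)^2 - sum_i (v_i x_i)^2 ]
        s = x @ self.v                                        # [batch, k]
        pair = 0.5 * (s.pow(2) - x.pow(2) @ self.v.pow(2)).sum(1, keepdim=True)
        return torch.sigmoid(self.bias + self.w(x) + pair)   # [batch, 1] scores
```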