Angel is a high-performance distributed machine learning and graph computing platform based on the Parameter Server philosophy. Tuned on Tencent's large-scale production data, it offers broad applicability and stability, with a growing advantage in handling higher-dimensional models. Angel is jointly developed by Tencent and Peking University, combining the high availability required in industry with innovation from academia.
Built around a model-centered core design, Angel partitions the parameters of complex models across multiple parameter-server nodes, and implements a variety of machine learning and graph algorithms through efficient model-updating interfaces and functions, together with flexible consistency models for synchronization.
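As an illustration of this design idea only (not Angel's actual API), the plain-Scala sketch below range-partitions a model vector across several parameter-server shards and exposes simple pull/push update interfaces; every name in it is hypothetical.

```scala
// Illustrative only: a toy range-partitioned parameter server in plain Scala.
// All class and method names are hypothetical, not Angel's API.
object ToyParameterServer {
  // One shard holds a contiguous range [start, start + values.length) of the model.
  final case class Shard(start: Int, values: Array[Double])

  // Partition a `dim`-dimensional model across `numServers` shards.
  def partition(dim: Int, numServers: Int): Array[Shard] = {
    val per = math.ceil(dim.toDouble / numServers).toInt
    Array.tabulate(numServers) { i =>
      val start = i * per
      Shard(start, new Array[Double](math.max(0, math.min(per, dim - start))))
    }
  }

  private def shardOf(shards: Array[Shard], index: Int): Shard =
    shards.find(s => index >= s.start && index < s.start + s.values.length).get

  // "Pull": read a parameter from whichever shard owns it.
  def pull(shards: Array[Shard], index: Int): Double = {
    val s = shardOf(shards, index)
    s.values(index - s.start)
  }

  // "Push": apply an additive update (e.g. a gradient) to the owning shard.
  def push(shards: Array[Shard], index: Int, delta: Double): Unit = {
    val s = shardOf(shards, index)
    s.values(index - s.start) += delta
  }

  def main(args: Array[String]): Unit = {
    val shards = partition(dim = 10, numServers = 3)
    push(shards, index = 7, delta = 0.5)   // a worker pushes an update
    println(pull(shards, index = 7))       // another worker pulls the new value: 0.5
  }
}
```

In the real system the shards live on separate server processes and workers communicate with them over the network, but the pull/push contract is the same.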
Angel is developed in Java and Scala and runs on Yarn. Through its PS Service abstraction, it also supports Spark on Angel. Support for graph computing and deep learning frameworks is under development and will be released in the future.
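To give a sense of how a Spark driver attaches to the PS Service through Spark on Angel, here is a minimal sketch; the Angel-side calls (`PSContext.getOrCreate`, `PSVector.dense`, `PSContext.stop`) follow the project's quick-start examples but should be read as assumptions, not a verified API reference.

```scala
import org.apache.spark.{SparkConf, SparkContext}
// The Angel imports and calls below are assumptions based on the
// Spark on Angel quick-start, not a verified API reference.
import com.tencent.angel.spark.context.PSContext
import com.tencent.angel.spark.models.PSVector

object SparkOnAngelSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("SparkOnAngelSketch"))

    // Attach this Spark application to Angel's PS Service (assumed entry point).
    PSContext.getOrCreate(sc)

    // Allocate a dense model vector that lives on the parameter servers
    // (assumed factory method; dimension chosen arbitrarily).
    val weights = PSVector.dense(100)

    // Executors would then pull `weights`, compute local updates inside RDD
    // operations, and push increments back through the PSVector interfaces.

    PSContext.stop()   // assumed shutdown of the PS Service
    sc.stop()
  }
}
```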
We welcome everyone interested in machine learning or graph computing to contribute code, create issues, or open pull requests. Please refer to the Angel Contribution Guide for more details.
- Angel or Spark On Angel?
- Algorithm Parameter Description
- Angel
- Traditional Machine Learning Methods
- Spark on Angel
- Compilation Guide
- Running on Local
- Running on Yarn
- Configuration Details
- Resource Configuration Guide
- Mailing list: angel-tsc@lists.deeplearningfoundation.org
- Angel homepage in the LF Deep Learning Foundation: https://lists.deeplearningfoundation.org/g/angel-main
- Committers & Contributors
- Contributing to Angel
- Roadmap
- Lele Yu, Bin Cui, Ce Zhang, Yingxia Shao. LDA*: A Robust and Large-scale Topic Modeling System. VLDB, 2017
- Jiawei Jiang, Bin Cui, Ce Zhang, Lele Yu. Heterogeneity-aware Distributed Parameter Servers. SIGMOD, 2017
- Jie Jiang, Lele Yu, Jiawei Jiang, Yuhong Liu and Bin Cui. Angel: a new large-scale machine learning system. National Science Review (NSR), 2017
- Jie Jiang, Jiawei Jiang, Bin Cui and Ce Zhang. TencentBoost: A Gradient Boosting Tree System with Parameter Server. ICDE, 2017
- Jiawei Jiang, Bin Cui, Ce Zhang and Fangcheng Fu. DimBoost: Boosting Gradient Boosting Decision Tree to Higher Dimensions. SIGMOD, 2018.
- Jiawei Jiang, Pin Xiao, Lele Yu, Xiaosen Li. PSGraph: How Tencent trains extremely large-scale graphs with Spark? ICDE, 2020