Update URLs in READMEs (#2949)

Robert-Steiner authored Feb 15, 2024
1 parent 314492c commit 0629589
Showing 21 changed files with 107 additions and 107 deletions.
50 changes: 25 additions & 25 deletions README.md
@@ -1,24 +1,24 @@
# Flower: A Friendly Federated Learning Framework

<p align="center">
<a href="https://flower.dev/">
<img src="https://flower.dev/_next/image/?url=%2F_next%2Fstatic%2Fmedia%2Fflower_white_border.c2012e70.png&w=640&q=75" width="140px" alt="Flower Website" />
<a href="https://flower.ai/">
<img src="https://flower.ai/_next/image/?url=%2F_next%2Fstatic%2Fmedia%2Fflower_white_border.c2012e70.png&w=640&q=75" width="140px" alt="Flower Website" />
</a>
</p>
<p align="center">
<a href="https://flower.dev/">Website</a> |
<a href="https://flower.dev/blog">Blog</a> |
<a href="https://flower.dev/docs/">Docs</a> |
<a href="https://flower.dev/conf/flower-summit-2022">Conference</a> |
<a href="https://flower.dev/join-slack">Slack</a>
<a href="https://flower.ai/">Website</a> |
<a href="https://flower.ai/blog">Blog</a> |
<a href="https://flower.ai/docs/">Docs</a> |
<a href="https://flower.ai/conf/flower-summit-2022">Conference</a> |
<a href="https://flower.ai/join-slack">Slack</a>
<br /><br />
</p>

[![GitHub license](https://img.shields.io/github/license/adap/flower)](https://github.com/adap/flower/blob/main/LICENSE)
[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/adap/flower/blob/main/CONTRIBUTING.md)
![Build](https://github.com/adap/flower/actions/workflows/framework.yml/badge.svg)
[![Downloads](https://static.pepy.tech/badge/flwr)](https://pepy.tech/project/flwr)
-[![Slack](https://img.shields.io/badge/Chat-Slack-red)](https://flower.dev/join-slack)
+[![Slack](https://img.shields.io/badge/Chat-Slack-red)](https://flower.ai/join-slack)

Flower (`flwr`) is a framework for building federated learning systems. The
design of Flower is based on a few guiding principles:
@@ -39,7 +39,7 @@ design of Flower is based on a few guiding principles:
- **Understandable**: Flower is written with maintainability in mind. The
community is encouraged to both read and contribute to the codebase.

-Meet the Flower community on [flower.dev](https://flower.dev)!
+Meet the Flower community on [flower.ai](https://flower.ai)!
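
Since this commit only updates documentation URLs, no framework code changes here. For readers new to Flower, a minimal client sketch follows (illustrative only, based on the public `flwr` 1.x `NumPyClient` API; the toy parameter array and "training" step are placeholders, not code from this repository):

```python
# Minimal illustrative Flower client (flwr 1.x NumPyClient API).
# The "model" is a single NumPy array; "training" is a placeholder update.
import flwr as fl
import numpy as np

class TinyClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return [np.zeros(3)]  # toy model: one parameter array

    def fit(self, parameters, config):
        updated = [p + 1.0 for p in parameters]  # stand-in for local training
        return updated, 1, {}  # (parameters, num_examples, metrics)

    def evaluate(self, parameters, config):
        return 0.0, 1, {"accuracy": 1.0}  # (loss, num_examples, metrics)

# Assumes a Flower server is already listening on localhost:8080.
fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=TinyClient())
```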

## Federated Learning Tutorial

@@ -73,19 +73,19 @@ Stay tuned, more tutorials are coming soon. Topics include **Privacy and Securit

## Documentation

-[Flower Docs](https://flower.dev/docs):
+[Flower Docs](https://flower.ai/docs):

-- [Installation](https://flower.dev/docs/framework/how-to-install-flower.html)
-- [Quickstart (TensorFlow)](https://flower.dev/docs/framework/tutorial-quickstart-tensorflow.html)
-- [Quickstart (PyTorch)](https://flower.dev/docs/framework/tutorial-quickstart-pytorch.html)
-- [Quickstart (Hugging Face)](https://flower.dev/docs/framework/tutorial-quickstart-huggingface.html)
-- [Quickstart (PyTorch Lightning)](https://flower.dev/docs/framework/tutorial-quickstart-pytorch-lightning.html)
-- [Quickstart (Pandas)](https://flower.dev/docs/framework/tutorial-quickstart-pandas.html)
-- [Quickstart (fastai)](https://flower.dev/docs/framework/tutorial-quickstart-fastai.html)
-- [Quickstart (JAX)](https://flower.dev/docs/framework/tutorial-quickstart-jax.html)
-- [Quickstart (scikit-learn)](https://flower.dev/docs/framework/tutorial-quickstart-scikitlearn.html)
-- [Quickstart (Android [TFLite])](https://flower.dev/docs/framework/tutorial-quickstart-android.html)
-- [Quickstart (iOS [CoreML])](https://flower.dev/docs/framework/tutorial-quickstart-ios.html)
+- [Installation](https://flower.ai/docs/framework/how-to-install-flower.html)
+- [Quickstart (TensorFlow)](https://flower.ai/docs/framework/tutorial-quickstart-tensorflow.html)
+- [Quickstart (PyTorch)](https://flower.ai/docs/framework/tutorial-quickstart-pytorch.html)
+- [Quickstart (Hugging Face)](https://flower.ai/docs/framework/tutorial-quickstart-huggingface.html)
+- [Quickstart (PyTorch Lightning)](https://flower.ai/docs/framework/tutorial-quickstart-pytorch-lightning.html)
+- [Quickstart (Pandas)](https://flower.ai/docs/framework/tutorial-quickstart-pandas.html)
+- [Quickstart (fastai)](https://flower.ai/docs/framework/tutorial-quickstart-fastai.html)
+- [Quickstart (JAX)](https://flower.ai/docs/framework/tutorial-quickstart-jax.html)
+- [Quickstart (scikit-learn)](https://flower.ai/docs/framework/tutorial-quickstart-scikitlearn.html)
+- [Quickstart (Android [TFLite])](https://flower.ai/docs/framework/tutorial-quickstart-android.html)
+- [Quickstart (iOS [CoreML])](https://flower.ai/docs/framework/tutorial-quickstart-ios.html)

## Flower Baselines

@@ -112,9 +112,9 @@ Flower Baselines is a collection of community-contributed projects that reproduc
- [FedAvg](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist)
- [FedOpt](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization)

-Please refer to the [Flower Baselines Documentation](https://flower.dev/docs/baselines/) for a detailed categorization of baselines and for additional info including:
-* [How to use Flower Baselines](https://flower.dev/docs/baselines/how-to-use-baselines.html)
-* [How to contribute a new Flower Baseline](https://flower.dev/docs/baselines/how-to-contribute-baselines.html)
+Please refer to the [Flower Baselines Documentation](https://flower.ai/docs/baselines/) for a detailed categorization of baselines and for additional info including:
+* [How to use Flower Baselines](https://flower.ai/docs/baselines/how-to-use-baselines.html)
+* [How to contribute a new Flower Baseline](https://flower.ai/docs/baselines/how-to-contribute-baselines.html)

## Flower Usage Examples

@@ -151,7 +151,7 @@ Other [examples](https://github.com/adap/flower/tree/main/examples):

## Community

-Flower is built by a wonderful community of researchers and engineers. [Join Slack](https://flower.dev/join-slack) to meet them; [contributions](#contributing-to-flower) are welcome.
+Flower is built by a wonderful community of researchers and engineers. [Join Slack](https://flower.ai/join-slack) to meet them; [contributions](#contributing-to-flower) are welcome.

<a href="https://github.com/adap/flower/graphs/contributors">
<img src="https://contrib.rocks/image?repo=adap/flower" />
6 changes: 3 additions & 3 deletions baselines/README.md
@@ -1,7 +1,7 @@
# Flower Baselines


> We are changing the way we structure the Flower baselines. While we complete the transition to the new format, you can still find the existing baselines in the `flwr_baselines` directory. Currently, you can make use of baselines for [FedAvg](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist), [FedOpt](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization), and [LEAF-FEMNIST](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/leaf/femnist).
> The documentation below has been updated to reflect the new way of using Flower baselines.
@@ -23,7 +23,7 @@ Please note that some baselines might include additional files (e.g. a `requirem

## Running the baselines

-Each baseline is self-contained in its own directory. Furthermore, each baseline defines its own Python environment using [Poetry](https://python-poetry.org/docs/) via a `pyproject.toml` file and [`pyenv`](https://github.com/pyenv/pyenv). If you haven't already set up `Poetry` and `pyenv` on your machine, please take a look at the [Documentation](https://flower.dev/docs/baselines/how-to-use-baselines.html#setting-up-your-machine) for a guide on how to do so.
+Each baseline is self-contained in its own directory. Furthermore, each baseline defines its own Python environment using [Poetry](https://python-poetry.org/docs/) via a `pyproject.toml` file and [`pyenv`](https://github.com/pyenv/pyenv). If you haven't already set up `Poetry` and `pyenv` on your machine, please take a look at the [Documentation](https://flower.ai/docs/baselines/how-to-use-baselines.html#setting-up-your-machine) for a guide on how to do so.

Assuming `pyenv` and `Poetry` are already installed on your system, running a baseline can be done as follows:
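
The actual commands are collapsed in this view; the typical flow looks roughly like this (a sketch only; the directory and entry-point module are illustrative, borrowing the `fedpara` baseline documented below):

```bash
# Hypothetical walkthrough: run one baseline inside its own Poetry environment.
cd baselines/fedpara                 # each baseline lives in its own directory
pyenv local 3.10.6                   # pin the Python version the baseline expects
poetry install                       # create the env from the baseline's pyproject.toml
poetry run python -m fedpara.main    # entry point varies per baseline; see its README
```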

@@ -54,7 +54,7 @@ The steps to follow are:
```bash
# This will create a new directory with the same structure as `baseline_template`.
./dev/create-baseline.sh <baseline-name>
```
3. Then, go inside your baseline directory and continue with the steps detailed in `EXTENDED_README.md` and `README.md`.
4. Once your code is ready, check that the Python environment can be created correctly by following the instructions in your `README.md`, and that running the code as instructed reproduces the experiments in the paper. Then you just need to create a Pull Request (PR), and the process to merge your baseline into the Flower repo will begin!
84 changes: 42 additions & 42 deletions baselines/fedpara/README.md
@@ -5,7 +5,7 @@ labels: [image classification, personalization, low-rank training, tensor decomp
dataset: [CIFAR-10, CIFAR-100, MNIST]
---

# FedPara: Low-rank Hadamard Product for Communication-Efficient Federated Learning

> Note: If you use this baseline in your work, please remember to cite the original authors of the paper as well as the Flower paper.
@@ -43,7 +43,7 @@ Specifically, it replicates the results for CIFAR-10 and CIFAR-100 in Figure 3
On a machine with an RTX 3090Ti (24GB VRAM) it takes approximately 1h to run each CIFAR-10/100 experiment while using < 12GB of VRAM. You can lower the VRAM footprint by reducing the number of clients allowed to run in parallel on your GPU (do this by raising `client_resources.num_gpus`).
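
For example, using the Hydra-style overrides shown later in this README (the value is illustrative; `client_resources.num_gpus` is the config key named above):

```bash
# Reserve half a GPU per client, i.e. allow at most 2 concurrent clients per GPU.
python -m fedpara.main client_resources.num_gpus=0.5
```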


**Contributors:** Yahia Salaheldin Shaaban, Omar Mokhtar and Roeia Amr


## Experimental Setup
@@ -52,48 +52,48 @@ On a machine with RTX 3090Ti (24GB VRAM) it takes approximately 1h to run each C

**Model:** This baseline implements VGG16 with group normalization.

**Dataset:**

-| Dataset | #classes | #partitions | partitioning method IID | partitioning method non-IID |
-|:---------|:--------:|:-----------:|:----------------------:| :----------------------:|
-| CIFAR-10 | 10 | 100 | random split | Dirichlet distribution ($\alpha=0.5$)|
-| CIFAR-100 | 100 | 50 | random split| Dirichlet distribution ($\alpha=0.5$)|
+| Dataset | #classes | #partitions | partitioning method IID | partitioning method non-IID |
+| :-------- | :------: | :---------: | :---------------------: | :-----------------------------------: |
+| CIFAR-10 | 10 | 100 | random split | Dirichlet distribution ($\alpha=0.5$) |
+| CIFAR-100 | 100 | 50 | random split | Dirichlet distribution ($\alpha=0.5$) |


**Training Hyperparameters:**

-| | Cifar10 IID | Cifar10 Non-IID | Cifar100 IID | Cifar100 Non-IID | MNIST |
-|---|-------|-------|------|-------|----------|
-| Fraction of client (K) | 16 | 16 | 8 | 8 | 10 |
-| Total rounds (T) | 200 | 200 | 400 | 400 | 100 |
-| Number of SGD epochs (E) | 10 | 5 | 10 | 5 | 5 |
-| Batch size (B) | 64 | 64 | 64 | 64 | 10 |
-| Initial learning rate (η) | 0.1 | 0.1 | 0.1 | 0.1 | 0.1-0.01 |
-| Learning rate decay (τ) | 0.992 | 0.992 | 0.992| 0.992 | 0.999 |
-| Regularization coefficient (λ) | 1 | 1 | 1 | 1 | 0 |
+| | Cifar10 IID | Cifar10 Non-IID | Cifar100 IID | Cifar100 Non-IID | MNIST |
+| ------------------------------ | ----------- | --------------- | ------------ | ---------------- | -------- |
+| Fraction of client (K) | 16 | 16 | 8 | 8 | 10 |
+| Total rounds (T) | 200 | 200 | 400 | 400 | 100 |
+| Number of SGD epochs (E) | 10 | 5 | 10 | 5 | 5 |
+| Batch size (B) | 64 | 64 | 64 | 64 | 10 |
+| Initial learning rate (η) | 0.1 | 0.1 | 0.1 | 0.1 | 0.1-0.01 |
+| Learning rate decay (τ) | 0.992 | 0.992 | 0.992 | 0.992 | 0.999 |
+| Regularization coefficient (λ) | 1 | 1 | 1 | 1 | 0 |

As for the parameters ratio ($\gamma$) we use the following model sizes. As in the paper, $\gamma=0.1$ is used for CIFAR-10 and $\gamma=0.4$ for CIFAR-100:

| Parameters ratio ($\gamma$) | CIFAR-10 | CIFAR-100 |
-|----------|--------|--------|
-| 1.0 (original) | 15.25M | 15.30M |
-| 0.1 | 1.55M | - |
-| 0.4 | - | 4.53M |
+| --------------------------- | -------- | --------- |
+| 1.0 (original) | 15.25M | 15.30M |
+| 0.1 | 1.55M | - |
+| 0.4 | - | 4.53M |
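
As a rough sketch of where these model sizes come from (our notation; neither this diff nor the README spells it out): FedPara replaces each weight matrix with a Hadamard product of two low-rank factorizations, so a layer stores the small factors rather than the full matrix.

```latex
% Sketch of the FedPara parameterization (our notation, not from this diff).
% A weight W \in \mathbb{R}^{m \times n} is built from four rank-r factors:
W = (X_1 Y_1^{\top}) \circ (X_2 Y_2^{\top}),
\qquad X_i \in \mathbb{R}^{m \times r},\; Y_i \in \mathbb{R}^{n \times r},
% storing 2r(m+n) parameters instead of mn, while the Hadamard product allows
\operatorname{rank}(W) \le r^2,
% so full rank is reachable once r \ge \sqrt{\min(m,n)}, which is consistent
% with the min_rank / max_rank discussion in the Notes below.
```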


### Notes:
- Notably, Fedpara's low-rank training technique heavily relies on initialization, with our experiments revealing that employing a 'Fan-in' He initialization (or Kaiming) renders the model incapable of convergence, resulting in a performance akin to that of a random classifier. We found that only Fan-out initialization yielded the anticipated results, and we postulated that this is attributed to the variance conservation during backward propagation.

- The paper lacks explicit guidance on calculating the rank, aside from the "Rank_min - Rank_max" equation. To address this, we devised an equation aligning with the literature's explanation and constraint, solving a quadratic equation to determine max_rank and utilizing proposition 2 from the paper to establish min_rank.

- The Jacobian correction was not incorporated into our implementation, primarily due to the lack of explicit instructions in the paper regarding the specific implementation of the dual update principle mentioned in the Jacobian correction section.

- It was observed that data generation is crucial for model convergence.

## Environment Setup
To construct the Python environment follow these steps:

-It is assumed that `pyenv` and `poetry` are already installed, and that Python 3.10.6 is installed using `pyenv`. Refer to this [documentation](https://flower.dev/docs/baselines/how-to-use-baselines.html#setting-up-your-machine) to ensure that your machine is ready.
+It is assumed that `pyenv` and `poetry` are already installed, and that Python 3.10.6 is installed using `pyenv`. Refer to this [documentation](https://flower.ai/docs/baselines/how-to-use-baselines.html#setting-up-your-machine) to ensure that your machine is ready.

```bash
# Set Python 3.10
@@ -112,7 +112,7 @@ poetry shell

Running `FedPara` is easy. You can run it with default parameters or tweak them directly on the command line. Some command examples are shown below.

```bash
# To run fedpara with default parameters
python -m fedpara.main

@@ -138,45 +138,45 @@ To reproduce the curves shown below (which correspond to those in Figure 3 in th

```bash
# To run fedpara for non-iid CIFAR-10 on vgg16 for lowrank and original schemes
python -m fedpara.main --multirun model.param_type=standard,lowrank
# To run fedpara for non-iid CIFAR-100 on vgg16 for lowrank and original schemes
python -m fedpara.main --config-name cifar100 --multirun model.param_type=standard,lowrank
# To run fedpara for iid CIFAR-10 on vgg16 for lowrank and original schemes
python -m fedpara.main --multirun model.param_type=standard,lowrank num_epochs=10 dataset_config.partition=iid
# To run fedpara for iid CIFAR-100 on vgg16 for lowrank and original schemes
python -m fedpara.main --config-name cifar100 --multirun model.param_type=standard,lowrank num_epochs=10 dataset_config.partition=iid
# To run fedavg for non-iid MNIST on FC
python -m fedpara.main --config-name mnist_fedavg
# To run fedper for non-iid MNIST on FC
python -m fedpara.main --config-name mnist_fedper
# To run pfedpara for non-iid MNIST on FC
python -m fedpara.main --config-name mnist_pfedpara
```

#### Communication Cost:
Communication costs are measured as described in the paper:
*"FL evaluation typically measures the required rounds to achieve the target accuracy as communication costs, but we instead assess total transferred bit sizes, 2 ×
(#participants)×(model size)×(#rounds)"*
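
To make the metric concrete (illustrative arithmetic using the numbers above, not a figure reported in the paper): for non-IID CIFAR-10 with the $\gamma=0.1$ model (1.55M parameters), 16 participants, and 200 rounds,

```latex
2 \times 16 \times 1.55\,\mathrm{M} \times 200 \approx 9.9\,\mathrm{G} \text{ parameters transferred}
% (roughly 40 GB if counted at 32 bits per parameter)
```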


### CIFAR-100 (Accuracy vs Communication Cost)

-| IID | Non-IID |
-|:----:|:----:|
-|![Cifar100 iid](_static/Cifar100_iid.jpeg) | ![Cifar100 non-iid](_static/Cifar100_noniid.jpeg) |
+| IID | Non-IID |
+| :----------------------------------------: | :-----------------------------------------------: |
+| ![Cifar100 iid](_static/Cifar100_iid.jpeg) | ![Cifar100 non-iid](_static/Cifar100_noniid.jpeg) |


### CIFAR-10 (Accuracy vs Communication Cost)

-| IID | Non-IID |
-|:----:|:----:|
-|![CIFAR10 iid](_static/Cifar10_iid.jpeg) | ![CIFAR10 non-iid](_static/Cifar10_noniid.jpeg) |
+| IID | Non-IID |
+| :--------------------------------------: | :---------------------------------------------: |
+| ![CIFAR10 iid](_static/Cifar10_iid.jpeg) | ![CIFAR10 non-iid](_static/Cifar10_noniid.jpeg) |

### NON-IID MNIST (FedAvg vs FedPer vs pFedPara)

Only the federated averaging (FedAvg) implementation replicates the results outlined in the paper; challenges with convergence were encountered when applying the `pFedPara` and `FedPer` methods.

![Personalization algorithms](_static/non-iid_mnist_personalization.png)

## Code Acknowledgments
Our code is inspired by these repos: