chore: use lowercase project name
XuehaiPan committed Aug 25, 2022
1 parent 9b3a118 commit c5014f9
Showing 11 changed files with 61 additions and 61 deletions.
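The diff is a mechanical replacement of the capitalized repository path `metaopt/TorchOpt` with the lowercase `metaopt/torchopt` across documentation, CI workflows, packaging metadata, and the Dockerfile. The history does not record how the replacement was performed; a minimal sketch of such a bulk rename (a hypothetical one-off Python script run from the repository root, with an illustrative file selection that only covers the common `metaopt/TorchOpt` pattern — path-only occurrences such as `cd TorchOpt` would need the same treatment) could look like this:

```python
# Hypothetical one-off script: rewrite the capitalized project name in tracked
# text files. The actual commands behind this commit are not recorded anywhere;
# this is only an illustrative sketch.
import pathlib

OLD, NEW = "metaopt/TorchOpt", "metaopt/torchopt"
SUFFIXES = {".md", ".yml", ".yaml", ".cff", ".py", ".rst", ""}  # "" matches files such as `Dockerfile`

for path in pathlib.Path(".").rglob("*"):
    if not path.is_file() or path.suffix not in SUFFIXES or ".git" in path.parts:
        continue
    text = path.read_text(encoding="utf-8", errors="ignore")
    if OLD in text:
        path.write_text(text.replace(OLD, NEW), encoding="utf-8")
        print(f"updated {path}")
```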
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -8,7 +8,7 @@ Why is this change required? What problem does it solve?
If it fixes an open issue, please link to the issue here.
You can use the syntax `close #15213` if this solves the issue #15213

- [ ] I have raised an issue to propose this change ([required](https://github.com/metaopt/TorchOpt/issues) for new features and bug fixes)
- [ ] I have raised an issue to propose this change ([required](https://github.com/metaopt/torchopt/issues) for new features and bug fixes)

## Types of changes

6 changes: 3 additions & 3 deletions .github/workflows/build.yml
@@ -43,7 +43,7 @@ env:
jobs:
build-sdist:
runs-on: ubuntu-latest
if: github.repository == 'metaopt/TorchOpt' && (github.event_name != 'push' || startsWith(github.ref, 'refs/tags/'))
if: github.repository == 'metaopt/torchopt' && (github.event_name != 'push' || startsWith(github.ref, 'refs/tags/'))
timeout-minutes: 10
steps:
- name: Checkout
@@ -74,7 +74,7 @@ jobs:
build-wheels:
runs-on: ubuntu-latest
needs: [build-sdist]
if: github.repository == 'metaopt/TorchOpt' && (github.event_name != 'push' || startsWith(github.ref, 'refs/tags/'))
if: github.repository == 'metaopt/torchopt' && (github.event_name != 'push' || startsWith(github.ref, 'refs/tags/'))
timeout-minutes: 60
steps:
- name: Checkout
@@ -100,7 +100,7 @@ jobs:
runs-on: ubuntu-latest
needs: [build-sdist, build-wheels]
if: |
github.repository == 'metaopt/TorchOpt' && github.event_name != 'pull_request' &&
github.repository == 'metaopt/torchopt' && github.event_name != 'pull_request' &&
(github.event_name != 'workflow_dispatch' || github.event.inputs.task == 'build-and-publish') &&
(github.event_name != 'push' || startsWith(github.ref, 'refs/tags/'))
timeout-minutes: 15
52 changes: 26 additions & 26 deletions CHANGELOG.md
@@ -13,23 +13,23 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Added

- Add option `maximize` option to optimizers by [@XuehaiPan](https://github.com/XuehaiPan) in [#64](https://github.com/metaopt/TorchOpt/pull/64).
- Refactor tests using `pytest.mark.parametrize` and enabling parallel testing by [@XuehaiPan](https://github.com/XuehaiPan) and [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#55](https://github.com/metaopt/TorchOpt/pull/55).
- Add maml-omniglot few-shot classification example using functorch.vmap by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#39](https://github.com/metaopt/TorchOpt/pull/39).
- Add parallel training on one GPU using functorch.vmap example by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#32](https://github.com/metaopt/TorchOpt/pull/32).
- Add question/help/support issue template by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#43](https://github.com/metaopt/TorchOpt/pull/43).
- Add option `maximize` option to optimizers by [@XuehaiPan](https://github.com/XuehaiPan) in [#64](https://github.com/metaopt/torchopt/pull/64).
- Refactor tests using `pytest.mark.parametrize` and enabling parallel testing by [@XuehaiPan](https://github.com/XuehaiPan) and [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#55](https://github.com/metaopt/torchopt/pull/55).
- Add maml-omniglot few-shot classification example using functorch.vmap by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#39](https://github.com/metaopt/torchopt/pull/39).
- Add parallel training on one GPU using functorch.vmap example by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#32](https://github.com/metaopt/torchopt/pull/32).
- Add question/help/support issue template by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#43](https://github.com/metaopt/torchopt/pull/43).

### Changed

- Replace JAX PyTrees with OpTree by [@XuehaiPan](https://github.com/XuehaiPan) in [#62](https://github.com/metaopt/TorchOpt/pull/62).
- Update image link in README to support PyPI rendering by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#56](https://github.com/metaopt/TorchOpt/pull/56).
- Replace JAX PyTrees with OpTree by [@XuehaiPan](https://github.com/XuehaiPan) in [#62](https://github.com/metaopt/torchopt/pull/62).
- Update image link in README to support PyPI rendering by [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#56](https://github.com/metaopt/torchopt/pull/56).

### Fixed

- Fix RMSProp optimizer by [@XuehaiPan](https://github.com/XuehaiPan) in [#55](https://github.com/metaopt/TorchOpt/pull/55).
- Fix momentum tracing by [@XuehaiPan](https://github.com/XuehaiPan) in [#58](https://github.com/metaopt/TorchOpt/pull/58).
- Fix CUDA build for accelerated OP by [@XuehaiPan](https://github.com/XuehaiPan) in [#53](https://github.com/metaopt/TorchOpt/pull/53).
- Fix gamma error in MAML-RL implementation by [@Benjamin-eecs](https://github.com/Benjamin-eecs) [#47](https://github.com/metaopt/TorchOpt/pull/47).
- Fix RMSProp optimizer by [@XuehaiPan](https://github.com/XuehaiPan) in [#55](https://github.com/metaopt/torchopt/pull/55).
- Fix momentum tracing by [@XuehaiPan](https://github.com/XuehaiPan) in [#58](https://github.com/metaopt/torchopt/pull/58).
- Fix CUDA build for accelerated OP by [@XuehaiPan](https://github.com/XuehaiPan) in [#53](https://github.com/metaopt/torchopt/pull/53).
- Fix gamma error in MAML-RL implementation by [@Benjamin-eecs](https://github.com/Benjamin-eecs) [#47](https://github.com/metaopt/torchopt/pull/47).

### Removed

@@ -39,37 +39,37 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Added

- Bump PyTorch version to 1.12.1 by [@XuehaiPan](https://github.com/XuehaiPan) in [#49](https://github.com/metaopt/TorchOpt/pull/49).
- CPU-only build without `nvcc` requirement by [@XuehaiPan](https://github.com/XuehaiPan) in [#51](https://github.com/metaopt/TorchOpt/pull/51).
- Use [`cibuildwheel`](https://github.com/pypa/cibuildwheel) to build wheels by [@XuehaiPan](https://github.com/XuehaiPan) in [#45](https://github.com/metaopt/TorchOpt/pull/45).
- Use dynamic process number in CPU kernels by [@JieRen98](https://github.com/JieRen98) in [#42](https://github.com/metaopt/TorchOpt/pull/42).
- Bump PyTorch version to 1.12.1 by [@XuehaiPan](https://github.com/XuehaiPan) in [#49](https://github.com/metaopt/torchopt/pull/49).
- CPU-only build without `nvcc` requirement by [@XuehaiPan](https://github.com/XuehaiPan) in [#51](https://github.com/metaopt/torchopt/pull/51).
- Use [`cibuildwheel`](https://github.com/pypa/cibuildwheel) to build wheels by [@XuehaiPan](https://github.com/XuehaiPan) in [#45](https://github.com/metaopt/torchopt/pull/45).
- Use dynamic process number in CPU kernels by [@JieRen98](https://github.com/JieRen98) in [#42](https://github.com/metaopt/torchopt/pull/42).

### Changed

- Use correct Python Ctype for pybind11 function prototype [@XuehaiPan](https://github.com/XuehaiPan) in [#52](https://github.com/metaopt/TorchOpt/pull/52).
- Use correct Python Ctype for pybind11 function prototype [@XuehaiPan](https://github.com/XuehaiPan) in [#52](https://github.com/metaopt/torchopt/pull/52).

------

## [0.4.2] - 2022-07-26

### Added

- Read the Docs integration by [@Benjamin-eecs](https://github.com/Benjamin-eecs) and [@XuehaiPan](https://github.com/XuehaiPan) in [#34](https://github.com/metaopt/TorchOpt/pull/34).
- Update documentation and code styles by [@Benjamin-eecs](https://github.com/Benjamin-eecs) and [@XuehaiPan](https://github.com/XuehaiPan) in [#22](https://github.com/metaopt/TorchOpt/pull/22).
- Update tutorial notebooks by [@XuehaiPan](https://github.com/XuehaiPan) in [#27](https://github.com/metaopt/TorchOpt/pull/27).
- Bump PyTorch version to 1.12 by [@XuehaiPan](https://github.com/XuehaiPan) in [#25](https://github.com/metaopt/TorchOpt/pull/25).
- Support custom Python executable path in `CMakeLists.txt` by [@XuehaiPan](https://github.com/XuehaiPan) in [#18](https://github.com/metaopt/TorchOpt/pull/18).
- Add citation information by [@waterhorse1](https://github.com/waterhorse1) in [#14](https://github.com/metaopt/TorchOpt/pull/14) and [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#15](https://github.com/metaopt/TorchOpt/pull/15).
- Implement RMSProp optimizer by [@future-xy](https://github.com/future-xy) in [#8](https://github.com/metaopt/TorchOpt/pull/8).
- Read the Docs integration by [@Benjamin-eecs](https://github.com/Benjamin-eecs) and [@XuehaiPan](https://github.com/XuehaiPan) in [#34](https://github.com/metaopt/torchopt/pull/34).
- Update documentation and code styles by [@Benjamin-eecs](https://github.com/Benjamin-eecs) and [@XuehaiPan](https://github.com/XuehaiPan) in [#22](https://github.com/metaopt/torchopt/pull/22).
- Update tutorial notebooks by [@XuehaiPan](https://github.com/XuehaiPan) in [#27](https://github.com/metaopt/torchopt/pull/27).
- Bump PyTorch version to 1.12 by [@XuehaiPan](https://github.com/XuehaiPan) in [#25](https://github.com/metaopt/torchopt/pull/25).
- Support custom Python executable path in `CMakeLists.txt` by [@XuehaiPan](https://github.com/XuehaiPan) in [#18](https://github.com/metaopt/torchopt/pull/18).
- Add citation information by [@waterhorse1](https://github.com/waterhorse1) in [#14](https://github.com/metaopt/torchopt/pull/14) and [@Benjamin-eecs](https://github.com/Benjamin-eecs) in [#15](https://github.com/metaopt/torchopt/pull/15).
- Implement RMSProp optimizer by [@future-xy](https://github.com/future-xy) in [#8](https://github.com/metaopt/torchopt/pull/8).

### Changed

- Use `pyproject.toml` for packaging and update GitHub Action workflows by [@XuehaiPan](https://github.com/XuehaiPan) in [#31](https://github.com/metaopt/TorchOpt/pull/31).
- Rename the package from `TorchOpt` to `torchopt` by [@XuehaiPan](https://github.com/XuehaiPan) in [#20](https://github.com/metaopt/TorchOpt/pull/20).
- Use `pyproject.toml` for packaging and update GitHub Action workflows by [@XuehaiPan](https://github.com/XuehaiPan) in [#31](https://github.com/metaopt/torchopt/pull/31).
- Rename the package from `TorchOpt` to `torchopt` by [@XuehaiPan](https://github.com/XuehaiPan) in [#20](https://github.com/metaopt/torchopt/pull/20).

### Fixed

- Fixed errors while building from the source and add `conda` environment recipe by [@XuehaiPan](https://github.com/XuehaiPan) in [#24](https://github.com/metaopt/TorchOpt/pull/24).
- Fixed errors while building from the source and add `conda` environment recipe by [@XuehaiPan](https://github.com/XuehaiPan) in [#24](https://github.com/metaopt/torchopt/pull/24).

------

2 changes: 1 addition & 1 deletion CITATION.cff
@@ -31,4 +31,4 @@ authors:
version: 0.4.3
date-released: "2022-08-08"
license: Apache-2.0
repository-code: "https://github.com/metaopt/TorchOpt"
repository-code: "https://github.com/metaopt/torchopt"
4 changes: 2 additions & 2 deletions Dockerfile
@@ -36,7 +36,7 @@ RUN TORCH_INDEX_URL="https://download.pytorch.org/whl/cu$(echo "${CUDA_VERSION}"
echo "source /home/torchopt/venv/bin/activate" >> ~/.bashrc

# Install dependencies
WORKDIR /home/torchopt/TorchOpt
WORKDIR /home/torchopt/torchopt
COPY --chown=torchopt requirements.txt requirements.txt
RUN source ~/venv/bin/activate && \
python -m pip install --extra-index-url "${TORCH_INDEX_URL}" -r requirements.txt && \
@@ -84,4 +84,4 @@ ENTRYPOINT [ "/bin/bash", "--login" ]

FROM devel-builder AS devel

COPY --from=base /home/torchopt/TorchOpt .
COPY --from=base /home/torchopt/torchopt .
24 changes: 12 additions & 12 deletions README.md
@@ -2,17 +2,17 @@
<!-- markdownlint-disable html -->

<div align="center">
<img src="https://github.com/metaopt/TorchOpt/raw/HEAD/image/logo-large.png" width="75%" />
<img src="https://github.com/metaopt/torchopt/raw/HEAD/image/logo-large.png" width="75%" />
</div>

![Python 3.7+](https://img.shields.io/badge/Python-3.7%2B-brightgreen.svg)
[![PyPI](https://img.shields.io/pypi/v/torchopt?label=PyPI)](https://pypi.org/project/torchopt)
![Status](https://img.shields.io/pypi/status/torchopt?label=Status)
![GitHub Workflow Status](https://img.shields.io/github/workflow/status/metaopt/TorchOpt/Tests?label=tests&logo=github)
![GitHub Workflow Status](https://img.shields.io/github/workflow/status/metaopt/torchopt/Tests?label=tests&logo=github)
[![Documentation Status](https://readthedocs.org/projects/torchopt/badge/?version=latest)](https://torchopt.readthedocs.io/en/latest/?badge=latest)
[![Downloads](https://static.pepy.tech/personalized-badge/torchopt?period=month&left_color=grey&right_color=blue&left_text=Downloads/month)](https://pepy.tech/project/torchopt)
[![GitHub Repo Stars](https://img.shields.io/github/stars/metaopt/torchopt?label=Stars&logo=github&color=brightgreen)](https://github.com/metaopt/torchopt/stargazers)
[![License](https://img.shields.io/github/license/metaopt/TorchOpt?label=License)](#license)
[![License](https://img.shields.io/github/license/metaopt/torchopt?label=License)](#license)

**TorchOpt** is a high-performance optimizer library built upon [PyTorch](https://pytorch.org/) for easy implementation of functional optimization and gradient-based meta-learning. It consists of two main features:

@@ -113,7 +113,7 @@ params = torchopt.apply_updates(params, updates, inplace=False)
Meta-Learning has gained enormous attention in both Supervised Learning and Reinforcement Learning. Meta-Learning algorithms often contain a bi-level optimization process, with an *inner loop* updating the network parameters and an *outer loop* updating the meta-parameters. The figure below illustrates the basic formulation for meta-optimization in Meta-Learning. The main feature is that the gradients of the *outer loss* back-propagate through all `inner.step` operations.

<div align="center">
<img src="https://github.com/metaopt/TorchOpt/raw/HEAD/image/TorchOpt.png" width="85%" />
<img src="https://github.com/metaopt/torchopt/raw/HEAD/image/TorchOpt.png" width="85%" />
</div>

Since network parameters become nodes of the computation graph, a flexible Meta-Learning library should let users manually control how the gradient graph is connected, which means users need access to the network parameters and optimizer states so they can detach or connect the computation graph by hand. In the PyTorch design, the network parameters and optimizer states are members of the network (i.e., `torch.nn.Module`) or the optimizer (i.e., `torch.optim.Optimizer`); this design makes it difficult for users to control network parameters or optimizer states. Previous differentiable optimizer repositories, [`higher`](https://github.com/facebookresearch/higher) and [`learn2learn`](https://github.com/learnables/learn2learn), follow the PyTorch design, which leads to inflexible APIs.
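
As a rough illustration of the bi-level loop this design targets, the following is a minimal sketch built on TorchOpt's differentiable `torchopt.MetaAdam` optimizer; `Net`, `tasks`, `support_loss`, and `query_loss` are hypothetical placeholders rather than code from this repository:

```python
# Minimal bi-level sketch (illustrative only; Net, tasks, support_loss, and
# query_loss are hypothetical placeholders, not code from this repository).
import torch
import torchopt

net = Net()  # meta-model, a torch.nn.Module
meta_opt = torch.optim.Adam(net.parameters(), lr=1e-3)
meta_opt.zero_grad()

for task in tasks:
    inner_opt = torchopt.MetaAdam(net, lr=1e-1)    # differentiable inner-loop optimizer
    net_state = torchopt.extract_state_dict(net)   # snapshot of the meta-parameters

    for _ in range(5):                             # inner loop: the update graph is kept
        inner_opt.step(support_loss(net, task))

    query_loss(net, task).backward()               # outer loss back-propagates through the inner steps
    torchopt.recover_state_dict(net, net_state)    # restore meta-parameters for the next task

meta_opt.step()  # apply the accumulated meta-gradients
```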
@@ -191,7 +191,7 @@ One can think of the scale procedures on gradients of optimizer algorithms as a
Here we evaluate the performance using the MAML-Omniglot code with the inner-loop Adam optimizer on GPU. We compare the run time of the overall algorithm and of the meta-optimization (outer-loop optimization) under different network architectures and numbers of inner steps. We choose [`higher`](https://github.com/facebookresearch/higher) as our baseline. The figure below illustrates that our accelerated Adam achieves at least a $1/3$ efficiency improvement over the baseline.

<div align="center">
<img src="https://github.com/metaopt/TorchOpt/raw/HEAD/image/time.png" width="80%" />
<img src="https://github.com/metaopt/torchopt/raw/HEAD/image/time.png" width="80%" />
</div>

Notably, the operator fusion not only increases performance but also helps simplify the computation graph, which will be discussed in the next section.
@@ -205,7 +205,7 @@ Complex gradient flow in meta-learning brings in a great challenge for managing
The figure below shows the visualization result. Compared with [`torchviz`](https://github.com/szagoruyko/pytorchviz), TorchOpt fuses the operations within the `Adam` optimizer together (orange) to reduce the complexity and provide a simpler visualization.

<div align="center">
<img src="https://github.com/metaopt/TorchOpt/raw/HEAD/image/torchviz_torchopt.jpg" width="80%" />
<img src="https://github.com/metaopt/torchopt/raw/HEAD/image/torchviz_torchopt.jpg" width="80%" />
</div>

--------------------------------------------------------------------------------
@@ -235,16 +235,16 @@ See <https://pytorch.org> for more information about installing PyTorch.
You can also build the shared libraries from source:

```bash
git clone https://github.com/metaopt/TorchOpt.git
cd TorchOpt
git clone https://github.com/metaopt/torchopt.git
cd torchopt
pip3 install .
```
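
After installing from source, a quick smoke test (a minimal check added here for illustration, not taken from the repository; `torch` is already a required dependency) confirms that the package imports:

```python
# Minimal post-install smoke test (illustrative; not part of the original README).
import torch
import torchopt

print("torchopt version:", torchopt.__version__)
print("CUDA available to PyTorch:", torch.cuda.is_available())
```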

We provide a [conda](https://github.com/conda/conda) environment recipe to install the build toolchain such as `cmake`, `g++`, and `nvcc`:

```bash
git clone https://github.com/metaopt/TorchOpt.git
cd TorchOpt
git clone https://github.com/metaopt/torchopt.git
cd torchopt

# You may need `CONDA_OVERRIDE_CUDA` if conda fails to detect the NVIDIA driver (e.g. in docker or WSL2)
CONDA_OVERRIDE_CUDA=11.7 conda env create --file conda-recipe.yaml
@@ -257,7 +257,7 @@ pip3 install --no-build-isolation --editable .

## Future Plan

- [x] CPU-acclerated optimizer
- [x] CPU-accelerated optimizer
- [ ] Support general implicit differentiation with functional programming.
- [ ] Support more optimizers such as AdamW, RMSProp

@@ -282,6 +282,6 @@ If you find TorchOpt useful, please cite it in your publications.
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/metaopt/TorchOpt}},
howpublished = {\url{https://github.com/metaopt/torchopt}},
}
```
4 changes: 2 additions & 2 deletions docs/source/conf.py
@@ -186,8 +186,8 @@ def setup(app):
# -- Source code links -------------------------------------------------------

extlinks = {
'gitcode': ('https://github.com/metaopt/TorchOpt/blob/HEAD/%s', '%s'),
'issue': ('https://github.com/metaopt/TorchOpt/issues/%s', 'issue %s'),
'gitcode': ('https://github.com/metaopt/torchopt/blob/HEAD/%s', '%s'),
'issue': ('https://github.com/metaopt/torchopt/issues/%s', 'issue %s'),
}

# -- Extension configuration -------------------------------------------------
8 changes: 4 additions & 4 deletions docs/source/developer/contributing.rst
@@ -3,14 +3,14 @@ Contributing to TorchOpt

Before contributing to TorchOpt, please follow the instructions below to set up.

1. Fork TorchOpt (`fork <https://github.com/metaopt/TorchOpt/fork>`_) on GitHub and clone the repository.
1. Fork TorchOpt (`fork <https://github.com/metaopt/torchopt/fork>`_) on GitHub and clone the repository.

.. code-block:: bash
git clone git@github.com:<your username>/TorchOpt.git # use the SSH protocol
cd TorchOpt
git clone git@github.com:<your username>/torchopt.git # use the SSH protocol
cd torchopt
git remote add upstream git@github.com:metaopt/TorchOpt.git
git remote add upstream git@github.com:metaopt/torchopt.git
2. Set up a development environment via `conda <https://github.com/conda/conda>`_:

2 changes: 1 addition & 1 deletion docs/source/developer/contributor.rst
@@ -1,6 +1,6 @@
Contributor
===========

We always welcome contributions to help make TorchOpt better. Below is an incomplete list of our contributors (find more on `this page <https://github.com/metaopt/TorchOpt/graphs/contributors>`_).
We always welcome contributions to help make TorchOpt better. Below is an incomplete list of our contributors (find more on `this page <https://github.com/metaopt/torchopt/graphs/contributors>`_).

* Yao Fu (`future-xy <https://github.com/future-xy>`_)
