Merge branch 'main' of https://github.com/nebuly-ai/nebullvm
diegofiori committed Mar 2, 2023
2 parents 2fd6783 + 84ae6aa commit c7f0830
Showing 2 changed files with 49 additions and 34 deletions.
CONTRIBUTING.md: 65 changes (40 additions, 25 deletions)
# Guidelines for Contributing to Nebullvm 🚀

Hello coder 👋

We are very happy that you have decided to contribute to the library, and we thank you for your efforts. Here you can find guidelines on how to align your code with the style we adopted for `nebullvm`. But remember, there are many ways to help the community besides submitting code: answering questions and improving the documentation are also very valuable contributions.

It also helps us if you mention our library in your blog posts to show off the cool things it has made possible, or if you simply give the repository a ⭐️ to show that you appreciate the project.

This guide was inspired by the awesome [Transformers](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) guide to contributing.

We hope to come across your pull request soon!

Happy coding 💫 The nebullvm Team
## How to submit an issue
Did you spot a bug? Did you come up with a cool idea that you think should be implemented in nebullvm? Well, GitHub issues are the best way to let us know!

We don't have a strict policy on issue creation: just use a meaningful title and describe the problem or your proposal in the first comment. Then, you can use GitHub labels to let us know what kind of proposal you are making, for example `bug` if you are reporting a bug or `enhancement` if you are proposing a library improvement.

## How to contribute to solve an issue

We are always delighted to welcome new people to the contributors section of `nebullvm`! We look forward to seeing you in the community; here are some guidelines to follow:
1. Please [fork](https://github.com/nebuly-ai/nebullvm/fork) the [library](https://github.com/nebuly-ai/nebullvm) by clicking on the Fork button on the repository's page. This will create a copy of the repository in your GitHub account.
2. Clone your fork to your local machine, and add the base repository as a remote:
```bash
$ git clone git@github.com:<your GitHub handle>/nebullvm.git
$ cd nebullvm
$ git remote add upstream https://github.com/nebuly-ai/nebullvm.git
```
3. Install the library in editable mode with the following command:
```bash
$ pip install -e .
```
4. Work on your fork to develop the feature you have in mind.
5. Nebullvm relies on `black` to format its source code consistently. To use the formatting style defined for nebullvm, run the following commands:
```bash
$ pip install pre-commit black autoflake
$ pre-commit install
# the following command is optional, but needed if you have already
# committed some files to your forked repo.
$ pre-commit run --all-files
```
As for the naming convention, we follow [PEP 8](https://peps.python.org/pep-0008/) for code and a slight variation of [Google convention](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) for docstrings. For docstrings we redundantly express the input type in both the function definition and the function docstring.
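
For illustration, here is a minimal sketch of a docstring in this style; the function below is invented for this example and is not part of the codebase:
```python
def estimate_batch_size(sample_size_mb: float, memory_budget_mb: float = 16000.0) -> int:
    """Estimate how many samples fit in a given memory budget.

    Args:
        sample_size_mb (float): Approximate memory footprint of a single
            sample, expressed in megabytes.
        memory_budget_mb (float, optional): Total memory available for a
            batch, in megabytes. Defaults to 16000.0.

    Returns:
        int: The largest batch size that fits in the budget (at least 1).
    """
    return max(1, int(memory_budget_mb // sample_size_mb))
```
Note how the parameter and return types appear both in the function signature and in the docstring, as the convention requires.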
6. Once you're happy with your changes, add the changed files with `git add` and commit your code:
```bash
$ git add edited_file.py
$ git commit -m "Add a cool feature"
```
7. Push your changes to your repo:
```bash
$ git push
```
8. Now you can go to the repository you forked on your GitHub profile and click **Pull Request** to open a pull request. In the pull request, specify which issues it solves; for instance, if it closes `Issue #1`, the comment should include `Closes #1`. Also make the title of the pull request meaningful and self-explanatory.
---
See you soon in the list of nebullvm contributors 🌈
notebooks/speedster/huggingface/Readme.md: 18 changes (9 additions, 9 deletions)
# **Hugging Face Optimization**

This section contains all the available notebooks that show how to leverage Speedster to optimize Hugging Face models.

Hugging Face hosts models that can use either PyTorch or TensorFlow as a backend; both backends are supported by Speedster.

## Hugging Face API quick view:

``` python
from speedster import optimize_model

# ... (example truncated) ...

res = optimized_model(**input_dict)
```
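
Since the quick-view snippet above is truncated, here is a rough, self-contained sketch of the typical workflow it illustrates; the model name, the sample sentences, and the `optimization_time` value are illustrative assumptions rather than part of the original example:
``` python
from transformers import AutoModel, AutoTokenizer
from speedster import optimize_model

# Load a Hugging Face model with the PyTorch backend.
model = AutoModel.from_pretrained("distilbert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# A few representative inputs that Speedster uses to search for the best optimization.
texts = ["Speedster makes inference faster.", "Hello world!"]
input_data = [tokenizer(text, return_tensors="pt") for text in texts]

# Optimize the model (hypothetical settings; tune them for your use case).
optimized_model = optimize_model(
    model,
    input_data=input_data,
    optimization_time="constrained",
)

# Run inference with the optimized model.
input_dict = tokenizer("A quick example sentence.", return_tensors="pt")
res = optimized_model(**input_dict)
```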
## Notebooks:
| Notebook | Description | Open in Colab |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| [Accelerate Hugging Face PyTorch GPT2](https://github.com/nebuly-ai/nebullvm/blob/main/notebooks/speedster/huggingface/Accelerate_Hugging_Face_PyTorch_GPT2_with_Speedster.ipynb) | Shows how to use Speedster to optimize the GPT2 model from Hugging Face with the PyTorch backend. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nebuly-ai/nebullvm/blob/main/notebooks/speedster/huggingface/Accelerate_Hugging_Face_PyTorch_GPT2_with_Speedster.ipynb) |
| [Accelerate Hugging Face PyTorch BERT](https://github.com/nebuly-ai/nebullvm/blob/main/notebooks/speedster/huggingface/Accelerate_Hugging_Face_PyTorch_BERT_with_Speedster.ipynb) | Shows how to use Speedster to optimize the BERT model from Hugging Face with the PyTorch backend. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nebuly-ai/nebullvm/blob/main/notebooks/speedster/huggingface/Accelerate_Hugging_Face_PyTorch_BERT_with_Speedster.ipynb) |
| [Accelerate Hugging Face PyTorch DistilBERT](https://github.com/nebuly-ai/nebullvm/blob/main/notebooks/speedster/huggingface/Accelerate_Hugging_Face_PyTorch_DistilBERT_with_Speedster.ipynb) | Shows how to use Speedster to optimize the DistilBERT model from Hugging Face with the PyTorch backend. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nebuly-ai/nebullvm/blob/main/notebooks/speedster/huggingface/Accelerate_Hugging_Face_PyTorch_DistilBERT_with_Speedster.ipynb) |
| [Accelerate Hugging Face TensorFlow BERT](https://github.com/nebuly-ai/nebullvm/blob/main/notebooks/speedster/huggingface/Accelerate_Hugging_Face_TensorFlow_BERT_with_Speedster.ipynb) | Shows how to use Speedster to optimize the BERT model from Hugging Face with the TensorFlow backend. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nebuly-ai/nebullvm/blob/main/notebooks/speedster/huggingface/Accelerate_Hugging_Face_TensorFlow_BERT_with_Speedster.ipynb) |
| [Accelerate Hugging Face PyTorch T5](https://github.com/nebuly-ai/nebullvm/blob/main/notebooks/speedster/huggingface/Accelerate_Hugging_Face_PyTorch_T5_with_Speedster.ipynb) | Shows how to use Speedster to optimize the T5 model from Hugging Face with the PyTorch backend. | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/nebuly-ai/nebullvm/blob/main/notebooks/speedster/huggingface/Accelerate_Hugging_Face_PyTorch_T5_with_Speedster.ipynb) |
