Put almost complete tasks live (huggingface#495)
* put tasks live

* fix

* gave credits to authors

* add HF handle of a contributor

* added text-to-image demo

* Added schema for reinforcement learning

* fix curly bracket

* wording nit

* Added zero shot image clf demo

* Added links for contribution

* added link in conversational useful resources
merveenoyan authored Nov 14, 2022
1 parent 7b84225 commit 80c9499
Showing 11 changed files with 115 additions and 21 deletions.
Binary file added tasks/assets/text-to-image/image.jpeg
6 changes: 4 additions & 2 deletions tasks/src/conversational/about.md
@@ -10,7 +10,7 @@ Conversational response models are used as part of voice assistants to provide a

## Task Variants

- This place can be filled with variants of this task if there's any.
+ You can contribute variants of this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/conversational/about.md).

## Inference

@@ -35,4 +35,6 @@ converse([conversation_1, conversation_2])

## Useful Resources

- In this area, you can insert useful resources about how to train or use a model for this task.
+ You can contribute useful resources about how to train or use a model for this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/conversational/about.md).

+ This page was made possible thanks to the efforts of [Viraat Aryabumi](https://huggingface.co/viraat).
6 changes: 4 additions & 2 deletions tasks/src/image-to-image/about.md
@@ -21,11 +21,11 @@ Super resolution models increase the resolution of an image, allowing for higher

## Inference

- This section should have useful information about how to pull a model from Hugging Face Hub that is a part of a library specialized in a task and use it.
+ You can add a small snippet [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/image-to-image/about.md) that shows how to infer with `image-to-image` models.

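For illustration, a minimal sketch of such a snippet, assuming the `diffusers` img2img pipeline (the model id, file names, and parameter values are assumptions, not something this commit prescribes):

```python
# A minimal sketch, assuming the diffusers library and a Stable Diffusion
# img2img checkpoint; the model id and parameters are illustrative.
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

init_image = Image.open("input.jpeg").convert("RGB")

# `strength` controls how far the output may drift from the initial image
# (note: older diffusers releases name this argument `init_image`).
result = pipe(prompt="a fantasy landscape, highly detailed", image=init_image, strength=0.75)
result.images[0].save("output.jpeg")
```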
## Useful Resources

- In this area, you can insert useful resources about how to train or use a model for this task.
+ You can contribute useful resources about this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/image-to-image/about.md).

## Most Used Model for the Task

@@ -39,3 +39,5 @@ Below images show some of the examples shared in the paper that can be obtained
## References

[1] P. Isola, J. -Y. Zhu, T. Zhou and A. A. Efros, "Image-to-Image Translation with Conditional Adversarial Networks," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 5967-5976, doi: 10.1109/CVPR.2017.632.

+ This page was made possible thanks to the efforts of [Paul Gafton](https://github.com/Paul92) and [Osman Alenbey](https://huggingface.co/osman93).
6 changes: 4 additions & 2 deletions tasks/src/reinforcement-learning/about.md
@@ -10,7 +10,7 @@ There are many videos on the Internet where a game-playing reinforcement learnin

## Task Variants

- This place can be filled with variants of this task if there's any.
+ You can contribute variants of this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/reinforcement-learning/about.md).

## Glossary

@@ -40,7 +40,7 @@ This place can be filled with variants of this task if there's any.

## Inference

- This section should have useful information about how to pull a model from Hugging Face Hub that is a part of a library specialized in a task and use it.
+ You can add a small snippet [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/reinforcement-learning/about.md) that shows how to infer with `reinforcement-learning` models.

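One possible shape for that snippet, assuming Stable-Baselines3 with the `huggingface_sb3` helper (the repo id, filename, and environment are assumptions used only for the example):

```python
# A minimal sketch, assuming stable-baselines3, huggingface_sb3 and the
# classic Gym step API (gym<0.26); repo id and filename are illustrative.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download a trained agent checkpoint from the Hugging Face Hub
checkpoint = load_from_hub(repo_id="sb3/ppo-CartPole-v1", filename="ppo-CartPole-v1.zip")
model = PPO.load(checkpoint)

# Roll out one episode with the trained policy
env = gym.make("CartPole-v1")
obs = env.reset()
done = False
episode_return = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    episode_return += reward
print(f"Episode return: {episode_return}")
```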
## Useful Resources

@@ -54,3 +54,5 @@ Would you like to learn more about the topic? Awesome! Here you can find some cu
- [Train a Deep Reinforcement Learning lander agent to land correctly on the Moon 🌕 using Stable-Baselines3](https://github.com/huggingface/deep-rl-class/blob/main/unit1/unit1.ipynb)
- [Introduction to Unity MLAgents](https://colab.research.google.com/github/huggingface/deep-rl-class/blob/main/unit4/unit4.ipynb)
- [Training Decision Transformers with 🤗 transformers](https://github.com/huggingface/blog/blob/main/notebooks/101_train-decision-transformers.ipynb)

+ This page was made possible thanks to the efforts of [Ram Ananth](https://huggingface.co/RamAnanth1), [Emilio Lehoucq](https://huggingface.co/emiliol) and [Osman Alenbey](https://huggingface.co/osman93).
26 changes: 23 additions & 3 deletions tasks/src/reinforcement-learning/data.ts
@@ -7,9 +7,29 @@ const taskData: TaskDataCustom = {
id: "edbeeching/decision_transformer_gym_replay",
}
],
- demo: {
-   inputs: [],
-   outputs: [],
+ demo: {
+   inputs: [
+     {
+       label: "State",
+       content: "Red traffic light, pedestrians are about to pass.",
+       type: "text",
+     },
+   ],
+   outputs: [
+     {
+       label: "Action",
+       content: "Stop the car.",
+       type: "text",
+     },
+     {
+       label: "Next State",
+       content: "Yellow light, pedestrians have crossed.",
+       type: "text",
+     },
+   ],
},
metrics: [{
description: "Accumulated reward across all time steps, discounted by a factor that ranges between 0 and 1 and determines how much the agent optimizes for future relative to immediate rewards. Measures how good the policy ultimately found by a given algorithm is, under uncertainty about the future.",
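For reference, the discounted return this metric describes, with discount factor \(\gamma\) and reward \(r_t\) at step \(t\), is:

```latex
G = \sum_{t=0}^{T} \gamma^{t} r_{t}, \qquad \gamma \in [0, 1]
```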
15 changes: 10 additions & 5 deletions tasks/src/tasksData.ts
@@ -4,23 +4,28 @@ import type { TaskDataCustom, TaskData } from "./Types";
import audioClassification from "./audio-classification/data";
import audioToAudio from "./audio-to-audio/data";
import automaticSpeechRecognition from "./automatic-speech-recognition/data";
+ import conversational from "./conversational/data";
import documentQuestionAnswering from "./document-question-answering/data";
import fillMask from "./fill-mask/data";
import imageClassification from "./image-classification/data";
+ import imageToImage from "./image-to-image/data";
import imageSegmentation from "./image-segmentation/data";
import objectDetection from "./object-detection/data";
import placeholder from "./placeholder/data";
+ import reinforcementLearning from "./reinforcement-learning/data";
import questionAnswering from "./question-answering/data";
import sentenceSimilarity from "./sentence-similarity/data";
import summarization from "./summarization/data";
import tableQuestionAnswering from "./table-question-answering/data";
import tabularClassification from "./tabular-classification/data";
+ import textToImage from "./text-to-image/data";
import textToSpeech from "./text-to-speech/data";
import tokenClassification from "./token-classification/data";
import translation from "./translation/data";
import textClassification from "./text-classification/data";
import textGeneration from "./text-generation/data";
import visualQuestionAnswering from "./visual-question-answering/data";
+ import zeroShotImageClassification from "./zero-shot-image-classification/data";
import { TASKS_MODEL_LIBRARIES } from "./const";

// To make comparisons easier, task order is the same as in const.ts
@@ -31,20 +36,20 @@ export const TASKS_DATA: Record<PipelineType, TaskData | undefined> = {
"audio-classification": getData("audio-classification", audioClassification),
"audio-to-audio": getData("audio-to-audio", audioToAudio),
"automatic-speech-recognition": getData("automatic-speech-recognition", automaticSpeechRecognition),
"conversational": getData("conversational"),
"conversational": getData("conversational", conversational),
"document-question-answering": getData("document-question-answering", documentQuestionAnswering),
"feature-extraction": getData("feature-extraction"),
"fill-mask": getData("fill-mask", fillMask),
"image-classification": getData("image-classification", imageClassification),
"image-segmentation": getData("image-segmentation", imageSegmentation),
"image-to-image": getData("image-to-image"),
"image-to-image": getData("image-to-image", imageToImage),
"image-to-text": getData("image-to-text"),
"multiple-choice": undefined,
"object-detection": getData("object-detection", objectDetection),
"video-classification": getData("video-classification"),
"other": undefined,
"question-answering": getData("question-answering", questionAnswering),
"reinforcement-learning": getData("reinforcement-learning"),
"reinforcement-learning": getData("reinforcement-learning", reinforcementLearning),
"robotics": getData("robotics"),
"sentence-similarity": getData("sentence-similarity", sentenceSimilarity),
"summarization": getData("summarization", summarization),
@@ -56,7 +61,7 @@ export const TASKS_DATA: Record<PipelineType, TaskData | undefined> = {
"text-classification": getData("text-classification", textClassification),
"text-generation": getData("text-generation", textGeneration),
"text-retrieval": undefined,
"text-to-image": getData("text-to-image"),
"text-to-image": getData("text-to-image", textToImage),
"text-to-speech": getData("text-to-speech", textToSpeech),
"text2text-generation": getData("text2text-generation"),
"time-series-forecasting": undefined,
@@ -66,7 +71,7 @@ export const TASKS_DATA: Record<PipelineType, TaskData | undefined> = {
"visual-question-answering": getData("visual-question-answering", visualQuestionAnswering),
"voice-activity-detection": getData("voice-activity-detection"),
"zero-shot-classification": getData("zero-shot-classification"),
"zero-shot-image-classification": getData("zero-shot-image-classification"),
"zero-shot-image-classification": getData("zero-shot-image-classification", zeroShotImageClassification),
} as const;

/*
10 changes: 10 additions & 0 deletions tasks/src/text-to-image/about.md
@@ -15,7 +15,17 @@ Different patterns can be generated to obtain unique pieces of fashion. Text-to-
### Architecture Industry

Architects can utilise the models to construct an environment based on the requirements of the floor plan. This can also include the furniture that has to be placed in that environment.

+ ## Task Variants

+ You can contribute variants of this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/text-to-image/about.md).

+ ## Inference

+ You can add a small snippet [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/text-to-image/about.md) that shows how to infer with `text-to-image` models.

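A sketch of what that snippet could look like, assuming the `diffusers` library (the checkpoint id is an assumption; the prompt mirrors the task demo input):

```python
# A minimal sketch, assuming the diffusers library;
# the checkpoint id below is illustrative.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# Prompt mirroring the demo input for this task
prompt = "A city above clouds, pastel colors, Victorian style"
image = pipe(prompt).images[0]
image.save("image.jpeg")
```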
## Useful Resources
- [MinImagen - Build Your Own Imagen Text-to-Image Model](https://www.assemblyai.com/blog/minimagen-build-your-own-imagen-text-to-image-model/)
- [OpenAI Blog - DALL·E](https://openai.com/blog/dall-e/)

+ This page was made possible thanks to the efforts of [Ishan Dutta](https://huggingface.co/ishandutta) and [Oğuz Akif](https://huggingface.co/oguzakif).
19 changes: 15 additions & 4 deletions tasks/src/text-to-image/data.ts
@@ -11,10 +11,21 @@ const taskData: TaskDataCustom = {
id: "conceptual_captions",
},
],
- demo: {
-   inputs: [],
-   outputs: [],
- },
+ demo: {
+   inputs: [
+     {
+       label: "Input",
+       content: "A city above clouds, pastel colors, Victorian style",
+       type: "text",
+     },
+   ],
+   outputs: [
+     {
+       filename: "image.jpeg",
+       type: "img",
+     },
+   ],
+ },
metrics: [],
models: [
{
16 changes: 15 additions & 1 deletion tasks/src/zero-shot-image-classification/about.md
@@ -8,4 +8,18 @@ The data in this learning paradigm consists of

- Seen data - images and their corresponding labels
- Unseen data - only labels and no images
- Auxiliary information - additional information given to the model during training connecting the unseen and seen data. This can be in the form of textual description or word embeddings.

+ ## Task Variants

+ You can contribute variants of this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/zero-shot-image-classification/about.md).

+ ## Inference

+ You can add a small snippet [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/zero-shot-image-classification/about.md) that shows how to infer with `zero-shot-image-classification` models.

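As a starting point, a minimal sketch of such a snippet, assuming the `transformers` pipeline API (the model id, image path, and candidate labels are assumptions; the labels echo the task demo):

```python
# A minimal sketch, assuming the transformers pipeline API;
# model id, image path and labels are illustrative.
from transformers import pipeline

classifier = pipeline(
    task="zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",
)

# Candidate labels can be arbitrary strings the model never saw during training
predictions = classifier(
    "image-classification-input.jpeg",
    candidate_labels=["cat", "dog", "bird"],
)
print(predictions)  # e.g. [{"score": 0.66, "label": "cat"}, ...]
```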
+ ## Useful Resources

+ You can contribute useful resources about this task [here](https://github.com/huggingface/hub-docs/blob/main/tasks/src/zero-shot-image-classification/about.md).

+ This page was made possible thanks to the efforts of [Shamima Hossain](https://huggingface.co/Shamima).
32 changes: 30 additions & 2 deletions tasks/src/zero-shot-image-classification/data.ts
@@ -9,8 +9,36 @@
},
],
demo: {
-   inputs: [],
-   outputs: [],
+   inputs: [
+     {
+       filename: "image-classification-input.jpeg",
+       type: "img",
+     },
+     {
+       label: "Classes",
+       content: "cat, dog, bird",
+       type: "text",
+     },
+   ],
+   outputs: [
+     {
+       type: "chart",
+       data: [
+         {
+           label: "Cat",
+           score: 0.664,
+         },
+         {
+           label: "Dog",
+           score: 0.329,
+         },
+         {
+           label: "Bird",
+           score: 0.008,
+         },
+       ],
+     },
+   ],
},
metrics: [
{
