update jupyterbook #37

Merged
merged 3 commits into from
Jan 25, 2022
Binary file modified book/_build/.doctrees/content/README.doctree
Binary file modified book/_build/.doctrees/content/docker_container.doctree
Binary file modified book/_build/.doctrees/content/extract_frames.doctree
Binary file modified book/_build/.doctrees/content/gui_or_cli.doctree
Binary file modified book/_build/.doctrees/content/high_level_overview.doctree
Binary file modified book/_build/.doctrees/content/how_to_install.doctree
Binary file modified book/_build/.doctrees/content/introduction.doctree
Binary file modified book/_build/.doctrees/environment.pickle
11 changes: 8 additions & 3 deletions book/_build/html/_sources/content/docker_container.md
@@ -1,9 +1,14 @@
# Docker

We built a docker container for annolid to make it easier to access the package without the need to install anything beside Docker.
We provide a script to build a Docker container for Annolid, making it easier to access the package without the need to install anything besides Docker.

You need to make sure that [Docker](https://docs.docker.com/engine/install/ubuntu/) is installed on your system (or similar software capable of building containerized applications).


```{note}
Currently this has only been tested on Ubuntu 20.04 LTS.
```

You need to make sure that Docker is installed on your system.
This has only been test on Ubuntu 20.04 LTS.

```
cd annolid/docker
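# A minimal sketch of the typical next steps, assuming the Dockerfile lives in
# annolid/docker as above; the image tag "annolid" and the X11 options are
# illustrative assumptions, and the script shipped in the repository may differ.
docker build -t annolid .                        # build the image from the provided Dockerfile
xhost +local:docker                              # allow the container to use the host X server (Linux/X11)
docker run -it --rm \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    annolid                                      # run the image (the entrypoint is assumed to launch the GUI)
```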
10 changes: 4 additions & 6 deletions book/_build/html/_sources/content/extract_frames.md
@@ -2,12 +2,10 @@

First you need to extract the frames from your video.

Start the Annolid GUI.
Click Extract frames in the toolbar list. When the dialog opens, click and select the video file.
You can choose the start and the end (in seconds) if you want extract frames in the given time interval.
Use random algorithm and select the desired number of frames.
The waiting time depends on the video length, and the algorimth that was selected. **It might take a while to finish**.
When the process is done, it will load all the images into canvas for labeling. You can check the extract frames by clicking the files in the file list.
Start the Annolid GUI (as a reminder, type `annolid` in the terminal).
Click Extract frames in the toolbar list. When the dialog opens, click and select the video file. You can choose the start and the end (in seconds) if you want to extract frames in a given time interval. Select your desired algorithm for extracting the frames (e.g. random) and the desired number of frames.

The waiting time depends on the video length and the algorithm that was selected. **It might take a while to finish**. When the process is done, it will load all the images for labelling. You can check the extracted frames by clicking the files in the file list.
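
As a quick reference, the sketch below shows how one might reach this point from a terminal, assuming the `annolid-env` conda environment described on the installation page.

```
conda activate annolid-env   # environment created during installation
annolid                      # launch the Annolid GUI, then click Extract frames in the toolbar
```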

Here is a video summary to help guide you in this process:
<figure class="video_container">
4 changes: 3 additions & 1 deletion book/_build/html/_sources/content/gui_or_cli.md
@@ -1,3 +1,5 @@
# GUI or CLI?

To use Annolid you can either use the command line or the graphical interface we developed (an heavily modified version of LabelMe, an annotating software by Kentaro Wada https://github.com/wkentaro/labelme).
To use Annolid you can either use the command line (CLI) or the graphical user interface (GUI) we developed, based on a heavily modified version of LabelMe, an annotation tool developed by Kentaro Wada (https://github.com/wkentaro/labelme).

In the following pages we will provide you with guidelines for getting started using Annolid through the GUI or the CLI. As we expect most people to use the GUI, the explanations will start with it. If you are looking for CLI commands, look at the bottom of each page.
7 changes: 4 additions & 3 deletions book/_build/html/_sources/content/high_level_overview.md
@@ -1,6 +1,6 @@
# High level overview

A high level overview of its workflow consists in the following steps:
At a high level Annolid's workflow consists of the following steps:

1. Labeling of frames (annotation) & COCO formatting

@@ -13,12 +13,13 @@ A high level overview of its workflow consists in the following steps:


# Accessibility and efficiency
We work hard toward making Annolid as accessible as possible to anyone trying to use it, and we are striving to make all the code also runnable on Google Colab.

- Options for training on Google Colab (as well as on a local workstation)
- Options for training on Google Colab as well as on a local workstation
- Fast training with quality- and speed-optimized options
- Model training:
- 200 labeled images
- < 2 hours for 3000 iterations on Colab
- < 2 hours for 3000 iterations on Google Colab
- 30 min on NVidia 1080Ti
- Inference (applying trained model to behavior videos)
- Mask R-CNN: ~7 FPS
63 changes: 42 additions & 21 deletions book/_build/html/_sources/content/how_to_install.md
@@ -2,20 +2,20 @@

To run Annolid, we suggest using Anaconda package/environment manager for Python. Download and install the [Anaconda](https://www.anaconda.com/products/individual) environment first. Then do the following, using the bash shell in Linux or the conda command line (= Anaconda Prompt) in Windows.

We also provide a pypi version of Annaconda that you can use but it is most likely not an as up-to-date version of the code as the codebase on Github.

```{note}
We also provide a PyPI version of Annolid that you can use, but it is most likely not as up to date as the codebase on GitHub (at least for the foreseeable future).
```

## Requirements
Ubuntu / macOS / Windows \
Python >= 3.7 \
[PyQt4 / PyQt5]
- Ubuntu / macOS / Windows
- Python >= 3.7
- [PyQt4 / PyQt5]


## Install annolid locally
## Install Annolid locally

We create a virtual environment called _annolid-env_ into which we will install Annolid and all of its dependencies, along with whatever other Python tools we need. Python 3.7 is recommended, as it is the version being used for Annolid development.
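
A minimal sketch of creating such an environment by hand is shown below, assuming plain conda; the exact setup steps recommended in the repository (for example an `environment.yml` file) may differ.

```
# Minimal sketch: create a Python 3.7 environment named annolid-env, as described above
conda create -n annolid-env python=3.7
```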

### Clone the code repo and change into the directory
### Clone Annolid repository and change into the directory
```
git clone --recurse-submodules https://github.com/healthonrails/annolid.git
cd annolid
@@ -31,16 +31,16 @@ Note: if you got this error:
```

```{note}
Alternatively you can install with pip if you prefer
pip install -e .
Alternatively, you can install with pip if you prefer:
`pip install -e .`
```

```{note}
If you encounter errors On Windows for pycocotools, please download and install [Visual studio 2019](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community&rel=16). Then please run the following command in your terminal. `pip install "git+https://github.com/philferriere/cocoapi.git#egg=pycocotools&subdirectory=PythonAPI"`
On Windows, if you encounter errors for pycocotools, please download and install [Visual Studio 2019](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=Community&rel=16). Then run the following command in your terminal: `pip install "git+https://github.com/philferriere/cocoapi.git#egg=pycocotools&subdirectory=PythonAPI"`
```

```{note}
To fix error, `“Failed to load platform plugin “xcb” while launching qt5 app on Linux` run `sudo apt install --reinstall libxcb-xinerama0`
To fix the error `Failed to load platform plugin "xcb"` while launching a Qt5 app on Linux, run `sudo apt install --reinstall libxcb-xinerama0`
```

We then activate the virtual environment we just created.
@@ -49,7 +49,7 @@ conda activate annolid-env
```

```{note}
Be sure to activate the annolid virtual environment every time you restart Anaconda or your computer; the conda shell prompt should read "(annolid)"
Be sure to activate the annolid virtual environment every time you restart Anaconda or your computer; the shell prompt should read "(annolid-env)"
```

Finally, to open the Annolid GUI, just type the following:
@@ -60,14 +60,13 @@ annolid

## Install Detectron2 locally
::::{Important}
if you intend to process your tagged videos using Google Colab (which you should do unless you are using a workstation with a higher-end GPU), then you do not need to install Detectron2 on your local machine, and you can ignore this section.
If you intend to process your tagged videos using Google Colab (which you should do unless you are using a workstation with a higher-end GPU), then you do not need to install Detectron2 on your local machine, and you can ignore this section.
::::


### Requirements:

Windows, Linux or macOS with Python ≥ 3.7
PyTorch ≥ 1.5 and torchvision that matches the PyTorch installation. Install them together at [pytorch.org](http://pytorch.org) to make sure of this. Presently, the combination of torch 1.8 and torchvision 0.9.1 works well, along with pyyaml 5.3, as shown below.
Windows, Linux or macOS with Python ≥ 3.7, PyTorch ≥ 1.5 and a torchvision version that matches the PyTorch installation. Install them together at [pytorch.org](http://pytorch.org) to make sure of this. Presently, the combination of torch 1.8 and torchvision 0.9.1 works well, along with pyyaml 5.3, as shown below.
For purposes of using Annolid, it is OK to downgrade pyyaml from its current version to 5.3.

### Install Detectron2 dependencies:
@@ -87,20 +86,42 @@ See https://detectron2.readthedocs.io/tutorials/install.html for further informa
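
A hedged sketch of one way to pin the versions mentioned above and then install Detectron2 from source is shown below; the Detectron2 install page linked above remains the authoritative reference.

```
# Sketch only: versions taken from the requirements above; your platform may need
# different (e.g. CUDA-specific) wheels -- see the Detectron2 install page.
pip install torch==1.8.1 torchvision==0.9.1 pyyaml==5.3
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
```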

### Install Detectron2 on Windows 10

`git clone https://github.com/facebookresearch/detectron2.git`
`cd detectron2`
`pip install -e .`
```
git clone https://github.com/facebookresearch/detectron2.git
cd detectron2
pip install -e .
```


```{note}
If you encounter an error on Windows with a message such as:
`in _run_ninja_build raise RuntimeError(message) RuntimeError: Error compiling objects for extension`, please go to https://support.microsoft.com/en-us/topic/the-latest-supported-visual-c-downloads-2647da03-1eea-4433-9aff-95f26a218cc0 and download x64: `vc_redist.x64.exe`. Install it, and after restarting you can cd into the detectron2 folder and run the following command: `pip install -e .`
```

# Using Detectron2 on google Colab
Instructions will be posted here presently. (Colab uses CUDA 10.2 + torch 1.9.0).
# Using Detectron2 on Google Colab
```{note}
If you installed Detectron2 locally you can skip this section.
```

This step is only needed if you did not install Detectron2 locally and intend to process your tagged videos using Google Colab.
Google Colab uses CUDA 10.2 + torch 1.9.0.

[![Google Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/healthonrails/annolid/blob/master/docs/tutorials/Annolid_on_Detectron2_Tutorial.ipynb)
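
To confirm that the Colab runtime matches the versions mentioned above, a quick check such as the following can be run in a notebook cell (prefixed with `!`) or a terminal:

```
python -c "import torch; print(torch.__version__, torch.version.cuda)"   # expect something like 1.9.0 10.2
nvcc --version                                                           # CUDA toolkit version, if installed
```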

# Using YOLACT instead of Detectron2:
```{note}
YOLACT models are less accurate compared to Mask R-CNN in Detectron2. However, they are faster at inference.
```
DCNv2 will not work if PyTorch is greater than 1.4.0.

```
!pip install torchvision==0.5.0
!pip install torch==1.4.0
```

For more information, please check https://github.com/healthonrails/annolid/blob/master/docs/tutorials/Train_networks_tutorial_v1.0.1.ipynb and https://github.com/healthonrails/yolac


# Alternative installation
## Get stable release from PyPI
```
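# A minimal sketch, assuming the package is published on PyPI under the name
# "annolid"; the exact command recommended by the project may differ.
pip install annolid
```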
20 changes: 12 additions & 8 deletions book/_build/html/_sources/content/introduction.md
@@ -1,11 +1,11 @@
# Introduction
Annolid stands for: Annotation + Annelid (segmentation)
Annolid stands for: Annotation + Annelid (segmentation).

Annolid is based on instance segmentation models, which facilitate the tracking of multiple animals along with flexible state identification (e.g., behavior classification, urine deposition, interactions among objects). Annolid has self-supervised, weakly-supervised, and unsupervised training options. We are striving to incorporate optical flow mechanics to improve performances as well as improving labeling efficiency via autolabeling and iterative model training.
Annolid is based on instance segmentation models. Instance segmentation is the task of attributing every pixel of an image to a specific category; it can be used to detect and delineate each distinct object of interest appearing in that image. As such, it facilitates the tracking of multiple animals along with flexible state identification (e.g., behavior classification, urine deposition, interactions among objects). Annolid has self-supervised, weakly-supervised, and unsupervised training options. We are striving to incorporate optical flow mechanics to improve performance, as well as to improve labeling efficiency via autolabeling and iterative model training.

Currently, Annolid is a work in progress, still in its alpha version, and subject to major changes. Nevertheless, we hope you can use this Jupyter Book as an efficient guide through the process of using Annolid for your specific use case.

If you need help or encounter an issue don't hesitate to reach out to the developers by openning an [issue](get_in_touch) on github
If you need help or encounter an issue, don't hesitate to reach out to the developers by opening an [issue](get_in_touch) on GitHub.

## Video introduction
Below is a brief introduction to annolid:
@@ -14,13 +14,11 @@ Below is a brief introduction to annolid:
<iframe width="720" height="480" src="https://www.youtube.com/embed/tVIE6vG9Gao" frameborder="0" allowfullscreen="true"> </iframe>
</figure>

## Youtube playlist
You can find this video, tutorial on how to use annolid as well as exemples in annolid's youtube playlist [here](https://www.youtube.com/playlist?list=PLYp4D9Y-8_dRXPOtfGu48W5ENtfKn-Owc).
## Annolid can be applied to many diverse goals


## Annolid can be applied to many diverse goals:
- Animal Tracking
- Keypoints tracking (ie body parts)
- Keypoint tracking (i.e. body parts)
- Automated behavior recognition

<figure class="video_container">
@@ -46,6 +44,7 @@ Video courtesy of Rikki Laser and Alex Ophir:
<iframe width="720" height="480" src="https://lh5.googleusercontent.com/FyOrtO6nEGeBEgEnZeuPf66cfqanl7NNmJFnHG7tJRnnvEOrf0FFfKNjT64pIS2HHjMs3queacFYFBVt4n18s4U1Dr6r7m3IYfEJzit83dh4UVRuUOpRUlU0UUjl0a7Bd6LACqGBuVc" frameborder="0" allowfullscreen="true"> </iframe>
</figure>


- Animal and object tracking, including periods of occlusion
- Tracked objects automatically associated with user-defined zones
- Robustness to noisy background
@@ -73,7 +72,12 @@ Video courtesy of Jessica Nowicki, Julia Cora-anne Lee, and Lauren O’Connell:


- Multiple animal tracking with a large field of view

Video courtesy of Santiago Forero and Alex Ophir:
<figure class="video_container">
<iframe width="720" height="480" src="https://drive.google.com/file/d/1cYdmueC-CaMhScpcB2E-eL9mkGDFuVCs/view?resourcekey" frameborder="0" allowfullscreen="true"> </iframe>
<iframe width="720" height="480" src="https://drive.google.com/file/d/1cYdmueC-CaMhScpcB2E-eL9mkGDFuVCs/preview" frameborder="0" allowfullscreen="true"> </iframe>
</figure>


## YouTube playlist
You can find these videos, tutorials on how to best use Annolid, as well as examples, in Annolid's YouTube playlist [here](https://www.youtube.com/playlist?list=PLYp4D9Y-8_dRXPOtfGu48W5ENtfKn-Owc).