Commit 362e006

Merge pull request #56 from augmentedstartups/installation_error_fix

added change for installation error

augmentedstartups authored Feb 6, 2024 · 2 parents a60b317 + 2ada2ed

README.md · 39 additions, 28 deletions

[<img src="https://kajabi-storefronts-production.kajabi-cdn.com/kajabi-storefronts-production/file-uploads/themes/2151476941/settings_images/65d82-0d84-6171-a7e0-5aa180b657d5_Black_with_Logo.jpg" width="100%">](https://www.youtube.com/watch?v=K-VcpPwcM8k)

#### Table of Contents

1. [Introduction](#1-introduction)
2. [Prerequisites](#2-prerequisites)
3. [Clone the Repo](#3-clone-the-repo)
4. [Installation](#4-installation)
   - [Linux](#4-installation)
   - [Windows 10/11](#4-installation)
   - [MacOS](#4-installation)
5. [Running AS-One](#5-running-as-one)
6. [Sample Code Snippets](#6-sample-code-snippets)
7. [Model Zoo](asone/linux/Instructions/Benchmarking.md)

## 1. Introduction

**UPDATE: YOLO-NAS is OUT**

AS-One is a Python wrapper that brings multiple detection and tracking algorithms together in one place. Trackers such as `ByteTrack`, `DeepSORT`, and `NorFair` can be paired with different versions of `YOLO` in a minimum of lines of code.
The wrapper provides YOLO models in `ONNX`, `PyTorch` & `CoreML` flavors, and we plan to support future versions of YOLO as they are released.

This is One Library for most of your computer vision needs.
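
As a quick taste of the API, here is a minimal sketch that pairs one tracker with one detector; the flags and the `track_video` loop mirror the snippets in section 6, and the video path is the bundled sample:

```python
import asone
from asone import ASOne

# Pair a supported tracker with a supported detector in a single line
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV7_PYTORCH, use_cuda=True)

# track_video yields results frame by frame as a generator
track = detect.track_video('data/sample_videos/test.mp4', output_dir='data/results', save_result=True, display=True)

for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
```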

Watch the step-by-step tutorial
## 2. Prerequisites

- Make sure to install `GPU` drivers if you want to use a `GPU`. Follow [driver installation](asone/linux/Instructions/Driver-Installations.md) for further instructions.
- Make sure you have [MS Build tools](https://aka.ms/vs/17/release/vs_BuildTools.exe) installed if you are on Windows.
- [Download git for windows](https://git-scm.com/download/win) if it is not already installed.

## 3. Clone the Repo

Navigate to an empty folder of your choice.

```shell
git clone https://github.com/augmentedstartups/AS-One.git
```

Change directory to AS-One:

```shell
cd AS-One
```

## 4. Installation

<details open>
<summary>For Linux</summary>

```shell
python3 -m venv .env
source .env/bin/activate

pip install numpy Cython
pip install cython-bbox asone onnxruntime-gpu==1.12.1
pip install typing_extensions==4.7.1
pip install super-gradients==3.1.3
# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
```

</details>

<details>
<summary>For Windows 10/11</summary>
```shell
python -m venv .env
.env\Scripts\activate
pip install numpy Cython
pip install lap
pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox

pip install asone onnxruntime-gpu==1.12.1
pip install typing_extensions==4.7.1
pip install super-gradients==3.1.3
# for CPU
pip install torch torchvision
# for GPU
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
or
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio===0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
```

</details>
<details>
<summary>For MacOS</summary>
```shell
pip install super-gradients==3.1.3
# for CPU
pip install torch torchvision
```

</details>
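
Whichever platform you installed on, a quick optional sanity check (assuming the `GPU` packages above) is:

```python
import torch
import onnxruntime as ort

print(torch.cuda.is_available())      # True if the GPU build of PyTorch can see your driver
print(ort.get_available_providers())  # should include 'CUDAExecutionProvider' for onnxruntime-gpu
```

If both report CPU only, AS-One still runs; just keep `use_cuda=False` in the snippets below.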

## 5. Running AS-One
```shell
python main.py data/sample_videos/test.mp4
```

### Run in `Google Colab`

<a href="https://drive.google.com/file/d/1xy5P9WGI19-PzRH3ceOmoCgp63K6J_Ls/view?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>


## 6. Sample Code Snippets

<details>
<summary>6.1. Object Detection</summary>

```python
while True:
    # ...
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break
```

</details>

<details>
<summary>6.1.2. Changing Detector Models </summary>

Change the detector by simply changing the detector flag. The flags are provided in the [benchmark](asone/linux/Instructions/Benchmarking.md) tables.

- Our library now supports YOLOv5, YOLOv7, and YOLOv8 on macOS.

```python
# Change detector
detector = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
detector = ASOne(detector=asone.YOLOV8L_MLMODEL)
```

</details>
<details>
<summary>6.2. Object Tracking </summary>

Use the tracker on a sample video.

```python
import asone
from asone import ASOne

# Instantiate ASOne object
detect = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV7_PYTORCH, use_cuda=True)  # set use_cuda=False to use cpu

filter_classes = ['person']  # set to None to track all classes
# Get tracking function
track = detect.track_video('data/sample_videos/test.mp4', output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

# Loop over track to retrieve outputs of each frame
for bbox_details, frame_details in track:
bbox_xyxy, ids, scores, class_ids = bbox_details
frame, frame_num, fps = frame_details
```

Use the tracker on a webcam:

```python
# Get tracking function
track = detect.track_webcam(cam_id=0, output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

# Loop over track to retrieve outputs of each frame
for bbox_details, frame_details in track:
bbox_xyxy, ids, scores, class_ids = bbox_details
frame, frame_num, fps = frame_details
```

Use the tracker on an RTSP stream:

```python
stream_url = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4'
track = detect.track_stream(stream_url, output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

# Loop over track to retrieve outputs of each frame
for bbox_details, frame_details in track:
bbox_xyxy, ids, scores, class_ids = bbox_details
frame, frame_num, fps = frame_details
```

<details>
<summary>Changing Detector and Tracker</summary>

```python
# Change Tracker
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV7_PYTORCH, use_cuda=True)
# Change Detector
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
```
</details>
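
The tracking loops above hand you raw arrays, so you can post-process frames yourself instead of relying on `save_result`/`display`. A hedged sketch with plain OpenCV, reusing the `track` generator from the snippets above (it assumes `bbox_xyxy` holds `[x1, y1, x2, y2]` rows, which the name suggests but the snippets do not spell out):

```python
import cv2

for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
    # Draw each tracked box with its track id
    for (x1, y1, x2, y2), track_id in zip(bbox_xyxy, ids):
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.putText(frame, str(track_id), (int(x1), int(y1) - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow('tracks', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cv2.destroyAllWindows()
```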

Run `asone/demo_detector.py` to test the detector.

```shell
# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4
# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
```

</details>
<details>
<summary>6.3. Text Detection</summary>
```python
import asone
from asone import utils
from asone import ASOne
import cv2
img_path = 'data/sample_imgs/sample_text.jpeg'
ocr = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) # Set use_cuda to False for cpu
img = cv2.imread(img_path)
results = ocr.detect_text(img)
img = utils.draw_text(img, results)
cv2.imwrite("data/results/results.jpg", img)
```

Use Tracker on Text

```python
import asone
from asone import ASOne
# Instantiate ASOne object
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True)  # set use_cuda=False to use cpu
# Get tracking function
track = detect.track_video('data/sample_videos/GTA_5-Unique_License_Plate.mp4', output_dir='data/results', save_result=True, display=True)

# Loop over track to retrieve outputs of each frame
for bbox_details, frame_details in track:
bbox_xyxy, ids, scores, class_ids = bbox_details
frame, frame_num, fps = frame_details
```

</details>

<details>
<summary>6.4. Pose Estimation</summary>

```python
import asone
from asone import PoseEstimator, utils
import cv2
img_path = 'data/sample_imgs/test2.jpg'
pose_estimator = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True) #set use_cuda=False to use cpu
img = cv2.imread(img_path)
kpts = pose_estimator.estimate_image(img)
img = utils.draw_kpts(img, kpts)
cv2.imwrite("data/results/results.jpg", img)
```
- Now you can use YOLOv8 and YOLOv7-w6 for pose estimation. The flags are provided in the [benchmark](asone/linux/Instructions/Benchmarking.md) tables.

```python
# Pose Estimation on video
```

Run `asone/demo_pose_estimator.py` to test pose estimation.
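
A hedged sketch of pose estimation on video, composing `estimate_image` from the image example above with a plain OpenCV capture loop (the library may expose its own video API; this is just one way to do it under that assumption):

```python
import asone
from asone import PoseEstimator, utils
import cv2

pose_estimator = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True)  # set use_cuda=False to use cpu

cap = cv2.VideoCapture('data/sample_videos/test.mp4')
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    kpts = pose_estimator.estimate_image(frame)  # per-frame keypoints, as in the image example
    frame = utils.draw_kpts(frame, kpts)
    cv2.imshow('pose', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```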

</details>

To set up AS-One using Docker, follow the instructions given in [docker setup](asone/linux/Instructions/Docker-Setup.md).

# ToDo

- [x] First Release
- [x] Import trained models
- [x] Simplify code even further
- [x] YOLO-NAS
- [ ] SAM Integration

| Offered By: | Maintained By: |
| ----------- | -------------- |
| [![AugmentedStarups](https://user-images.githubusercontent.com/107035454/195115263-d3271ef3-973b-40a4-83c8-0ade8727dd40.png)](https://augmentedstartups.com) | [![AxcelerateAI](https://user-images.githubusercontent.com/107035454/195114870-691c8a52-fcf0-462e-9e02-a720fc83b93f.png)](https://axcelerate.ai/) |
