From a60b317483c133c1a6cec89b990b1779c2ee0514 Mon Sep 17 00:00:00 2001 From: Augmented Startups Date: Thu, 27 Jul 2023 13:03:13 +0200 Subject: [PATCH 1/2] Update README.md super-gradients==3.1.3 to solve issues with PyCocoTools on Windows Signed-off-by: Augmented Startups --- README.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index a7002fb..9cf3a68 100644 --- a/README.md +++ b/README.md @@ -58,7 +58,7 @@ source .env/bin/activate pip install numpy Cython pip install cython-bbox asone onnxruntime-gpu==1.12.1 -pip install super-gradients==3.1.1 +pip install super-gradients==3.1.3 # for CPU pip install torch torchvision # for GPU @@ -77,7 +77,7 @@ pip install lap pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox pip install asone onnxruntime-gpu==1.12.1 -pip install super-gradients==3.1.1 +pip install super-gradients==3.1.3 # for CPU pip install torch torchvision @@ -96,7 +96,7 @@ source .env/bin/activate pip install numpy Cython pip install cython-bbox asone -pip install super-gradients==3.1.1 +pip install super-gradients==3.1.3 # for CPU pip install torch torchvision ``` From 2ada2edb0afd4dbcca713911191f51f534d3fbf6 Mon Sep 17 00:00:00 2001 From: 1297rohit <1297rohit@gmail.com> Date: Sat, 20 Jan 2024 12:57:51 +0530 Subject: [PATCH 2/2] added change for installation error --- README.md | 67 ++++++++++++++++++++++++++++++++----------------------- 1 file changed, 39 insertions(+), 28 deletions(-) diff --git a/README.md b/README.md index 9cf3a68..c9e02a1 100644 --- a/README.md +++ b/README.md @@ -2,27 +2,25 @@ [](https://www.youtube.com/watch?v=K-VcpPwcM8k) - - - - #### Table of Contents + 1. Introduction 2. Prerequisites 3. Clone the Repo 4. Installation - - [Linux](#4-installation) - - [Windows 10/11](#4-installation) - - [MacOS](#4-installation) + - [Linux](#4-installation) + - [Windows 10/11](#4-installation) + - [MacOS](#4-installation) 5. Running AS-One 6. 
[Sample Code Snippets](#6-sample-code-snippets)
7. [Model Zoo](asone/linux/Instructions/Benchmarking.md)

## 1. Introduction
+
==UPDATE: YOLO-NAS is OUT==
AS-One is a Python wrapper for multiple detection and tracking algorithms, all in one place. Different trackers such as `ByteTrack`, `DeepSORT` or `NorFair` can be integrated with different versions of `YOLO` with minimal lines of code.

-This python wrapper provides YOLO models in `ONNX`, `PyTorch` & `CoreML` flavors. We plan to offer support for future versions of YOLO when they get released.
+This Python wrapper provides YOLO models in `ONNX`, `PyTorch` & `CoreML` flavors. We plan to offer support for future versions of YOLO when they are released.

This is One Library for most of your computer vision needs.

@@ -35,20 +33,21 @@ Watch the step-by-step tutorial

## 2. Prerequisites

- Make sure to install `GPU` drivers on your system if you want to use a `GPU`. Follow [driver installation](asone/linux/Instructions/Driver-Installations.md) for further instructions.
-- Make sure you have [MS Build tools](https://aka.ms/vs/17/release/vs_BuildTools.exe) installed in system if using windows.
+- Make sure you have [MS Build tools](https://aka.ms/vs/17/release/vs_BuildTools.exe) installed on your system if you are using Windows.
- [Download Git for Windows](https://git-scm.com/download/win) if not installed.

## 3. Clone the Repo

Navigate to an empty folder of your choice.

-```git clone https://github.com/augmentedstartups/AS-One.git```
+`git clone https://github.com/augmentedstartups/AS-One.git`

Change directory to AS-One

-```cd AS-One```
+`cd AS-One`

## 4. Installation
+
For Linux @@ -58,12 +57,14 @@ source .env/bin/activate pip install numpy Cython pip install cython-bbox asone onnxruntime-gpu==1.12.1 +pip install typing_extensions==4.7.1 pip install super-gradients==3.1.3 # for CPU pip install torch torchvision # for GPU pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113 ``` +
@@ -72,11 +73,12 @@ pip install torch torchvision --extra-index-url https://download.pytorch.org/whl ```shell python -m venv .env .env\Scripts\activate -pip install numpy Cython +pip install numpy Cython pip install lap pip install -e git+https://github.com/samson-wang/cython_bbox.git#egg=cython-bbox pip install asone onnxruntime-gpu==1.12.1 +pip install typing_extensions==4.7.1 pip install super-gradients==3.1.3 # for CPU pip install torch torchvision @@ -86,6 +88,7 @@ pip install torch torchvision --extra-index-url https://download.pytorch.org/whl or pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 torchaudio===0.10.1+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html ``` +
For MacOS @@ -100,6 +103,7 @@ pip install super-gradients==3.1.3 # for CPU pip install torch torchvision ``` +
## 5. Running AS-One

@@ -112,10 +116,10 @@ python main.py data/sample_videos/test.mp4
### Run in `Google Colab`

-Open In Colab
-
+Open In Colab

## 6. Sample Code Snippets
+
6.1. Object Detection

@@ -199,13 +203,16 @@ while True:
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break
```
+
6.1.2. Changing Detector Models

Change the detector by simply changing the detector flag. The flags are provided in [benchmark](asone/linux/Instructions/Benchmarking.md) tables.

-* Our library now supports YOLOv5, YOLOv7, and YOLOv8 on macOS.
+
+- Our library now supports YOLOv5, YOLOv7, and YOLOv8 on macOS.
+
```python
# Change detector
detector = ASOne(detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
@@ -226,7 +233,7 @@ detector = ASOne(detector=asone.YOLOV8L_MLMODEL)
6.2. Object Tracking

-Use tracker on sample video.
+Use the tracker on a sample video.

```python
import asone
from asone import ASOne

# Instantiate Asone object
detect = ASOne(tracker=asone.BYTETRACK, detector=asone.YOLOV7_PYTORCH, use_cuda=True) #set use_cuda=False to use cpu
filter_classes = ['person'] # set to None to track all classes

# Get tracking function
track = detect.track_video('data/sample_videos/test.mp4', output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

-# Loop over track to retrieve outputs of each frame
+# Loop over track to retrieve outputs of each frame
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
@@ -255,7 +262,7 @@ for bbox_details, frame_details in track:
# Get tracking function
track = detect.track_webcam(cam_id=0, output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

-# Loop over track to retrieve outputs of each frame
+# Loop over track to retrieve outputs of each frame
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
@@ -268,7 +275,7 @@ for bbox_details, frame_details in track:
stream_url = 'rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mp4'
track = detect.track_stream(stream_url, output_dir='data/results', save_result=True, display=True, filter_classes=filter_classes)

-# Loop over track to retrieve outputs of each frame
+# Loop over track to retrieve outputs of each frame
for bbox_details, frame_details in track:
    bbox_xyxy, ids, scores, class_ids = bbox_details
    frame, frame_num, fps = frame_details
@@ -296,8 +303,8 @@ detect = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOV7_PYTORCH, use_cuda=T
# Change Detector
detect = ASOne(tracker=asone.DEEPSORT, detector=asone.YOLOX_S_PYTORCH, use_cuda=True)
```
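Each tracking function above yields `(bbox_details, frame_details)` tuples per frame. As a minimal post-processing sketch (the helper below is hypothetical and not part of the AS-One API; it only assumes the tuple layout shown in the loops above), unique track IDs can be aggregated per class:

```python
# Hypothetical helper (not part of the AS-One API): aggregate the number of
# distinct track IDs seen per class from the (bbox_details, frame_details)
# tuples yielded by track_video / track_webcam / track_stream.
def count_unique_ids(track):
    seen = {}
    for bbox_details, frame_details in track:
        # Same unpacking as in the loops above.
        bbox_xyxy, ids, scores, class_ids = bbox_details
        frame, frame_num, fps = frame_details
        for track_id, class_id in zip(ids, class_ids):
            seen.setdefault(class_id, set()).add(track_id)
    # Map each class id to the count of distinct track ids observed.
    return {class_id: len(track_ids) for class_id, track_ids in seen.items()}
```

Feeding it the generator returned by `detect.track_video(...)` would, for example, report how many distinct `person` tracks appeared over the whole video.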
+
Run the `asone/demo_detector.py` to test the detector.

```shell
# run on gpu
python -m asone.demo_detector data/sample_videos/test.mp4

# run on cpu
python -m asone.demo_detector data/sample_videos/test.mp4 --cpu
```
+
6.3. Text Detection @@ -325,12 +333,13 @@ import cv2 img_path = 'data/sample_imgs/sample_text.jpeg' ocr = ASOne(detector=asone.CRAFT, recognizer=asone.EASYOCR, use_cuda=True) # Set use_cuda to False for cpu img = cv2.imread(img_path) -results = ocr.detect_text(img) +results = ocr.detect_text(img) img = utils.draw_text(img, results) cv2.imwrite("data/results/results.jpg", img) ``` Use Tracker on Text + ```python import asone from asone import ASOne @@ -344,7 +353,7 @@ detect = ASOne(tracker=asone.DEEPSORT, detector=asone.CRAFT, recognizer=asone.EA # Get tracking function track = detect.track_video('data/sample_videos/GTA_5-Unique_License_Plate.mp4', output_dir='data/results', save_result=True, display=True) -# Loop over track to retrieve outputs of each frame +# Loop over track to retrieve outputs of each frame for bbox_details, frame_details in track: bbox_xyxy, ids, scores, class_ids = bbox_details frame, frame_num, fps = frame_details @@ -378,11 +387,12 @@ import cv2 img_path = 'data/sample_imgs/test2.jpg' pose_estimator = PoseEstimator(estimator_flag=asone.YOLOV8M_POSE, use_cuda=True) #set use_cuda=False to use cpu img = cv2.imread(img_path) -kpts = pose_estimator.estimate_image(img) +kpts = pose_estimator.estimate_image(img) img = utils.draw_kpts(img, kpts) cv2.imwrite("data/results/results.jpg", img) ``` -* Now you can use Yolov8 and Yolov7-w6 for pose estimation. The flags are provided in [benchmark](asone/linux/Instructions/Benchmarking.md) tables. + +- Now you can use Yolov8 and Yolov7-w6 for pose estimation. The flags are provided in [benchmark](asone/linux/Instructions/Benchmarking.md) tables. ```python # Pose Estimation on video @@ -410,9 +420,10 @@ Run the `asone/demo_pose_estimator.py` to test Pose estimation.
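The keypoints returned by `estimate_image` above are passed straight to `utils.draw_kpts`; a common follow-up is deriving a box around one person's keypoints. A minimal sketch, assuming each keypoint is an `(x, y, ...)` sequence such as `(x, y)` or `(x, y, confidence)` (the exact array layout returned by `PoseEstimator` may differ; this helper is illustrative, not part of the AS-One API):

```python
# Hypothetical helper (not part of the AS-One API): compute a tight
# axis-aligned bounding box around one person's keypoints.
# Assumes each keypoint is an (x, y, ...) sequence.
def kpts_to_bbox(kpts, margin=0):
    xs = [p[0] for p in kpts]
    ys = [p[1] for p in kpts]
    # (x1, y1, x2, y2), optionally padded by `margin` pixels on each side.
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)
```

Such a box can then be drawn or used to crop the person region before further processing.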
To set up ASOne using Docker, follow the instructions given in [docker setup](asone/linux/Instructions/Docker-Setup.md).

# ToDo
+
- [x] First Release
- [x] Import trained models
- [x] Simplify code even further
@@ -424,6 +435,6 @@
- [x] YOLO-NAS
- [ ] SAM Integration

-|Offered By: |Maintained By:|
-|-------------|-------------|
-|[![AugmentedStarups](https://user-images.githubusercontent.com/107035454/195115263-d3271ef3-973b-40a4-83c8-0ade8727dd40.png)](https://augmentedstartups.com)|[![AxcelerateAI](https://user-images.githubusercontent.com/107035454/195114870-691c8a52-fcf0-462e-9e02-a720fc83b93f.png)](https://axcelerate.ai/)|
+| Offered By: | Maintained By: |
+| ------------- | -------------- |
+| [![AugmentedStartups](https://user-images.githubusercontent.com/107035454/195115263-d3271ef3-973b-40a4-83c8-0ade8727dd40.png)](https://augmentedstartups.com) | [![AxcelerateAI](https://user-images.githubusercontent.com/107035454/195114870-691c8a52-fcf0-462e-9e02-a720fc83b93f.png)](https://axcelerate.ai/) |