diff --git a/data-images/holo2-detected.jpg-objects/hololens-1.jpg b/data-images/holo2-detected-objects/hololens-1.jpg
similarity index 100%
rename from data-images/holo2-detected.jpg-objects/hololens-1.jpg
rename to data-images/holo2-detected-objects/hololens-1.jpg
diff --git a/data-images/holo2-detected.jpg-objects/hololens-2.jpg b/data-images/holo2-detected-objects/hololens-2.jpg
similarity index 100%
rename from data-images/holo2-detected.jpg-objects/hololens-2.jpg
rename to data-images/holo2-detected-objects/hololens-2.jpg
diff --git a/data-images/holo2-detected.jpg-objects/hololens-3.jpg b/data-images/holo2-detected-objects/hololens-3.jpg
similarity index 100%
rename from data-images/holo2-detected.jpg-objects/hololens-3.jpg
rename to data-images/holo2-detected-objects/hololens-3.jpg
diff --git a/data-images/holo2-detected.jpg-objects/hololens-4.jpg b/data-images/holo2-detected-objects/hololens-4.jpg
similarity index 100%
rename from data-images/holo2-detected.jpg-objects/hololens-4.jpg
rename to data-images/holo2-detected-objects/hololens-4.jpg
diff --git a/data-images/holo2-detected.jpg-objects/hololens-5.jpg b/data-images/holo2-detected-objects/hololens-5.jpg
similarity index 100%
rename from data-images/holo2-detected.jpg-objects/hololens-5.jpg
rename to data-images/holo2-detected-objects/hololens-5.jpg
diff --git a/data-images/holo2-detected.jpg-objects/hololens-6.jpg b/data-images/holo2-detected-objects/hololens-6.jpg
similarity index 100%
rename from data-images/holo2-detected.jpg-objects/hololens-6.jpg
rename to data-images/holo2-detected-objects/hololens-6.jpg
diff --git a/data-images/holo2-detected.jpg-objects/hololens-7.jpg b/data-images/holo2-detected-objects/hololens-7.jpg
similarity index 100%
rename from data-images/holo2-detected.jpg-objects/hololens-7.jpg
rename to data-images/holo2-detected-objects/hololens-7.jpg
diff --git a/data-images/image3new.jpg-objects/bicycle-5.jpg b/data-images/image3new-objects/bicycle-5.jpg
similarity index 100%
rename from data-images/image3new.jpg-objects/bicycle-5.jpg
rename to data-images/image3new-objects/bicycle-5.jpg
diff --git a/data-images/image3new.jpg-objects/car-4.jpg b/data-images/image3new-objects/car-4.jpg
similarity index 100%
rename from data-images/image3new.jpg-objects/car-4.jpg
rename to data-images/image3new-objects/car-4.jpg
diff --git a/data-images/image3new.jpg-objects/cat-2.jpg b/data-images/image3new-objects/cat-2.jpg
similarity index 100%
rename from data-images/image3new.jpg-objects/cat-2.jpg
rename to data-images/image3new-objects/cat-2.jpg
diff --git a/data-images/image3new.jpg-objects/dog-1.jpg b/data-images/image3new-objects/dog-1.jpg
similarity index 100%
rename from data-images/image3new.jpg-objects/dog-1.jpg
rename to data-images/image3new-objects/dog-1.jpg
diff --git a/data-images/image3new.jpg-objects/motorcycle-3.jpg b/data-images/image3new-objects/motorcycle-3.jpg
similarity index 100%
rename from data-images/image3new.jpg-objects/motorcycle-3.jpg
rename to data-images/image3new-objects/motorcycle-3.jpg
diff --git a/data-images/image3new.jpg-objects/person-10.jpg b/data-images/image3new-objects/person-10.jpg
similarity index 100%
rename from data-images/image3new.jpg-objects/person-10.jpg
rename to data-images/image3new-objects/person-10.jpg
diff --git a/data-images/image3new.jpg-objects/person-6.jpg b/data-images/image3new-objects/person-6.jpg
similarity index 100%
rename from data-images/image3new.jpg-objects/person-6.jpg
rename to data-images/image3new-objects/person-6.jpg
diff --git a/data-images/image3new.jpg-objects/person-7.jpg b/data-images/image3new-objects/person-7.jpg
similarity index 100%
rename from data-images/image3new.jpg-objects/person-7.jpg
rename to data-images/image3new-objects/person-7.jpg
diff --git a/data-images/image3new.jpg-objects/person-8.jpg b/data-images/image3new-objects/person-8.jpg
similarity index 100%
rename from data-images/image3new.jpg-objects/person-8.jpg
rename to data-images/image3new-objects/person-8.jpg
diff --git a/data-images/image3new.jpg-objects/person-9.jpg b/data-images/image3new-objects/person-9.jpg
similarity index 100%
rename from data-images/image3new.jpg-objects/person-9.jpg
rename to data-images/image3new-objects/person-9.jpg
diff --git a/examples/custom_detection_extract_objects.py b/examples/custom_detection_extract_objects.py
index ca948234..45e7daa0 100644
--- a/examples/custom_detection_extract_objects.py
+++ b/examples/custom_detection_extract_objects.py
@@ -15,23 +15,23 @@
"""
SAMPLE RESULT
-holo2-detected.jpg-objects\hololens-1.jpg
+holo2-detected-objects\hololens-1.jpg
hololens : 39.69653248786926 : [611, 74, 751, 154]
---------------
-holo2-detected.jpg-objects\hololens-1.jpg
+holo2-detected-objects\hololens-1.jpg
hololens : 87.6643180847168 : [23, 46, 90, 79]
---------------
-holo2-detected.jpg-objects\hololens-1.jpg
+holo2-detected-objects\hololens-1.jpg
hololens : 89.25175070762634 : [191, 66, 243, 95]
---------------
-holo2-detected.jpg-objects\hololens-1.jpg
+holo2-detected-objects\hololens-1.jpg
hololens : 64.49641585350037 : [437, 81, 514, 133]
---------------
-holo2-detected.jpg-objects\hololens-1.jpg
+holo2-detected-objects\hololens-1.jpg
hololens : 91.78624749183655 : [380, 113, 423, 138]
---------------
"""
\ No newline at end of file
diff --git a/imageai/Detection/Custom/CUSTOMDETECTION.md b/imageai/Detection/Custom/CUSTOMDETECTION.md
index 577b7613..d279cc1c 100644
--- a/imageai/Detection/Custom/CUSTOMDETECTION.md
+++ b/imageai/Detection/Custom/CUSTOMDETECTION.md
@@ -1,29 +1,33 @@
-# ImageAI : Custom Object Detection
-
- detection_config.json
-
-
- Once you download the custom object detection model file, you should copy the model file to the your project folder where your .py files will be.
- Then create a python file and give it a name; an example is FirstCustomDetection.py. Then write the code below into the python file:
+# ImageAI : Custom Object Detection
+A **DeepQuest AI** project [https://deepquestai.com](https://deepquestai.com)
+
+---
+
+
+### TABLE OF CONTENTS
+
+- ▣ Custom Object Detection
+- ▣ Object Detection, Extraction and Fine-tune
+- ▣ Hiding/Showing Object Name and Probability
+- ▣ Image Input & Output Types
+- ▣ Documentation
+
+
+ImageAI provides very convenient and powerful methods to perform object detection on images and extract each object from the image using your own **custom YOLOv3 model** and the corresponding **detection_config.json** generated during the training. To test the custom object detection, you can download a sample custom model we have trained to detect the Hololens headset and its **detection_config.json** file via the links below:
+
+* [**hololens-ex-60--loss-2.76.h5**](https://github.com/OlafenwaMoses/ImageAI/releases/download/essential-v4/hololens-ex-60--loss-2.76.h5) _(Size = 236 mb)_
+* [**detection_config.json**](https://github.com/OlafenwaMoses/ImageAI/releases/download/essential-v4/detection_config.json)
+
+
+ Once you download the custom object detection model file, you should copy the model file to your project folder where your **.py** files will be.
+ Then create a Python file and give it a name; an example is FirstCustomDetection.py. Then write the code below into the Python file:
+
+### FirstCustomDetection.py
- FirstCustomDetection.py
-from imageai.Detection.Custom import CustomObjectDetection
+```python
+from imageai.Detection.Custom import CustomObjectDetection
detector = CustomObjectDetection()
detector.setModelTypeAsYOLOv3()
@@ -34,67 +38,66 @@ detections = detector.detectObjectsFromImage(input_image="holo2.jpg", output_ima
for detection in detections:
print(detection["name"], " : ", detection["percentage_probability"], " : ", detection["box_points"])
-
+```
+
Sample Result - Input:
-
-
- Output:
-
+![Input](../../../data-images/holo2.jpg)
+ Output:
+![Output](../../../data-images/holo2-detected.jpg)
-
+```
hololens : 39.69653248786926 : [611, 74, 751, 154]
hololens : 87.6643180847168 : [23, 46, 90, 79]
hololens : 89.25175070762634 : [191, 66, 243, 95]
hololens : 64.49641585350037 : [437, 81, 514, 133]
hololens : 91.78624749183655 : [380, 113, 423, 138]
-
+```
+
-
Let us make a breakdown of the object detection code that we used above.
-
+```python
from imageai.Detection.Custom import CustomObjectDetection
detector = CustomObjectDetection()
detector.setModelTypeAsYOLOv3()
-
- In the 3 lines above , we import the ImageAI custom object detection class in the first line, created the class instance on the second line and set the model type to YOLOv3.
-
+```
+ In the 3 lines above, we imported the **ImageAI custom object detection** class in the first line, created the class instance in the second line and set the model type to YOLOv3 in the third line.
+```python
detector.setModelPath("hololens-ex-60--loss-2.76.h5")
detector.setJsonPath("detection_config.json")
detector.loadModel()
-
- In the 3 lines above, we specified the file path to our downloaded model file in the first line , specified the path to our detection_config.json file in the second line and loaded the model on the third line.
+```
+ In the 3 lines above, we specified the file path to our downloaded model file in the first line, specified the path to our **detection_config.json** file in the second line and loaded the model in the third line.
-
+```python
detections = detector.detectObjectsFromImage(input_image="holo2.jpg", output_image_path="holo2-detected.jpg")
for detection in detections:
print(detection["name"], " : ", detection["percentage_probability"], " : ", detection["box_points"])
-
+```
-In the 3 lines above, we ran the detectObjectsFromImage() function and parse in the path to our test image, and the path to the new
+In the 3 lines above, we ran the `detectObjectsFromImage()` function and passed in the path to our test image and the path to the new
image which the function will save. Then the function returns an array of dictionaries with each dictionary corresponding
- to the number of objects detected in the image. Each dictionary has the properties name (name of the object),
-percentage_probability (percentage probability of the detection) and box_points ( the x1,y1,x2 and y2 coordinates of the bounding box of the object).
+ to the number of objects detected in the image. Each dictionary has the properties `name` (name of the object),
+`percentage_probability` (percentage probability of the detection) and `box_points` (the x1,y1,x2 and y2 coordinates of the bounding box of the object).
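+
+As an illustration of how these properties can be consumed, below is a minimal sketch (not part of the original example; the Pillow library and the output file names are assumptions) that uses the returned `box_points` to crop each detected region out of the input image:
+
+```python
+# Hypothetical post-processing sketch: crop each detection with Pillow,
+# using the box_points (x1, y1, x2, y2) returned by detectObjectsFromImage().
+from PIL import Image
+
+image = Image.open("holo2.jpg")
+for i, detection in enumerate(detections):
+    x1, y1, x2, y2 = detection["box_points"]
+    region = image.crop((x1, y1, x2, y2))  # Pillow expects (left, upper, right, lower)
+    region.save("{}-{}-cropped.jpg".format(detection["name"], i))
+```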
-
+
+### Object Detection, Extraction and Fine-tune
- Object Detection, Extraction and Fine-tune
In the examples we used above, we ran the object detection on an image and it
-returned the detected objects in an array as well as save a new image with rectangular markers drawn
- on each object. In our next examples, we will be able to extract each object from the input image
- and save it independently.
-
-
- In the example code below which is very identical to the previous object detection code, we will save each object
- detected as a separate image.
+returned the detected objects in an array as well as saved a new image with rectangular markers drawn on each object. In our next examples, we will be able to extract each object from the input image and save it independently.
+
+
+
+In the example code below, which is nearly identical to the previous object detection code, we will save each object detected as a separate image.
- from imageai.Detection.Custom import CustomObjectDetection
+```python
+from imageai.Detection.Custom import CustomObjectDetection
detector = CustomObjectDetection()
detector.setModelTypeAsYOLOv3()
@@ -107,97 +110,101 @@ for detection, object_path in zip(detections, extracted_objects_array):
print(object_path)
print(detection["name"], " : ", detection["percentage_probability"], " : ", detection["box_points"])
print("---------------")
+```
-
-
-
- Sample Result:
-
-
-
-
-
-
-
-
+Sample Result: Output Images
+![](../../../data-images/holo2-detected-objects/hololens-1.jpg)
+![](../../../data-images/holo2-detected-objects/hololens-2.jpg)
+![](../../../data-images/holo2-detected-objects/hololens-3.jpg)
+![](../../../data-images/holo2-detected-objects/hololens-4.jpg)
+![](../../../data-images/holo2-detected-objects/hololens-5.jpg)
+![](../../../data-images/holo2-detected-objects/hololens-6.jpg)
+![](../../../data-images/holo2-detected-objects/hololens-7.jpg)
+
+
-
Let us review the part of the code that performs the object detection and extracts the images:
-
+```python
detections, extracted_objects_array = detector.detectObjectsFromImage(input_image="holo2.jpg", output_image_path="holo2-detected.jpg", extract_detected_objects=True)
for detection, object_path in zip(detections, extracted_objects_array):
print(object_path)
print(detection["name"], " : ", detection["percentage_probability"], " : ", detection["box_points"])
print("---------------")
-
+```
-In the above above lines, we called the detectObjectsFromImage() , parse in the input image path, output image part, and an
-extra parameter extract_detected_objects=True. This parameter states that the function should extract each object detected from the image
-and save it has a seperate image. The parameter is false by default. Once set to true, the function will create a directory
- which is the output image path + "-objects" . Then it saves all the extracted images into this new directory with
- each image's name being the detected object name + "-" + a number which corresponds to the order at which the objects
+In the lines above, we called the `detectObjectsFromImage()` function, passed in the input image path, the output image path, and an
+extra parameter `extract_detected_objects=True`. This parameter states that the function should extract each object detected from the image
+and save it as a separate image. The parameter is `False` by default. Once set to `True`, the function will create a directory
+ which is the `output image path + "-objects"`. Then it saves all the extracted images into this new directory with
+ each image's name being the `detected object name + "-" + a number` which corresponds to the order at which the objects
were detected.
-
+
This new parameter we set to extract and save detected objects as an image will make the function return 2 values. The
first is the array of dictionaries with each dictionary corresponding to a detected object. The second is an array of the paths
to the saved images of each object detected and extracted, and they are arranged in the order in which the objects appear in the
first array.
-
- And one important feature you need to know!
You will recall that the percentage probability
- for each detected object is sent back by the detectObjectsFromImage() function. The function has a parameter
- minimum_percentage_probability , whose default value is 30 (value ranges between 0 - 100) , but it set to 30 in this example. That means the function will only return a detected
- object if it's percentage probability is 30 or above. The value was kept at this number to ensure the integrity of the
+
+
+### And one important feature you need to know!
+
+You will recall that the percentage probability
+ for each detected object is sent back by the `detectObjectsFromImage()` function. The function has a parameter
+ `minimum_percentage_probability`, whose default value is `30` (values range between 0 and 100), and it is set to 30 in this example. That means the function will only return a detected
+ object if its percentage probability is **30 or above**. The value was kept at this number to ensure the integrity of the
detection results. You fine-tune the object
- detection by setting minimum_percentage_probability equal to a smaller value to detect more number of objects or higher value to detect less number of objects.
+ detection by setting `minimum_percentage_probability` to a smaller value to detect more objects, or to a higher value to detect fewer objects.
+
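+For instance, a hedged sketch of such fine-tuning (the threshold value below is illustrative):
+
+```python
+# Illustrative: a lower minimum_percentage_probability returns more (possibly
+# noisier) detections; a higher value returns fewer, more confident ones.
+detections = detector.detectObjectsFromImage(input_image="holo2.jpg", output_image_path="holo2-detected.jpg", minimum_percentage_probability=20)
+```
+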
-
+### Hiding/Showing Object Name and Probability
- Hiding/Showing Object Name and Probability
-ImageAI provides options to hide the name of objects detected and/or the percentage probability from being shown on the saved/returned detected image. Using the detectObjectsFromImage() and detectCustomObjectsFromImage() functions, the parameters 'display_object_name' and 'display_percentage_probability' can be set to True of False individually. Take a look at the code below:
-
-detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path , "holo2.jpg"), output_image_path=os.path.join(execution_path , "holo2_nodetails.jpg"), minimum_percentage_probability=30, display_percentage_probability=False, display_object_name=False)
-
+**ImageAI** provides options to hide the name of objects detected and/or the percentage probability from being shown on the saved/returned detected image. Using the `detectObjectsFromImage()` and `detectCustomObjectsFromImage()` functions, the parameters `display_object_name` and `display_percentage_probability` can be set to `True` or `False` individually. Take a look at the code below:
+```python
+detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path , "holo2.jpg"), output_image_path=os.path.join(execution_path , "holo2_nodetails.jpg"), minimum_percentage_probability=30, display_percentage_probability=False, display_object_name=False)
+```
-
-Result
-
+In the above code, we specified that both the object name and percentage probability should not be shown. As you can see in the result below, both the names of the objects and their individual percentage probabilities are not shown in the detected image.
+**Result**
-
+![](../../../data-images/holo2-nodetails.jpg)
+### Image Input & Output Types
-Image Input & Output Types
-ImageAI custom object detection supports 2 input types of inputs which are file path to image file(default) and numpy array of an image
-as well as 2 types of output which are image file(default) and numpy array .
+
+**ImageAI** custom object detection supports 2 types of input, which are a **file path to an image file** (default) and a **numpy array of an image**,
+as well as 2 types of output, which are an image **file** (default) and a numpy **array**.
This means you can now perform object detection in production applications such as on a web server and systems
that return files in any of the above stated formats.
-
-in the .detectObjectsFromImage() function. See example below.
+To perform object detection with numpy array input, you just need to state the input type
+in the `.detectObjectsFromImage()` function. See example below.
+
+```python
+detections = detector.detectObjectsFromImage(input_type="array", input_image=image_array , output_image_path=os.path.join(execution_path , "holo2-detected.jpg")) # For numpy array input type
+```
+To perform object detection with numpy array output you just need to state the output type
+in the `.detectObjectsFromImage()` function. See example below.
-detections = detector.detectObjectsFromImage(input_type="array", input_image=image_array , output_image_path=os.path.join(execution_path , "holo2-detected.jpg")) # For numpy array input type
-
-in the .detectObjectsFromImage() function. See example below.
+```python
+detected_image_array, detections = detector.detectObjectsFromImage(output_type="array", input_image="holo2.jpg" ) # For numpy array output type
+```
-detected_image_array, detections = detector.detectObjectsFromImage(output_type="array", input_image="holo2.jpg" ) # For numpy array output type
-
-
+### Documentation
- >> Documentation
-We have provided full documentation for all ImageAI classes and functions in 3 major languages. Find links below:
- >> Documentation - English Version [https://imageai.readthedocs.io](https://imageai.readthedocs.io)
- >> Documentation - Chinese Version [https://imageai-cn.readthedocs.io](https://imageai-cn.readthedocs.io)
-
- >> Documentation - French Version [https://imageai-fr.readthedocs.io](https://imageai-fr.readthedocs.io)
+We have provided full documentation for all **ImageAI** classes and functions in 3 major languages. Find links below:
+
+* Documentation - **English Version** [https://imageai.readthedocs.io](https://imageai.readthedocs.io)
+* Documentation - **Chinese Version** [https://imageai-cn.readthedocs.io](https://imageai-cn.readthedocs.io)
+* Documentation - **French Version** [https://imageai-fr.readthedocs.io](https://imageai-fr.readthedocs.io)
diff --git a/imageai/Detection/Custom/CUSTOMDETECTIONTRAINING.md b/imageai/Detection/Custom/CUSTOMDETECTIONTRAINING.md
index c57c933e..752b7c6c 100644
--- a/imageai/Detection/Custom/CUSTOMDETECTIONTRAINING.md
+++ b/imageai/Detection/Custom/CUSTOMDETECTIONTRAINING.md
@@ -1,108 +1,114 @@
-# ImageAI : Custom Detection Model Training
-
-
-ImageAI provides the most simple and powerful approach to training custom object detection models
-using the YOLOv3 architeture, which
-which you can load into the imageai.Detection.Custom.CustomObjectDetection class. This allows
- you to train your own model on any set of images that corresponds to any type of objects of interest.
-The training process generates a JSON file that maps the objects names in your image dataset and the detection anchors, as well as creates lots of models. In choosing the best model for your custom object detection task, an evaluateModel() function has been provided to compute the mAP of your saved models by allowing you to state your desired IoU and Non-maximum Suppression values. Then you can perform custom
-object detection using the model and the JSON file generated.
-
-TABLE OF CONTENTS
- ▣ Preparing your custom dataset
- ▣ Training on your custom Dataset
- ▣ Evaluating your saved detection models' mAP
-
-
-
-Preparing your custom dataset
-
-To train a custom detection model, you need to prepare the images you want to use to train the model.
-You will prepare the images as follows:
-
-1. Decide the type of object(s) you want to detect and collect about 200 (minimum recommendation) or more picture of each of the object(s)
-2. Once you have collected the images, you need to annotate the object(s) in the images. ImageAI uses the Pascal VOC format for image annotation. You can generate this annotation for your images using the easy to use LabelImg image annotation tool, available for Windows, Linux and MacOS systems. Open the link below to install the annotation tool.
-https://github.com/tzutalin/labelImg
+# ImageAI : Custom Detection Model Training
-3. When you are done annotating your images, annotation XML files will be generated for each image in your dataset. For example, if your image names are image(1).jpg, image(2).jpg, image(3).jpg till image(z).jpg; the corresponding annotation for each of the images will be image(1).xml, image(2).xml, image(3).xml till image(z).xml.
-4. Once you have the annotations for all your images, create a folder for your dataset (E.g headsets) and in this parent folder, create child folders train and validation
-5. In the train folder, create images and annotations
- sub-folders. Put about 70-80% of your dataset images in the images folder and put the corresponding annotations for these images in the annotations folder.
-6. In the validation folder, create images and annotations sub-folders. Put the rest of your dataset images in the images folder and put the corresponding annotations for these images in the annotations folder.
-8. Once you have done this, the structure of your image dataset folder should look like below:
+---
- >> train >> images >> img_1.jpg
- >> images >> img_2.jpg
- >> images >> img_3.jpg
- >> annotations >> img_1.xml
- >> annotations >> img_2.xml
- >> annotations >> img_3.xml
- >> validation >> images >> img_151.jpg
- >> images >> img_152.jpg
- >> images >> img_153.jpg
- >> annotations >> img_151.xml
- >> annotations >> img_152.xml
- >> annotations >> img_153.xml
-
+**ImageAI** provides the simplest and most powerful approach to training custom object detection models
+using the YOLOv3 architecture, which
+you can load into the `imageai.Detection.Custom.CustomObjectDetection` class. This allows
+ you to train your own model on any set of images that corresponds to any type of objects of interest.
+The training process generates a JSON file that maps the object names in your image dataset and the detection anchors, as well as creates lots of models. In choosing the best model for your custom object detection task, an `evaluateModel()` function has been provided to compute the **mAP** of your saved models by allowing you to state your desired **IoU** and **Non-maximum Suppression** values. Then you can perform custom
+object detection using the model and the JSON file generated.
-9. You can train your custom detection model completely from scratch or use transfer learning (recommended for better accuracy) from a pre-trained YOLOv3 model. Also, we have provided a sample annotated Hololens and Headsets (Hololens and Oculus) dataset for you to train with. Download the pre-trained YOLOv3 model and the sample datasets in the link below.
+### TABLE OF CONTENTS
+- ▣ Preparing your custom dataset
+- ▣ Training on your custom Dataset
+- ▣ Evaluating your saved detection models' mAP
-https://github.com/OlafenwaMoses/ImageAI/releases/tag/essential-v4
+### Preparing your custom dataset
+
+To train a custom detection model, you need to prepare the images you want to use to train the model.
+You will prepare the images as follows:
+
+1. Decide the type of object(s) you want to detect and collect about **200 (minimum recommendation)** or more pictures of each of the object(s)
+2. Once you have collected the images, you need to annotate the object(s) in the images. **ImageAI** uses the **Pascal VOC format** for image annotation. You can generate this annotation for your images using the easy-to-use [**LabelImg**](https://github.com/tzutalin/labelImg) image annotation tool, available for Windows, Linux and MacOS systems.
+3. When you are done annotating your images, **annotation XML** files will be generated for each image in your dataset. For example, if your image names are **image(1).jpg**, **image(2).jpg**, **image(3).jpg** till **image(z).jpg**; the corresponding annotation for each of the images will be **image(1).xml**, **image(2).xml**, **image(3).xml** till **image(z).xml**.
+4. Once you have the annotations for all your images, create a folder for your dataset (e.g. **headsets**) and in this parent folder, create child folders **train** and **validation**.
+5. In the train folder, create **images** and **annotations**
+ sub-folders. Put about 70-80% of your dataset images in the **images** folder and put the corresponding annotations for these images in the **annotations** folder.
+6. In the validation folder, create **images** and **annotations** sub-folders. Put the rest of your dataset images in the **images** folder and put the corresponding annotations for these images in the **annotations** folder.
+7. Once you have done this, the structure of your image dataset folder should look like below:
+ ```
+ >> train >> images >> img_1.jpg
+ >> images >> img_2.jpg
+ >> images >> img_3.jpg
+ >> annotations >> img_1.xml
+ >> annotations >> img_2.xml
+ >> annotations >> img_3.xml
+
+ >> validation >> images >> img_151.jpg
+ >> images >> img_152.jpg
+ >> images >> img_153.jpg
+ >> annotations >> img_151.xml
+ >> annotations >> img_152.xml
+ >> annotations >> img_153.xml
+ ```
+8. You can train your custom detection model completely from scratch or use transfer learning (recommended for better accuracy) from a pre-trained YOLOv3 model. Also, we have provided a sample annotated Hololens and Headsets (Hololens and Oculus) dataset for you to train with. Download the pre-trained YOLOv3 model and the sample datasets in the link below.
+
+[https://github.com/OlafenwaMoses/ImageAI/releases/tag/essential-v4](https://github.com/OlafenwaMoses/ImageAI/releases/tag/essential-v4)
+
+
+### Training on your custom dataset
-Training on your custom dataset
-Before you start training your custom detection model, kindly take note of the following:
-- The default batch_size is 4. If you are training with Google Colab, this will be fine. However, I will advice you use a more powerful GPU than the K80 offered by Colab as the higher your batch_size (8, 16), the better the accuracy of your detection model.
- - If you experience '_TfDeviceCaptureOp' object has no attribute '_set_device_from_string' error in Google Colab, it is due to a bug in Tensorflow. You can solve this by installing Tensorflow GPU 1.13.1.
-
- pip3 install tensorflow-gpu==1.13.1
+Before you start training your custom detection model, kindly take note of the following:
-Then your training code goes as follows:
-from imageai.Detection.Custom import DetectionModelTrainer
+- The default **batch_size** is 4. If you are training with **Google Colab**, this will be fine. However, we advise you to use a more powerful GPU than the K80 offered by Colab, as the higher your **batch_size (8, 16)**, the better the accuracy of your detection model.
+- If you experience the `'_TfDeviceCaptureOp' object has no attribute '_set_device_from_string'` error in Google Colab, it is due to a bug in **Tensorflow**. You can solve this by installing **Tensorflow GPU 1.13.1**.
+ ```bash
+ pip3 install tensorflow-gpu==1.13.1
+ ```
+
+Then your training code goes as follows:
+```python
+from imageai.Detection.Custom import DetectionModelTrainer
trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory(data_directory="hololens")
trainer.setTrainConfig(object_names_array=["hololens"], batch_size=4, num_experiments=200, train_from_pretrained_model="pretrained-yolov3.h5")
trainer.trainModel()
-
+```
+
Yes! Just 6 lines of code and you can train object detection models on your custom dataset.
-Now lets take a look at how the code above works.
-from imageai.Detection.Custom import DetectionModelTrainer
+Now let's take a look at how the code above works.
+
+```python
+from imageai.Detection.Custom import DetectionModelTrainer
trainer = DetectionModelTrainer()
trainer.setModelTypeAsYOLOv3()
-trainer.setDataDirectory(data_directory="hololens")
-
-In the first line, we import the ImageAI detection model training class, then we define the model trainer in the second line,
+trainer.setDataDirectory(data_directory="hololens")
+```
+
+In the first line, we import the **ImageAI** detection model training class, then we define the model trainer in the second line,
we set the network type in the third line and set the path to the image dataset we want to train the network on.
-trainer.setTrainConfig(object_names_array=["hololens"], batch_size=4, num_experiments=200, train_from_pretrained_model="pretrained-yolov3.h5")
-
-
+```python
+trainer.setTrainConfig(object_names_array=["hololens"], batch_size=4, num_experiments=200, train_from_pretrained_model="pretrained-yolov3.h5")
+```
-In the line above, we configured our detection model trainer. The parameters we stated in the function as as below:
-- num_objects : this is an array containing the names of the objects in our dataset
-- batch_size : this is to state the batch size for the training
-- num_experiments : this is to state the number of times the network will train over all the training images,
- which is also called epochs
-- train_from_pretrained_model(optional) : this is to train using transfer learning from a pre-trained YOLOv3 model
-
-
-trainer.trainModel()
-
-
+In the line above, we configured our detection model trainer. The parameters we stated in the function are as below:
+- **object_names_array** : this is an array containing the names of the objects in our dataset
+- **batch_size** : this is to state the batch size for the training
+- **num_experiments** : this is to state the number of times the network will train over all the training images,
+ which is also called epochs
+- **train_from_pretrained_model(optional)** : this is to train using transfer learning from a pre-trained **YOLOv3** model
+```python
+trainer.trainModel()
+```
-When you start the training, you should see something like this in the console:
-
+
+
+When you start the training, you should see something like this in the console:
+```
Using TensorFlow backend.
Generating anchor boxes for training images and annotation...
Average IOU for 9 anchors: 0.78
@@ -112,8 +118,6 @@ Training on: ['hololens']
Training with Batch Size: 4
Number of Experiments: 200
-
-
Epoch 1/200
- 733s - loss: 34.8253 - yolo_layer_1_loss: 6.0920 - yolo_layer_2_loss: 11.1064 - yolo_layer_3_loss: 17.6269 - val_loss: 20.5028 - val_yolo_layer_1_loss: 4.0171 - val_yolo_layer_2_loss: 7.5175 - val_yolo_layer_3_loss: 8.9683
Epoch 2/200
@@ -127,13 +131,11 @@ Epoch 5/200
Epoch 6/200
- 655s - loss: 4.7582 - yolo_layer_1_loss: 0.9959 - yolo_layer_2_loss: 1.5986 - yolo_layer_3_loss: 2.1637 - val_loss: 5.8313 - val_yolo_layer_1_loss: 1.1880 - val_yolo_layer_2_loss: 1.9962 - val_yolo_layer_3_loss: 2.6471
Epoch 7/200
+```
-
-
-
-Let us explain the details shown above:
-
+Let us explain the details shown above:
+```
Using TensorFlow backend.
Generating anchor boxes for training images and annotation...
Average IOU for 9 anchors: 0.78
@@ -142,15 +144,15 @@ Detection configuration saved in hololens/json/detection_config.json
Training on: ['hololens']
Training with Batch Size: 4
Number of Experiments: 200
-
+```
-The above details signifies the following:
-- ImageAI autogenerates the best match detection anchor boxes for your image dataset.
+The above details signify the following:
+- **ImageAI** autogenerates the best match detection **anchor boxes** for your image dataset.
- The anchor boxes and the object names mapping are saved in
-json/detection_config.json path of in the image dataset folder. Please note that for every new training you start, a new detection_config.json file is generated and is only compatible with the model saved during that training.
+the **json/detection_config.json** path in the image dataset folder. Please note that for every new training you start, a new **detection_config.json** file is generated and is only compatible with the model saved during that training.
-
+```
Epoch 1/200
- 733s - loss: 34.8253 - yolo_layer_1_loss: 6.0920 - yolo_layer_2_loss: 11.1064 - yolo_layer_3_loss: 17.6269 - val_loss: 20.5028 - val_yolo_layer_1_loss: 4.0171 - val_yolo_layer_2_loss: 7.5175 - val_yolo_layer_3_loss: 8.9683
Epoch 2/200
@@ -164,131 +166,114 @@ Epoch 5/200
Epoch 6/200
- 655s - loss: 4.7582 - yolo_layer_1_loss: 0.9959 - yolo_layer_2_loss: 1.5986 - yolo_layer_3_loss: 2.1637 - val_loss: 5.8313 - val_yolo_layer_1_loss: 1.1880 - val_yolo_layer_2_loss: 1.9962 - val_yolo_layer_3_loss: 2.6471
Epoch 7/200
-
+```
-- The above signifies the progress of the training.
-- For each experiment (Epoch), the general total validation loss (E.g - loss: 4.7582) is reported.
-- For each drop in the loss after an experiment, a model is saved in the hololens/models folder. The lower the loss, the better the model.
+- The above signifies the progress of the training.
+- For each experiment (Epoch), the general total validation loss (e.g. `loss: 4.7582`) is reported.
+- For each drop in the loss after an experiment, a model is saved in the **hololens/models** folder. The lower the loss, the better the model.
-Once you are done training, you can visit the link below for performing object detection with your custom detection model and detection_config.json file.
+Once you are done training, you can visit the link below for performing object detection with your **custom detection model** and **detection_config.json** file.
- Detection/Custom/CUSTOMDETECTION.md
+[Detection/Custom/CUSTOMDETECTION.md](./CUSTOMDETECTION.md)
-
+
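+As a preview, here is a minimal sketch of what that looks like (the model file name below is illustrative; pick the lowest-loss model from your **hololens/models** folder):
+
+```python
+# Minimal sketch: load a saved custom model together with the
+# detection_config.json generated by the same training run. Paths are illustrative.
+from imageai.Detection.Custom import CustomObjectDetection
+
+detector = CustomObjectDetection()
+detector.setModelTypeAsYOLOv3()
+detector.setModelPath("hololens/models/detection_model-ex-18--loss-2.96.h5")
+detector.setJsonPath("hololens/json/detection_config.json")
+detector.loadModel()
+detections = detector.detectObjectsFromImage(input_image="holo2.jpg", output_image_path="holo2-detected.jpg")
+```
+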
+### Evaluating your saved detection models' mAP
-Evaluating your saved detection models' mAP
-
-After training on your custom dataset, you can evaluate the mAP of your saved models by specifying your desired IoU and Non-maximum suppression values. See details as below:
-- Single Model Evaluation: To evaluate a single model, simply use the example code below with the path to your dataset directory, the model file and the detection_config.json file saved during the training. In the example, we used an object_threshold of 0.3 ( percentage_score >= 30% ), IoU of 0.5 and Non-maximum suppression value of 0.5.
-
-
-from imageai.Detection.Custom import DetectionModelTrainer
-
-trainer = DetectionModelTrainer()
-trainer.setModelTypeAsYOLOv3()
-trainer.setDataDirectory(data_directory="hololens")
-trainer.evaluateModel(model_path="detection_model-ex-60--loss-2.76.h5", json_path="detection_config.json", iou_threshold=0.5, object_threshold=0.3, nms_threshold=0.5)
-
-
-Sample Result:
-
-
-
-Model File: hololens_detection_model-ex-09--loss-4.01.h5
-Using IoU : 0.5
-Using Object Threshold : 0.3
-Using Non-Maximum Suppression : 0.5
-hololens: 0.9613
-mAP: 0.9613
-===============================
-
-
-
-
- - Multi Model Evaluation: To evaluate all your saved models, simply parse in the path to the folder containing the models as the model_path as seen in the example below:
-
-
-
-from imageai.Detection.Custom import DetectionModelTrainer
-
-trainer = DetectionModelTrainer()
-trainer.setModelTypeAsYOLOv3()
-trainer.setDataDirectory(data_directory="hololens")
-trainer.evaluateModel(model_path="hololens/models", json_path="hololens/json/detection_config.json", iou_threshold=0.5, object_threshold=0.3, nms_threshold=0.5)
-
-
-
-Sample Result:
-
-
-Model File: hololens/models/detection_model-ex-07--loss-4.42.h5
-Using IoU : 0.5
-Using Object Threshold : 0.3
-Using Non-Maximum Suppression : 0.5
-hololens: 0.9231
-mAP: 0.9231
-===============================
-Model File: hololens/models/detection_model-ex-10--loss-3.95.h5
-Using IoU : 0.5
-Using Object Threshold : 0.3
-Using Non-Maximum Suppression : 0.5
-hololens: 0.9725
-mAP: 0.9725
-===============================
-Model File: hololens/models/detection_model-ex-05--loss-5.26.h5
-Using IoU : 0.5
-Using Object Threshold : 0.3
-Using Non-Maximum Suppression : 0.5
-hololens: 0.9204
-mAP: 0.9204
-===============================
-Model File: hololens/models/detection_model-ex-03--loss-6.44.h5
-Using IoU : 0.5
-Using Object Threshold : 0.3
-Using Non-Maximum Suppression : 0.5
-hololens: 0.8120
-mAP: 0.8120
-===============================
-Model File: hololens/models/detection_model-ex-18--loss-2.96.h5
-Using IoU : 0.5
-Using Object Threshold : 0.3
-Using Non-Maximum Suppression : 0.5
-hololens: 0.9431
-mAP: 0.9431
-===============================
-Model File: hololens/models/detection_model-ex-17--loss-3.10.h5
-Using IoU : 0.5
-Using Object Threshold : 0.3
-Using Non-Maximum Suppression : 0.5
-hololens: 0.9404
-mAP: 0.9404
-===============================
-Model File: hololens/models/detection_model-ex-08--loss-4.16.h5
-Using IoU : 0.5
-Using Object Threshold : 0.3
-Using Non-Maximum Suppression : 0.5
-hololens: 0.9725
-mAP: 0.9725
-===============================
-
-
-
-
-
-
-
+After training on your custom dataset, you can evaluate the mAP of your saved models by specifying your desired IoU and Non-maximum suppression values. See details below:
+
+- **Single Model Evaluation:** To evaluate a single model, simply use the example code below with the path to your dataset directory, the model file and the **detection_config.json** file saved during the training. In the example, we used an **object_threshold** of 0.3 (percentage_score >= 30%), an **IoU** of 0.5 and a **Non-maximum suppression** value of 0.5.
+ ```python
+ from imageai.Detection.Custom import DetectionModelTrainer
+
+ trainer = DetectionModelTrainer()
+ trainer.setModelTypeAsYOLOv3()
+ trainer.setDataDirectory(data_directory="hololens")
+ trainer.evaluateModel(model_path="detection_model-ex-60--loss-2.76.h5", json_path="detection_config.json", iou_threshold=0.5, object_threshold=0.3, nms_threshold=0.5)
+ ```
+ Sample Result:
+ ```
+ Model File: hololens_detection_model-ex-09--loss-4.01.h5
+ Using IoU : 0.5
+ Using Object Threshold : 0.3
+ Using Non-Maximum Suppression : 0.5
+ hololens: 0.9613
+ mAP: 0.9613
+ ===============================
+ ```
+- **Multi Model Evaluation:** To evaluate all your saved models, simply pass in the path to the folder containing the models as the **model_path**, as seen in the example below:
+ ```python
+ from imageai.Detection.Custom import DetectionModelTrainer
+
+ trainer = DetectionModelTrainer()
+ trainer.setModelTypeAsYOLOv3()
+ trainer.setDataDirectory(data_directory="hololens")
+ trainer.evaluateModel(model_path="hololens/models", json_path="hololens/json/detection_config.json", iou_threshold=0.5, object_threshold=0.3, nms_threshold=0.5)
+ ```
+ Sample Result:
+ ```
+ Model File: hololens/models/detection_model-ex-07--loss-4.42.h5
+ Using IoU : 0.5
+ Using Object Threshold : 0.3
+ Using Non-Maximum Suppression : 0.5
+ hololens: 0.9231
+ mAP: 0.9231
+ ===============================
+ Model File: hololens/models/detection_model-ex-10--loss-3.95.h5
+ Using IoU : 0.5
+ Using Object Threshold : 0.3
+ Using Non-Maximum Suppression : 0.5
+ hololens: 0.9725
+ mAP: 0.9725
+ ===============================
+ Model File: hololens/models/detection_model-ex-05--loss-5.26.h5
+ Using IoU : 0.5
+ Using Object Threshold : 0.3
+ Using Non-Maximum Suppression : 0.5
+ hololens: 0.9204
+ mAP: 0.9204
+ ===============================
+ Model File: hololens/models/detection_model-ex-03--loss-6.44.h5
+ Using IoU : 0.5
+ Using Object Threshold : 0.3
+ Using Non-Maximum Suppression : 0.5
+ hololens: 0.8120
+ mAP: 0.8120
+ ===============================
+ Model File: hololens/models/detection_model-ex-18--loss-2.96.h5
+ Using IoU : 0.5
+ Using Object Threshold : 0.3
+ Using Non-Maximum Suppression : 0.5
+ hololens: 0.9431
+ mAP: 0.9431
+ ===============================
+ Model File: hololens/models/detection_model-ex-17--loss-3.10.h5
+ Using IoU : 0.5
+ Using Object Threshold : 0.3
+ Using Non-Maximum Suppression : 0.5
+ hololens: 0.9404
+ mAP: 0.9404
+ ===============================
+ Model File: hololens/models/detection_model-ex-08--loss-4.16.h5
+ Using IoU : 0.5
+ Using Object Threshold : 0.3
+ Using Non-Maximum Suppression : 0.5
+ hololens: 0.9725
+ mAP: 0.9725
+ ===============================
+ ```
+
+
+### Documentation
+
+We have provided full documentation for all **ImageAI** classes and functions in 3 major languages. Find links below:
-
- >> Documentation
-We have provided full documentation for all ImageAI classes and functions in 3 major languages. Find links below:
+* Documentation - **English Version** [https://imageai.readthedocs.io](https://imageai.readthedocs.io)
+* Documentation - **Chinese Version** [https://imageai-cn.readthedocs.io](https://imageai-cn.readthedocs.io)
- >> Documentation - English Version [https://imageai.readthedocs.io](https://imageai.readthedocs.io)
- >> Documentation - Chinese Version [https://imageai-cn.readthedocs.io](https://imageai-cn.readthedocs.io)
-
- >> Documentation - French Version [https://imageai-fr.readthedocs.io](https://imageai-fr.readthedocs.io)
+* Documentation - **French Version** [https://imageai-fr.readthedocs.io](https://imageai-fr.readthedocs.io)
diff --git a/imageai/Detection/Custom/CUSTOMVIDEODETECTION.md b/imageai/Detection/Custom/CUSTOMVIDEODETECTION.md
index adb747b0..8ba93af3 100644
--- a/imageai/Detection/Custom/CUSTOMVIDEODETECTION.md
+++ b/imageai/Detection/Custom/CUSTOMVIDEODETECTION.md
@@ -1,36 +1,39 @@
-# ImageAI : Custom Video Object Detection, Tracking and Analysis
-An DeepQuest AI project https://deepquestai.com
-
-
-TABLE OF CONTENTS
-▣ First Custom Video Object Detection
-▣ Camera / Live Stream Video Detection
-▣ Video Analysis
-
-▣ Hiding/Showing Object Name and Probability
-▣ Frame Detection Intervals
-▣ Video Detection Timeout (NEW)
-▣ Documentation
-
- ImageAI provides convenient, flexible and powerful methods to perform object detection on videos using your own custom YOLOv3 model and the corresponding detection_config.json generated during the training. This version of ImageAI provides commercial grade video objects detection features, which include but not limited to device/IP camera inputs, per frame, per second, per minute and entire video analysis for storing in databases and/or real-time visualizations and for future insights.
-To test the custom video object detection,you can download a sample custom model we have trained to detect the Hololens headset and its detection_config.json file via the links below:
- - hololens-ex-60--loss-2.76.h5 (Size = 236 mb)
-
-- detection_config.json
+# ImageAI : Custom Video Object Detection, Tracking and Analysis
+
+A **DeepQuest AI** project [https://deepquestai.com](https://deepquestai.com)
+
+---
+
+### TABLE OF CONTENTS
+
+- ▣ First Custom Video Object Detection
+- ▣ Camera / Live Stream Video Detection
+- ▣ Video Analysis
+- ▣ Hiding/Showing Object Name and Probability
+- ▣ Frame Detection Intervals
+- ▣ Video Detection Timeout (NEW)
+- ▣ Documentation
+
+
+ImageAI provides convenient, flexible and powerful methods to perform object detection on videos using your own **custom YOLOv3 model** and the corresponding **detection_config.json** generated during the training. This version of **ImageAI** provides commercial-grade video object detection features, which include but are not limited to device/IP camera inputs, per frame, per second, per minute and entire video analysis for storing in databases and/or real-time visualizations and for future insights.
+To test the custom video object detection, you can download a sample custom model we have trained to detect the Hololens headset and its **detection_config.json** file via the links below:
+- [**hololens-ex-60--loss-2.76.h5**](https://github.com/OlafenwaMoses/ImageAI/releases/download/essential-v4/hololens-ex-60--loss-2.76.h5) _(Size = 236 mb)_
+- [**detection_config.json**](https://github.com/OlafenwaMoses/ImageAI/releases/download/essential-v4/detection_config.json)
Because video object detection is a compute intensive task, we advise you perform this experiment using a computer with an NVIDIA GPU and the GPU version of Tensorflow
installed. Performing Video Object Detection on CPU will be slower than using an NVIDIA GPU powered computer. You can use Google Colab for this
experiment as it has an NVIDIA K80 GPU available for free.
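+
+If you want to confirm that TensorFlow can see your GPU before starting, a quick optional check (assuming the TensorFlow 1.x versions ImageAI targets here) is:
+
+```python
+# Optional sanity check: prints True when a usable GPU is visible to TensorFlow 1.x.
+import tensorflow as tf
+print(tf.test.is_gpu_available())
+```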
-
+
Once you download the custom object detection model and JSON files, you should copy the model and the JSON files to your project folder where your .py files will be.
- Then create a python file and give it a name; an example is FirstCustomVideoObjectDetection.py. Then write the code below into the python file:
+ Then create a Python file and give it a name; an example is FirstCustomVideoObjectDetection.py. Then write the code below into the Python file:
+### FirstCustomVideoObjectDetection.py
- FirstCustomVideoObjectDetection.py
-from imageai.Detection.Custom import CustomVideoObjectDetection
+```python
+from imageai.Detection.Custom import CustomVideoObjectDetection
import os
execution_path = os.getcwd()
@@ -46,67 +49,60 @@ video_detector.detectObjectsFromVideo(input_file_path="holo1.mp4",
frames_per_second=20,
minimum_percentage_probability=40,
log_progress=True)
-
-
-
-
-
+```
+
+[**Input Video**](../../../data-videos/holo1.mp4)
+![Input Video](../../../data-images/holo-video.jpg)
+[**Output Video**](https://www.youtube.com/watch?v=4o5GyAR4Mpw)
+![Output Video](../../../data-images/holo-video-detected.jpg)
+
+
Let us make a breakdown of the object detection code that we used above.
-
+
+```python
from imageai.Detection.Custom import CustomVideoObjectDetection
import os
execution_path = os.getcwd()
-
- In the 3 lines above , we import the ImageAI custom video object detection class in the first line, import the os in the second line and obtained
+```
+
+In the 3 lines above, we imported the **ImageAI custom video object detection** class in the first line, imported the **os** module in the second line and obtained
the path to the folder where our Python file runs.
-
+```python
video_detector = CustomVideoObjectDetection()
video_detector.setModelTypeAsYOLOv3()
video_detector.setModelPath("hololens-ex-60--loss-2.76.h5")
video_detector.setJsonPath("detection_config.json")
video_detector.loadModel()
-
- In the 4 lines above, we created a new instance of the CustomVideoObjectDetection class in the first line, set the model type to YOLOv3 in the second line,
- set the model path to our custom YOLOv3 model file in the third line, specified the path to the model's corresponding detection_config.json in the fourth line and load the model in the fifth line.
+```
+In the 5 lines above, we created a new instance of the `CustomVideoObjectDetection` class in the first line, set the model type to YOLOv3 in the second line,
+ set the model path to our custom YOLOv3 model file in the third line, specified the path to the model's corresponding **detection_config.json** in the fourth line and loaded the model in the fifth line.
-
+```python
video_detector.detectObjectsFromVideo(input_file_path="holo1.mp4",
output_file_path=os.path.join(execution_path, "holo1-detected3"),
frames_per_second=20,
minimum_percentage_probability=40,
log_progress=True)
-
+```
-In the code above, we ran the detectObjectsFromVideo() function and parse in the path to our video,the path to the new
+In the code above, we ran the `detectObjectsFromVideo()` function and passed in the path to our video, the path to the new
video (without the extension, it saves a .avi video by default) which the function will save, the number of frames per second (fps) that
we desire the output video to have and the option to log the progress of the detection in the console. Then the function returns the path to the saved video
which contains boxes and percentage probabilities rendered on objects detected in the video.
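+
+Because the path is returned, you can capture it directly. A minimal sketch, reusing the detector configured above (the print is only illustrative):
+
+```python
+# The return value is the path to the rendered output video (saved as .avi by default).
+video_path = video_detector.detectObjectsFromVideo(input_file_path="holo1.mp4",
+                                                   output_file_path=os.path.join(execution_path, "holo1-detected3"),
+                                                   frames_per_second=20,
+                                                   minimum_percentage_probability=40,
+                                                   log_progress=True)
+print(video_path)
+```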
-
-
+### Camera / Live Stream Video Detection
-Camera / Live Stream Video Detection
-ImageAI now allows live-video detection with support for camera inputs. Using OpenCV's VideoCapture() function, you can load live-video streams from a device camera, cameras connected by cable or IP cameras, and parse it into ImageAI's detectObjectsFromVideo() function. All features that are supported for detecting objects in a video file is also available for detecting objects in a camera's live-video feed. Find below an example of detecting live-video feed from the device camera.
-
+**ImageAI** now allows live-video detection with support for camera inputs. Using **OpenCV**'s **VideoCapture()** function, you can load live-video streams from a device camera, cameras connected by cable or IP cameras, and pass them into **ImageAI**'s **detectObjectsFromVideo()** function. All features that are supported for detecting objects in a video file are also available for detecting objects in a camera's live-video feed. Find below an example of detecting live-video feed from the device camera.
+
+```python
from imageai.Detection.Custom import CustomVideoObjectDetection
import os
import cv2
execution_path = os.getcwd()
-
-
camera = cv2.VideoCapture(0)
video_detector = CustomVideoObjectDetection()
@@ -120,20 +116,21 @@ video_detector.detectObjectsFromVideo(camera_input=camera,
frames_per_second=20,
minimum_percentage_probability=40,
log_progress=True)
-
+```
-The difference in the code above and the code for the detection of a video file is that we defined an OpenCV VideoCapture instance and loaded the default device camera into it. Then we parsed the camera we defined into the parameter camera_input which replaces the input_file_path that is used for video file.
+The difference in the code above and the code for the detection of a video file is that we defined an **OpenCV VideoCapture** instance and loaded the default device camera into it. Then we passed the camera we defined into the parameter **camera_input**, which replaces the **input_file_path** that is used for a video file.
-
+
+### Video Analysis
-Video Analysis
-ImageAI now provide commercial-grade video analysis in the Custom Video Object Detection class, for both video file inputs and camera inputs. This feature allows developers to obtain deep insights into any video processed with ImageAI. This insights can be visualized in real-time, stored in a NoSQL database for future review or analysis.
+**ImageAI** now provides commercial-grade video analysis in the Custom Video Object Detection class, for both video file inputs and camera inputs. This feature allows developers to obtain deep insights into any video processed with **ImageAI**. These insights can be visualized in real-time, stored in a NoSQL database for future review or analysis.
-For video analysis, the detectObjectsFromVideo() now allows you to state your own defined functions which will be executed for every frame, seconds and/or minute of the video detected as well as a state a function that will be executed at the end of a video detection. Once this functions are stated, they will receive raw but comprehensive analytical data on the index of the frame/second/minute, objects detected (name, percentage_probability and box_points), number of instances of each unique object detected and average number of occurrence of each unique object detected over a second/minute and entire video.
-To obtain the video analysis, all you need to do is specify a function, state the corresponding parameters it will be receiving and parse the function name into the per_frame_function, per_second_function, per_minute_function and video_complete_function parameters in the detection function. Find below examples of video analysis functions.
+For video analysis, the **detectObjectsFromVideo()** function now allows you to state your own defined functions which will be executed for every frame, second and/or minute of the video detected, as well as state a function that will be executed at the end of a video detection. Once these functions are stated, they will receive raw but comprehensive analytical data on the index of the frame/second/minute, objects detected (name, percentage_probability and box_points), number of instances of each unique object detected and average number of occurrences of each unique object detected over a second/minute and the entire video.
-
+To obtain the video analysis, all you need to do is specify a function, state the corresponding parameters it will be receiving and pass the function name into the **per_frame_function**, **per_second_function**, **per_minute_function** and **video_complete_function** parameters in the detection function. Find below examples of video analysis functions.
+
+```python
def forFrame(frame_number, output_array, output_count):
print("FOR FRAME " , frame_number)
print("Output for each object : ", output_array)
@@ -154,7 +151,6 @@ def forMinute(minute_number, output_arrays, count_arrays, average_output_count):
print("Output average count for unique objects in the last minute: ", average_output_count)
print("------------END OF A MINUTE --------------")
-
video_detector = CustomVideoObjectDetection()
video_detector.setModelTypeAsYOLOv3()
video_detector.setModelPath("hololens-ex-60--loss-2.76.h5")
@@ -166,12 +162,11 @@ video_detector.detectObjectsFromVideo(camera_input=camera,
frames_per_second=20, per_second_function=forSeconds, per_frame_function = forFrame, per_minute_function= forMinute,
minimum_percentage_probability=40,
log_progress=True)
+```
-
-
-ImageAI also allows you to obtain complete analysis of the entire video processed. All you need is to define a function like the forSecond or forMinute function and set the video_complete_function parameter into your .detectObjectsFromVideo() function. The same values for the per_second-function and per_minute_function will be returned. The difference is that no index will be returned and the other 3 values will be returned, and the 3 values will cover all frames in the video. Below is a sample function:
-
+**ImageAI** also allows you to obtain complete analysis of the entire video processed. All you need is to define a function like the forSecond or forMinute function and set the **video_complete_function** parameter in your **.detectObjectsFromVideo()** function. The same values for the **per_second_function** and **per_minute_function** will be returned. The difference is that no index will be returned; the other 3 values will be returned, and the 3 values will cover all frames in the video. Below is a sample function:
+```python
def forFull(output_arrays, count_arrays, average_output_count):
#Perform action on the 3 parameters returned into the function
@@ -182,10 +177,11 @@ video_detector.detectObjectsFromVideo(camera_input=camera,
minimum_percentage_probability=40,
log_progress=True)
-
-
-FINAL NOTE ON VIDEO ANALYSIS : ImageAI allows you to obtain the detected video frame as a Numpy array at each frame, second and minute function. All you need to do is specify one more parameter in your function and set return_detected_frame=True in your detectObjectsFromVideo() function. Once this is set, the extra parameter you sepecified in your function will be the Numpy array of the detected frame. See a sample below:
-
+```
+
+**FINAL NOTE ON VIDEO ANALYSIS**: **ImageAI** allows you to obtain the detected video frame as a Numpy array at each frame, second and minute function. All you need to do is specify one more parameter in your function and set **return_detected_frame=True** in your **detectObjectsFromVideo()** function. Once this is set, the extra parameter you specified in your function will be the Numpy array of the detected frame. See a sample below:
+
+```python
def forFrame(frame_number, output_array, output_count, detected_frame):
print("FOR FRAME " , frame_number)
print("Output for each object : ", output_array)
@@ -199,35 +195,35 @@ video_detector.detectObjectsFromVideo(camera_input=camera,
per_frame_function=forFrame,
minimum_percentage_probability=40,
log_progress=True, return_detected_frame=True)
-
+```
+### Frame Detection Intervals
-Frame Detection Intervals
+
The above video object detection tasks are optimized for frame-real-time object detection, which ensures that objects in every frame
-of the video is detected. ImageAI provides you the option to adjust the video frame detections which can speed up
-your video detection process. When calling the .detectObjectsFromVideo(), you can
-specify at which frame interval detections should be made. By setting the frame_detection_interval parameter to be
+of the video are detected. **ImageAI** provides you the option to adjust the video frame detections, which can speed up
+your video detection process. When calling the `.detectObjectsFromVideo()`, you can
+specify at which frame interval detections should be made. By setting the **frame_detection_interval** parameter to be
equal to 5 or 20, that means the object detections in the video will be updated after 5 frames or 20 frames.
-If your output video frames_per_second is set to 20, that means the object detections in the video will
+If your output video **frames_per_second** is set to 20, that means the object detections in the video will
be updated once in every quarter of a second or every second. This is useful in scenarios where the available
compute is less powerful and speeds of moving objects are low. This ensures you can have objects detected as second-real-time,
half-a-second-real-time or whichever way suits your needs.
-
+
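+As a hedged sketch (the interval and output path below are illustrative), assuming the `video_detector` configured earlier:
+
+```python
+# Update detections only on every 5th frame to trade detection
+# frequency for processing speed.
+video_detector.detectObjectsFromVideo(input_file_path="holo1.mp4",
+                                      output_file_path=os.path.join(execution_path, "holo1-detected-interval"),
+                                      frames_per_second=20,
+                                      frame_detection_interval=5,
+                                      minimum_percentage_probability=40,
+                                      log_progress=True)
+```
+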
+### Custom Video Detection Timeout
-Custom Video Detection Timeout
-ImageAI now allows you to set a timeout in seconds for detection of objects in videos or camera live feed. To set a timeout for your video detection code, all you need to do is specify the detection_timeout parameter in the detectObjectsFromVideo() function to the number of desired seconds. In the example code below, we set detection_timeout to 120 seconds (2 minutes).
-
-
+**ImageAI** now allows you to set a timeout in seconds for detection of objects in videos or camera live feed. To set a timeout for your video detection code, all you need to do is set the `detection_timeout` parameter in the `detectObjectsFromVideo()` function to the number of desired seconds. In the example code below, we set `detection_timeout` to 120 seconds (2 minutes).
+
+
+```python
from imageai.Detection.Custom import CustomVideoObjectDetection
import os
import cv2
execution_path = os.getcwd()
-
-
camera = cv2.VideoCapture(0)
video_detector = CustomVideoObjectDetection()
@@ -240,18 +236,16 @@ video_detector.detectObjectsFromVideo(camera_input=camera,
output_file_path=os.path.join(execution_path, "holo1-detected3"),
frames_per_second=20, minimum_percentage_probability=40,
detection_timeout=120)
-
+```
+### Documentation
+
-
+We have provided full documentation for all **ImageAI** classes and functions in 3 major languages. Find links below:
-
- >> Documentation
-We have provided full documentation for all ImageAI classes and functions in 3 major languages. Find links below:
+* Documentation - **English Version** [https://imageai.readthedocs.io](https://imageai.readthedocs.io)
+* Documentation - **Chinese Version** [https://imageai-cn.readthedocs.io](https://imageai-cn.readthedocs.io)
- >> Documentation - English Version [https://imageai.readthedocs.io](https://imageai.readthedocs.io)
- >> Documentation - Chinese Version [https://imageai-cn.readthedocs.io](https://imageai-cn.readthedocs.io)
-
- >> Documentation - French Version [https://imageai-fr.readthedocs.io](https://imageai-fr.readthedocs.io)
+* Documentation - **French Version** [https://imageai-fr.readthedocs.io](https://imageai-fr.readthedocs.io)
diff --git a/imageai/Detection/README.md b/imageai/Detection/README.md
index 4011f41b..24fc6154 100644
--- a/imageai/Detection/README.md
+++ b/imageai/Detection/README.md
@@ -152,15 +152,15 @@ for eachObject, eachObjectPath in zip(detections, objects_path):
![Input Image](../../data-images/image3.jpg)
![Output Images](../../data-images/image3new.jpg)
-![dog](../../data-images/image3new.jpg-objects/dog-1.jpg)
-![motorcycle](../../data-images/image3new.jpg-objects/motorcycle-3.jpg)
-![car](../../data-images/image3new.jpg-objects/car-4.jpg)
-![bicycle](../../data-images/image3new.jpg-objects/bicycle-5.jpg)
-![person](../../data-images/image3new.jpg-objects/person-6.jpg)
-![person](../../data-images/image3new.jpg-objects/person-7.jpg)
-![person](../../data-images/image3new.jpg-objects/person-8.jpg)
-![person](../../data-images/image3new.jpg-objects/person-9.jpg)
-![person](../../data-images/image3new.jpg-objects/person-10.jpg)
+![dog](../../data-images/image3new-objects/dog-1.jpg)
+![motorcycle](../../data-images/image3new-objects/motorcycle-3.jpg)
+![car](../../data-images/image3new-objects/car-4.jpg)
+![bicycle](../../data-images/image3new-objects/bicycle-5.jpg)
+![person](../../data-images/image3new-objects/person-6.jpg)
+![person](../../data-images/image3new-objects/person-7.jpg)
+![person](../../data-images/image3new-objects/person-8.jpg)
+![person](../../data-images/image3new-objects/person-9.jpg)
+![person](../../data-images/image3new-objects/person-10.jpg)
Let us review the part of the code that performs the object detection and extracts the images: