## Deep Learning Project ##
In this project, you will train a deep neural network to identify and track a target in simulation. So-called “follow me” applications like this are key to many fields of robotics, and the very same techniques you apply here could be extended to scenarios like advanced cruise control in autonomous vehicles or human-robot collaboration in industry.
[image_0]: ./docs/misc/sim_screenshot.png
![alt text][image_0]
**Install Dependencies**
You'll need Python 3 and Jupyter Notebooks installed to do this project. If you don't have these set up already, the best way is to use Anaconda, following along with the [RoboND-Python-Starterkit](https://github.com/udacity/RoboND-Python-StarterKit).
If for some reason you choose not to use Anaconda, you must install the following frameworks and packages on your system:
* Python 3.x
* TensorFlow 1.2.1
* NumPy 1.11
* SciPy 0.17.0
* eventlet
* Flask
* h5py
* PIL
* python-socketio
* scikit-image
* transforms3d
* PyQt4/PyQt5
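
If you install these packages by hand, a quick sanity check like the one below (a minimal sketch; only the core scientific packages are checked, and your patch versions may differ) confirms that the key imports resolve at the expected versions:

```
# Sanity check: confirm the core packages import and report their versions.
import tensorflow as tf
import numpy as np
import scipy

print("TensorFlow:", tf.__version__)   # expect 1.2.1
print("NumPy:", np.__version__)        # expect 1.11.x
print("SciPy:", scipy.__version__)     # expect 0.17.0
```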
## Implement the Segmentation Network
1. Download the training dataset from above and extract to the project `data` directory.
2. Complete `project_nn_lib.py` by following the TODOs in `project_nn_lib_template.py` (a sketch of the general network shape appears after this list)
3. Complete `train.py` by following the TODOs in `train_template.py`
4. Train the network locally, or on [AWS](docs/aws_setup.md).
5. Continue to experiment with the training data and network until you attain the score you desire.
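
For orientation before you start on the TODOs, the sketch below shows the general shape of a fully convolutional encoder-decoder, the kind of architecture segmentation networks use. This is an illustrative toy, not the contents of `project_nn_lib_template.py`: the layer widths, input shape, and `num_classes` are assumptions, and it uses the standalone Keras API rather than whatever imports the template prescribes.

```
# Toy fully convolutional network: the encoder downsamples, the decoder
# upsamples back to input resolution so every pixel gets a class prediction.
from keras.layers import Input, Conv2D, Conv2DTranspose
from keras.models import Model

def build_toy_segnet(input_shape=(128, 128, 3), num_classes=3):
    inputs = Input(shape=input_shape)
    # Encoder: strided convolutions halve the spatial resolution each step.
    x = Conv2D(32, 3, strides=2, padding='same', activation='relu')(inputs)
    x = Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)
    # Decoder: transposed convolutions restore the original resolution.
    x = Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(x)
    outputs = Conv2DTranspose(num_classes, 3, strides=2, padding='same',
                              activation='softmax')(x)  # per-pixel class scores
    return Model(inputs=inputs, outputs=outputs)

model = build_toy_segnet()
model.compile(optimizer='adam', loss='categorical_crossentropy')
model.summary()
```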
```
data/validation/masks - contains masked (labeled) images for the validation set
data/weights - contains trained TensorFlow models
```
### Training Set ###
1. Run QuadSim
2. Click the `DL Training` button
3. Set patrol points, path points, and spawn points. **TODO** add link to data collection doc
4. With the simulator running, press "r" to begin recording.
5. In the file selection menu, navigate to the `data/train/run1` directory.
6. **Optional:** to speed up data collection, press "9" (1-9 will slow down collection speed).
7. When you have finished collecting data, hit "r" to stop recording.
8. To reset the simulator, hit "`<esc>`".
9. To collect multiple runs, create directories `data/train/run2`, `data/train/run3`, etc., and repeat the above steps (see the snippet after this list).
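
If you'd rather create the run directories up front, a couple of lines of Python will do it (the run count of three here is arbitrary):

```
# Pre-create data/train/run1 through run3 before recording; adjust the count as needed.
import os

for i in range(1, 4):
    os.makedirs(os.path.join("data", "train", "run{}".format(i)), exist_ok=True)
```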
### Validation Set ###
To collect the validation set, repeat the steps above, using the directory `data/validation` rather than `data/train`.
### Image Preprocessing ###
Before the network is trained, the images first need to undergo a preprocessing step. This step transforms the depth masks from the simulator into binary masks suitable for training a neural network. It also converts the images from .png to .jpeg to produce a reduced-size dataset suitable for uploading to AWS.
To run preprocessing:
```
$ python preprocess_ims.py
```
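
`preprocess_ims.py` is the canonical implementation; purely as an illustration of the binarization described above, the core of such a transform looks roughly like this (the threshold, file paths, and single-image scope are illustrative assumptions):

```
# Illustrative sketch of depth-mask binarization, not preprocess_ims.py itself.
import numpy as np
from PIL import Image

# Hypothetical input path; load the mask as a single-channel grayscale array.
depth_mask = np.array(Image.open("data/train/masks/example_mask.png").convert("L"))
binary_mask = (depth_mask > 0).astype(np.uint8) * 255  # nonzero depth -> foreground
Image.fromarray(binary_mask).save("example_mask_binary.jpeg", quality=90)
```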
## Training, Predicting and Scoring ##
Once your training and validation data has been generated, or downloaded from the above section of this repository, you are free to begin working with the neural net.
**Note**: Training CNNs is a very compute-intensive process. If your system does not have a recent Nvidia graphics card with [cuDNN](https://developer.nvidia.com/cudnn) and [CUDA](https://developer.nvidia.com/cuda) installed, you may need to perform the training step in the cloud. Instructions for using AWS to train your network in the cloud may be found [here](docs/aws_setup.md).
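
Before committing to a long run, it's worth checking whether TensorFlow can actually see your GPU. One quick way, using TensorFlow's own device-listing utility:

```
# List the devices TensorFlow can use; look for a device_type of 'GPU'.
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
```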
1. Copy your saved model to the weights directory `data/weights`.
2. Launch the simulator, select "Spawn People", and then click the "Follow Me" button.
3. Run the realtime follower script:
```
$ python follower.py my_amazing_model.h5
```
**Note:** If you'd like to see an overlay of the detected region on each camera frame from the drone, simply pass the `--pred_viz` parameter to `follower.py`.