
Commit 18216d8: V1.3.0
1 parent fb764d9

29 files changed, +1107 -3306 lines
README.md

Lines changed: 23 additions & 34 deletions
@@ -2,7 +2,7 @@

 ## Deep Learning Project ##

-In this project, you will train a deep neural network to identify and track a target in simulation and then issue commands to a drone to follow that target. So-called “follow me” applications like this are key to many fields of robotics and the very same techniques you apply here could be extended to scenarios like advanced cruise control in autonomous vehicles or human-robot collaboration in industry.
+In this project, you will train a deep neural network to identify and track a target in simulation. So-called “follow me” applications like this are key to many fields of robotics, and the very same techniques you apply here could be extended to scenarios like advanced cruise control in autonomous vehicles or human-robot collaboration in industry.

 [image_0]: ./docs/misc/sim_screenshot.png
 ![alt text][image_0]
@@ -29,27 +29,25 @@ The simulator binary can be downloaded [here](https://github.com/udacity/RoboND-

 **Install Dependencies**

-You'll need Python 3 and Jupyter Notebooks installed to do this project. The best way to get setup with these if you are not already is to use Anaconda following along with the [RoboND-Python-Starterkit](https://github.com/ryan-keenan/RoboND-Python-Starterkit).
+You'll need Python 3 and Jupyter Notebooks installed to do this project. The best way to get set up, if you are not already, is to use Anaconda, following along with the [RoboND-Python-Starterkit](https://github.com/udacity/RoboND-Python-StarterKit).

 If for some reason you choose not to use Anaconda, you must install the following frameworks and packages on your system:
 * Python 3.x
 * Tensorflow 1.2.1
 * NumPy 1.11
-* OpenCV 2
 * SciPy 0.17.0
 * eventlet
 * Flask
 * h5py
 * PIL
 * python-socketio
 * scikit-image
-* socketIO-client
 * transforms3d
+* PyQt4/PyQt5

 ## Implement the Segmentation Network
 1. Download the training dataset from above and extract to the project `data` directory.
-2. Complete `make_model.py`by following the TODOs in `make_model_template.py`
-3. Complete `data_iterator.py` by following the TODOs in `data_iterator_template.py`
+2. Complete `project_nn_lib.py` by following the TODOs in `project_nn_lib_template.py`
 4. Complete `train.py` by following the TODOs in `train_template.py`
 5. Train the network locally, or on [AWS](docs/aws_setup.md).
 6. Continue to experiment with the training data and network until you attain the score you desire.
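
For orientation, the network those TODOs build toward is a fully convolutional encoder/decoder that classifies every pixel. The sketch below is only a minimal illustration of that shape, not the project's prescribed architecture: the layer counts, filter sizes, three-class output, and the standalone `keras` import are all assumptions; the real structure comes from the TODOs in `project_nn_lib_template.py`.

```
# Minimal fully convolutional encoder/decoder sketch (illustrative only).
# All layer sizes and the 3-class output are assumptions, not the project's
# prescribed architecture.
from keras.layers import Conv2D, Input, UpSampling2D, concatenate
from keras.models import Model

def tiny_fcn(input_shape=(160, 160, 3), num_classes=3):
    inputs = Input(shape=input_shape)
    # Encoder: strided convolutions halve spatial resolution while adding depth.
    enc1 = Conv2D(32, 3, strides=2, padding='same', activation='relu')(inputs)
    enc2 = Conv2D(64, 3, strides=2, padding='same', activation='relu')(enc1)
    # 1x1 convolution: adds depth without discarding spatial information,
    # unlike a fully connected layer.
    mid = Conv2D(128, 1, padding='same', activation='relu')(enc2)
    # Decoder: upsample back to input resolution, reusing encoder features
    # through a skip connection.
    dec1 = concatenate([UpSampling2D(2)(mid), enc1])
    dec1 = Conv2D(64, 3, padding='same', activation='relu')(dec1)
    dec2 = UpSampling2D(2)(dec1)
    # Per-pixel softmax over the segmentation classes.
    outputs = Conv2D(num_classes, 1, padding='same', activation='softmax')(dec2)
    return Model(inputs, outputs)

model = tiny_fcn()
model.compile(optimizer='adam', loss='categorical_crossentropy')
```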
@@ -68,35 +66,24 @@ data/validation/masks - contains masked (labeled) images for the validation set
 data/weights - contains trained TensorFlow models
 ```

-### Training Set: with Hero Present ###
+### Training Set ###
 1. Run QuadSim
-2. Select `Use Hero Target`
-3. Select `With Other Poeple`
-4. Click the `DL Training` button
-5. With the simulator running, press "r" to begin recording.
-6. In the file selection menu navigate to the `data/train/target/run1` directory
-7. **optional** to speed up data collection, press "9" (1-9 will slow down collection speed)
-8. When you have finished collecting data, hit "r" to stop recording.
-9. To exit the simulator, hit "`<esc>`"
-
-### Training Set: without Hero Present ###
-1. Run QuadSim
-2. Make sure `Use Hero Target` is **NOT** selected
-3. Select `With Other Poeple`
-4. Click the `DL Training` button
-5. With the simulator running, press "r" to begin recording.
-6. In the file selection menu navigate to the `data/train/non_target/run1` directory.
-7. **optional** to speed up data collection, press "9" (1-9 will slow down collection speed)
-8. When you have finished collecting data, hit "r" to stop recording.
-9. To exit the simulator, hit "`<esc>`"
+2. Click the `DL Training` button
+3. Set patrol points, path points, and spawn points. **TODO** add link to data collection doc
+4. With the simulator running, press "r" to begin recording.
+5. In the file selection menu navigate to the `data/train/run1` directory
+6. **optional** to speed up data collection, press "9" (1-9 will slow down collection speed)
+7. When you have finished collecting data, hit "r" to stop recording.
+8. To reset the simulator, hit "`<esc>`"
+9. To collect multiple runs, create directories `data/train/run2`, `data/train/run3` and repeat the above steps.

 ### Validation Set ###
 To collect the validation set, repeat the steps above, using the directory `data/validation` rather than `data/train`.

 ### Image Preprocessing ###
-Before the network is trained, the images first need to be undergo a preprocessing step.
-**TODO**: Explain what preprocessing does, approximately.
-To run preprocesing:
+Before the network is trained, the images first need to undergo a preprocessing step. The preprocessing step transforms the depth masks from the sim into binary masks suitable for training a neural network. It also converts the images from .png to .jpeg to create a reduced-size dataset, suitable for uploading to AWS.
+To run preprocessing:
 ```
 $ python preprocess_ims.py
 ```
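
The shipped `preprocess_ims.py` is the authoritative version of that step; the sketch below only illustrates the two transformations the paragraph describes. The directory layout, file naming, depth threshold, and JPEG quality here are all assumptions.

```
# Illustrative sketch of the preprocessing described above, NOT the shipped
# preprocess_ims.py. Directory layout, file naming, and the depth threshold
# are assumptions.
import glob
import numpy as np
from PIL import Image

# 1) Depth masks from the sim -> binary masks suitable for training.
for mask_path in glob.glob('data/train/run1/masks/*.png'):   # assumed layout
    depth = np.asarray(Image.open(mask_path))
    binary = np.where(depth > 0, 255, 0).astype(np.uint8)    # assumed threshold
    Image.fromarray(binary).save(mask_path)

# 2) .png camera frames -> smaller .jpeg files for uploading to AWS.
for img_path in glob.glob('data/train/run1/images/*.png'):   # assumed layout
    Image.open(img_path).convert('RGB').save(
        img_path.replace('.png', '.jpeg'), quality=85)       # assumed quality
```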
@@ -105,7 +92,7 @@ $ python preprocess_ims.py
 ## Training, Predicting and Scoring ##
 With your training and validation data having been generated or downloaded from the above section of this repository, you are free to begin working with the neural net.

-**Note**: Training CNNs is a very compute-intensive process. If your system does not have a recent Nvidia graphics card, with [cuDNN](https://developer.nvidia.com/cudnn) installed , you may need to perform the training step in the cloud. Instructions for using AWS to train your network in the cloud may be found [here](docs/aws_setup.md)
+**Note**: Training CNNs is a very compute-intensive process. If your system does not have a recent Nvidia graphics card with [cuDNN](https://developer.nvidia.com/cudnn) and [CUDA](https://developer.nvidia.com/cuda) installed, you may need to perform the training step in the cloud. Instructions for using AWS to train your network in the cloud may be found [here](docs/aws_setup.md).

 ### Training your Model ###
 **Prerequisites**
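
One prerequisite worth verifying before a long training run is that TensorFlow can actually see the GPU. A minimal check, assuming the TensorFlow 1.x API pinned in the dependency list:

```
# Quick check that TensorFlow 1.x sees a CUDA-capable GPU; if no '/gpu:0'
# device is listed, training will run on the CPU instead.
from tensorflow.python.client import device_lib

print([d.name for d in device_lib.list_local_devices()])
# e.g. ['/cpu:0', '/gpu:0'] on a machine with working CUDA/cuDNN
```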
@@ -160,8 +147,10 @@ average squared log pixel distance error 1.4663195103

 ## Experimentation: Testing in Simulation
 1. Copy your saved model to the weights directory `data/weights`.
-2. Launch the simulator, select "Spawn People", and then click the "Follow Me" button.
-3. Run `server.py` to launch the socketio server.
-4. Run the realtime follower script `$ realtime_follower.py my_awesome_model.h5`
+2. Launch the simulator, select "Spawn People", and then click the "Follow Me" button.
+3. Run the realtime follower script:
+```
+$ python follower.py my_amazing_model.h5
+```

-**Note:** If you'd like to see an overlay of the detected region on each camera frame from the drone, simply pass the `--overlay_viz` parameter to `realtime_follower.py`
+**Note:** If you'd like to see an overlay of the detected region on each camera frame from the drone, simply pass the `--pred_viz` parameter to `follower.py`.
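
For example, combining the command and the flag above (the flag's exact placement on the command line is an assumption):

```
$ python follower.py my_amazing_model.h5 --pred_viz
```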

code/build_model.py

Lines changed: 0 additions & 5 deletions
This file was deleted.

code/evaluate.py

Lines changed: 0 additions & 62 deletions
This file was deleted.
