[](https://www.udacity.com/robotics)

## Deep Learning Project ##

In this project, you will train a deep neural network to identify and track a target in simulation and then issue commands to a drone to follow that target. So-called “follow me” applications like this are key to many fields of robotics and the very same techniques you apply here could be extended to scenarios like advanced cruise control in autonomous vehicles or human-robot collaboration in industry.

[image_0]: ./docs/misc/sim_screenshot.png
![alt text][image_0]

## Setup Instructions
**Clone the repository**
```
$ git clone https://github.com/udacity/RoboND-DeepLearning-Private.git
```

**Download the QuadSim binary**

To interface your neural net with the QuadSim simulator, you must use a version of QuadSim that has been custom tailored for this project. The previous version that you might have used for the Controls lab will not work.

The simulator binary can be downloaded [here](https://github.com/udacity/RoboND-DeepLearning-Private/releases).

**Install Dependencies**

You'll need Python 3 and Jupyter Notebooks installed to do this project. If you don't already have them, the best way to get set up is to use Anaconda, following along with the [RoboND-Python-Starterkit](https://github.com/ryan-keenan/RoboND-Python-Starterkit).

If for some reason you choose not to use Anaconda, you must install the following frameworks and packages on your system:
* Python 3
* TensorFlow 1.2.1
* NumPy 1.11
* OpenCV 2
* SciPy 0.17.0
* eventlet
* Flask
* h5py
* PIL
* python-socketio
* scikit-image
* socketIO-client

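If you go the manual route, a quick way to confirm the core packages are importable is a short check like the one below (a convenience sketch, not part of the project):

```python
# Sanity check: confirm the core dependencies import and report their versions.
import tensorflow as tf
import numpy as np
import scipy
import skimage
import h5py

for name, module in [('tensorflow', tf), ('numpy', np), ('scipy', scipy),
                     ('scikit-image', skimage), ('h5py', h5py)]:
    print(name, module.__version__)
```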
## Implement the Segmentation Network
1. Download the training dataset from [here](https://github.com/udacity/RoboND-DeepLearning-Private/releases), and extract to the project `data` directory.
2. Complete `make_model.py` by following the TODOs in `make_model_template.py` (a rough sketch of this kind of network appears just after this list).
3. Complete `data_iterator.py` by following the TODOs in `data_iterator_template.py`
4. Complete `train.py` by following the TODOs in `train_template.py`
5. Train the network locally, or on [AWS](docs/aws_setup.md).
6. Continue to experiment with the training data and network until you attain the score you desire.
7. Once you are comfortable with performance on the training dataset, see how it performs in live simulation!

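To give a feel for what `make_model.py` asks for, here is a rough sketch of a small fully convolutional encoder-decoder for per-pixel segmentation. The layer sizes, the three-class output, and the use of the standalone Keras API are illustrative assumptions, not the project's reference solution:

```python
# Illustrative encoder-decoder segmentation network (not the reference solution).
from keras import layers, models

def build_fcn(input_shape=(256, 256, 3), num_classes=3):
    inputs = layers.Input(shape=input_shape)

    # Encoder: strided convolutions downsample while extracting features.
    x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(inputs)
    x = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)

    # 1x1 convolution preserves spatial information (no flattening/dense layer).
    x = layers.Conv2D(128, 1, padding='same', activation='relu')(x)

    # Decoder: upsample back to the input resolution.
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, padding='same', activation='relu')(x)

    # Per-pixel class scores (e.g. background / other people / hero).
    outputs = layers.Conv2D(num_classes, 1, activation='softmax')(x)
    return models.Model(inputs, outputs)
```

Skip connections between encoder and decoder layers typically sharpen segmentation boundaries and are worth experimenting with once the basic network trains.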
## Collecting Training Data ##
A simple training dataset has been provided in the [releases](https://github.com/udacity/RoboND-DeepLearning-Private/releases) section of this repository. This dataset will allow you to verify that your segmentation network is semi-functional. However, if you're interested in improving your score, you may want to collect additional training data. To do so, follow the steps below.

The data directory is organized as follows:
```
data/runs - contains the results of prediction runs
data/train/images - contains images for the training set
data/train/masks - contains masked (labeled) images for the training set
data/validation/images - contains images for the validation set
data/validation/masks - contains masked (labeled) images for the validation set
data/weights - contains trained TensorFlow models
```

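For orientation when you work on `data_iterator.py`, the images and masks of a split can be paired by matching sorted file lists. The sketch below only illustrates that idea; the helper name and the assumption of matching file names are mine:

```python
# Hypothetical helper: pair each image with its mask, assuming matching,
# sortable file names in the directory layout shown above.
import glob
import os

def list_pairs(split='train', data_dir='data'):
    images = sorted(glob.glob(os.path.join(data_dir, split, 'images', '*')))
    masks = sorted(glob.glob(os.path.join(data_dir, split, 'masks', '*')))
    assert len(images) == len(masks), 'every image needs a corresponding mask'
    return list(zip(images, masks))

# Example: count the training pairs before building the real batch iterator.
print(len(list_pairs('train')))
```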
### Training Set: with Hero Present ###
1. Run QuadSim
2. Select `Use Hero Target`
3. Select `With Other People`
4. Click the `DL Training` button
5. With the simulator running, press "r" to begin recording.
6. In the file selection menu, navigate to the `data/train/target/hero_train1` directory
7. **Optional**: to speed up data collection, press "9" (keys 1-9 set the collection speed, with lower numbers being slower)
8. When you have finished collecting data, hit "r" to stop recording.
9. To exit the simulator, hit "`<esc>`"

### Training Set: without Hero Present ###
1. Run QuadSim
2. Make sure `Use Hero Target` is **NOT** selected
3. Select `With Other People`
4. Click the `DL Training` button
5. With the simulator running, press "r" to begin recording.
6. In the file selection menu, navigate to the `data/train/non_target/run_train1` directory.
7. **Optional**: to speed up data collection, press "9" (keys 1-9 set the collection speed, with lower numbers being slower)
8. When you have finished collecting data, hit "r" to stop recording.
9. To exit the simulator, hit "`<esc>`"

### Validation Set ###
To collect the validation set, repeat both sets of steps above, but use the directory `data/validation` rather than `data/train`.

### Image Preprocessing ###
Before the network is trained, the images first need to undergo a preprocessing step.
**TODO**: Explain what preprocessing does, approximately.
To run preprocessing:
```
$ python preprocess_ims.py
```
**Note**: If your data is stored as suggested in the steps above, this script should run without error.

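The provided `preprocess_ims.py` handles this step for you. As a rough mental model only (an assumption about what such a step typically does, not a description of the actual script), preprocessing usually means resizing the raw screenshots and converting the color-coded masks into per-pixel class labels:

```python
# Illustrative only: resize an image/mask pair and convert the color-coded mask
# into integer class labels. The real preprocess_ims.py may differ in detail.
import numpy as np
from scipy import misc  # uses PIL under the hood; both are in the dependency list

def preprocess_pair(image_path, mask_path, size=(256, 256)):
    image = misc.imresize(misc.imread(image_path), size)
    mask = misc.imresize(misc.imread(mask_path), size, interp='nearest')
    # Hypothetical encoding: treat the dominant color channel as the class id.
    labels = np.argmax(mask[..., :3], axis=-1)
    return image.astype(np.float32) / 255.0, labels
```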
## Training, Predicting and Scoring ##
With your training and validation data generated (or downloaded from the [releases](https://github.com/udacity/RoboND-DeepLearning-Private/releases) section of this repository), you are free to begin working with the neural net.

**Note**: Training CNNs is a very compute-intensive process. If your system does not have a recent Nvidia graphics card with [cuDNN](https://developer.nvidia.com/cudnn) installed, you may need to perform the training step in the cloud. Instructions for using AWS to train your network in the cloud may be found [here](docs/aws_setup.md).

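If you're not sure whether your local TensorFlow install can see a GPU, one quick check (a small convenience snippet, not part of the project) is to list the devices TensorFlow detects:

```python
# Lists the compute devices TensorFlow can use; look for a '/gpu:0' entry.
from tensorflow.python.client import device_lib

print([device.name for device in device_lib.list_local_devices()])
```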
### Training your Model ###
**Prerequisites**
- The network has been implemented as per the instructions above
- Training data is in the `data` directory

To train, simply run the training script, `train.py`, giving it the name of the model weights file as a parameter:
```
$ python train.py my_amazing_model.h5
```
After the training run has completed, your model will be stored in the `data/weights` directory as an [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) file.

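Conceptually, `train.py` compiles the model, fits it to the image/mask pairs produced by your data iterator, and saves the weights. The self-contained toy below (random stand-in data, standalone Keras, an arbitrary tiny model) only illustrates that compile/fit/save cycle; it is not the project's training script:

```python
# Toy illustration of the compile -> fit -> save cycle using random data.
import numpy as np
from keras import layers, models

model = models.Sequential([
    layers.Conv2D(8, 3, padding='same', activation='relu', input_shape=(64, 64, 3)),
    layers.Conv2D(3, 1, activation='softmax'),  # 3 per-pixel classes
])
model.compile(optimizer='adam', loss='categorical_crossentropy')

images = np.random.rand(16, 64, 64, 3).astype('float32')   # stand-in images
labels = np.random.rand(16, 64, 64, 3).astype('float32')   # stand-in masks
model.fit(images, labels, epochs=1, batch_size=4)

# Saved in the same HDF5 format the project uses for data/weights/*.h5
model.save('my_amazing_model.h5')
```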
### Predicting on the Validation Set and Evaluating the Results ###
**Prerequisites**
- Model has been trained
- Validation set has been collected

Once the network has been trained, you can run inference on the validation set using `predict.py`. This script requires two parameters: the name of the model file you wish to perform prediction with, and the output directory where you would like to store your prediction results.

```
$ python predict.py my_amazing_model.h5 my_prediction_run
```

For the prediction run above, the results will be stored in `data/runs/my_prediction_run`.

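Under the hood, prediction amounts to loading the saved HDF5 model and running it over each validation image, writing a predicted mask into the run directory. A hedged sketch of that loop follows; the paths and post-processing are assumptions, and the real `predict.py` handles the details for you:

```python
# Illustrative prediction loop; the real predict.py handles arguments and I/O.
import glob
import os
import numpy as np
from keras import models
from scipy import misc

model = models.load_model('data/weights/my_amazing_model.h5')
run_dir = 'data/runs/my_prediction_run'
os.makedirs(run_dir, exist_ok=True)

for path in glob.glob('data/validation/images/*'):
    # Assumes the images were already preprocessed to the network's input size.
    image = misc.imread(path).astype(np.float32) / 255.0
    probs = model.predict(image[np.newaxis, ...])[0]      # per-pixel class scores
    labels = np.argmax(probs, axis=-1).astype(np.uint8)   # predicted class per pixel
    misc.imsave(os.path.join(run_dir, os.path.basename(path)), labels)
```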
To get a sense of the overall performance of your net on the prediction set, you can use `evaluate.py` as follows:

```
$ python evaluate.py validation my_prediction_run
average intersection over union 0.34498680536
number of validation samples evaluated on 1000
number of images with target detected: 541
number of images false positives is: 4
average squared pixel distance error 11.0021170157
average squared log pixel distance error 1.4663195103
```
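The first metric, intersection over union, compares the set of pixels the network assigns to a class with the set of pixels the ground-truth mask assigns to that class. A small NumPy illustration of the computation (not the project's exact scoring code):

```python
# Intersection over union for a single class, from boolean prediction and
# ground-truth masks of the same shape.
import numpy as np

def iou(pred_mask, true_mask):
    intersection = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return intersection / union if union > 0 else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
print(iou(pred, truth))  # 2 overlapping pixels / 4 pixels in the union = 0.5
```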

## Scoring ##
**TODO**

**How the Final Score is Calculated**

**TODO**

**Ideas for Improving your Score**

**TODO**

**Obtaining a Leaderboard Score**

**TODO**

## Experimentation: Testing in Simulation
1. Copy your saved model to the weights directory `data/weights`.
2. Launch the simulator, select "Spawn People", and then click the "Follow Me" button.
3. Run `server.py` to launch the socketio server.
4. Run the realtime follower script: `$ realtime_follower.py my_awesome_model`

**Note:** If you'd like to see an overlay of the detected region on each camera frame from the drone, simply pass the `--overlay_viz` parameter to `realtime_follower.py`.