This repository was archived by the owner on Dec 2, 2021. It is now read-only.

Commit 2b36a2d

V1.5.0

1 parent 444d1d4 commit 2b36a2d

20 files changed, +39 -206 lines

README.md: +39 -36
@@ -15,11 +15,13 @@ $ git clone https://github.com/udacity/RoboND-DeepLearning.git

**Download the data**

-Save the following two files into the data folder of the cloned repository.
+Save the following three files into the data folder of the cloned repository.

-[Training Data](https://s3-us-west-1.amazonaws.com/udacity-robotics/Deep+Learning+Data/train.zip)
+[Training Data](https://s3-us-west-1.amazonaws.com/udacity-robotics/Deep+Learning+Data/Lab/train.zip)

-[Validation Data](https://s3-us-west-1.amazonaws.com/udacity-robotics/Deep+Learning+Data/validation.zip)
+[Validation Data](https://s3-us-west-1.amazonaws.com/udacity-robotics/Deep+Learning+Data/Lab/validation.zip)
+
+[Sample Evaluation Data](https://s3-us-west-1.amazonaws.com/udacity-robotics/Deep+Learning+Data/Project/sample_evaluation_data.zip)

**Download the QuadSim binary**

@@ -48,7 +50,7 @@ If for some reason you choose not to use Anaconda, you must install the followin

## Implement the Segmentation Network
1. Download the training dataset from above and extract to the project `data` directory.
2. Implement your solution in model_training.ipynb
-3. Train the network locally, or on [AWS](docs/aws_setup.md).
+3. Train the network locally, or on [AWS](https://classroom.udacity.com/nanodegrees/nd209/parts/09664d24-bdec-4e64-897a-d0f55e177f09/modules/cac27683-d5f4-40b4-82ce-d708de8f5373/lessons/197a058e-44f6-47df-8229-0ce633e0a2d0/concepts/27c73209-5d7b-4284-8315-c0e07a7cd87f?contentVersion=1.0.0&contentLocale=en-us).
4. Continue to experiment with the training data and network until you attain the score you desire.
5. Once you are comfortable with performance on the training dataset, see how it performs in live simulation!

@@ -88,61 +90,62 @@ $ python preprocess_ims.py

**Note**: If your data is stored as suggested in the steps above, this script should run without error.

+**Important Note 1:**
+
+Running `preprocess_ims.py` does *not* delete files in the processed_data folder. This means that if you leave images in processed_data and collect a new dataset, some of the data in processed_data will be overwritten and some will be left as is. It is recommended to **delete** the train and validation folders inside processed_data (or the entire folder) before running `preprocess_ims.py` with a new set of collected data.
+
+**Important Note 2:**
+
+The notebook and supporting code assume your data for training/validation is in data/train and data/validation. After you run `preprocess_ims.py` you will have new `train`, and possibly `validation`, folders in `processed_ims`. Rename or move `data/train` and `data/validation`, then move `data/processed_ims/train` and `data/processed_ims/validation` into `data/`.
+
+**Important Note 3:**
+
+Merging multiple `train` or `validation` sets may be difficult, so it is recommended that data choices be determined by what you include in `raw_sim_data/train/run1`, with possibly many different runs in that directory. You can create a temporary folder in `data/` to store raw run data you don't currently want to use but that may be useful later. Choose which `run_x` folders to include in `raw_sim_data/train` and `raw_sim_data/validation`, then run `preprocess_ims.py` from within the `code/` directory to generate your new training and validation sets.
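The folder moves described in Important Notes 1-3 can be scripted. The snippet below is only an illustrative sketch, not part of this repository; the backup folder name and the `data/processed_ims` path are assumptions taken from the notes above, so adjust them to whatever `preprocess_ims.py` actually produces on your machine.

```python
# Hypothetical helper (not part of this repo): archive the old train/validation
# folders and promote the freshly preprocessed ones, as the notes above suggest.
import shutil
from pathlib import Path

data = Path("data")
backup = data / "old_sets"            # assumed backup location
backup.mkdir(parents=True, exist_ok=True)

for name in ("train", "validation"):
    old = data / name
    new = data / "processed_ims" / name   # assumed output path of preprocess_ims.py
    if old.exists():
        # keep the previous set around instead of deleting it outright
        shutil.move(str(old), str(backup / name))
    if new.exists():
        # promote the freshly preprocessed set into data/
        shutil.move(str(new), str(old))
```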
## Training, Predicting and Scoring ##
With your training and validation data having been generated or downloaded from the above section of this repository, you are free to begin working with the neural net.

-**Note**: Training CNNs is a very compute-intensive process. If your system does not have a recent Nvidia graphics card, with [cuDNN](https://developer.nvidia.com/cudnn) and [CUDA](https://developer.nvidia.com/cuda) installed , you may need to perform the training step in the cloud. Instructions for using AWS to train your network in the cloud may be found [here](docs/aws_setup.md)
+**Note**: Training CNNs is a very compute-intensive process. If your system does not have a recent Nvidia graphics card with [cuDNN](https://developer.nvidia.com/cudnn) and [CUDA](https://developer.nvidia.com/cuda) installed, you may need to perform the training step in the cloud. Instructions for using AWS to train your network in the cloud may be found [here](https://classroom.udacity.com/nanodegrees/nd209/parts/09664d24-bdec-4e64-897a-d0f55e177f09/modules/cac27683-d5f4-40b4-82ce-d708de8f5373/lessons/197a058e-44f6-47df-8229-0ce633e0a2d0/concepts/27c73209-5d7b-4284-8315-c0e07a7cd87f?contentVersion=1.0.0&contentLocale=en-us).

### Training your Model ###
**Prerequisites**
-- Net has been implemented as per these instructions
- Training data is in `data` directory
+- Validation data is in the `data` directory
+- The folders `data/train/images/`, `data/train/masks/`, `data/validation/images/`, and `data/validation/masks/` should exist and contain the appropriate data
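Before training, a quick sanity check that these folders exist can save a wasted run. A minimal, hypothetical sketch (not part of the repo), assuming it is run from the `code/` directory with `data/` one level up:

```python
# Hypothetical pre-flight check for the folder layout listed above.
import os

required = [
    "../data/train/images", "../data/train/masks",
    "../data/validation/images", "../data/validation/masks",
]
for folder in required:
    count = len(os.listdir(folder)) if os.path.isdir(folder) else 0
    print(f"{folder}: {count} files" + ("" if count else "  <-- missing or empty"))
```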

-To train, simply run the training script, `train.py`, giving it the name of the model weights file as a parameter:
-```
-$ python train.py my_amazing_model.h5
-```
-After the training run has completed, your model will be stored in the `data/weights` directory as an [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) file.
+To train, complete the network definition in the `model_training.ipynb` notebook and then run the training cell with appropriate hyperparameters selected.
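For reference, the training cell in a notebook like this typically just sets a handful of hyperparameters that the rest of the notebook consumes. The names and values below are illustrative assumptions, not the notebook's actual variables:

```python
# Hypothetical hyperparameter cell -- names and values are assumptions,
# not the project's actual notebook code. Tune these and re-run training.
learning_rate = 0.001    # step size for the optimizer
batch_size = 32          # images per gradient update
num_epochs = 10          # full passes over the training set
steps_per_epoch = 200    # batches drawn from data/train each epoch
validation_steps = 50    # batches drawn from data/validation each epoch
```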

-### Predicting on the Validation Set and Evaluating the Results ###
-**Prerequisites**
--Model has been trained
--Validation set has been collected
+After the training run has completed, your model will be stored in the `data/weights` directory as an [HDF5](https://en.wikipedia.org/wiki/Hierarchical_Data_Format) file, along with a configuration_weights file. As long as they are both in the same location, things should work.

-Once the network has been trained, you can run inference on the validation set using `predict.py`. This script requires two parameters, the name of the model file you wish to perform prediction with, and the output directory where you would like to store your prediction results.
+**Important Note**: the *validation* directory is used to store data that will be used during training to produce the loss plots and to help determine when the network is overfitting your data.

-```
-$ python predict.py my_amazing_model.h5 my_prediction_run
-```
+The **sample_evaluation_data** directory contains data specifically designed to test the network's performance on the Follow Me task. In sample_evaluation_data there are three directories, each generated using a different sampling method. The structure of these directories is exactly the same as the `validation` and `train` datasets provided to you. For instance, `patrol_with_targ` contains an `images` and a `masks` subdirectory. If you would like to run the evaluation code on your `validation` data, a copy of it should be moved into `sample_evaluation_data`, and then the appropriate arguments changed in the function calls in the `model_training.ipynb` notebook.

-For the prediction run above, the results will be stored in `data/runs/my_prediction_run`.
+The notebook has examples of how to evaluate your model once you finish training. Think about the sourcing methods and how the information provided in the evaluation sections relates to the final score, then try things out that seem like they may work.

-To get a sense of the overall performance of the your net on the prediction set, you can use `evaluation.py` as follows:
+## Scoring ##

-```
-$ python evaluate.py validation my_prediction_run
-average intersection over union 0.34498680536
-number of validation samples evaluated on 1000
-number of images with target detected: 541
-number of images false positives is: 4
-average squared pixel distance error 11.0021170157
-average squared log pixel distance error 1.4663195103
-```
+To score the network on the Follow Me task, two types of error are measured. First, the intersection over union for the pixelwise classifications is computed for the target channel.

-## Scoring ##
-**TODO**
+In addition to this, we determine whether the network detected the target person or not. If more than 3 pixels have probability greater than 0.5 of being the target person, then this counts as the network guessing the target is in the image.
+
+We determine whether the target is actually in the image by checking whether there are more than 3 pixels containing the target in the label mask.
+
+Using the above, the numbers of detection true positives, false positives, and false negatives are counted.
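The two measurements described above can be illustrated with a few lines of NumPy. This is only a sketch of the idea, not the repository's scoring code; the array names, the 0.5 threshold, and the 3-pixel rule are taken from the description above.

```python
# Illustrative sketch of the scoring idea described above (not the repo's code).
# `pred` holds per-pixel probabilities for the target channel of one image,
# `mask` is the ground-truth target channel (0 or 1) -- both are assumptions.
import numpy as np

def target_iou(pred, mask, threshold=0.5):
    """Intersection over union for the target channel of a single image."""
    pred_pix = pred > threshold
    true_pix = mask > 0
    union = np.logical_or(pred_pix, true_pix).sum()
    if union == 0:
        return 1.0  # no target predicted or present; treat IoU as perfect
    return np.logical_and(pred_pix, true_pix).sum() / union

def detections(pred, mask, threshold=0.5, min_pixels=3):
    """Return (network_says_target, target_really_there) for one image."""
    guessed = (pred > threshold).sum() > min_pixels
    present = (mask > 0).sum() > min_pixels
    return guessed, present
```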

**How the Final score is Calculated**

-**TODO**
+The final score is the pixelwise `average_IoU*(n_true_positive/(n_true_positive+n_false_positive+n_false_negative))` on data similar to that provided in sample_evaluation_data.
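Putting the pieces together, the final score in that formula could be computed roughly as follows. This sketch reuses the hypothetical `target_iou` and `detections` helpers from the previous snippet and assumes the average IoU is taken over true-positive images; the real scoring code may differ.

```python
# Sketch of the final-score formula above, reusing the hypothetical helpers.
def final_score(preds, masks):
    ious, tp, fp, fn = [], 0, 0, 0
    for pred, mask in zip(preds, masks):
        guessed, present = detections(pred, mask)
        if guessed and present:
            tp += 1
            ious.append(target_iou(pred, mask))  # averaging over true positives is an assumption
        elif guessed and not present:
            fp += 1
        elif present and not guessed:
            fn += 1
    weight = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    average_iou = sum(ious) / len(ious) if ious else 0.0
    return average_iou * weight
```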

**Ideas for Improving your Score**

-**TODO**
+Collect more data from the sim. Look at the predictions, think about what the network is getting wrong, then collect data to counteract this. Or improve your network architecture and hyperparameters.

-**Obtaining a leaderboard score**
+**Obtaining a Leaderboard Score**

-**TODO**
+Share your scores in Slack, and keep a tally in a pinned message. Scores should be computed on the sample_evaluation_data. This is for fun; your grade will be determined on unreleased data. If you use the sample_evaluation_data to train the network, it will result in inflated scores, and you will not be able to tell how your network will actually perform when it is evaluated for your grade.

## Experimentation: Testing in Simulation
1. Copy your saved model to the weights directory `data/weights`.

docs/SETUP.md (-16 lines): file deleted.
docs/aws_setup.md (-126 lines): file deleted.
docs/misc/AWS setup images/1.png (-150 KB): binary file not shown.
docs/misc/AWS setup images/10.png (-83.3 KB): binary file not shown.
docs/misc/AWS setup images/11.png (-45.4 KB): binary file not shown.
docs/misc/AWS setup images/12.png (-75.1 KB): binary file not shown.
docs/misc/AWS setup images/13.png (-49.8 KB): binary file not shown.
docs/misc/AWS setup images/14.png (-54.8 KB): binary file not shown.
docs/misc/AWS setup images/15.png (-27.2 KB): binary file not shown.
docs/misc/AWS setup images/16.png (-150 KB): binary file not shown.
docs/misc/AWS setup images/2.png (-9.06 KB): binary file not shown.
docs/misc/AWS setup images/3.png (-86.8 KB): binary file not shown.
docs/misc/AWS setup images/4.png (-93.8 KB): binary file not shown.
docs/misc/AWS setup images/5.png (-175 KB): binary file not shown.
docs/misc/AWS setup images/6.png (-109 KB): binary file not shown.
docs/misc/AWS setup images/7.png (-47.9 KB): binary file not shown.
docs/misc/AWS setup images/8.png (-56.9 KB): binary file not shown.
docs/misc/AWS setup images/9.png (-61.3 KB): binary file not shown.
docs/training_and_scoring.md (-28 lines): file deleted.

0 commit comments
