This repository was archived by the owner on Dec 2, 2021. It is now read-only.

Commit 2ec8478

Fixed bug in follower.py resulting in the detection of non-target people.
Fixed bug with marker box generation. Updated phrasing.
1 parent 6676c7e commit 2ec8478

File tree

3 files changed, +16 -11 lines changed

README.md

+2 -2
@@ -55,9 +55,9 @@ If for some reason you choose not to use Anaconda, you must install the followin
 5. Once you are comfortable with performance on the training dataset, see how it performs in live simulation!
 
 ## Collecting Training Data ##
-A simple training dataset has been provided above in this repository. This dataset will allow you to verify that you're segmentation network is semi-functional. However, if you're interested in improving your score, you may be interested in collecting additional training data. To do, please see the following steps.
+A simple training dataset has been provided in this project's repository. This dataset will allow you to verify that your segmentation network is semi-functional. However, if you're interested in improving your score, you may want to collect additional training data. To do so, please see the following steps.
 
-The data directory is organized as follows:
+The data directory is organized as follows:
 ```
 data/runs - contains the results of prediction runs
 data/train/images - contains images for the training set

code/follower.py

+8 -3
@@ -57,6 +57,10 @@
 
 import time
 
+import signal
+import sys
+
+
 # Create socketio server and Flask app
 sio = socketio.Server()
 app = Flask(__name__)
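The newly imported `signal` and `sys` modules suggest a graceful-shutdown hook for the server process, though this hunk does not show their use. A minimal sketch of how such a handler is typically wired up; the handler name and exit behavior here are assumptions, not taken from this commit:

```python
import signal
import sys

def shutdown_handler(signum, frame):
    # Hypothetical handler: exit cleanly on Ctrl-C instead of dumping a traceback.
    print("Shutting down follower...")
    sys.exit(0)

signal.signal(signal.SIGINT, shutdown_handler)
```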
@@ -107,6 +111,7 @@ def __init__(self, image_hw, model, pred_viz_enabled = False, queue=None):
         self.pred_viz_enabled = pred_viz_enabled
         self.target_found = False
 
+
     def on_sensor_frame(self, data):
         rgb_image = Image.open(BytesIO(base64.b64decode(data['rgb_image'])))
         rgb_image = np.asarray(rgb_image)
@@ -124,7 +129,7 @@ def on_sensor_frame(self, data):
         if self.pred_viz_enabled:
             self.queue.put([rgb_image, pred])
 
-        target_mask = pred[:, :, 1] > 0.5
+        target_mask = pred[:, :, 2] > 0.5
         # reduce the number of false positives by requiring more pixels to be identified as containing the target
         if target_mask.sum() > 10:
             centroid = scoring_utils.get_centroid_largest_blob(target_mask)
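For context, this detection step thresholds one class channel of the network's per-pixel prediction (the fix switches to channel 2), rejects masks with too few positive pixels, and then takes the centroid of the largest blob. A standalone sketch of that logic; the channel index and thresholds follow the diff, but the blob-centroid code is a simplified stand-in for `scoring_utils.get_centroid_largest_blob`, whose internals are not shown here:

```python
import numpy as np
from scipy import ndimage

def detect_target(pred, channel=2, prob_thresh=0.5, min_pixels=10):
    """Return the (row, col) centroid of the largest target blob, or None."""
    target_mask = pred[:, :, channel] > prob_thresh
    # Require enough positive pixels to suppress spurious single-pixel hits.
    if target_mask.sum() <= min_pixels:
        return None
    # Label connected components and keep only the largest one.
    labels, num = ndimage.label(target_mask)
    if num == 0:
        return None
    sizes = ndimage.sum(target_mask, labels, range(1, num + 1))
    largest = (labels == (np.argmax(sizes) + 1))
    return ndimage.center_of_mass(largest)
```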
@@ -136,7 +141,7 @@ def on_sensor_frame(self, data):
             depth_img = get_depth_image(data['depth_image'])
 
             # Get XYZ coordinates for specific pixel
-            pixel_depth = depth_img[centroid[0]][centroid[1]][0]*100/255.0
+            pixel_depth = depth_img[centroid[0]][centroid[1]][0]*50/255.0
             point_3d = get_xyz_from_image(centroid[0], centroid[1], pixel_depth, self.image_hw)
             point_3d.append(1)
 
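The changed constant rescales the 8-bit depth value before back-projection; the commit halves the assumed maximum range from 100 to 50 units. A hedged sketch of what such a conversion followed by a pinhole back-projection might look like; the field of view, focal-length computation, and metric units are illustrative assumptions, not the actual body of `get_xyz_from_image`:

```python
import math

def xyz_from_pixel(row, col, depth_byte, image_hw, max_range=50.0, fov_deg=90.0):
    """Back-project one pixel to camera-frame XYZ, assuming a pinhole model."""
    # Map the 8-bit depth value [0, 255] onto [0, max_range].
    depth = depth_byte * max_range / 255.0
    h, w = image_hw
    # Focal length in pixels, derived from an assumed horizontal field of view.
    f = (w / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    x = (col - w / 2.0) * depth / f
    y = (row - h / 2.0) * depth / f
    return [x, y, depth]
```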
@@ -217,5 +222,5 @@ def sio_server():
 
     follower = Follower(image_hw, model, args.pred_viz, queue)
     # start eventlet server
-
+
     sio_server()

code/model_training.ipynb

+6 -6
@@ -122,18 +122,18 @@
    "metadata": {},
    "source": [
     "## Build the Model <a id='build'></a>\n",
-    "In the following cells, you will build an FCN to train a model to detect the hero target and location within an image. The steps are:\n",
+    "In the following cells, you will build an FCN to train a model to detect and locate the hero target within an image. The steps are:\n",
     "- Create an `encoder_block`\n",
     "- Create a `decoder_block`\n",
-    "- Build the FCN consiting of encoder block(s), a 1x1 convolution, and decoder block(s). This step requires experimentation with different numbers of layers and filter sizes to build your model."
+    "- Build the FCN consisting of encoder block(s), a 1x1 convolution, and decoder block(s). This step requires experimentation with different numbers of layers and filter sizes to build your model."
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "### Encoder Block\n",
-    "Create an encoder block that includes a separable convolution layer using the separable_conv2d_batchnorm() function. The `filters` parameter defines the size or depth of the output layer. For example, 32 or 64. "
+    "Create an encoder block that includes a separable convolution layer using the `separable_conv2d_batchnorm()` function. The `filters` parameter defines the size or depth of the output layer. For example, 32 or 64. "
    ]
   },
   {
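As a sketch of what the encoder block described above might look like in Keras; the `separable_conv2d_batchnorm()` signature below is an assumption based on how the notebook text describes it, not code taken from this commit:

```python
from keras import layers

def separable_conv2d_batchnorm(input_layer, filters, strides=1):
    # Depthwise-separable convolution followed by batch normalization.
    x = layers.SeparableConv2D(filters, kernel_size=3, strides=strides,
                               padding='same', activation='relu')(input_layer)
    return layers.BatchNormalization()(x)

def encoder_block(input_layer, filters, strides):
    # One encoder stage: a strided separable conv that downsamples the input.
    return separable_conv2d_batchnorm(input_layer, filters, strides)
```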
@@ -156,7 +156,7 @@
    "metadata": {},
    "source": [
     "### Decoder Block\n",
-    "The decoder block, as covered in the Classroom, comprises of three steps:\n",
+    "The decoder block is comprised of three parts:\n",
     "- A bilinear upsampling layer using the upsample_bilinear() function. The current recommended factor for upsampling is set to 2.\n",
     "- A layer concatenation step. This step is similar to skip connections. You will concatenate the upsampled small_ip_layer and the large_ip_layer.\n",
     "- Some (one or two) additional separable convolution layers to extract some more spatial information from prior layers."
@@ -529,7 +529,7 @@
   ],
   "metadata": {
    "kernelspec": {
-    "display_name": "Python 3",
+    "display_name": "Python [default]",
     "language": "python",
     "name": "python3"
    },
@@ -543,7 +543,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.4.1"
+   "version": "3.5.2"
   },
   "widgets": {
    "state": {},
