release v1.2.0 (#28)
* add a note to the README concerning /boot/config.txt (#22)

* Update README.md

* add a note about dtparam=i2c1=on in /boot/config.txt for Raspberry Pi 4 Raspbian users

Co-authored-by: Leigh Johnson <[email protected]>
Co-authored-by: Matt Gantner <[email protected]>

* update README.md with installation instructions for TensorFlow 2.2 (#30)

* --edge-tpu initializes Coral Interpreter packaged in tflite_runtime instead of tf.lite.Interpreter (#31)

* Make pantilt installation optional, merge face-detect and face-track into detect/track commands (#32)

* first pass on combining track/face-track and detect/face-detect into single CLI target with nargs labels

add refactor run_detect fns into camera.run_detect_stationary and
camera.run_detect_pantilt

* zero scores on labels not in provided label list for rpi-deep-pantilt detect [LABELS] command

* remove pan-tilt HAT zero from camera test (allows usage w/o pantilt installed) closes #20

* clean up CLI commands

* update README with merged detect/track command usage

* --rotation opt (#33)

* add support for --rotation option (PiCamera preview orientation)

* update README with --rotation option, closes #25

* lint 🐐

* Bump version: 1.1.0 → 1.2.0-rc0

* fix release target

* bump2version effed up

Co-authored-by: Matt <[email protected]>
Co-authored-by: Matt Gantner <[email protected]>
3 people authored May 26, 2020
1 parent 0439477 commit 1dc9690
Showing 17 changed files with 595 additions and 370 deletions.
8 changes: 5 additions & 3 deletions Makefile
@@ -76,9 +76,6 @@ docs: ## generate Sphinx HTML documentation, including API docs
servedocs: docs ## compile the docs watching for changes
watchmedo shell-command -p '*.rst' -c '$(MAKE) -C docs html' -R -D .

release: dist ## package and upload a release
twine upload dist/*

sdist: clean ## builds source package
python setup.py sdist
ls -l dist
@@ -87,6 +84,11 @@ bdist_wheel: clean ## builds wheel package
python setup.py bdist_wheel
ls -l dist

release: dist ## package and upload a release
twine upload dist/*

dist: sdist bdist_wheel

install: clean ## install the package to the active Python's site-packages
python setup.py install

151 changes: 132 additions & 19 deletions README.md
@@ -40,48 +40,122 @@ Before you get started, you should have an up-to-date installation of Raspbian 1
1. Install system dependencies

```bash
sudo apt-get update && sudo apt-get install -y \
$ sudo apt-get update && sudo apt-get install -y \
cmake python3-dev libjpeg-dev libatlas-base-dev raspi-gpio libhdf5-dev python3-smbus
```

2. Install TensorFlow 2.0 (community-built wheel)
1. Create new virtual environment

```bash
pip install https://github.com/leigh-johnson/Tensorflow-bin/blob/master/tensorflow-2.0.0-cp37-cp37m-linux_armv7l.whl?raw=true
$ python3 -m venv .venv
```

3. Activate virtual environment

```bash
$ source .venv/bin/activate
```

4. Upgrade setuptools

```bash
$ pip install --upgrade setuptools
```

5. Install TensorFlow 2.2 (community-built wheel)

```bash
$ pip install https://github.com/leigh-johnson/Tensorflow-bin/releases/download/v2.2.0/tensorflow-2.2.0-cp37-cp37m-linux_armv7l.whl
```

3. Install the `rpi-deep-pantilt` package.
6. Install the `rpi-deep-pantilt` package.

```bash
pip install rpi-deep-pantilt
```

7. Install Coral Edge TPU `tflite_runtime` (optional)

NOTE: This step is only required if you are using [Coral's Edge TPU USB Accelerator](https://coral.withgoogle.com/products/accelerator). If you would like to run TFLite inferences using CPU only, skip this step.

```bash
$ pip install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl
```
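A quick way to confirm the optional wheel installed into your virtualenv is to probe for the module. This is a minimal sketch (the `has_tflite_runtime` helper is hypothetical, not part of rpi-deep-pantilt); it only checks importability, not whether an Edge TPU device is attached:

```python
# Sketch: is the optional tflite_runtime package importable in this
# environment? Does not verify the Edge TPU hardware itself.
import importlib.util


def has_tflite_runtime() -> bool:
    """Return True if tflite_runtime can be imported."""
    return importlib.util.find_spec("tflite_runtime") is not None


print(has_tflite_runtime())
```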

# Configuration

WARNING: Do not skip this section! You will not be able to use `rpi-deep-pantilt` without properly configuring your Pi.

### Enable Pi Camera

1. Run `sudo raspi-config` and select `Interfacing Options` from the Raspberry Pi Software Configuration Tool’s main menu. Press ENTER.

![raspi-config main menu](/images/camera1.png)

2. Select the Enable Camera menu option and press ENTER.

![raspi-config interfacing options menu](/images/camera2.png)

3. In the next menu, use the right arrow key to highlight ENABLE and press ENTER.

![raspi-config enable camera yes/no menu](/images/camera3.png)

### Enable i2c in Device Tree

1. Open `/boot/config.txt` and verify the following `dtparams` lines are uncommented:

```bash
dtparam=i2c1=on
dtparam=i2c_arm=on
```
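If you prefer to verify the file programmatically rather than by eye, a small sketch like the following can report whether both required `dtparam` lines are active. The `i2c_params_enabled` helper is hypothetical, not part of rpi-deep-pantilt:

```python
# Sketch: report whether dtparam=i2c1=on and dtparam=i2c_arm=on are
# present as uncommented lines in config.txt-style text.
def i2c_params_enabled(config_text: str) -> bool:
    """True if both required i2c dtparam lines are active (uncommented)."""
    active = {
        line.strip()
        for line in config_text.splitlines()
        if line.strip() and not line.strip().startswith("#")
    }
    return {"dtparam=i2c1=on", "dtparam=i2c_arm=on"} <= active


sample = "# comment\ndtparam=i2c1=on\ndtparam=i2c_arm=on\n"
print(i2c_params_enabled(sample))  # True for this sample
```

To check your real Pi, pass in `open("/boot/config.txt").read()`.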
# Example Usage

## Object Detection

The following will start a PiCamera preview and render detected objects as an overlay. Verify you're able to detect an object before trying to track it.
The `detect` command will start a PiCamera preview and render detected objects as an overlay. Verify you're able to detect an object before trying to track it.

Supports Edge TPU acceleration by passing the `--edge-tpu` option.

`rpi-deep-pantilt detect`
`rpi-deep-pantilt detect [OPTIONS] [LABELS]...`

```
rpi-deep-pantilt detect --help
Usage: rpi-deep-pantilt detect [OPTIONS] [LABELS]...

  LABELS (optional) One or more labels to detect, for example:

  $ rpi-deep-pantilt detect person book "wine glass"

  If no labels are specified, model will detect all labels in this list:

  $ rpi-deep-pantilt list-labels

  Detect command will automatically load the appropriate model

  For example, providing "face" as the only label will initialize
  FaceSSD_MobileNet_V2 model

  $ rpi-deep-pantilt detect face

  Other labels use SSDMobileNetV3 with COCO labels

  $ rpi-deep-pantilt detect person "wine glass" orange

Options:
  --loglevel TEXT     Run object detection without pan-tilt controls. Pass
                      --loglevel=DEBUG to inspect FPS.
  --edge-tpu          Accelerate inferences using Coral USB Edge TPU
  --rotation INTEGER  PiCamera rotation. If you followed this guide, a
                      rotation value of 0 is correct.
                      https://medium.com/@grepLeigh/real-time-object-tracking-
                      with-tensorflow-raspberry-pi-and-pan-tilt-
                      hat-2aeaef47e134
  --help              Show this message and exit.
```

## Object Tracking

The following will start a PiCamera preview, render detected objects as an overlay, and track an object's movement with the pan-tilt HAT.
The following will start a PiCamera preview, render detected objects as an overlay, and track an object's movement with Pimoroni pan-tilt HAT.

By default, this will track any `person` in the frame. You can track other objects by passing `--label <label>`. For a list of valid labels, run `rpi-deep-pantilt list-labels`.

@@ -90,15 +164,30 @@ By default, this will track any `person` in the frame. You can track other objec
Supports Edge TPU acceleration by passing the `--edge-tpu` option.

```
rpi-deep-pantilt track --help
Usage: rpi-deep-pantilt track [OPTIONS] [LABEL]

  LABEL (required, default: person) Exactly one label to detect, for example:

  $ rpi-deep-pantilt track person

  Track command will automatically load the appropriate model

  For example, providing "face" will initialize FaceSSD_MobileNet_V2 model

  $ rpi-deep-pantilt track face

  Other labels use SSDMobileNetV3 model with COCO labels

  $ rpi-deep-pantilt track orange

Options:
  --loglevel TEXT     Pass --loglevel=DEBUG to inspect FPS and tracking
                      centroid X/Y coordinates
  --edge-tpu          Accelerate inferences using Coral USB Edge TPU
  --rotation INTEGER  PiCamera rotation. If you followed this guide, a
                      rotation value of 0 is correct.
                      https://medium.com/@grepLeigh/real-time-object-tracking-
                      with-tensorflow-raspberry-pi-and-pan-tilt-
                      hat-2aeaef47e134
  --help              Show this message and exit.
```

@@ -114,10 +203,14 @@ The following labels are valid tracking targets.

## Face Detection (NEW in v1.1.x)

The following command will detect all faces. Supports Edge TPU acceleration by passing the `--edge-tpu` option.
The following command will detect human faces.

NOTE: Face detection uses a specialized model (FaceSSD_MobileNet_V2), while other labels are detected using SSDMobileNetV3_COCO. You cannot detect both face and COCO labels at this time.

Watch this repo for updates that allow you to re-train these models to support a custom mix of object labels!

```
rpi-deep-pantilt face-detect --help
rpi-deep-pantilt detect face
Usage: cli.py face-detect [OPTIONS]
Options:
@@ -129,11 +222,11 @@ Options:

## Face Tracking (NEW in v1.1.x)

The following command will track between all faces in a frame. Supports Edge TPU acceleration by passing the `--edge-tpu` option.
The following command will track a human face.

```
rpi-deep-pantilt face-track --help
Usage: cli.py face-track [OPTIONS]
rpi-deep-pantilt track face
Usage: cli.py face-detect [OPTIONS]
Options:
--loglevel TEXT Run object detection without pan-tilt controls. Pass
@@ -173,6 +266,25 @@ I was able to use the same model architecture for FLOAT32 and UINT8 input, `fac

This model is derived from `facessd_mobilenet_v2_quantized_320x320_open_image_v4` in [tensorflow/models](https://github.com/tensorflow/models).

# Common Issues

### i2c is not enabled

If you run `$ rpi-deep-pantilt test pantilt` and see a similar error, check your Device Tree configuration.

```python
File "/home/pi/projects/rpi-deep-pantilt/.venv/lib/python3.7/site-packages/pantilthat/pantilt.py", line 72, in setup
self._i2c = SMBus(1)
FileNotFoundError: [Errno 2] No such file or directory
```

Open `/boot/config.txt` and ensure the following lines are uncommented:

```bash
dtparam=i2c1=on
dtparam=i2c_arm=on
```
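After editing `/boot/config.txt` and rebooting, you can confirm the kernel actually exposed the bus that pantilthat's `SMBus(1)` call opens. A minimal sketch (the `i2c_bus_available` helper is hypothetical; the path assumes bus 1, the Pi default):

```python
# Sketch: check whether Linux exposes the i2c device node that
# pantilthat's SMBus(1) constructor opens. Assumes the Pi default bus 1.
from pathlib import Path


def i2c_bus_available(bus: int = 1) -> bool:
    """Return True if /dev/i2c-<bus> exists."""
    return Path(f"/dev/i2c-{bus}").exists()


print(i2c_bus_available())
```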

# Credits

The MobileNetV3-SSD model in this package was derived from [TensorFlow's model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md), with [post-processing ops added](https://gist.github.com/leigh-johnson/155264e343402c761c03bc0640074d8c).
@@ -183,3 +295,4 @@ This package was created with
[Cookiecutter](https://github.com/audreyr/cookiecutter) and the
[audreyr/cookiecutter-pypackage](https://github.com/audreyr/cookiecutter-pypackage)
project template.

Binary file added images/camera1.png
Binary file added images/camera2.png
Binary file added images/camera3.png
2 changes: 1 addition & 1 deletion rpi_deep_pantilt/__init__.py
@@ -4,4 +4,4 @@

__author__ = """Leigh Johnson"""
__email__ = '[email protected]'
__version__ = '1.1.0'
__version__ = '1.2.0'