Commit

spelling fixes 5
dtischler committed Jan 6, 2025
1 parent 3aaf498 commit 72fc658
Showing 18 changed files with 37 additions and 33 deletions.
4 changes: 4 additions & 0 deletions .wordlist.txt
@@ -2193,4 +2193,8 @@ HailoTracker
Roboflow’s
Lescaudron
ECR
+xNAN
+LX
+Xtensa
+segmenter

@@ -658,7 +658,7 @@ static const float features[] = {
};
```

-Connect the Spresesce Board to your computer, select the appropriate port, and upload the Sketch. On the Serial Monitor, you should see the Classification result showing **serv** with the right score.
+Connect the Spresense Board to your computer, select the appropriate port, and upload the Sketch. On the Serial Monitor, you should see the Classification result showing **serv** with the right score.

![](../.gitbook/assets/environmental-sensor-fusion-commonsense/image-51.png)

@@ -112,7 +112,7 @@ Then in a terminal, you will see the output of the model running. I have placed

## Going Further

-Using machine learning and an embedded development kit, we were able to successfully identify and classify pump behavior by listening to the sound of the compressor. This demonstration validated the approach as feasible, and when wrapped into a larger application and alerting system, an audio classification model could be used for remote infrastructure facilities, factory equipment, or building HVAC equipment that is not continually monitored by workers or other human presence. The Renesas RA6 MCU combined with the Syntaint NDP120 neural decision processor in the Avnet RaSynBoard create a low-power, cost-effective solution for predictive maintenance or intervention as needed, prior to a failure or accident occurring.
+Using machine learning and an embedded development kit, we were able to successfully identify and classify pump behavior by listening to the sound of the compressor. This demonstration validated the approach as feasible, and when wrapped into a larger application and alerting system, an audio classification model could be used for remote infrastructure facilities, factory equipment, or building HVAC equipment that is not continually monitored by workers or other human presence. The Renesas RA6 MCU combined with the Syntiant NDP120 neural decision processor in the Avnet RaSynBoard create a low-power, cost-effective solution for predictive maintenance or intervention as needed, prior to a failure or accident occurring.



12 changes: 6 additions & 6 deletions audio-projects/synthetic-data-dog-bark-classifier.md

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion audio-projects/synthetic-data-pipeline-keyword-spotting.md
@@ -2,7 +2,7 @@
description: >-
End-to-end synthetic data pipeline for the creation of a portable LED product
equipped with keyword spotting capabilities. The project serves as a
-comprehensive guide for development of any KWS produc
+comprehensive guide for development of any KWS product
---

# Developing a Voice-Activated Product with Edge Impulse's Synthetic Data Pipeline
2 changes: 1 addition & 1 deletion audio-projects/voice-commands-particle-photon-2.md
@@ -56,7 +56,7 @@ cables.

## Data Acquisition and Model Training

-We will create an account at [Edge Impulse](https://edgeimpulse.com/), then login and reate a new project.
+We will create an account at [Edge Impulse](https://edgeimpulse.com/), then login and create a new project.

![](../.gitbook/assets/voice-commands-particle-photon-2/project.jpg)

4 changes: 2 additions & 2 deletions audio-projects/voice-controlled-power-plug-nicla-voice.md
@@ -28,7 +28,7 @@ A power-plug which can be controlled using voice commands, with no connection to

This project takes advantage of Edge Impulse's Syntiant audio processing block that extracts time and frequency features from a signal, specific to the Syntiant NDP120 accelerator included in the Nicla Voice. The NDP120 is ideal for always-on, low-power speech recognition applications with the “find posterior parameters” feature that will only react to the specified keywords.

-Devices with an embedded ML model will accept voice commands, but won't need a WIFI or Bluetooth connection. All processing is done locally on the device, so you can directly tell a lamp, air conitioner, or TV to turn on or off without Alexa or Siri, or any digital assistant speaker/hub.
+Devices with an embedded ML model will accept voice commands, but won't need a WIFI or Bluetooth connection. All processing is done locally on the device, so you can directly tell a lamp, air conditioner, or TV to turn on or off without Alexa or Siri, or any digital assistant speaker/hub.

This project will use relays and a power strip connected to various appliances such as a lamp, fan, TV, etc. An Arduino Nicla Voice with embedded ML model has been trained to recognize various keywords like: `one`, `two`, `three`, `four`, `on`, and `off` is the center of the decision process. From the Nicla Voice we use the I2C protocol which is connected to an Arduino Pro Micro to carry out voice commands from the Nicla Voice, and forwarded to the relays which control power sockets.

@@ -104,7 +104,7 @@ For a Syntiant NDP device like the Nicla Voice, we can configure the [Posterior

### 5. Upload the Arduino Code

-Because there are two MCU's in this solution, two seperate applications are needed:
+Because there are two MCU's in this solution, two separate applications are needed:

#### Nicla Voice

2 changes: 1 addition & 1 deletion computer-vision-projects/asl-byom-ti-am62a.md
@@ -79,7 +79,7 @@ Then I went to Step 2.

![](../.gitbook/assets/asl-byom-ti-am62a/byom-2.jpg)

-In Step 2, I have selected _RGB Images_ (you can also import other data type models like audio), not normalized pixels, and _Classification_ as the model output, and copied the label row that I prepared ealrier.
+In Step 2, I have selected _RGB Images_ (you can also import other data type models like audio), not normalized pixels, and _Classification_ as the model output, and copied the label row that I prepared earlier.

After importing the model you can use **On Device Performance** and select the target device, to estimate memory and processing time. I chose the TI AM62A, and obtained: 2ms processing time, 4.7M RAM usage and 2.0M flash needed.

@@ -47,7 +47,7 @@ Deep learning solves this approach by making use of learning algorithms to simpl

The Object Detector functions as the Region of Interest segmenter, while the Classifier then determines if a product is defective or damaged, or passes the quality check. We will proceed to implement such a pipeline together with a custom GUI based app.
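
As a rough sketch of that two-stage flow (hypothetical helper objects, not the project's actual code), the detector proposes regions of interest, each region is cropped and resized, and the classifier labels it as passing or defective:

```python
import cv2  # used only to resize the cropped regions

def inspect(frame, detector, classifier, det_threshold=0.5):
    """Detect-then-classify pipeline for one frame.

    `detector` and `classifier` stand in for whatever models the
    pipeline actually loads (for example, Akida-converted networks).
    """
    results = []
    for (x, y, w, h, score) in detector.predict(frame):
        if score < det_threshold:
            continue                                   # ignore weak detections
        roi = frame[y:y + h, x:x + w]                  # crop the region of interest
        label = classifier.predict(cv2.resize(roi, (224, 224)))
        results.append(((x, y, w, h), label))          # e.g. "pass" or "defect"
    return results
```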

-Akida Neuralmorphic technology is unrivaled in terms of power usage at a given performance level. Neuromorphic also provides unique features not found in other technologies, such as on-device edge learning made possible by the Spiking Neural Network architecture.
+Akida Neuramorphic technology is unrivaled in terms of power usage at a given performance level. Neuromorphic also provides unique features not found in other technologies, such as on-device edge learning made possible by the Spiking Neural Network architecture.

## Setting up the Brainchip Akida Developer Kit

@@ -95,7 +95,7 @@ python3 -c "import tensorflow as tf;
print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
```

-Once Tensorflow is running, the next step is to install the Akida Execution engine. Tis done via `pip` as well, as it is offered as a Python package:
+Once Tensorflow is running, the next step is to install the Akida Execution engine. This is done via `pip` as well, as it is offered as a Python package:

```
pip install akida
```
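
To confirm the install, a quick sanity check along these lines should work (this assumes the standard `akida` Python API with a version string and a device listing; adjust for your package version):

```python
import akida

# Report the installed akida (MetaTF) package version
print("akida version:", akida.__version__)

# List Akida hardware visible to the runtime; an empty list means models
# will run in the software simulator rather than on AKD1000 silicon
print("devices:", akida.devices())
```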
@@ -169,7 +169,7 @@ Users with an Enterprise Account can make use of the "Auto Labeler" feature, whi

![](../.gitbook/assets/brainchip-akida-industrial-inspection/labeling.jpg)

-The auto labeller uses an instance segmentation model to automatically find differing objects and extract them for you, and then once you choose a label a bounding box is automatically applied as seen in the following example.
+The auto labeler uses an instance segmentation model to automatically find differing objects and extract them for you, and then once you choose a label a bounding box is automatically applied as seen in the following example.

![](../.gitbook/assets/brainchip-akida-industrial-inspection/object-detection.jpg)

@@ -124,7 +124,7 @@ $ edge-impulse-uploader --category split images/*.jpg

The command above will upload the demo input images to Edge Impulse Studio and split them into "Training" and "Testing" datasets. Once the upload completes, the input datasets are visible on the **Data Acquisition** page within Edge Impulse Studio.

-![Data Aquisition](../.gitbook/assets/brainchip-akida-multi-camera-inference/data_aquisition.png)
+![Data Acquisition](../.gitbook/assets/brainchip-akida-multi-camera-inference/data_aquisition.png)

We can now assign labels to the data by using bounding boxes in the **Labeling queue** tab, as demonstrated in the GIF below. We have successfully labeled over 1800 objects, which was a tedious and time-consuming task, but it will greatly contribute to the creation of a diverse training dataset.

@@ -454,6 +454,6 @@ Engine Info: Power Consumption: 20.94 mW

## Conclusion

-In this project, we have evaluated the Brainchip AKD1000 Akida processor and demonstrated its effectiveness and efficiency in terms of accuracy, latency, bandwidth, and power consumption. We also conclude that Edge Impulse FOMO model is highly suitable for contrained and low-power edge devices to achieve fast inferencing without losing much accuracy. The public version of the Edge Impulse Studio project can be found here: [https://studio.edgeimpulse.com/public/298672/latest](https://studio.edgeimpulse.com/public/298672/latest).
+In this project, we have evaluated the Brainchip AKD1000 Akida processor and demonstrated its effectiveness and efficiency in terms of accuracy, latency, bandwidth, and power consumption. We also conclude that Edge Impulse FOMO model is highly suitable for constrained and low-power edge devices to achieve fast inferencing without losing much accuracy. The public version of the Edge Impulse Studio project can be found here: [https://studio.edgeimpulse.com/public/298672/latest](https://studio.edgeimpulse.com/public/298672/latest).


@@ -74,7 +74,7 @@ Once your dataset is ready, go to **Create Impulse** and set the image width and

Next, go to the **Image** parameter section, select **Grayscale** for the color depth, and **Save Parameters**, then click on **Generate features**. In the Anomaly Detection settings, set the training processor to _CPU_ with a capacity of _High_. Choose **MobileNet V2 0.35** for the neural network architecture with a 1-class output layer. Start training the model by pressing **Start Training** and monitor the progress.

-If everything is functioning correctly, once complete proceed to the **Live Classification** with a connected camera or test the model by going to the **Model Testing** section and clicking **Classify all**. After these steps, you can adjust the confidence thresholds to set the minimum score required before tagging as an anomaly and clcik **Classify all** again. If your model's test result is above 80%, you can proceed to the next step: _Deployment_.
+If everything is functioning correctly, once complete proceed to the **Live Classification** with a connected camera or test the model by going to the **Model Testing** section and clicking **Classify all**. After these steps, you can adjust the confidence thresholds to set the minimum score required before tagging as an anomaly and click **Classify all** again. If your model's test result is above 80%, you can proceed to the next step: _Deployment_.

![Learning_blocks](../.gitbook/assets/fomo-ad-product-inspection-spresense/photo06.png)

4 changes: 2 additions & 2 deletions computer-vision-projects/fomo-ad-ti-tda4vm.md
@@ -93,7 +93,7 @@ Now you can check your camera focus by opening the stream at http://your-ip-addr
For this sample project the idea is to spot and alert about faulty electricity components in a production line. The dataset to be used is composed of hundreds of pictures of properly assembled, high quality items.

* Upload around 100 pictures of correct, quality products to Edge Impulse using **No Anomaly** as the label
-* In Impusle Design, Select Image Data, 96x96 pixels, and **Squash** as the resize mode.
+* In Impulse Design, Select Image Data, 96x96 pixels, and **Squash** as the resize mode.
* Select an Image processing block, and choose **FOMO-AD**.

![](../.gitbook/assets/fomo-ad-ti-tda4vm/impulse.jpg)
@@ -120,7 +120,7 @@ Choose **Model Testing** from the navigation, and you can click **Classify All**

![](../.gitbook/assets/fomo-ad-ti-tda4vm/anomaly-result.jpg)

-All cells are assigned a cell background color based on the anomaly score, going from blue to red, with an increasing opaqueness. The cells with white borders are the ones that exceed the defined confidence threshold, signifiying an anomoly.
+All cells are assigned a cell background color based on the anomaly score, going from blue to red, with an increasing opaqueness. The cells with white borders are the ones that exceed the defined confidence threshold, signifying an anomaly.

If you hover over a cell, you will see the specific score.

2 changes: 1 addition & 1 deletion computer-vision-projects/helmet-detection-alif-ensemble.md
@@ -86,7 +86,7 @@ For the deployment of our proposed approach, we select Alif Ensemble E7 from the

## Results

-To test the model, images of a person wearing a helmet or not wearing a helmet are needed. The dataset was split earlier, with 20% being set aside for Testing, that can be used now. The Studio takes the input image as a parameter and predicts the class it belongs to. Before passing the image, we need to ensure that we are using the same dimensions that we used during the training phase; here it’s by default the same dimension. You can also test with a live image taken directly from the development board, if you have a camera attached. In this case, we have a low resolution camera with our kit, and lighting is not optimal, so the images are dark. However, with a high resolution camera and proper lighting condition, better results can be acheived. But having another look at the Test dataset images, which are bright and high quality, we can see that the model is predicting results (hardhats) effectively.
+To test the model, images of a person wearing a helmet or not wearing a helmet are needed. The dataset was split earlier, with 20% being set aside for Testing, that can be used now. The Studio takes the input image as a parameter and predicts the class it belongs to. Before passing the image, we need to ensure that we are using the same dimensions that we used during the training phase; here it’s by default the same dimension. You can also test with a live image taken directly from the development board, if you have a camera attached. In this case, we have a low resolution camera with our kit, and lighting is not optimal, so the images are dark. However, with a high resolution camera and proper lighting condition, better results can be achieved. But having another look at the Test dataset images, which are bright and high quality, we can see that the model is predicting results (hardhats) effectively.

![](../.gitbook/assets/helmet-detection-alif-ensemble/testing-1.jpg)

@@ -35,7 +35,7 @@ Detection/segmentation is one of the functional algorithms in machine learning.

To take pictures you can use the TI board with an attached USB camera, but I have decided instead to use an Android App named [Open Camera](https://play.google.com/store/apps/details?id=net.sourceforge.opencamera&hl=en&gl=US) that includes a continuous shutter feature.

-I have taken around 30 pictures for each desired label, `helmet` and `nohelmet`. I uploaded the pictures to Edge Impulse using the **Data aquisition** tab, then I went to the **Labeling queue**.
+I have taken around 30 pictures for each desired label, `helmet` and `nohelmet`. I uploaded the pictures to Edge Impulse using the **Data acquisition** tab, then I went to the **Labeling queue**.

![](../.gitbook/assets/motorcycle-helmet-detection-smart-light-ti-am62a/label.jpg)

@@ -63,7 +63,7 @@ The Texas Instruments AM62A SK-AM62A-LP is a "low-power Starter Kit for Edge AI

There are several differences in working with this board compared to a Raspberry Pi for example. You cannot just connect a keyboard, mouse, and monitor to login; the OS is an Arago Linux version with limited tools installed by default (though you could also build your own operating systems if necessary).

-After some trial and error, my recommendationed method for interacting with the board is:
+After some trial and error, my recommended method for interacting with the board is:

- Download this operating system image version: [https://www.ti.com/tool/download/PROCESSOR-SDK-LINUX-AM62A/08.06.00.45](https://dr-download.ti.com/software-development/software-development-kit-sdk/MD-D37Ls3JjkT/08.06.00.45/tisdk-edgeai-image-am62axx-evm.wic.xz)
- Flash the image to a 16gb or larger microSD card with Balena Etcher or any other similar software
@@ -112,7 +112,7 @@ I created and uploaded files named `updateHelmet.php` and `helmet.ini` (which ar

To begin, connect a USB-C cable to the Unihiker, open a web browser to `http://10.1.2.3`, enter your WiFi SSID and password, and obtain the new IP address of the Unihiker.

-Now with that Unihiker on the same network, you can connect via SFTP to the Unihikey using the user `root` and password `dfrobot`, and upload the `unihiker_trafficLight.py` file (again, obtained from the GitHub repo) and the three traffic light images to the `/images` folder.
+Now with that Unihiker on the same network, you can connect via SFTP to the Unihiker using the user `root` and password `dfrobot`, and upload the `unihiker_trafficLight.py` file (again, obtained from the GitHub repo) and the three traffic light images to the `/images` folder.

## Run the System

@@ -127,7 +127,7 @@ Now that the camera was placed, the inference module was ready, the intermediate

## Conclusions

-The applications and the machine learning model work as expected, succesfully identifying helmets (or lack of) on the Lego figures. However, the ethical and practical implications of this project are debatable (helmeted riders are penalized by the system too, and traffic congestion may increase if non-helmeted riders trigger a red-light, with no ability to acquire a helmet thus creating an indefinite red light). But, it is worthwhile to explore and develop machine learning for human and public health scenarios.
+The applications and the machine learning model work as expected, successfully identifying helmets (or lack of) on the Lego figures. However, the ethical and practical implications of this project are debatable (helmeted riders are penalized by the system too, and traffic congestion may increase if non-helmeted riders trigger a red-light, with no ability to acquire a helmet thus creating an indefinite red light). But, it is worthwhile to explore and develop machine learning for human and public health scenarios.

This project was trained with Lego figures, but the same principles can be scaled up and applied to real-world situations. In fact, it may be easier to detect patterns with larger figures considering the camera quality and resolution.

@@ -153,7 +153,7 @@ def telegramAlert(message):
print(e)
```

-Or, as riders likely could not be identified with precision, what about using a secondary camera with otical character recognition (OCR) to capture the license plate of the motorcycle and issuing the rider an automatic ticket? The TI AM62A is able to utilize several cameras concurrently, in fact there are 2 CSI ports on the board ready for Raspberry Pi Cameras. You can use them to take a picture from behind the vehicle, send the picture to an OCR application, obtain the license plate and automatically make the ticket.
+Or, as riders likely could not be identified with precision, what about using a secondary camera with optical character recognition (OCR) to capture the license plate of the motorcycle and issuing the rider an automatic ticket? The TI AM62A is able to utilize several cameras concurrently, in fact there are 2 CSI ports on the board ready for Raspberry Pi Cameras. You can use them to take a picture from behind the vehicle, send the picture to an OCR application, obtain the license plate and automatically make the ticket.

For OCR, a good Python library is located here [https://pypi.org/project/pytesseract/](https://pypi.org/project/pytesseract/)
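
A minimal `pytesseract` sketch for reading a cropped plate image could look like the following (the file name and page-segmentation setting are illustrative, and the Tesseract engine itself must be installed alongside the Python package):

```python
from PIL import Image
import pytesseract

# Hypothetical crop of the license plate produced by the rear-facing camera
plate = Image.open("plate_crop.jpg")

# --psm 7 tells Tesseract to treat the image as a single line of text,
# which generally suits license plates
text = pytesseract.image_to_string(plate, config="--psm 7")
print(text.strip())
```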

