

Animal Classification with Arm Ethos-U

Welcome to Arm's Edge AI sample application with Zephyr! 🚀 This sample uses a MobileNetV2 model to distinguish between cats, dogs, and others using TensorFlow Lite Micro.

No Hardware Required! 🖥️ Everything runs on the Arm Corstone-300 FVP (Fixed Virtual Platform) simulator - no physical boards are needed.

AI Acceleration with Arm Ethos-U

This sample showcases seamless AI workload acceleration using Arm's Ethos-U NPU. Simply toggle CONFIG_ETHOS_U=y in prj.conf to enable NPU acceleration - no application code changes required. The same model runs efficiently on both standard Cortex-M processors and Ethos-U accelerated systems, demonstrating the power of unified embedded AI development.
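As a concrete illustration, the relevant prj.conf fragment might look like the sketch below (option names other than CONFIG_ETHOS_U are assumptions based on the Zephyr tflite-micro module and may differ from the actual sample):

```
# Enable TensorFlow Lite Micro support (assumed option name)
CONFIG_TENSORFLOW_LITE_MICRO=y

# Toggle Ethos-U NPU acceleration; set to n for CPU-only inference
CONFIG_ETHOS_U=y
```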

Tested Images

  • Cat: A majestic feline friend with not-so-piercing eyes! 🐱
  • Dog: The goodest boy striking a pose! 🐕
  • Pug: An adorable wrinkly pup (proves we work with different dog breeds, not cherry-picked)! 🐶
  • Otter: An otter that's otterly perfect for testing the "Other" class! 🦦

All images are 224x224x3 RGB and were quantized to INT8 format for model input. You can examine the raw quantized pixel values in src/input_data.cpp to see exactly what numbers the model processes.
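As a hedged illustration of that quantization step, the sketch below maps uint8 pixel values to INT8 using an assumed scale of 1.0 and zero point of -128 (a common choice for INT8 image inputs; the sample model's actual quantization parameters may differ):

```python
def quantize_rgb_image(pixels, scale=1.0, zero_point=-128):
    """Quantize uint8 pixel values [0, 255] to INT8 via q = round(p / scale) + zero_point.

    scale and zero_point are illustrative defaults, not the sample
    model's actual quantization parameters.
    """
    quantized = []
    for p in pixels:
        q = int(round(p / scale)) + zero_point
        quantized.append(max(-128, min(127, q)))  # clamp to the INT8 range
    return quantized

print(quantize_rgb_image([0, 128, 255]))  # [-128, 0, 127]
```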

Prerequisites

⏱️ Estimated Setup Time: 30-60 minutes (depending on internet speed and system configuration)

Zephyr Configuration

To set the ZEPHYR_BASE environment variable persistently, add the following line to your ~/.zshrc file by running the command below in your terminal (replace with your actual Zephyr installation path):

echo 'export ZEPHYR_BASE=/path/to/zephyrproject/zephyr' >> ~/.zshrc

Then reload your shell configuration with:

source ~/.zshrc

Enable TensorFlow Lite Micro Module

Run these commands to enable the TensorFlow Lite Micro module in your Zephyr workspace:

west config manifest.project-filter -- +tflite-micro
west update

Getting the Animal Classification Sample

📝 Note: We are actively working on upstreaming this example and more samples directly into the Zephyr repository. In the future, this process will be much simpler with more examples available out-of-the-box!

For now, you'll need to manually add this sample to your Zephyr workspace:

  1. Navigate to your Zephyr project root (where you run build commands):

    cd ~/zephyrproject/zephyr
  2. Clone this sample into the correct directory:

    git clone https://github.com/Arm-Examples/Arm-Ethos-Zephyr-Playground samples/modules/tflite-micro/animal_classification
  3. Verify the sample is in place:

    ls samples/modules/tflite-micro/animal_classification/

You should see the sample files including CMakeLists.txt, prj.conf, src/, and this README.md.

Build

Run the following command in your project root:

west build -b mps3/corstone300/fvp ./samples/modules/tflite-micro/animal_classification --pristine auto

Model Configuration: Set CONFIG_ETHOS_U=y in prj.conf for Ethos-U NPU acceleration, or CONFIG_ETHOS_U=n to run the regular TensorFlow Lite Micro model on the Cortex-M CPU.

Run

After building, run the project using:

source run.sh

Output should begin appearing shortly; note that inference may take some time to complete.

Sample Output

The application provides detailed classification results:

Starting Dog vs Cat classification example...
Using Vela-compiled model with Ethos-U support.
Tensor arena used: 1510148 bytes
Interpreter invoked successfully!

Animal Classification Result:
  Dog: 86%, Cat: 0%, Other: 14%
  Prediction: Dog

🎉 Congratulations!

You've successfully run Arm's Edge AI application on Zephyr! The model has analyzed your chosen animal image and provided its classification results.

The sections below dive deeper into the model architecture, classification strategy, and how to customize the sample with your own models.

Changing Test Images

Want to try different animals? You can easily change which image the model processes by editing the input selection in src/main.cpp. Look for this line:

if (!ml::animal_classification::get_input(PUG, input)) {

Simply replace PUG with one of: CAT, DOG, PUG, or OTTER to test different images!

Model Architecture

The sample uses a MobileNetV2 model with the following characteristics:

  • Input Size: 224x224x3 (RGB images)
  • Quantization: INT8 for both weights and activations
  • Output: 1000 ImageNet classes (filtered to extract dog/cat related classes)
  • Framework: TensorFlow Lite Micro
  • Source: Arm ML-zoo MobileNetV2

Classification Strategy

The model uses ImageNet's 1000 classes and applies a post-processing strategy to categorize results:

  1. Class Aggregation: Aggregates probabilities from multiple ImageNet classes:

    • Dog classes: Various dog breeds from ImageNet
    • Cat classes: Various cat breeds from ImageNet
    • Other: All remaining classes
  2. Decision Logic:

    • Calculates percentage distribution across Dog/Cat/Other
    • Performs a head-to-head comparison between the Dog and Cat scores
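The aggregation and decision steps above can be sketched in Python as follows (the index ranges approximate ImageNet's dog and cat classes, and the decision rule is an assumption about the sample's actual logic):

```python
DOG_CLASSES = set(range(151, 269))  # ImageNet dog breeds (approximate index range)
CAT_CLASSES = set(range(281, 286))  # ImageNet domestic cats (approximate index range)

def classify(probabilities):
    """Aggregate 1000 ImageNet probabilities into Dog/Cat/Other and pick a winner."""
    dog = sum(p for i, p in enumerate(probabilities) if i in DOG_CLASSES)
    cat = sum(p for i, p in enumerate(probabilities) if i in CAT_CLASSES)
    other = sum(probabilities) - dog - cat
    total = dog + cat + other or 1.0  # guard against an all-zero output
    percentages = {label: round(100 * score / total)
                   for label, score in (("Dog", dog), ("Cat", cat), ("Other", other))}
    # Head-to-head Dog vs Cat, falling back to Other when neither dominates
    if max(dog, cat) > other:
        prediction = "Dog" if dog >= cat else "Cat"
    else:
        prediction = "Other"
    return percentages, prediction

# Toy distribution concentrated on a dog-breed index
probs = [0.0] * 1000
probs[207] = 0.86  # a dog-breed class index
probs[1] = 0.14    # some non-animal class
print(classify(probs))  # ({'Dog': 86, 'Cat': 0, 'Other': 14}, 'Dog')
```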

Model Options

The sample supports two model configurations:

  1. Standard TensorFlow Lite Model: Regular quantized model for general Arm Cortex-M processors
  2. Vela-compiled Model: Optimized for Arm Ethos-U NPU acceleration

This can be controlled by modifying the CONFIG_ETHOS_U setting in prj.conf.

Memory Configuration

The sample includes increased memory settings for the Corstone-300 FVP:

  • Tensor Arena Size:
    • Ethos-U: 1.5 MB (1024 * 512 * 3)
    • Regular: 2 MB (1024 * 512 * 4)
  • Board Overlay: Enlarged memory regions for FVP simulation
    • ITCM: 16 MB (for model storage)
    • DTCM: 8 MB (for runtime data)
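A quick arithmetic check shows why 1.5 MB suffices for the Ethos-U build: the 1510148-byte usage reported in the sample output above fits within the 1024 * 512 * 3 arena:

```python
# Tensor arena sizes, expressed as in the configuration above
ETHOS_U_ARENA = 1024 * 512 * 3  # 1,572,864 bytes (~1.5 MB)
REGULAR_ARENA = 1024 * 512 * 4  # 2,097,152 bytes (2 MB)

# "Tensor arena used" figure reported in the sample output
ARENA_USED = 1510148

print(ETHOS_U_ARENA, REGULAR_ARENA, ARENA_USED <= ETHOS_U_ARENA)
```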

Converting Custom Models

The steps below show how the models included in this sample were generated from the original .tflite file. You don't need to repeat them for this example, since the converted models are already included.

Converting TFLite Model to C Array

To convert your own TensorFlow Lite model to a byte array:

  1. Navigate to the model directory:
    cd model
  2. Convert the TensorFlow Lite model to a byte array:
    xxd -i mobilenet_v2_1_0_224_INT8.tflite > model_data.cc
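If xxd is unavailable, an equivalent array can be produced with a short script; below is a minimal Python sketch of the format `xxd -i` emits (the function name and sample bytes are illustrative):

```python
def to_c_array(data, name):
    """Render raw bytes as a C unsigned char array, mimicking `xxd -i` output."""
    hex_bytes = ",".join("0x%02x" % b for b in data)
    return ("unsigned char %s[] = {%s};\n"
            "unsigned int %s_len = %u;\n" % (name, hex_bytes, name, len(data)))

# Usage: to_c_array(open("mobilenet_v2_1_0_224_INT8.tflite", "rb").read(), "model_data")
print(to_c_array(b"\x1c\x00\x00\x00", "model_data"))
```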

Using Vela Compiler for Ethos-U Optimization

To compile the TensorFlow Lite model with Vela for Ethos-U acceleration:

  1. In the same model directory, run the Vela compilation script:
    python3 use_vela.py
  2. The compiled model will be saved in the compiled_model directory:
    cd compiled_model
  3. Convert the Vela-compiled model to a byte array:
    xxd -i mobilenet_v2_1_0_224_INT8_vela.tflite > vela_model_data.cc
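The use_vela.py script presumably wraps the Vela command-line compiler (installable with pip install ethos-u-vela); below is a hedged sketch of the invocation such a script might build, with an assumed accelerator configuration:

```python
import subprocess

def vela_command(model, accelerator="ethos-u55-128", output_dir="compiled_model"):
    """Build a Vela CLI invocation; the accelerator config is an assumed value.

    Vela writes <model-stem>_vela.tflite into output_dir.
    """
    return ["vela", model,
            "--accelerator-config", accelerator,
            "--output-dir", output_dir]

cmd = vela_command("mobilenet_v2_1_0_224_INT8.tflite")
print(" ".join(cmd))
# To actually run the compiler: subprocess.run(cmd, check=True)
```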

About

Simulate tiny edge AI applications running on Ethos-U NPU through Zephyr and Corstone
