Welcome to Arm's Edge AI sample application with Zephyr! 🚀 This sample uses a MobileNetV2 model to distinguish between cats, dogs, and others using TensorFlow Lite Micro.
No Hardware Required! 🖥️ Everything runs on the Arm Corstone-300 FVP (Fixed Virtual Platform) simulator - no physical boards are needed.
This sample showcases seamless AI workload acceleration using Arm's Ethos-U NPU. Simply toggle CONFIG_ETHOS_U=y in prj.conf to enable NPU acceleration - no application code changes required. The same model runs efficiently on both standard Cortex-M processors and Ethos-U accelerated systems, demonstrating the power of unified embedded AI development.
All images are 224x224x3 RGB and were quantized to INT8 format for model input. You can examine the raw quantized pixel values in src/input_data.cpp to see exactly what numbers the model processes.
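For reference, TFLite-style INT8 quantization maps each pixel to an 8-bit integer through a scale and zero point. A minimal sketch of that mapping is below; the scale and zero-point values are typical assumptions for illustration, not the ones stored in this model.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// TFLite's affine scheme maps a real value x to an int8 value q via
// q = round(x / scale) + zero_point. The scale and zero_point used here are
// typical defaults for [0, 1]-normalized image inputs, NOT values read from
// this model; the real ones live in the .tflite input tensor metadata.
int8_t quantize_pixel(float x, float scale = 1.0f / 255.0f, int zero_point = -128)
{
    int q = static_cast<int>(std::lround(x / scale)) + zero_point;
    return static_cast<int8_t>(std::clamp(q, -128, 127));
}
```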
⏱️ Estimated Setup Time: 30-60 minutes (depending on internet speed and system configuration)
- Zephyr RTOS and west tool (~20-30 minutes): This sample requires a full Zephyr development environment. See Zephyr Getting Started for complete installation instructions.
- FVP Simulator for the mps3/corstone300 board (~5-15 minutes):
  - Windows/Linux: Download from Arm's official FVP downloads page
  - macOS: Use the FVPs-on-Mac GitHub project with Docker
- Docker Desktop (~10-15 minutes): Required only for macOS users
To set the ZEPHYR_BASE environment variable persistently, add the following line to your ~/.zshrc file by running the command below in your terminal (replace with your actual Zephyr installation path):
```sh
echo 'export ZEPHYR_BASE=/path/to/zephyrproject/zephyr' >> ~/.zshrc
```

Then reload your shell configuration with:

```sh
source ~/.zshrc
```

Run these commands to enable the TensorFlow Lite Micro module in your Zephyr workspace:

```sh
west config manifest.project-filter -- +tflite-micro
west update
```

📝 Note: We are actively working on upstreaming this example and more samples directly into the Zephyr repository. In the future, this process will be much simpler, with more examples available out-of-the-box!
For now, you'll need to manually add this sample to your Zephyr workspace:
- Navigate to your Zephyr project root (where you run build commands):

  ```sh
  cd ~/zephyrproject/zephyr
  ```

- Clone this sample into the correct directory:

  ```sh
  git clone https://github.com/Arm-Examples/Arm-Ethos-Zephyr-Playground samples/modules/tflite-micro/animal_classification
  ```

- Verify the sample is in place:

  ```sh
  ls samples/modules/tflite-micro/animal_classification/
  ```

You should see the sample files, including CMakeLists.txt, prj.conf, src/, and this ReadMe.md.
Run the following command in your project root:
```sh
west build -b mps3/corstone300/fvp ./samples/modules/tflite-micro/animal_classification --pristine auto
```

Model Configuration: Set CONFIG_ETHOS_U=y in prj.conf for Ethos-U optimization, or CONFIG_ETHOS_U=n for regular TensorFlow Lite.
After building, run the project using:
```sh
source run.sh
```

You should start seeing output; note that it may take some time for the inference to complete.
The application provides detailed classification results:
```
Starting Dog vs Cat classification example...
Using Vela-compiled model with Ethos-U support.
Tensor arena used: 1510148 bytes
Interpreter invoked successfully!
Interpreter invoked successfully!
Animal Classification Result:
Dog: 86%, Cat: 0%, Other: 14%
Prediction: Dog
```
You've successfully run Arm's Edge AI application on Zephyr! The model has analyzed your chosen animal image and provided its classification results.
The sections below dive deeper into the model architecture, classification strategy, and how to customize the sample with your own models.
Want to try different animals? You can easily change which image the model processes by editing the input selection in src/main.cpp. Look for this line:
```cpp
if (!ml::animal_classification::get_input(PUG, input)) {
```

Simply replace PUG with one of CAT, DOG, PUG, or OTTER to test different images!
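For context, the constants and the get_input() call can be pictured roughly as below. This is a hypothetical sketch of the API's shape; the actual names and types are defined in the sample's sources.

```cpp
// Hypothetical sketch only -- the real declaration lives in this sample's src/
// directory and may use different names and types.
#include "tensorflow/lite/c/common.h"

namespace ml::animal_classification {

// One constant per test image bundled in src/input_data.cpp (assumed).
enum InputImage { CAT, DOG, PUG, OTTER };

// Copies the selected 224x224x3 INT8 image into the model's input tensor and
// returns false if it fails (assumed semantics, inferred from the call above).
bool get_input(InputImage which, TfLiteTensor *input);

}  // namespace ml::animal_classification
```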
The sample uses a MobileNetV2 model with the following characteristics:
- Input Size: 224x224x3 (RGB images)
- Quantization: INT8 for both weights and activations
- Output: 1000 ImageNet classes (filtered to extract dog/cat related classes)
- Framework: TensorFlow Lite Micro
- Source: Arm ML-zoo MobileNetV2
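If you want to confirm the input shape and type at runtime, the TFLM interpreter exposes them on the input tensor. A minimal sketch, assuming an already-initialized tflite::MicroInterpreter and Zephyr's printk for output (not taken from the sample code):

```cpp
#include <zephyr/kernel.h>
#include "tensorflow/lite/micro/micro_interpreter.h"

// Sketch: print the input tensor properties the model expects. Assumes the
// interpreter has already been built; names here are illustrative.
void print_input_info(tflite::MicroInterpreter &interpreter)
{
    TfLiteTensor *input = interpreter.input(0);
    // For this MobileNetV2 build: dims {1, 224, 224, 3}, type kTfLiteInt8.
    printk("Input: %dx%dx%dx%d, int8: %s\n",
           input->dims->data[0], input->dims->data[1],
           input->dims->data[2], input->dims->data[3],
           input->type == kTfLiteInt8 ? "yes" : "no");
}
```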
The model uses ImageNet's 1000 classes and applies a post-processing strategy to categorize results:
- Class Aggregation: Aggregates probabilities from multiple ImageNet classes:
  - Dog classes: various dog breeds from ImageNet
  - Cat classes: various cat breeds from ImageNet
  - Other: all remaining classes
- Decision Logic (see the sketch after this list):
  - Calculates the percentage distribution across Dog/Cat/Other
  - Performs a head-to-head comparison between Dog and Cat
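A minimal sketch of this post-processing, assuming the 1000 output probabilities have already been dequantized into a float array; the class-index lists are deliberately tiny placeholders:

```cpp
#include <zephyr/kernel.h>

// Sketch of the Dog/Cat/Other aggregation. The index lists are illustrative
// placeholders; the sample uses the full sets of ImageNet dog- and cat-breed
// class indices.
void report_result(const float probs[1000])
{
    const int kDogClasses[] = {151, 152, 153};  /* placeholder subset */
    const int kCatClasses[] = {281, 282, 283};  /* placeholder subset */

    float total = 0.0f, dog = 0.0f, cat = 0.0f;
    for (int i = 0; i < 1000; ++i) total += probs[i];
    for (int i : kDogClasses) dog += probs[i];
    for (int i : kCatClasses) cat += probs[i];
    float other = total - dog - cat;

    // Percentage distribution across the three buckets, then head-to-head.
    printk("Dog: %d%%, Cat: %d%%, Other: %d%%\n",
           (int)(100.0f * dog / total), (int)(100.0f * cat / total),
           (int)(100.0f * other / total));
    printk("Prediction: %s\n", dog >= cat ? "Dog" : "Cat");
}
```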
The sample supports two model configurations:
- Standard TensorFlow Lite Model: Regular quantized model for general Arm Cortex-M processors
- Vela-compiled Model: Optimized for Arm Ethos-U NPU acceleration

This can be controlled by modifying the CONFIG_ETHOS_U setting in prj.conf; a sketch of how such a compile-time switch can look is shown below.
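Conceptually, the same Kconfig symbol can select between the two generated byte arrays at compile time. The sketch below is illustrative, not the sample's exact code; the array names assume the xxd naming convention shown later in this README.

```cpp
// Illustrative compile-time switch between the two generated byte arrays,
// keyed off the CONFIG_ETHOS_U Kconfig symbol. The symbol names below assume
// xxd's naming convention and may not match the sample's exact identifiers.
#ifdef CONFIG_ETHOS_U
extern unsigned char mobilenet_v2_1_0_224_INT8_vela_tflite[];
#define MODEL_DATA mobilenet_v2_1_0_224_INT8_vela_tflite
#else
extern unsigned char mobilenet_v2_1_0_224_INT8_tflite[];
#define MODEL_DATA mobilenet_v2_1_0_224_INT8_tflite
#endif

// Everything downstream (tflite::GetModel(MODEL_DATA), the interpreter, the
// post-processing) is identical for both builds.
```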
The sample includes increased memory settings for the Corstone-300 FVP:

- Tensor Arena Size (see the sketch after this list):
  - Ethos-U: 1.5 MB (1024 * 512 * 3)
  - Regular: 2 MB (1024 * 512 * 4)
- Board Overlay: Enlarged memory regions for FVP simulation
  - ITCM: 16 MB (for model storage)
  - DTCM: 8 MB (for runtime data)
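The arena itself is just a statically allocated buffer handed to the interpreter. A sketch matching the sizes above, with illustrative names:

```cpp
#include <cstddef>
#include <cstdint>

// Tensor arena sized to match the values above; the names are illustrative.
#ifdef CONFIG_ETHOS_U
constexpr std::size_t kTensorArenaSize = 1024 * 512 * 3;  /* 1.5 MB, Vela model */
#else
constexpr std::size_t kTensorArenaSize = 1024 * 512 * 4;  /* 2 MB, regular model */
#endif

// Statically allocated and handed to tflite::MicroInterpreter at construction.
alignas(16) static uint8_t tensor_arena[kTensorArenaSize];
```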
The commands below show how the model byte arrays are generated from the .tflite files. You don't need to run them for this sample, as the generated files are already included.
To convert your own TensorFlow Lite model to a byte array:
- Navigate to the model directory:

  ```sh
  cd model
  ```

- Convert the TensorFlow Lite model to a byte array (the generated file has roughly the shape shown below):

  ```sh
  xxd -i mobilenet_v2_1_0_224_INT8.tflite > model_data.cc
  ```
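For reference, xxd -i emits a plain C array plus a length variable, turning dots in the filename into underscores, so model_data.cc looks roughly like this (bytes elided):

```cpp
// model_data.cc -- rough shape of the xxd output (model bytes elided here).
// Note that xxd turns dots in the filename into underscores in the symbols.
unsigned char mobilenet_v2_1_0_224_INT8_tflite[] = {
    0x00, /* ...the remaining bytes of the .tflite file as hex literals... */
};
unsigned int mobilenet_v2_1_0_224_INT8_tflite_len = 0;  /* xxd writes the real byte count */
```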
To compile the TensorFlow Lite model with Vela for Ethos-U acceleration:
- In the same model directory, run the Vela compilation script:

  ```sh
  python3 use_vela.py
  ```

- The compiled model will be saved in the compiled_model directory:

  ```sh
  cd compiled_model
  ```

- Convert the Vela-compiled model to a byte array:

  ```sh
  xxd -i mobilenet_v2_1_0_224_INT8_vela.tflite > vela_model_data.cc
  ```




