Description
MLPerf Mobile Inference Benchmark is an open-source benchmark suite for measuring how fast mobile devices (e.g., phones and laptops) can run AI tasks. The suite is delivered through the MLPerf Mobile App, which currently runs on Android and iOS. Please see the MLPerf Mobile Inference benchmark paper for a detailed description of the benchmarks, along with the motivation and guiding principles behind the suite.
Release Notes
- Added support for Exynos 2500
Full Changelog: v5.0.1...v5.0.2
Supported OSs
The MLPerf Mobile app supports Android 11 (API level 30) or later and iOS 13.1 or later.
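For anyone scripting compatibility checks around the app, Android 11 corresponds to API level 30 (`Build.VERSION_CODES.R`). The Kotlin guard below is purely illustrative of that mapping, not code from the app:

```kotlin
import android.os.Build

// Illustrative only: Android 11 is API level 30 (Build.VERSION_CODES.R),
// the app's documented minimum Android version.
fun meetsMinimumAndroidVersion(): Boolean =
    Build.VERSION.SDK_INT >= Build.VERSION_CODES.R
```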
Supported SoCs
MediaTek
- Dimensity 9000 series (9000/9000+/9200/9200+/9300/9300+/9400)
- Dimensity 8000 series (8000/8020/8050/8100/8200/8300)
Qualcomm
- Snapdragon 8 Elite
- Snapdragon 8 Gen 3
- Snapdragon 8s Gen 3
- Snapdragon 8 Gen 2
- Snapdragon 7 Gen 3
- Snapdragon 7s Gen 3
- Snapdragon 4 Gen 2
- Default fallback for all other Snapdragon mobile platforms
Samsung
- Exynos 2500
- Exynos 2400
- Exynos 2300
- Exynos 2200
- Exynos 2100
Google Pixel
- Pixel 10/9/8/7/6 and Pixel 10/9/8/7/6 Pro (Tensor G5/G4/G3/G2/G1 SoCs, respectively)
The MLPerf Mobile App also runs on many other devices via a default fallback path, which uses TensorFlow Lite on Android.
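For reference, the TensorFlow Lite path boils down to loading a model into an interpreter and invoking it on input tensors. The Kotlin sketch below illustrates that pattern; it is not the app's actual code, and the model file and tensor shapes are hypothetical placeholders.

```kotlin
// A minimal sketch of inference through the TensorFlow Lite Interpreter API,
// the engine behind the app's default fallback path on Android. The model
// file and tensor shapes are placeholders, not the benchmark's task models.
import org.tensorflow.lite.Interpreter
import java.io.File

fun runOnce(modelFile: File) {
    val interpreter = Interpreter(modelFile)
    try {
        // A 1x224x224x3 float image input and a 1x1000 score output are a
        // common image-classification layout; each real task model defines
        // its own shapes.
        val input = Array(1) { Array(224) { Array(224) { FloatArray(3) } } }
        val output = Array(1) { FloatArray(1000) }
        interpreter.run(input, output)
    } finally {
        interpreter.close()
    }
}
```

On the SoCs listed above, vendor-specific backends take the place of this generic path.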
Installation Instructions
These steps apply to the Android APK:
1. Allow installation of unknown apps in Settings > Apps > Special Access.
2. Download the MLPerf Mobile APK.
3. Locate the APK in 'Downloads' or a file browser.
4. Tap the APK file and approve the installation when prompted.
5. Confirm 'Install'.
6. Once installed, tap 'Open' to launch MLPerf Mobile.