
Commit 194626a

update allowlist and README
1 parent 03f7daa commit 194626a

3 files changed (+239, -18 lines)

README.md

Lines changed: 25 additions & 18 deletions
@@ -5,15 +5,18 @@
 
 **Explore, Experience, and Evaluate the Future of On-Device Generative AI with Google AI Edge.**
 
-The Google AI Edge Gallery is an experimental app that puts the power of cutting-edge Generative AI models directly into your hands, running entirely on your Android *(available now)* and iOS *(available now)* devices. Dive into a world of creative and practical AI use cases, all running locally, without needing an internet connection once the model is loaded. Experiment with different models, chat, ask questions with images and audio clip, explore prompts, and more!
+AI Edge Gallery is the premier destination for running the world's most powerful open-source Large Language Models (LLMs) on your mobile device. Experience high-performance Generative AI directly on your hardware—fully offline, private, and lightning-fast.
+
+**Now Featuring: Gemma 4**
+
+The latest version brings official support for the newly released Gemma 4 family. As the centerpiece of this release, Gemma 4 allows you to test the cutting edge of on-device AI. Experience advanced reasoning, logic, and creative capabilities without ever sending your data to a server.
+
 
 | **Install the app today from Google Play** | **Install the app today from App Store** |
 | :--- | :--- |
 | <a href='https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery'><img alt='Get it on Google Play' height="120" src='https://play.google.com/intl/en_us/badges/static/images/badges/en_badge_web_generic.png'/></a> | <a href="https://apps.apple.com/us/app/google-ai-edge-gallery/id6749645337?itscg=30200&itsct=apps_box_badge&mttnsubad=6749645337" style="display: inline-block;"> <img src="https://toolbox.marketingtools.apple.com/api/v2/badges/download-on-the-app-store/black/en-us?releaseDate=1771977600" alt="Download on the App Store" style="width: 246px; height: 90px; vertical-align: middle; object-fit: contain;" /></a> |
 
 For users without Google Play access, install the apk from the [**latest release**](https://github.com/google-ai-edge/gallery/releases/latest/)
-> [!IMPORTANT]
-> You must uninstall all previous versions of the app before installing this one. Past versions will no longer be working and supported.
 
 
 ## App Preview
@@ -28,31 +31,36 @@ For users without Google Play access, install the apk from the [**latest release
 
 ## ✨ Core Features
 
-* **📱 Run Locally, Fully Offline:** Experience the magic of GenAI without an internet connection. All processing happens directly on your device.
-* **🤖 Choose Your Model:** Easily switch between different models from Hugging Face and compare their performance.
-* **🌻 Tiny Garden**: Play an experimental and fully offline mini game that uses natural language to plant, water, and harvest flowers.
-* **📳 Mobile Actions**: Use our [open-source recipe](https://github.com/google-gemini/gemma-cookbook/blob/main/FunctionGemma/%5BFunctionGemma%5DFinetune_FunctionGemma_270M_for_Mobile_Actions_with_Hugging_Face.ipynb) to learn model fine-tuning, then load it in app to unlock offline device controls.
-* **🖼️ Ask Image:** Upload images and ask questions about them. Get descriptions, solve problems, or identify objects.
-* **🎙️ Audio Scribe:** Transcribe an uploaded or recorded audio clip into text or translate it into another language.
-* **✍️ Prompt Lab:** Summarize, rewrite, generate code, or use freeform prompts to explore single-turn LLM use cases.
-* **💬 AI Chat:** Engage in multi-turn conversations.
-* **📊 Performance Insights:** Real-time benchmarks (TTFT, decode speed, latency).
-* **🧩 Bring Your Own Model:** Test your local LiteRT `.litertlm` models.
-* **🔗 Developer Resources:** Quick links to model cards and source code.
+* **Agent Skills**: Transform your LLM from a conversationalist into a proactive assistant. Use the Agent Skills tile to augment model capabilities with tools like Wikipedia for fact-grounding, interactive maps, and rich visual summary cards. You can even load modular skills from a URL or browse community contributions on GitHub Discussions.
+
+* **AI Chat with Thinking Mode**: Engage in fluid, multi-turn conversations and toggle the new Thinking Mode to peek "under the hood." This feature allows you to see the model's step-by-step reasoning process, which is perfect for understanding complex problem-solving. Note: Thinking Mode currently works with supported models, starting with the Gemma 4 family.
+
+* **Ask Image**: Use multimodal power to identify objects, solve visual puzzles, or get detailed descriptions using your device's camera or photo gallery.
+
+* **Audio Scribe**: Transcribe and translate voice recordings into text in real time using high-efficiency on-device language models.
+
+* **Prompt Lab**: A dedicated workspace to test different prompts and single-turn use cases with granular control over model parameters like temperature and top-k.
+
+* **Mobile Actions**: Unlock offline device controls and automated tasks powered entirely by a finetune of FunctionGemma 270M.
+
+* **Tiny Garden**: A fun, experimental mini-game that uses natural language to plant and harvest a virtual garden using a finetune of FunctionGemma 270M.
+
+* **Model Management & Benchmark**: Gallery is a flexible sandbox for a wide variety of open-source models. Easily download models from the list or load your own custom models. Manage your model library effortlessly and run benchmark tests to understand exactly how each model performs on your specific hardware.
+
+* **100% On-Device Privacy**: All model inference happens directly on your device hardware. No internet is required, ensuring total privacy for your prompts, images, and sensitive data.
 
 ## 🏁 Get Started in Minutes!
 
-1. **Check OS Requirement**: Android 12 and up
+1. **Check OS Requirement**: Android 12 and up, and iOS 17 and up.
 2. **Download the App:**
-   - Install the app from [Google Play](https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery).
+   - Install the app from [Google Play](https://play.google.com/store/apps/details?id=com.google.ai.edge.gallery) or the [App Store](https://apps.apple.com/us/app/google-ai-edge-gallery/id6749645337).
   - For users without Google Play access: install the apk from the [**latest release**](https://github.com/google-ai-edge/gallery/releases/latest/)
 3. **Install & Explore:** For detailed installation instructions (including for corporate devices) and a full user guide, head over to our [**Project Wiki**](https://github.com/google-ai-edge/gallery/wiki)!
 
 ## 🛠️ Technology Highlights
 
 * **Google AI Edge:** Core APIs and tools for on-device ML.
 * **LiteRT:** Lightweight runtime for optimized model execution.
-* **LLM Inference API:** Powering on-device Large Language Models.
 * **Hugging Face Integration:** For model discovery and download.
 
 ## ⌨️ Development
@@ -74,6 +82,5 @@ Licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file
 
 * [**Project Wiki (Detailed Guides)**](https://github.com/google-ai-edge/gallery/wiki)
 * [Hugging Face LiteRT Community](https://huggingface.co/litert-community)
-* [LLM Inference guide for Android](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference/android)
 * [LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM)
 * [Google AI Edge Documentation](https://ai.google.dev/edge)
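The Prompt Lab bullet above mentions granular control over model parameters like temperature and top-k, the same knobs that appear as `temperature` and `topK` in the allowlist's `defaultConfig` blocks. As a rough, generic sketch of how such decoding parameters are conventionally applied (not the Gallery's actual implementation), top-k truncation followed by temperature-scaled softmax sampling looks like this:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=64):
    """Generic top-k + temperature sampling over raw logits.

    Illustrative only: shows what the exposed knobs do, not how
    LiteRT-LM implements decoding internally.
    """
    # Keep only the top_k highest-scoring token ids.
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
    # Temperature rescales logits before the softmax: <1 sharpens, >1 flattens.
    scaled = [logits[i] / max(temperature, 1e-6) for i in ranked]
    # Numerically stable softmax weights over the surviving candidates.
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    # Draw one token id in proportion to its softmax weight.
    return random.choices(ranked, weights=weights, k=1)[0]
```

With a very low temperature the distribution collapses onto the highest-logit token, which is why the function-calling models in this commit's allowlist ship with `temperature: 0.0` for deterministic action output.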

model_allowlists/1_0_11.json

Lines changed: 209 additions & 0 deletions
@@ -0,0 +1,209 @@
+{
+  "models": [
+    {
+      "name": "Gemma-4-E2B-it",
+      "modelId": "litert-community/gemma-4-E2B-it-litert-lm",
+      "modelFile": "gemma-4-E2B-it.litertlm",
+      "description": "A variant of Gemma 4 E2B ready for deployment on Android using [LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM/blob/main/docs/api/kotlin/getting_started.md). It supports multi-modality input, with up to 32K context length.",
+      "sizeInBytes": 2583085056,
+      "minDeviceMemoryInGb": 8,
+      "commitHash": "7fa1d78473894f7e736a21d920c3aa80f950c0db",
+      "llmSupportImage": true,
+      "llmSupportAudio": true,
+      "llmSupportThinking": true,
+      "defaultConfig": {
+        "topK": 64,
+        "topP": 0.95,
+        "temperature": 1.0,
+        "maxContextLength": 32000,
+        "maxTokens": 4000,
+        "accelerators": "gpu,cpu",
+        "visionAccelerator": "gpu"
+      },
+      "taskTypes": [
+        "llm_chat",
+        "llm_prompt_lab",
+        "llm_agent_chat",
+        "llm_ask_image",
+        "llm_ask_audio"
+      ],
+      "bestForTaskTypes": [
+        "llm_chat",
+        "llm_prompt_lab",
+        "llm_agent_chat",
+        "llm_ask_image",
+        "llm_ask_audio"
+      ]
+    },
+    {
+      "name": "Gemma-4-E4B-it",
+      "modelId": "litert-community/gemma-4-E4B-it-litert-lm",
+      "modelFile": "gemma-4-E4B-it.litertlm",
+      "description": "A variant of Gemma 4 E4B ready for deployment on Android using [LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM/blob/main/docs/api/kotlin/getting_started.md). It supports multi-modality input, with up to 32K context length.",
+      "sizeInBytes": 3654467584,
+      "minDeviceMemoryInGb": 12,
+      "commitHash": "9695417f248178c63a9f318c6e0c56cb917cb837",
+      "llmSupportImage": true,
+      "llmSupportAudio": true,
+      "llmSupportThinking": true,
+      "defaultConfig": {
+        "topK": 64,
+        "topP": 0.95,
+        "temperature": 1.0,
+        "maxContextLength": 32000,
+        "maxTokens": 4000,
+        "accelerators": "gpu,cpu",
+        "visionAccelerator": "gpu"
+      },
+      "taskTypes": [
+        "llm_chat",
+        "llm_prompt_lab",
+        "llm_agent_chat",
+        "llm_ask_image",
+        "llm_ask_audio"
+      ],
+      "bestForTaskTypes": [
+        "llm_chat",
+        "llm_prompt_lab",
+        "llm_agent_chat",
+        "llm_ask_image",
+        "llm_ask_audio"
+      ]
+    },
+    {
+      "name": "Gemma-3n-E2B-it",
+      "modelId": "google/gemma-3n-E2B-it-litert-lm",
+      "modelFile": "gemma-3n-E2B-it-int4.litertlm",
+      "description": "A variant of [Gemma 3n E2B](https://ai.google.dev/gemma/docs/gemma-3n) ready for deployment on Android using [LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM/blob/main/kotlin/README.md). It supports text, vision, and audio input, with 4096 context length.",
+      "sizeInBytes": 3655827456,
+      "minDeviceMemoryInGb": 8,
+      "commitHash": "ba9ca88da013b537b6ed38108be609b8db1c3a16",
+      "llmSupportImage": true,
+      "llmSupportAudio": true,
+      "defaultConfig": {
+        "topK": 64,
+        "topP": 0.95,
+        "temperature": 1.0,
+        "maxTokens": 4096,
+        "accelerators": "cpu,gpu"
+      },
+      "taskTypes": ["llm_chat", "llm_prompt_lab", "llm_ask_image", "llm_ask_audio"],
+      "bestForTaskTypes": ["llm_ask_image", "llm_ask_audio"]
+    },
+    {
+      "name": "Gemma-3n-E4B-it",
+      "modelId": "google/gemma-3n-E4B-it-litert-lm",
+      "modelFile": "gemma-3n-E4B-it-int4.litertlm",
+      "description": "A variant of [Gemma 3n E4B](https://ai.google.dev/gemma/docs/gemma-3n) ready for deployment on Android using [LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM/blob/main/kotlin/README.md). It supports text, vision, and audio input, with 4096 context length.",
+      "sizeInBytes": 4919541760,
+      "minDeviceMemoryInGb": 12,
+      "commitHash": "297ed75955702dec3503e00c2c2ecbbf475300bc",
+      "llmSupportImage": true,
+      "llmSupportAudio": true,
+      "defaultConfig": {
+        "topK": 64,
+        "topP": 0.95,
+        "temperature": 1.0,
+        "maxTokens": 4096,
+        "accelerators": "cpu,gpu"
+      },
+      "taskTypes": ["llm_chat", "llm_prompt_lab", "llm_ask_image", "llm_ask_audio"]
+    },
+    {
+      "name": "Gemma3-1B-IT",
+      "modelId": "litert-community/Gemma3-1B-IT",
+      "modelFile": "gemma3-1b-it-int4.litertlm",
+      "description": "A variant of [google/Gemma-3-1B-IT](https://huggingface.co/google/Gemma-3-1B-IT) with 4-bit quantization ready for deployment on Android using [LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM/blob/main/kotlin/README.md).",
+      "sizeInBytes": 584417280,
+      "minDeviceMemoryInGb": 6,
+      "commitHash": "42d538a932e8d5b12e6b3b455f5572560bd60b2c",
+      "defaultConfig": {
+        "topK": 64,
+        "topP": 0.95,
+        "temperature": 1.0,
+        "maxTokens": 1024,
+        "accelerators": "gpu,cpu"
+      },
+      "taskTypes": ["llm_chat", "llm_prompt_lab"],
+      "bestForTaskTypes": ["llm_chat", "llm_prompt_lab"]
+    },
+    {
+      "name": "Qwen2.5-1.5B-Instruct",
+      "modelId": "litert-community/Qwen2.5-1.5B-Instruct",
+      "modelFile": "Qwen2.5-1.5B-Instruct_multi-prefill-seq_q8_ekv4096.litertlm",
+      "description": "A variant of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) ready for deployment on Android using [LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM/blob/main/kotlin/README.md).",
+      "sizeInBytes": 1597931520,
+      "minDeviceMemoryInGb": 6,
+      "commitHash": "19edb84c69a0212f29a6ef17ba0d6f278b6a1614",
+      "defaultConfig": {
+        "topK": 20,
+        "topP": 0.8,
+        "temperature": 0.7,
+        "maxTokens": 4096,
+        "accelerators": "gpu,cpu"
+      },
+      "taskTypes": ["llm_chat", "llm_prompt_lab"]
+    },
+    {
+      "name": "DeepSeek-R1-Distill-Qwen-1.5B",
+      "modelId": "litert-community/DeepSeek-R1-Distill-Qwen-1.5B",
+      "modelFile": "DeepSeek-R1-Distill-Qwen-1.5B_multi-prefill-seq_q8_ekv4096.litertlm",
+      "description": "A variant of [deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) ready for deployment on Android using [LiteRT-LM](https://github.com/google-ai-edge/LiteRT-LM/blob/main/kotlin/README.md).",
+      "sizeInBytes": 1833451520,
+      "minDeviceMemoryInGb": 6,
+      "commitHash": "e34bb88632342d1f9640bad579a45134eb1cf988",
+      "defaultConfig": {
+        "topK": 64,
+        "topP": 0.95,
+        "temperature": 1.0,
+        "maxTokens": 4096,
+        "accelerators": "gpu,cpu"
+      },
+      "taskTypes": ["llm_chat", "llm_prompt_lab"]
+    },
+    {
+      "name": "TinyGarden-270M",
+      "modelId": "litert-community/functiongemma-270m-ft-tiny-garden",
+      "modelFile": "tiny_garden_q8_ekv1024.litertlm",
+      "description": "Fine-tuned Function Gemma 270M model for Tiny Garden.",
+      "sizeInBytes": 288964608,
+      "minDeviceMemoryInGb": 6,
+      "commitHash": "c205853ff82da86141a1105faa2344a8b176dfe7",
+      "defaultConfig": {
+        "topK": 64,
+        "topP": 0.95,
+        "temperature": 0.0,
+        "maxTokens": 1024,
+        "accelerators": "cpu"
+      },
+      "taskTypes": [
+        "llm_tiny_garden"
+      ],
+      "bestForTaskTypes": [
+        "llm_tiny_garden"
+      ]
+    },
+    {
+      "name": "MobileActions-270M",
+      "modelId": "litert-community/functiongemma-270m-ft-mobile-actions",
+      "modelFile": "mobile_actions_q8_ekv1024.litertlm",
+      "description": "Fine-tuned Function Gemma 270M model for Mobile Actions.",
+      "sizeInBytes": 288964608,
+      "minDeviceMemoryInGb": 6,
+      "commitHash": "38942192c9b723af836d489074823ff33d4a3e7a",
+      "defaultConfig": {
+        "topK": 64,
+        "topP": 0.95,
+        "temperature": 0.0,
+        "maxTokens": 1024,
+        "accelerators": "cpu"
+      },
+      "taskTypes": [
+        "llm_mobile_actions"
+      ],
+      "bestForTaskTypes": [
+        "llm_mobile_actions"
+      ]
+    }
+  ]
+}
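A minimal sanity check for an allowlist file like the one above can be sketched in Python. The required-field list below is taken from the entries in this commit; the validation rules themselves are an assumption for illustration, not the app's actual schema check:

```python
import json

# Fields present on every model entry in this commit's allowlist.
# Treating them as required is our assumption, not the app's schema.
REQUIRED = ("name", "modelId", "modelFile", "sizeInBytes",
            "minDeviceMemoryInGb", "commitHash", "defaultConfig", "taskTypes")

def validate_allowlist(text):
    """Return a list of human-readable problems found in an allowlist JSON string."""
    problems = []
    data = json.loads(text)
    for i, model in enumerate(data.get("models", [])):
        for field in REQUIRED:
            if field not in model:
                problems.append(f"model {i}: missing {field!r}")
        if model.get("sizeInBytes", 0) <= 0:
            problems.append(f"model {i}: sizeInBytes must be positive")
        # Every "best for" task should also appear in taskTypes.
        extra = set(model.get("bestForTaskTypes", [])) - set(model.get("taskTypes", []))
        if extra:
            problems.append(f"model {i}: bestForTaskTypes not in taskTypes: {sorted(extra)}")
    return problems
```

Running this over the file before publishing would catch a truncated entry or a `bestForTaskTypes` value that no longer matches `taskTypes`.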

skills/README.md

Lines changed: 5 additions & 0 deletions
@@ -415,6 +415,11 @@ to the app by using the skill url.
 3. Enter the skill url in the popup dialog. The url should be pointing to the
    **skill folder** itself.
 
+   **Verify your URL**: Ensure the URL is correct by loading the `SKILL.md`
+   file in your browser (e.g., `https://your/url/SKILL.md`). If the raw content
+   of the file displays correctly, your URL is ready to use (after removing the
+   `SKILL.md` suffix).
+
 > [!IMPORTANT]
 >
 > To avoid webview loading failures, you must host your **JS skill** assets on
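The URL verification step added above can be mechanized. These hypothetical helpers simply mirror the described convention (skill folder URL plus `SKILL.md`); they are not part of the app's code:

```python
def skill_md_url(skill_folder_url: str) -> str:
    """Build the SKILL.md probe URL to open in a browser from a skill-folder URL."""
    return skill_folder_url.rstrip("/") + "/SKILL.md"

def folder_url_from_probe(probe_url: str) -> str:
    """Recover the loadable skill-folder URL by stripping the SKILL.md suffix."""
    suffix = "/SKILL.md"
    if not probe_url.endswith(suffix):
        raise ValueError("not a SKILL.md URL: " + probe_url)
    return probe_url[: -len(suffix)]
```

If the probe URL renders the raw `SKILL.md` content, the recovered folder URL is the one to paste into the app's dialog.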
