AI-powered cultural heritage companion
Brings museum artifacts to life through real-time conversation, image restoration, and interactive discovery.
Built for the Gemini Live Agent Challenge.
Select your language and start → Grant camera & microphone access → "Point your camera at an artifact, and a thousand-year story begins."
Gemini Live session initializes with your museum context → AI curator greets with today's live exhibitions via Google Search Grounding
recognize_artifact identifies the Winged Victory in real-time → Voice conversation with historical narration → AI-generated restoration showing the original Hellenistic appearance
- Live AI Curator — Real-time voice/video conversation powered by Gemini Live API. Point your camera at an artifact and ask questions naturally.
- Artifact Recognition — Identifies artifacts through the camera and provides historical context, era, civilization, and fun facts.
- Image Restoration — Generates historically accurate restorations of damaged artifacts using Gemini Flash.
- Nearby Discovery — Find museums and cultural heritage sites near your location via Google Places API.
- Visit Diary — Auto-generates illustrated diary entries summarizing your museum visit.
- Museum-Aware Onboarding — Select your museum before starting; the AI greets you with context about current exhibitions.
| Layer | Technology |
|---|---|
| Frontend | Next.js 15, React 19, TypeScript 5, Tailwind CSS 4 |
| AI | Gemini Live API, Google ADK, @google/genai |
| Database | Firebase Firestore, Firebase Auth |
| Maps | Google Places API (New), Geolocation API |
| Deploy | Docker, Cloud Run (Seoul), GitHub Actions CI/CD |
- Node.js 20+ and npm 10+
- Google Chrome (recommended) — Microphone & camera permissions work best on Chrome
- API keys (see Step 2 below)
```bash
git clone https://github.com/wigtn/wigtn-timelens.git
cd wigtn-timelens
npm install
```
Copy the template first:

```bash
cp .env.example .env.local
```
This is the only key you need to use the core features: voice conversation, artifact recognition, image restoration, and diary generation.
- Go to Google AI Studio
- Click "Create API Key"
- Copy the key into `.env.local`:

```env
GOOGLE_GENAI_API_KEY=your_gemini_api_key_here
```
With just this key, you can start the app and use Live AI Curator, Artifact Recognition, Image Restoration, and Visit Diary.
Without Firebase, the app works fully but session history and diary sharing won't persist across page reloads.
- Go to Firebase Console → Create a project (or use existing)
- Enable Authentication → Sign-in method → Anonymous → Enable
- Enable Cloud Firestore → Create database → Start in test mode
- Go to Project Settings → General → scroll to "Your apps" → click the Web icon (`</>`) → Register app
- Copy the config values into `.env.local`:

```env
NEXT_PUBLIC_FIREBASE_API_KEY=your_firebase_api_key
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=your-project.firebaseapp.com
NEXT_PUBLIC_FIREBASE_PROJECT_ID=your-project-id
```
Without these keys, the "What's nearby?" discovery feature won't work, but all other features remain fully functional.
- Go to Google Cloud Console → APIs & Services → Credentials
- Create an API key (or use existing)
- Enable these APIs in APIs & Services → Library:
- Maps JavaScript API (for museum map display)
- Places API (New) (for nearby museum/heritage site search)
- Copy the keys into `.env.local`:

```env
NEXT_PUBLIC_GOOGLE_MAPS_API_KEY=your_maps_api_key
GOOGLE_PLACES_API_KEY=your_places_api_key
```
Tip: You can use the same API key for both if both APIs are enabled on it.
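As a rough illustration of how the server side could query nearby museums with this key, here is a minimal sketch of a Places API (New) `places:searchNearby` request. The helper name `buildNearbyRequest` and the body fields shown are illustrative assumptions, not the repo's actual code:

```typescript
// Hypothetical sketch of the request a server route might send to
// Places API (New). Field names follow the places:searchNearby REST schema.
function buildNearbyRequest(apiKey: string, lat: number, lng: number, radiusMeters = 3000) {
  return {
    url: "https://places.googleapis.com/v1/places:searchNearby",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "X-Goog-Api-Key": apiKey,
        // A field mask is required; request only what the UI renders.
        "X-Goog-FieldMask": "places.displayName,places.location,places.formattedAddress",
      },
      body: JSON.stringify({
        includedTypes: ["museum"],
        maxResultCount: 10,
        locationRestriction: {
          circle: { center: { latitude: lat, longitude: lng }, radius: radiusMeters },
        },
      }),
    },
  };
}

// Usage (server-side, key never reaches the client):
//   const { url, init } = buildNearbyRequest(process.env.GOOGLE_PLACES_API_KEY!, 37.52, 126.98);
//   const places = await fetch(url, init).then((r) => r.json());
```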
```env
# Gemini (Required — powers all AI features)
GOOGLE_GENAI_API_KEY=✅

# Firebase (Optional — session persistence & diary sharing)
NEXT_PUBLIC_FIREBASE_API_KEY=
NEXT_PUBLIC_FIREBASE_AUTH_DOMAIN=
NEXT_PUBLIC_FIREBASE_PROJECT_ID=

# Maps & Places (Optional — museum search & nearby discovery)
NEXT_PUBLIC_GOOGLE_MAPS_API_KEY=
GOOGLE_PLACES_API_KEY=

# App URL (keep default for local dev)
NEXT_PUBLIC_APP_URL=http://localhost:3000
```
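The key layering above (one required key, the rest optional with graceful degradation) could be expressed as a small startup check. This helper is a hypothetical sketch, not part of the repo:

```typescript
// Hypothetical startup check mirroring the env checklist:
// only the Gemini key is required; missing optional keys just disable features.
type EnvReport = { ok: boolean; missingOptional: string[] };

function checkEnv(env: Record<string, string | undefined>): EnvReport {
  const optional = [
    "NEXT_PUBLIC_FIREBASE_API_KEY",
    "NEXT_PUBLIC_GOOGLE_MAPS_API_KEY",
    "GOOGLE_PLACES_API_KEY",
  ];
  return {
    ok: Boolean(env.GOOGLE_GENAI_API_KEY), // core AI features need this one key
    missingOptional: optional.filter((k) => !env[k]),
  };
}

// Usage: const report = checkEnv(process.env);
// report.missingOptional can be logged as a warning rather than a hard failure.
```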
```bash
npm run dev
```
Open http://localhost:3000 in Chrome.
- Allow permissions — Grant microphone and camera access when prompted
- Select a museum — Pick one from the nearby list, search by name, or skip to start directly
- Start a session — The AI curator will greet you with context about current exhibitions
- Try these voice commands:
- "이거 뭐야?" / "What is this?" — Point camera at an artifact
- "원래 어떻게 생겼어?" / "Show me the original" — Restoration
- "근처에 박물관 있어?" / "What's nearby?" — Discovery
- "다이어리 만들어줘" / "Create my diary" — Visit diary
| Issue | Solution |
|---|---|
| Microphone not working | Check Chrome permissions (lock icon in address bar) |
| Camera black screen | Ensure no other app is using the camera |
| "API key not configured" | Verify `GOOGLE_GENAI_API_KEY` is set in `.env.local`, then restart `npm run dev` |
| Museum search returns empty | Places API keys are optional; check that Places API (New) is enabled if you added them |
| Firebase warnings in console | Firebase keys are optional; session data won't persist without them but the app works |
```bash
npm run dev        # Dev server (Turbopack)
npm run build      # Production build
npm start          # Production server
npm run lint       # ESLint
npm run type-check # TypeScript validation
```
TimeLens runs on a dual-pipeline architecture:
- Pipeline 1 — Live Streaming: A persistent WebSocket session with the Gemini Live API (`gemini-2.5-flash-native-audio`). Microphone audio (PCM16, 16 kHz) and camera frames (JPEG, 1 fps) stream into the model simultaneously. The model responds with real-time voice output and triggers function calls when needed.
- Pipeline 2 — REST On-Demand: Server-side API routes handle heavier tasks such as image generation and external API calls. These are invoked by function calls from Pipeline 1.
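The Pipeline 1 media payloads can be sketched as small builders. This assumes the `sendRealtimeInput` message shape of the `@google/genai` Live API; the helper names are illustrative, not the repo's actual code:

```typescript
// Sketch of the Pipeline 1 realtime payload shapes (assumed to follow the
// @google/genai Live API's sendRealtimeInput input format).
function audioChunk(base64Pcm: string) {
  // Microphone audio: PCM16 mono sampled at 16 kHz, base64-encoded
  return { audio: { data: base64Pcm, mimeType: "audio/pcm;rate=16000" } };
}

function videoFrame(base64Jpeg: string) {
  // Camera frames: JPEG, downsampled to ~1 fps before streaming
  return { video: { data: base64Jpeg, mimeType: "image/jpeg" } };
}

// In the streaming loop, both feeds go into the same session:
//   session.sendRealtimeInput(audioChunk(pcmBase64));
//   session.sendRealtimeInput(videoFrame(jpegBase64));
// Tool calls emitted by the model then fan out to the REST routes (Pipeline 2).
```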
The Gemini Live Agent uses 4 function declarations to route user intent — no intent classifier needed. The model decides which tool to call based on conversation context:
| Tool | Trigger | Backend | Pipeline |
|---|---|---|---|
| `recognize_artifact` | Camera frame detected | Gemini Live API + Google Search Grounding | In-session (no REST call) |
| `generate_restoration` | "Show me the original" | `POST /api/restore` → Gemini 2.5 Flash Image | REST |
| `discover_nearby` | "What's nearby?" + GPS | `GET /api/discover` → Google Places API | REST |
| `create_diary` | "Make my diary" | `POST /api/diary/generate` → Gemini 3 Pro Image | REST |
`recognize_artifact` is the only tool that stays entirely within the Live session — camera frames are already streaming, so the model analyzes them directly with Google Search Grounding. The other three tools call REST endpoints.
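The four declarations handed to the Live session can be sketched as plain data. The tool names below match the table above, but the descriptions and parameter schemas are illustrative assumptions, not the repo's actual definitions:

```typescript
// Hypothetical sketch of the four tool declarations registered with the
// Live session. Names match the tool table; schemas are illustrative.
const toolDeclarations = [
  {
    name: "recognize_artifact",
    description: "Identify the artifact visible in the current camera frame",
    parameters: { type: "OBJECT", properties: {} },
  },
  {
    name: "generate_restoration",
    description: "Generate a historically accurate restoration image",
    parameters: {
      type: "OBJECT",
      properties: { artifactName: { type: "STRING" } },
      required: ["artifactName"],
    },
  },
  {
    name: "discover_nearby",
    description: "Find museums and heritage sites near the user",
    parameters: {
      type: "OBJECT",
      properties: {
        latitude: { type: "NUMBER" },
        longitude: { type: "NUMBER" },
      },
      required: ["latitude", "longitude"],
    },
  },
  {
    name: "create_diary",
    description: "Summarize the visit as an illustrated diary entry",
    parameters: { type: "OBJECT", properties: {} },
  },
];

// Typically passed into the session config as:
//   { tools: [{ functionDeclarations: toolDeclarations }] }
```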
TimeLens is built with both `@google/genai` and `@google/adk`:

| `@google/genai` (SDK) | `@google/adk` (Agent Development Kit) |
|---|---|
| Primary path — powers the real-time Live experience | Fallback path — text-based agent orchestration |
| 10 source files | Agent hierarchy: orchestrator + 4 specialists |
| Always active — voice + camera streaming | Activates when WebSocket is unavailable |
Both paths share the same backend APIs — whether a user speaks or types, they get the same restoration, discovery, and diary capabilities. Run `npx tsx scripts/adk-demo.ts` to see the ADK agents in action.
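The path selection described above could be expressed as a one-line decision. This helper is a hypothetical illustration of the fallback logic, not code from the repo:

```typescript
// Illustrative sketch of the SDK/ADK path selection: prefer the Live
// WebSocket pipeline, fall back to text-based ADK orchestration otherwise.
type AgentPath = "live" | "adk";

function pickAgentPath(hasWebSocket: boolean, hasMedia: boolean): AgentPath {
  // The Live path needs both a WebSocket transport and mic/camera access;
  // if either is unavailable, the ADK text agents take over.
  return hasWebSocket && hasMedia ? "live" : "adk";
}
```

Either way, the chosen path hits the same backend routes, which is what keeps the two experiences consistent.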
| Route | Purpose | Backend |
|---|---|---|
| `POST /api/session` | Create session + ephemeral token | Gemini API |
| `GET /api/museums/nearby` | GPS-based museum search | Places API |
| `GET /api/museums/search` | Text search for museums | Places API |
| `POST /api/restore` | Generate artifact restoration | Gemini Flash |
| `GET /api/discover` | Find nearby heritage sites | Places API |
| `POST /api/diary/generate` | Generate visit diary | Gemini + Firestore |
| `GET /api/diary/[id]` | Retrieve diary | Firestore |
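A client-side call to one of these routes might look like the sketch below. The request body fields (`image`, `artifactName`) are illustrative assumptions, not the repo's actual schema:

```typescript
// Hypothetical sketch of a client call to the restoration route.
// Body fields are illustrative; the real route may expect a different schema.
function buildRestoreRequest(imageBase64: string, artifactName: string) {
  return {
    url: "/api/restore",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ image: imageBase64, artifactName }),
    },
  };
}

// Usage: const { url, init } = buildRestoreRequest(frameBase64, "Winged Victory");
// const result = await fetch(url, init).then((r) => r.json());
```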
```
src/
  app/           # Next.js pages & API routes
  shared/        # Shared types, Gemini tools, configs
  web/           # Client components & hooks
  back/          # Server-side logic (agents, geo, Firebase)
    agents/      # ADK agents (orchestrator + 4 specialists)
      tools/     # FunctionTool implementations
mobile/          # React Native + Expo app
scripts/         # ADK demo script
firebase/        # Firestore & Storage security rules
docs/            # PRDs, design docs
assets/          # Logo, architecture diagrams
.github/         # GitHub Actions CI/CD
```
Deployed to Google Cloud Run (asia-northeast3, Seoul) via GitHub Actions.
```bash
# Manual build (optional)
docker build -t timelens .
docker run -p 8080:8080 timelens
```
| Service | Purpose |
|---|---|
| Cloud Run | Production deployment (Seoul region) |
| Firebase Auth | Anonymous authentication |
| Cloud Firestore | Session, visit, and diary storage |
| Google Places API | Museum and heritage site search |
This project is licensed under the Apache License 2.0.
Built for the Gemini Live Agent Challenge hackathon.










