Hi! First of all, thanks for TalkingHead — it's an amazing project.
I'm working on a real-time 3D avatar SDK where an LLM controls a TalkingHead avatar via voice. I needed richer animations than the built-in emojis/gestures, so I extracted a small plugin called MotionEngine.
What it does
MotionEngine plays multi-layered motions defined in pure JSON — synchronizing facial expressions, hand gestures, and bone oscillation overlays (via poseDelta) in a single call:
```javascript
const engine = new MotionEngine(head);
engine.registerMotions(motions);
head.opt.update = (dt) => engine.update(dt);
await engine.play('wave_right'); // smile + hand raise + wrist oscillation
```

Each motion combines up to 3 layers:
- Visemes/blendshapes — facial expressions (smile, eyebrow raise, etc.)
- Gesture commands — TalkingHead hand poses (handup, thumbup, etc.)
- Bone overlays — sinusoidal oscillation on skeleton bones (e.g., waving hand, hip bounce)
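For concreteness, a three-layer motion definition might look like the sketch below. This is purely illustrative: the field names (`duration`, `blendshapes`, `gesture`, `boneOverlays`, etc.) are hypothetical and not necessarily the exact schema used in src/motions.json.

```javascript
// Hypothetical sketch of a motion combining all three layers.
// Field names are illustrative, not the actual motions.json schema.
const waveRight = {
  duration: 2.0, // seconds
  // Layer 1: facial blendshapes (smile + raised brows)
  blendshapes: { mouthSmile: 0.6, browOuterUpRight: 0.3 },
  // Layer 2: a native TalkingHead hand pose
  gesture: 'handup',
  // Layer 3: sinusoidal bone overlay applied on top via poseDelta
  boneOverlays: [
    { bone: 'RightHand', axis: 'z', amplitude: 0.25, frequency: 3 }
  ]
};
```

Because each layer is optional, a motion can be anything from a single blendshape tweak to a fully synchronized face + hand + skeleton animation.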
It falls back gracefully to native TalkingHead gestures, emojis, and poses when a custom motion isn't found.
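The fallback order could be sketched roughly as follows. This is illustrative only; the names and the actual resolution logic in src/MotionEngine.js may differ.

```javascript
// Rough sketch of the fallback described above: prefer a registered
// custom motion, then a native TalkingHead gesture, otherwise delegate
// to TalkingHead's own emoji/pose handling. Names are illustrative.
function resolveMotion(customMotions, nativeGestures, name) {
  if (name in customMotions) {
    return { kind: 'custom', motion: customMotions[name] };
  }
  if (nativeGestures.includes(name)) {
    return { kind: 'gesture', name }; // e.g. 'handup', 'thumbup'
  }
  return { kind: 'native', name }; // let TalkingHead handle emoji/pose
}
```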
Live demo
https://lhupyn.github.io/motion-engine/
The demo loads brunette.glb from your GitHub Pages and shows a side-by-side comparison: TalkingHead native animations vs MotionEngine custom motions.
Repo
https://github.com/lhupyn/motion-engine
- src/MotionEngine.js — ~250 lines, no DOM dependencies, works as a pure plugin
- src/motions.json — 20 built-in motions (wave, thumbs up, nod, shrug, celebrate, jump, etc.)
Why this matters for LLM-driven avatars
When an LLM controls an avatar, it needs a semantic vocabulary — not low-level blendshape values. MotionEngine lets an AI agent call engine.play("thinking") instead of orchestrating 8 individual parameters. And since motions are pure JSON, they can be authored or even generated by an LLM at runtime.
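As an example of what that vocabulary looks like in practice, exposing MotionEngine through a function-calling interface might be sketched like this. The tool schema and dispatcher are hypothetical, not part of the repo; the schema shape follows the common JSON-schema style used by LLM tool APIs.

```javascript
// Hypothetical: expose engine.play() as a single LLM tool so the model
// emits one semantic call instead of low-level blendshape values.
const playMotionTool = {
  name: 'play_motion',
  description: 'Play a named avatar motion (wave_right, thinking, shrug, ...)',
  parameters: {
    type: 'object',
    properties: { motion: { type: 'string' } },
    required: ['motion']
  }
};

// Dispatch a tool call from the LLM to the engine.
async function handleToolCall(engine, call) {
  if (call.name === 'play_motion') {
    await engine.play(call.arguments.motion);
  }
}
```

Since motions are plain JSON, the same interface could even accept a motion definition generated by the model on the fly and register it before playing.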
This is still a proof of concept, but I wanted to share it in case it's useful for the TalkingHead community. Would love to hear your thoughts!