Welcome to Lunar Tools, a comprehensive toolkit designed to facilitate the programming of interactive exhibitions. Our suite of simple, modular tools is crafted to offer a seamless and hopefully bug-free experience for both exhibitors and visitors.
Make sure you have Python >= 3.10.
python -m pip install git+https://github.com/lunarring/lunar_tools

On Ubuntu, you may have to install additional dependencies for sound playback/recording:
sudo apt-get install libasound2-dev libportaudio2

For running the MIDI controllers, you might have to create a symlink:
cd /usr/lib/x86_64-linux-gnu/
sudo ln -s alsa-lib/libasound_module_conf_pulse.so libasound_module_conf_pulse.so

Our system includes a convenient automatic mode for reading and writing API keys. This feature enables you to dynamically set your API key as needed, and the file will be stored on your local computer. However, if you prefer, you can specify your API keys in your shell configuration file (e.g. ~/.bash_profile, ~/.zshrc, or ~/.bashrc). In this case, paste the lines below with the API keys you want to add.
export OPENAI_API_KEY="XXX"
export REPLICATE_API_TOKEN="XXX"
export ELEVEN_API_KEY="XXX"
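If you want to confirm the variables are actually visible to Python before launching anything, a quick standard-library check (independent of lunar_tools) is:

```python
import os

# Print which of the expected API keys are present in the environment.
for key in ("OPENAI_API_KEY", "REPLICATE_API_TOKEN", "ELEVEN_API_KEY"):
    print(key, "is set" if os.getenv(key) else "is missing")
```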
Runnable input snippets live in examples/inputs. Launch them from the repo root to validate your hardware and copy/paste the relevant code into your own project.

examples/inputs/audio_recorder_example.py exposes lt.AudioRecorder through two CLI flags so you can verify your microphone pipeline without touching code.
python examples/inputs/audio_recorder_example.py --seconds 5 --output myvoice.mp3
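If you'd rather verify the microphone without lunar_tools at all, a minimal sketch with the sounddevice and soundfile packages (an assumption, not necessarily what lt.AudioRecorder uses) records a short clip:

```python
import sounddevice as sd
import soundfile as sf

fs = 44100       # sample rate in Hz
seconds = 5

# Record a mono clip from the default input device, blocking until done.
clip = sd.rec(int(seconds * fs), samplerate=fs, channels=1)
sd.wait()
sf.write("mic_check.wav", clip, fs)
print("wrote mic_check.wav")
```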
examples/inputs/webcam_live_renderer.py pairs lt.WebCam with lt.Renderer and displays a live preview window for whichever camera ID (or auto-probed device) you pass in.
python examples/inputs/webcam_live_renderer.py --cam-id auto
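To sanity-check the camera outside lunar_tools, a raw OpenCV preview loop (assuming opencv-python is installed) looks like this:

```python
import cv2

cap = cv2.VideoCapture(0)  # camera ID 0; change if you have several devices
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("webcam check", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```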
examples/inputs/meta_input_inspector.py uses lt.MetaInput to detect a MIDI controller (or keyboard fallback) and continuously prints one slider + one button so you can confirm your mappings on the spot.
python examples/inputs/meta_input_inspector.py
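If the inspector shows nothing, it can help to look at the raw MIDI stream first; a small sketch with the mido package (an assumption here, installed via pip install mido python-rtmidi) dumps incoming messages:

```python
import mido

print(mido.get_input_names())  # list the MIDI ports the OS can see

# Open the default input port and print every message as it arrives.
with mido.open_input() as port:
    for msg in port:
        print(msg)  # e.g. control_change or note_on events
```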
examples/inputs/movie_reader_example.py wraps lt.MovieReader with a CLI so you can inspect frame shapes, counts, and FPS before embedding any mp4 into your pipeline.
python examples/inputs/movie_reader_example.py my_movie.mp4 --max-frames 10
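The same metadata can be pulled with plain OpenCV if you want to cross-check what the wrapper reports (a sketch, assuming opencv-python):

```python
import cv2

cap = cv2.VideoCapture("my_movie.mp4")
print("frames:", int(cap.get(cv2.CAP_PROP_FRAME_COUNT)))
print("fps:", cap.get(cv2.CAP_PROP_FPS))
ok, frame = cap.read()
if ok:
    print("frame shape:", frame.shape)  # (height, width, channels)
cap.release()
```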
Runnable output demos live in examples/outputs. Each script is a ready-to-run showcase that you can copy into your own pipeline or execute as-is.

examples/outputs/sound_playback_generated_sine.py demonstrates lt.SoundPlayer by first writing a generated 440 Hz sine to disk, then streaming a 660 Hz tone directly from memory via play_audiosegment.
python examples/outputs/sound_playback_generated_sine.py
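An in-memory tone like this can be built with numpy plus pydub (the AudioSegment type that play_audiosegment hints at; treat the exact pairing as an assumption):

```python
import numpy as np
from pydub import AudioSegment

fs = 44100  # sample rate in Hz
t = np.linspace(0, 1.0, fs, endpoint=False)
wave = (0.5 * np.sin(2 * np.pi * 660 * t) * 32767).astype(np.int16)

# Wrap the raw 16-bit PCM samples in a pydub AudioSegment.
tone = AudioSegment(
    data=wave.tobytes(),
    sample_width=2,   # bytes per sample (int16)
    frame_rate=fs,
    channels=1,
)
tone.export("tone_660hz.mp3", format="mp3")  # mp3 export requires ffmpeg
```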
examples/outputs/display_multi_backend_example.py spins up lt.Renderer and cycles through NumPy, Pillow, and Torch backends (whichever are installed) to render random RGBA frames in one looping window.
python examples/outputs/display_multi_backend_example.py

Note: you can speed up OpenGL render calls by up to a factor of 3 by disabling VSYNC on your system. On Ubuntu: 1. Run nvidia-settings. 2. Go to Screen 0 > OpenGL > Sync to VBlank and switch it off.
examples/outputs/realtime_console_updates_example.py combines
lt.FPSTracker, lt.LogPrint, and dynamic_print to stream live progress
messages while measuring per-segment timings.
python examples/outputs/realtime_console_updates_example.py
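In-place console updates of the dynamic_print kind boil down to carriage-return printing; a plain-Python equivalent (not lunar_tools' actual implementation) is:

```python
import time

for i in range(101):
    # \r returns the cursor to the line start so the text overwrites itself.
    print(f"\rprocessing... {i:3d}%", end="", flush=True)
    time.sleep(0.02)
print()  # final newline once the loop is done
```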
examples/outputs/logprint_example.py showcases lt.LogPrint formatting, highlighting how to stream colored, timestamped console output.
python examples/outputs/logprint_example.py

examples/outputs/movie_saver_example.py creates a short mp4 using random RGB frames so you can validate codec support and file permissions.
python examples/outputs/movie_saver_example.py --output my_movie.mp4 --frames 10 --fps 24
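The same codec/permissions smoke test can be done with raw OpenCV if you want to rule lunar_tools out of the equation (a sketch, assuming opencv-python):

```python
import cv2
import numpy as np

w, h, fps = 640, 480, 24
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
out = cv2.VideoWriter("codec_check.mp4", fourcc, fps, (w, h))
for _ in range(10):
    frame = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    out.write(frame)  # expects BGR uint8 frames of the declared size
out.release()
print("wrote codec_check.mp4")
```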
lunar_tools.comms.get_local_ip inspects network interfaces to determine the best IP to share with peers. Run the example below to print the detected address or see a friendly warning if one cannot be determined (for example, on air-gapped machines).

python examples/comms/get_local_ip_example.py
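A common way such helpers find the outward-facing address is the UDP "connect" trick; this sketch shows the general technique, not necessarily lunar_tools' exact implementation:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # No packets are sent; connecting a UDP socket just makes the OS
    # pick the interface it would route through.
    s.connect(("8.8.8.8", 80))
    print(s.getsockname()[0])
finally:
    s.close()
```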
Low-latency data channel built on WebRTC for streaming numpy arrays, JSON blobs, PNG previews, and log text. Requires the optional aiortc extra (python -m pip install "lunar_tools[webrtc]").

Sender (hosts an embedded signaling server and streams mixed payloads):
python examples/comms/webrtc_sender.py --session demo

Receiver (auto-discovers the sender session via the cached signaling endpoint):
python examples/comms/webrtc_receiver.py --session demo

- --sender-ip defaults to the detected local address (via lunar_tools.comms.utils.get_local_ip).
- When the sender hosts the embedded signaling server it stores the endpoint details per session in ~/.lunar_tools/webrtc_sessions.json. Receivers can omit --sender-ip to reuse the most recent entry for the requested session, which keeps the bootstrap process simple.
- If you prefer using your own signaling server, start it separately (or pass --no-server in the sender example) and point both peers to the same http://<sender-ip>:<port> URL.
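For debugging the bootstrap, you can peek at the cached session file directly; the schema isn't documented here, so this sketch just pretty-prints whatever is stored:

```python
import json
from pathlib import Path

cache = Path.home() / ".lunar_tools" / "webrtc_sessions.json"
if cache.exists():
    sessions = json.loads(cache.read_text())
    # Assumed layout: some mapping from session name to endpoint details.
    print(json.dumps(sessions, indent=2))
else:
    print("no cached sessions yet; run the sender once first")
```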
High-level OSC helper built on python-osc. The receiver example spawns the live grid visualizer, and the sender emits demo sine/triangle waves.
Receiver:
python examples/comms/osc_receiver.py --ip 0.0.0.0 --port 8003

Sender:
python examples/comms/osc_sender.py --ip 127.0.0.1 --port 8003 --channels /env1 /env2 /env3
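Because the helpers are built on python-osc, you can also talk to them from the raw library; a minimal sender that emits a slow sine on one channel looks like this (sketch):

```python
import math
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 8003)
t = 0.0
while True:
    # Send a sine wave on /env1; add more channels as needed.
    client.send_message("/env1", math.sin(t))
    t += 0.1
    time.sleep(0.05)
```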
One-to-one ZeroMQ stream that carries JSON blobs, compressed images, and raw PCM audio. Start the receiver first on the same machine (or pass --ip 0.0.0.0 if you want to accept remote peers), then launch the sender.

Receiver (binds locally):
python examples/comms/zmq_receiver.py --port 5556

Sender (connects to the receiver):
python examples/comms/zmq_sender.py --ip 127.0.0.1 --port 5556

ZMQPairEndpoint uses ZeroMQ's PAIR pattern, which is strictly one-to-one: exactly one sender and one receiver must be connected, and neither side can reconnect while the other is running. If you need fan-out/fan-in or resilient reconnection, prefer REQ/REP, PUB/SUB, or ROUTER/DEALER and stitch together the behavior you need on top of the raw zmq library.
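For reference, the PAIR pattern in raw pyzmq is only a few lines; this sketch binds one end and connects the other in the same process:

```python
import zmq

ctx = zmq.Context()

# One end binds...
receiver = ctx.socket(zmq.PAIR)
receiver.bind("tcp://127.0.0.1:5556")

# ...and exactly one peer connects; a second connect would violate PAIR.
sender = ctx.socket(zmq.PAIR)
sender.connect("tcp://127.0.0.1:5556")

sender.send_json({"hello": "world"})
print(receiver.recv_json())
```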
Voice-focused demos live in examples/voice. Each script below can be run directly from the repo root and pairs with the API snippets that follow.
examples/voice/realtime_voice_example.py is an interactive CLI that lets you start/pause/mute a RealTimeVoice session, inject messages, and update instructions on the fly.
python examples/voice/realtime_voice_example.py

examples/voice/deepgram_realtime_transcribe_example.py uses lt.RealTimeTranscribe (Deepgram SDK) to stream microphone audio and print live transcripts. Set DEEPGRAM_API_KEY before running.
python examples/voice/deepgram_realtime_transcribe_example.py

examples/voice/openai_speech_to_text_example.py records a short microphone clip and prints the transcript, with an optional flag to save the text to disk.
python examples/voice/openai_speech_to_text_example.py --seconds 5 --output transcript.txt
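A raw OpenAI transcription call, independent of the lt wrapper, looks like this (assuming the official openai package and an existing audio file):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("myvoice.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=f,
    )
print(transcript.text)
```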
examples/voice/openai_text_to_speech_example.py converts text to speech, saves it to an mp3, and can optionally stream the audio immediately with --play-inline.
python examples/voice/openai_text_to_speech_example.py --text "Testing 1 2 3" --voice nova --play-inline
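The equivalent raw OpenAI text-to-speech call (again an independent sketch, not the example's exact code) is:

```python
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",
    voice="nova",
    input="Testing 1 2 3",
)
speech.write_to_file("testing.mp3")  # save the returned audio bytes
```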
examples/voice/elevenlabs_text_to_speech_example.py targets the ElevenLabs API with inline playback plus flags for stability, similarity, style, and speaker boost.

python examples/voice/elevenlabs_text_to_speech_example.py --text "Hi from ElevenLabs" --voice-id EXAVITQu4vr4xnSDxMaL --play-inline
examples/ai/dalle3_generate_example.py calls lt.Dalle3ImageGenerator, saves the resulting PNG, and prints the revised prompt.
python examples/ai/dalle3_generate_example.py --prompt "A red house with snow and a chimney"
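The revised prompt comes from the DALL-E 3 API itself; a raw call with the official openai package (an independent sketch) returns both the image URL and that field:

```python
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A red house with snow and a chimney",
    size="1024x1024",
)
print(result.data[0].url)             # link to the generated image
print(result.data[0].revised_prompt)  # DALL-E 3's rewritten prompt
```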
examples/ai/sdxl_turbo_example.py uses the Replicate-powered lt.SDXL_TURBO helper and stores the PNG plus source URL for reference.
python examples/ai/sdxl_turbo_example.py --prompt "An astronaut riding a rainbow unicorn" --width 768 --height 512

examples/ai/nano_banana_edit_gradio.py launches a Gradio UI for interactive Flux/Nano Banana edits: drop in prompts, tweak sliders, and preview changes.
python examples/ai/nano_banana_edit_gradio.py

Obtain a bot here: https://docs.tracardi.com/qa/how_can_i_get_telegram_bot/ Next, you will need to update your ~/.bashrc or ~/.bash_profile with the Telegram bot environment variables:
export TELEGRAM_BOT_TOKEN='XXX'
export TELEGRAM_CHAT_ID='XXX'

See examples/health/telegram_health_reporter_example.py for a runnable heartbeat + alert demo (requires the env vars above):
python examples/health/telegram_health_reporter_example.py --name "My Exhibit" --interval 2 --count 5
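Telegram alerts of this kind go through the Bot API; a minimal sketch with the requests package sends one message using the env vars above (independent of the lt reporter):

```python
import os

import requests

token = os.environ["TELEGRAM_BOT_TOKEN"]
chat_id = os.environ["TELEGRAM_CHAT_ID"]

resp = requests.post(
    f"https://api.telegram.org/bot{token}/sendMessage",
    json={"chat_id": chat_id, "text": "Exhibit heartbeat: all good"},
    timeout=10,
)
resp.raise_for_status()
```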
To run the test suite, first install pytest:

pip install pytest

Make sure you are in the base folder, then run:
python -m pytest lunar_tools/tests/

To regenerate requirements.txt:

pipreqs . --force