sema-engine is a JavaScript audio signal engine for live coding, interactive music, machine listening, and browser-based DSP applications.
It provides a small API around a high-performance Web Audio AudioWorklet processor, a live-code compiler pipeline, sample loading, analyser support, shared buffers, and learner integration. It was extracted from Sema, a live coding playground for music and machine learning, and refactored as a reusable engine for the MIMIC project.
Status: experimental. APIs may change.
- Web Audio API engine built around an `AudioWorklet` processor.
- Maximilian-powered DSP objects compiled for browser use.
- Live-code compilation using Nearley grammars.
- ES module and UMD builds.
- Sample loading into the audio processor.
- Analyser utilities for visualisation.
- Shared buffer support for communication between the audio engine, UI, and learners.
- Integration hooks for machine learning workflows through `Learner`.
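The shared-buffer feature above builds on standard `SharedArrayBuffer` semantics. As a rough, engine-agnostic illustration (none of the identifiers below come from sema-engine's API):

```javascript
// Generic SharedArrayBuffer example: two typed-array views over the same
// memory see each other's writes without copying. This is plain JavaScript,
// not sema-engine's own shared-buffer API.
const sab = new SharedArrayBuffer(Float32Array.BYTES_PER_ELEMENT * 4);
const uiView = new Float32Array(sab);  // e.g. held by the UI thread
const dspView = new Float32Array(sab); // e.g. held by a worklet or learner

uiView[0] = 440; // the UI writes a parameter...
console.log(dspView[0]); // → 440, visible through the other view
```

In a real deployment the two views would live in different threads (main thread and AudioWorklet/worker), which is what makes the zero-copy sharing useful.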
sema-engine brings together three main parts:
- Maximilian DSP: C++ DSP objects are used as the audio foundation.
- Web Audio API AudioWorklet: the engine loads a custom `maxi-processor.js` worklet that runs audio code off the main JavaScript thread.
- Nearley compiler: language grammars are compiled from EBNF-style specifications and used to turn live-code text into DSP code that the engine can evaluate.
```bash
npm install sema-engine
```

The package exposes both ES module and UMD/CommonJS-compatible builds.
```js
import { Engine, compile, Learner, getBlock } from "sema-engine";

const engine = new Engine();

// This must point to the location where maxi-processor.js is served.
const assetBaseUrl = window.location.origin;

async function startEngine() {
  const ok = await engine.init(assetBaseUrl);
  if (!ok) {
    throw new Error("Failed to initialise sema-engine.");
  }
  engine.play();
}
```

`engine.init(assetBaseUrl)` loads the worklet processor from `${assetBaseUrl}/maxi-processor.js`, so make sure `maxi-processor.js` is available at that URL.
For example, if your app is served from `https://example.com/my-app` and you pass:

```js
await engine.init("https://example.com/my-app");
```

then the processor must be available at `https://example.com/my-app/maxi-processor.js`. When using a bundler, copy the file from `node_modules/sema-engine/maxi-processor.js` to your public/static output directory.
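The base-URL/path joining described above can be modelled with a small helper (`resolveAsset` is illustrative only, not part of the sema-engine API):

```javascript
// Sketch of how a base URL and a relative asset path combine into the
// final URL the engine fetches. resolveAsset is a hypothetical helper.
function resolveAsset(baseUrl, relativePath) {
  const base = baseUrl.replace(/\/+$/, "");      // drop trailing slashes
  const path = relativePath.replace(/^\/+/, ""); // drop leading slashes
  return `${base}/${path}`;
}

console.log(resolveAsset("https://example.com/my-app", "/maxi-processor.js"));
// → https://example.com/my-app/maxi-processor.js
```

The same joining rule applies to sample paths passed to `engine.loadSample()` below.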
You can also use the built module directly in a browser page:
```html
<script type="module">
  import { Engine, compile, Learner, getBlock } from "./index.mjs";

  const engine = new Engine();
  const assetBaseUrl = new URL(".", window.location.href).href.replace(/\/$/, "");

  document.getElementById("playButton").addEventListener("click", async () => {
    const ok = await engine.init(assetBaseUrl);
    if (!ok) {
      console.error("Failed to initialise sema-engine.");
      return;
    }
    engine.play();
  });
</script>
```

Samples are loaded relative to the same base URL passed to `engine.init()`.
```js
document.getElementById("loadSamplesButton").addEventListener("click", () => {
  if (!engine) {
    throw new Error("Engine not initialised. Start the engine first.");
  }
  try {
    engine.loadSample("909.wav", "/audio/909.wav");
    engine.loadSample("909b.wav", "/audio/909b.wav");
    engine.loadSample("909closed.wav", "/audio/909closed.wav");
    engine.loadSample("909open.wav", "/audio/909open.wav");
  } catch (error) {
    console.error("Failed to load samples:", error);
  }
});
```

If `assetBaseUrl` is `https://example.com/my-app`, then:

```js
engine.loadSample("909.wav", "/audio/909.wav");
```

loads `https://example.com/my-app/audio/909.wav`.

Use `compile(grammarSource, liveCodeSource)` to compile live-code text against a grammar specification. If compilation succeeds, pass the generated DSP code to the engine.
```js
function evalLiveCode(grammarSource, liveCodeSource) {
  if (!engine) {
    throw new Error("Engine not initialised. Start the engine first.");
  }
  try {
    const { errors, dspCode } = compile(grammarSource, liveCodeSource);
    if (errors && errors.length > 0) {
      console.error("Compilation errors:", errors);
      return;
    }
    if (dspCode) {
      engine.eval(dspCode);
    }
  } catch (error) {
    console.error("Failed to compile and evaluate live code:", error);
  }
}
```

`Learner` can be connected to the engine for machine learning workflows and shared-buffer communication.
```js
let learner;

document.getElementById("learnerButton").addEventListener("click", async () => {
  if (!engine) {
    throw new Error("Engine not initialised. Start the engine first.");
  }
  try {
    learner = new Learner();
    await engine.addLearner("l1", learner);
  } catch (error) {
    console.error("Error creating or initialising learner:", error);
  }
});
```

`getBlock` is a utility for extracting the current code block from an editor. In Sema-style CodeMirror editors, blocks are delimited by three or more underscores (`___`). Example:
```js
function evalJsBlock(editorJS) {
  if (!learner) {
    throw new Error("Learner not initialised. Create a learner first.");
  }
  if (!editorJS) {
    throw new Error("No JavaScript editor instance provided.");
  }
  const code = getBlock(editorJS);
  learner.eval(code);
}
```

| Export | Description |
|---|---|
| `Engine` | Main audio engine. Initialises the AudioContext, loads the worklet processor, evaluates DSP code, loads samples, manages analysers, and communicates with learners. |
| `compile` | High-level compiler function for turning grammar and live-code input into DSP code. |
| `compileGrammar` | Compiles a grammar specification. |
| `parse` | Parses source text with a compiled grammar. |
| `ASTreeToDSPcode` | Converts parsed AST structures into DSP code. |
| `ASTreeToJavascript` | JavaScript IR utilities. |
| `Learner` | Learner integration class for ML-related workflows. |
| `getBlock` | Extracts the active code block from an editor. |
| `Logger` | Logging utility used by the engine and processor. |
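As a rough model of the `___`-delimited block convention used by `getBlock` (the real export operates on an editor instance; `extractBlock` below works on plain text and is purely illustrative):

```javascript
// Split text into blocks on lines of three or more underscores and return
// the block containing the given (zero-based) line. Hypothetical sketch,
// not the real getBlock implementation.
function extractBlock(text, cursorLine) {
  const lines = text.split("\n");
  const blocks = [];
  let start = 0;
  lines.forEach((line, i) => {
    if (/^_{3,}\s*$/.test(line)) {
      blocks.push([start, i]);
      start = i + 1;
    }
  });
  blocks.push([start, lines.length]);
  const found =
    blocks.find(([a, b]) => cursorLine >= a && cursorLine < b) ||
    [0, lines.length];
  return lines.slice(found[0], found[1]).join("\n").trim();
}

const doc = "let a = 1;\n___\nlet b = 2;";
console.log(extractBlock(doc, 2)); // → "let b = 2;"
```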
sema-engine relies on the Web Audio API `AudioWorklet`, which requires a secure context:

- `https://` in production, or
- `http://localhost` during development.
Most modern Chromium-based browsers support AudioWorklets. If you experience issues, check that:
- the page is served over HTTPS or localhost,
- `maxi-processor.js` is reachable,
- the browser allows audio playback after a user gesture,
- the browser console does not show CORS or worklet loading errors.
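The first items on that checklist can be automated with a small capability check before calling `engine.init()` (standard Web Audio API names only; `audioWorkletSupported` is a hypothetical helper, not a sema-engine export):

```javascript
// Returns true when the environment exposes what sema-engine needs:
// an AudioContext, AudioWorkletNode, and a secure context.
function audioWorkletSupported(g = globalThis) {
  return typeof g.AudioContext === "function" &&
         typeof g.AudioWorkletNode === "function" &&
         g.isSecureContext === true;
}

if (!audioWorkletSupported()) {
  console.warn("AudioWorklet unavailable: check HTTPS/localhost and browser support.");
}
```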
Clone the repository and initialise submodules:
```bash
git clone https://github.com/frantic0/sema-engine.git
cd sema-engine
git submodule update --init --recursive
```

Install dependencies:

```bash
npm install
```

Run the development build:

```bash
npm run dev
```

Build the library:

```bash
npm run build
```

Run tests:

```bash
npm test
```

If you are contributing to the custom Web Audio API processor or the Maximilian/Open303 build pipeline, you will also need:
Then run:
```bash
make
```

This builds the Maximilian/WebAssembly/Pure JS processor and then builds the sema-engine library outputs.
To update submodules:

```bash
git submodule update --remote --merge
```

sema-engine uses Mocha for unit and integration tests.
The development build includes a local example for experimenting with:
- starting and stopping the engine,
- loading samples,
- evaluating live code,
- evaluating JavaScript blocks,
- creating learners,
- creating analysers,
- testing grammar changes.
Run:

```bash
npm run dev
```

Then open the local development page served by the dev task.
A published example is also available here:
https://frantic0.github.io/sema-engine/
More documentation is available in the project wiki:
https://github.com/frantic0/sema-engine/wiki
For a complete application built around this engine, see:
https://github.com/mimic-sussex/sema
Pull requests are welcome.
Please:
- Fork the repository.
- Create a branch from `develop`.
- Make your changes.
- Run tests and build locally.
- Submit a pull request.
See CONTRIBUTING.md for more details.
Bernardo, F., Kiefer, C., Magnusson, T. (2020).
A Signal Engine for a Live Coding Language Ecosystem.
Journal of the Audio Engineering Society, Vol. 68, No. 10.
DOI: https://doi.org/10.17743/jaes.2020.0016
This project has received funding from two UKRI/AHRC research grants:
- MIMIC: Musically Intelligent Machines Interacting Creatively, Ref: AH/R002657/1
- Innovating Sema, Ref: AH/V005154/1
MIT. See LICENSE.