NodeTool enables you to create custom AI workflows on your computer.
With NodeTool, you can:
- Build Privacy-First: Keep your data local and secure with no need to send files to external servers
- Create Custom Solutions: Design AI tools that work exactly the way you need them to
- Run Locally: Utilize your own hardware to run open-source models via Ollama and access thousands of models via Hugging Face
- Integrate Seamlessly: Connect AI workflows to your favorite apps and services
- Generate & Edit Media: Create and manipulate images, audio, and video using state-of-the-art AI models
- Process Multiple Formats: Work with text, images, audio, and video in a single unified interface
- 📚 Personal Learning Assistant: Create chatbots that read and explain your PDFs, e-books, or academic papers
- 📝 Note Summarization: Extract key insights from Obsidian or Apple Notes
- 🎤 Voice Memo to Presentation: Convert recorded ideas into documents
- 🎨 Image Generation & Editing: Create and modify images with advanced AI models
- 🎵 Audio Processing: Generate and edit audio content with AI assistance
- 🎬 Video Creation: Produce and manipulate video content using AI tools
- 🔧 Desktop Utilities: Access NodeTool mini-apps from your system tray
- 🗣️ Siri Integration: Extend Siri's capabilities with custom AI workflows
- ⚡ Automation: Streamline repetitive tasks with AI-powered scripts
Key features to automate your workflow:
- Visual Editor: Build AI workflows visually, no coding required.
- Mini Apps: Access NodeTool mini-apps from your system tray.
- Chat Apps: Build chatbots that explain your PDFs, generate images, and more.
- Ollama Support: Run local language models for chat.
- HuggingFace: Run Transformers and Diffusers models locally.
- Model Manager: Manage and download models from the Hugging Face Hub.
- Integration with AI Platforms: Use models from OpenAI, Hugging Face, Anthropic, Ollama, and ComfyUI.
- ComfyUI Support: Run ComfyUI nodes directly in NodeTool without extra installation.
- Asset Management: Import and manage media assets easily.
- Multimodal Support: Work with images, text, audio, and video together.
- API: Call NodeTool API from your own scripts.
- Custom Nodes: Enhance functionality with Python.
- Cross-Platform: Available on Mac, Windows, and Linux.
Download the latest release from our Releases Page.
NodeTool is designed for your local environment:
- Home Workstation: Build AI tools for personal productivity or creative projects
- Lab or Office: Deploy customized solutions for research and internal utilities
- On the Go: Run lightweight workflows on laptops for portable AI assistance
- Anthropic 🧠: Text-based AI tasks.
- Comfy 🎨: Support for ComfyUI nodes for image processing.
- Chroma 🌈: Vector database for embeddings.
- ElevenLabs 🎤: Text-to-speech services.
- Fal 🚀: AI for audio, image, text, and video.
- Google 🔍: Access to Gemini models and Gmail.
- HuggingFace 🤗: AI for audio, image, text, and video.
- NodeTool Core ⚙️: Core data and media processing functions.
- Ollama 🦙: Run local language models.
- OpenAI 🌟: AI for audio, image, and text tasks.
- Replicate ☁️: AI for audio, image, text, and video in the cloud.
NodeTool's architecture is designed to be flexible and extensible.
```mermaid
graph TD
A[NodeTool Editor<br>ReactJS] -->|HTTP/WebSocket| B[API Server]
A <-->|WebSocket| C[WebSocket Runner]
B <-->|Internal Communication| C
C <-->|WebSocket| D[Worker with ML Models<br>CPU/GPU<br>Local/Cloud]
D <-->|HTTP Callbacks| B
E[Other Apps/Websites] -->|HTTP| B
E <-->|WebSocket| C
D -->|Optional API Calls| F[OpenAI<br>Replicate<br>Anthropic<br>Others]

classDef default fill:#e0eee0,stroke:#333,stroke-width:2px,color:#000;
classDef frontend fill:#ffcccc,stroke:#333,stroke-width:2px,color:#000;
classDef server fill:#cce5ff,stroke:#333,stroke-width:2px,color:#000;
classDef runner fill:#ccffe5,stroke:#333,stroke-width:2px,color:#000;
classDef worker fill:#ccf2ff,stroke:#333,stroke-width:2px,color:#000;
classDef api fill:#e0e0e0,stroke:#333,stroke-width:2px,color:#000;
classDef darkgray fill:#a9a9a9,stroke:#333,stroke-width:2px,color:#000;

class A frontend;
class B server;
class C runner;
class D worker;
class E darkgray;
class F api;
```
- 🖥️ Frontend: The NodeTool Editor for managing workflows and assets, built with ReactJS and TypeScript.
- 🌐 API Server: Manages connections from the frontend and handles user sessions and workflow storage.
- 🔌 WebSocket Runner: Runs workflows in real-time and keeps track of their state.
Extend NodeTool's functionality by creating custom nodes that can integrate models from your preferred platforms:
```python
class MyAgent(BaseNode):
    prompt: str = Field(default="Build me a website for my business.")

    async def process(self, context: ProcessingContext) -> str:
        llm = MyLLM()  # your own model wrapper
        return llm.generate(self.prompt)
```
NodeTool provides a powerful Workflow API that lets you integrate and run your AI workflows programmatically.
You can use the API locally today; access to the hosted API at api.nodetool.ai is limited to Alpha users.
```javascript
const response = await fetch("http://localhost:8000/api/workflows/");
const workflows = await response.json();
```
```bash
curl -X POST "http://localhost:8000/api/jobs/run" \
  -H "Content-Type: application/json" \
  -d '{
    "workflow_id": "your_workflow_id"
  }'
```
```javascript
const response = await fetch("http://localhost:8000/api/jobs/run", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    workflow_id: workflowId,
    params: params,
  }),
});

const outputs = await response.json();
// outputs is an object with one property for each output node in the workflow.
// The value is the output of the node, which can be a string, image, audio, etc.
```
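As a rough illustration of consuming that object, the entries can be iterated generically. The output names below are made up for the sketch; yours depend on the output nodes in your workflow:

```javascript
// Hypothetical outputs object; real keys come from your workflow's output nodes.
const outputs = {
  summary: "A short text result",
  score: 0.87,
};

// Iterate every output node's result by name.
for (const [name, value] of Object.entries(outputs)) {
  console.log(`${name}:`, value);
}
```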
The streaming API is useful for getting real-time updates on the status of the workflow.
See run_workflow_streaming.js for an example.
These updates include:
- job_update: The overall status of the job (e.g. running, completed, failed, cancelled)
- node_update: The status of a specific node (e.g. running, completed, error)
- node_progress: The progress of a specific node (e.g. 20% complete)
The final result of the workflow is also streamed as a single job_update with the status "completed".
```javascript
const response = await fetch("http://localhost:8000/api/jobs/run?stream=true", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer YOUR_API_TOKEN",
  },
  body: JSON.stringify({
    workflow_id: workflowId,
    params: params,
  }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Buffer partial lines: a chunk may end in the middle of a JSON message.
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop();
  for (const line of lines) {
    if (line.trim() === "") continue;
    const message = JSON.parse(line);
    switch (message.type) {
      case "job_update":
        console.log("Job status:", message.status);
        if (message.status === "completed") {
          console.log("Workflow completed:", message.result);
        }
        break;
      case "node_progress":
        console.log(
          "Node progress:",
          message.node_name,
          (message.progress / message.total) * 100
        );
        break;
      case "node_update":
        console.log("Node update:", message.node_name, message.status, message.error);
        break;
    }
  }
}
```
The WebSocket API also provides real-time updates on workflow status. It is similar to the streaming API, but uses a more efficient binary encoding (MessagePack) and supports additional commands such as canceling jobs.
See run_workflow_websocket.js for an example.
```javascript
const socket = new WebSocket("ws://localhost:8000/predict");

const request = {
  type: "run_job_request",
  workflow_id: "YOUR_WORKFLOW_ID",
  params: {
    /* workflow parameters */
  },
};

// Run a workflow
socket.send(
  msgpack.encode({
    command: "run_job",
    data: request,
  })
);

// Handle messages from the server
socket.onmessage = async (event) => {
  const data = msgpack.decode(new Uint8Array(await event.data.arrayBuffer()));
  if (data.type === "job_update" && data.status === "completed") {
    console.log("Workflow completed:", data.result);
  } else if (data.type === "node_update") {
    console.log("Node update:", data.node_name, data.status, data.error);
  } else if (data.type === "node_progress") {
    console.log("Progress:", (data.progress / data.total) * 100);
  }
  // Handle other message types as needed
};

// Cancel a running job
socket.send(msgpack.encode({ command: "cancel_job" }));

// Get the status of the job
socket.send(msgpack.encode({ command: "get_status" }));
```
- Check out this simple HTML page.
- Download the HTML file
- Open it in a browser locally.
- Select the endpoint: local or api.nodetool.ai (for Alpha users)
- Enter your API token (from the NodeTool settings dialog)
- Select a workflow
- Run the workflow
- The page will live-stream the output from the local or remote API
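The flow those steps describe boils down to composing one streaming run request against whichever base URL is selected. A minimal sketch, assuming the hosted endpoint mirrors the local `/api/jobs/run` path (the helper name is ours):

```javascript
// Build the streaming run request for the selected endpoint.
// baseUrl comes from the endpoint selector, token from the settings dialog.
function buildRunRequest(baseUrl, token, workflowId, params) {
  return {
    url: `${baseUrl}/api/jobs/run?stream=true`,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // Only attach the Authorization header when a token is provided.
        ...(token ? { Authorization: `Bearer ${token}` } : {}),
      },
      body: JSON.stringify({ workflow_id: workflowId, params }),
    },
  };
}
```

The page can then pass `url` and `options` straight to `fetch` and read the NDJSON stream as in the streaming example above.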
- Conda: download and install from miniconda.org
- Node.js: download and install from nodejs.org

```bash
conda create -n nodetool python=3.11
conda activate nodetool
conda install -c conda-forge ffmpeg libopus cairo
```
On macOS:

```bash
pip install -r requirements.txt
```

On Windows and Linux with CUDA 12.1:

```bash
pip install -r requirements.txt --extra-index-url https://download.pytorch.org/whl/cu121
```

On Windows and Linux without CUDA:

```bash
pip install -r requirements.txt
```
Ensure you have the Conda environment activated.
On macOS and Linux:

```bash
./scripts/server --with-ui --reload
```

On Windows:

```bash
.\scripts\server.bat --with-ui --reload
```
Now, open your browser and navigate to http://localhost:3000
to access the NodeTool interface.
Ensure you have the Conda environment activated.
Before running Electron, you need to build the frontend located in the /web
directory:
```bash
cd web
npm install
npm run build
```
Once the build is complete, you can start the Electron app:
```bash
cd electron
npm install
npm start
```
The Electron app starts the frontend and backend.
Dependencies are managed via Poetry in pyproject.toml and must be synced to requirements.txt using:

```bash
poetry export -f requirements.txt --output requirements.txt --without-hashes
```
We welcome contributions from the community! To contribute to NodeTool:
- Fork the repository.
- Create a new branch (`git checkout -b feature/YourFeature`).
- Commit your changes (`git commit -am 'Add some feature'`).
- Push to the branch (`git push origin feature/YourFeature`).
- Open a Pull Request.
Please adhere to our contribution guidelines.
NodeTool is licensed under the AGPLv3 license.
Got ideas, suggestions, or just want to say hi? We'd love to hear from you!
- Email: [email protected]
- Discord: NodeTool Discord
- Forum: NodeTool Forum
- GitHub: https://github.com/nodetool-ai/nodetool
- Matthias Georgi: [email protected]
- David BΓΌhrer: [email protected]