->->-> FLUJO is still an early preview! Expect it to break at some points, but improve rapidly! <-<-<-
A known issue currently impacts the CHAT feature and the OPENAI-COMPATIBLE ENDPOINT; a fix will come as soon as possible!
- A hotfix was implemented to restore minimal functionality
- You can still view the whole Flow execution in the terminal output! (after starting the server in a terminal with `npm run dev` or `npm start`)
- You can add ~FLUJOEXPAND=1 or ~FLUJODEBUG=1 somewhere in your message to show more details
FLUJO currently has EXTENSIVE logging enabled by default! This can expose your API keys in the terminal output. Be VERY careful when recording videos or streaming while showing the terminal output!
FLUJO is an open-source platform that bridges the gap between workflow orchestration, Model-Context-Protocol (MCP), and AI tool integration. It provides a unified interface for managing AI models, MCP servers, and complex workflows - all locally and open-source.
FLUJO is powered by the PocketFlowFramework and built with CLine and a lot of LOVE.
- Secure Storage: Store environment variables and API keys with encryption
- Global Access: Use your stored keys across the entire application
- Centralized Management: Keep all your credentials in one secure place
- Multiple Models: Configure and use different AI models simultaneously
- Pre-defined Prompts: Create custom system instructions for each model
- Provider Flexibility: Connect to various API providers (OpenAI, Anthropic, etc.)
- Local Models: Integrate with Ollama for local model execution
- Easy Installation: Install MCP servers from GitHub or local filesystem
- Server Management: Comprehensive interface for managing MCP servers
- Tool Inspection: View and manage available tools from MCP servers
- Environment Binding: Connect server environment variables to global storage
- Visual Flow Builder: Create and design complex workflows
- Model Integration: Connect different models in your workflow
- Tool Management: Allow or restrict specific tools for each model
- Prompt Design: Configure system prompts at multiple levels (Model, Flow, Node)
- Flow Interaction: Interact with your flows through a chat interface
- Message Management: Disable messages or split conversations to reduce context size
- File Attachments: Attach documents or audio for LLM processing
- Transcription: Process audio inputs with automatic transcription
- OpenAI Compatible Endpoint: Integrate with tools like CLine or Roo
- Seamless Connection: Use FLUJO as a backend for other AI applications
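The OpenAI-compatible endpoint can be called like any other chat completions API. Below is a minimal sketch using only the Python standard library; the `/v1/chat/completions` path and the convention of selecting the flow via the `model` field are assumptions, so check your FLUJO instance for the exact URL and naming.

```python
import json
import urllib.request

# Assumption: FLUJO exposes an OpenAI-style endpoint on its default port.
FLUJO_URL = "http://localhost:4200/v1/chat/completions"

def build_request(flow_name, user_message):
    """Build an OpenAI-style chat completion payload for a FLUJO flow."""
    return {
        "model": flow_name,  # assumption: the model field selects the flow
        "messages": [{"role": "user", "content": user_message}],
    }

def run_flow(flow_name, user_message):
    """Send a message through a flow and return the assistant's reply text."""
    payload = json.dumps(build_request(flow_name, user_message)).encode()
    req = urllib.request.Request(
        FLUJO_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses carry the text in choices[0].message.content
    return body["choices"][0]["message"]["content"]

# Example (requires a running FLUJO instance and an existing flow):
#   print(run_flow("my-flow", "Summarize the latest report."))
```

Any client that speaks the OpenAI chat completions format (CLine, Roo, etc.) can be pointed at the same base URL instead of api.openai.com.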
- Node.js (v18 or higher)
- npm or yarn

1. Clone the repository:

```bash
git clone https://github.com/mario-andreschak/FLUJO.git
cd FLUJO
```

2. Install dependencies:

```bash
npm install
# or
yarn install
```

3. Start the development server:

```bash
npm run dev
# or
yarn dev
```

4. Open your browser and navigate to:

```
http://localhost:4200
```

FLUJO feels and works best if you run it compiled:

```bash
npm run build
npm start
```
- Navigate to the Models page
- Click "Add Model" to create a new model configuration
- Configure your model with name, provider, API key, and system prompt
- Save your configuration
- Go to the MCP page
- Click "Add Server" to install a new MCP server
- Choose from GitHub repository or local filesystem
- Configure server settings and environment variables
- Start and manage your server
- Visit the Flows page
- Click "Create Flow" to start a new workflow
- Add processing nodes and connect them
- Configure each node with models and tools
- Save your flow
- Go to the Chat page
- Select a flow to interact with
- Start chatting with your configured workflow
- Attach files or audio as needed
- Manage conversation context by disabling messages or splitting conversations
FLUJO provides comprehensive support for the Model Context Protocol (MCP), allowing you to:
- Install and manage MCP servers
- Inspect server tools
- Connect MCP servers to your workflows
- Reference tools directly in prompts
- Bind environment variables to your global encrypted storage
FLUJO is licensed under the MIT License.
Here's a roadmap of upcoming features and improvements:
- Real-time Voice Feature: Adding support for Whisper.js or OpenWhisper to enable real-time voice capabilities.
- Visual Debugger: Introducing a visual tool to help debug and troubleshoot more effectively.
- MCP Roots Support: Implementing Checkpoints and Restore features within MCP Roots for better control and recovery options.
- MCP Prompts: Enabling users to build custom prompts that fully leverage the capabilities of the MCP server.
- MCP Proxying STDIO<>SSE: Likely utilizing SuperGateway to proxy standard input/output over Server-Sent Events, so MCP servers managed in FLUJO can be used in any other MCP client.
- Enhanced Integrations: Improving compatibility and functionality with tools like Windsurf, Cursor, and Cline.
- Advanced Orchestration: Adding agent-driven orchestration, batch processing, and incorporating features inspired by Pocketflow.
- Online Template Repository: Creating a platform for sharing models, flows, or complete "packages," making it easy to distribute FLUJO flows to others.
- Edge Device Optimization: Enhancing performance and usability for edge devices.
Contributions are welcome! Feel free to open issues or submit pull requests.
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
- GitHub: mario-andreschak
- You can add ~FLUJO=HTML, ~FLUJO=MARKDOWN, ~FLUJO=JSON, or ~FLUJO=TEXT to your message to format the response; results will vary across the different tools where you integrate FLUJO.
- You can add ~FLUJOEXPAND=1 or ~FLUJODEBUG=1 somewhere in your message to show more details
- In config/features.ts you can change the logging level for the whole application
- In config/features.ts you can enable SSE support, which is currently disabled by default
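The control tags above are plain text appended to your message. A small hypothetical helper (the tag names come from the tips above; the function itself is not part of FLUJO) shows how a client could attach them before sending a message:

```python
def tag_message(message, response_format=None, debug=False, expand=False):
    """Append FLUJO control tags to a chat message.

    response_format: one of "HTML", "MARKDOWN", "JSON", "TEXT" (or None).
    debug/expand: toggle ~FLUJODEBUG=1 / ~FLUJOEXPAND=1.
    """
    tags = []
    if response_format:
        tags.append(f"~FLUJO={response_format}")
    if debug:
        tags.append("~FLUJODEBUG=1")
    if expand:
        tags.append("~FLUJOEXPAND=1")
    return " ".join([message] + tags)

print(tag_message("List open tasks", response_format="JSON", debug=True))
# -> List open tasks ~FLUJO=JSON ~FLUJODEBUG=1
```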
FLUJO - Empowering your AI workflows with open-source orchestration.