PedeAi

PedeAi is a food ordering automation platform built to explore workflow automation with n8n and containerized infrastructure with Docker.

The server is built using Node.js + Express, communicates with a PostgreSQL database, integrates with n8n for intelligent chatbot automation, and includes a Gemini AI-powered chatbot for natural language assistance.

🚀 Technologies

  • Node.js + Express 5
  • PostgreSQL 16
  • n8n (workflow automation)
  • Docker + Docker Compose
  • Swagger UI (API docs)
  • ngrok (tunnel for Telegram webhook)
  • Telegram Bot API (via BotFather)
  • Redis (caching with ioredis)
  • Google Gemini AI (gemini-2.5-flash)

πŸ— Architecture

The project follows a layered architecture:

  • Routes → Controllers → Services → Repositories
  • Global error handling via AppError
  • PostgreSQL connection pool via pg
  • n8n handles all chatbot logic, connecting to the backend via HTTP
  • Product listing routes are cached with Redis, using scanStream-based invalidation on mutations
  • Gemini AI chatbot exposes a /bot REST endpoint for natural language menu queries
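The scanStream-based invalidation can be sketched roughly as follows; the `products:*` key pattern and the function name are illustrative assumptions, not the repository's actual identifiers:

```javascript
// Illustrative sketch of scanStream-based cache invalidation with ioredis.
// The "products:*" key pattern and function name are assumptions.
function invalidateProductCache(redis) {
  return new Promise((resolve, reject) => {
    // scanStream iterates matching keys in batches without blocking Redis
    // the way a single KEYS command would.
    const stream = redis.scanStream({ match: "products:*", count: 100 });
    const deletions = [];
    stream.on("data", (keys) => {
      if (keys.length > 0) deletions.push(redis.del(...keys));
    });
    stream.on("end", () => Promise.all(deletions).then(resolve, reject));
    stream.on("error", reject);
  });
}
```

In the real service this would run inside the product create/update/delete handlers, after the database write succeeds.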

🐳 Infrastructure with Docker

All services are orchestrated with Docker Compose:

services:

  postgres:
    image: postgres:16
    container_name: pedeai-postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./database:/docker-entrypoint-initdb.d

  backend:
    build: ./backend
    container_name: pedeai-backend
    ports:
      - "3000:3000"
    depends_on:
      - postgres
      - redis
    environment:
      DATABASE_URL: ${DATABASE_URL}

  redis:
    image: redis:7   # tag assumed; the README does not pin a Redis version
    container_name: pedeai-redis
    ports:
      - "6379:6379"

  n8n:
    image: n8nio/n8n
    container_name: n8n
    ports:
      - "5678:5678"
    environment:
      N8N_BASIC_AUTH_ACTIVE: "true"
      N8N_BASIC_AUTH_USER: ${N8N_BASIC_AUTH_USER}
      N8N_BASIC_AUTH_PASSWORD: ${N8N_BASIC_AUTH_PASSWORD}
      N8N_HOST: ${N8N_HOST}
      N8N_PORT: ${N8N_PORT}
      N8N_PROTOCOL: ${N8N_PROTOCOL}
      WEBHOOK_URL: ${WEBHOOK_URL}
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - backend

volumes:
  postgres_data:
  n8n_data:

The backend runs on port 3000, PostgreSQL on port 5432, n8n on port 5678, and Redis on port 6379.

🤖 n8n Workflow Automation


The entire chatbot logic is handled by n8n, a visual workflow automation tool.

How it works

  1. The user sends a message to the Telegram bot
  2. The Telegram Trigger node fires in n8n
  3. n8n extracts the input, queries the PostgreSQL session table, and routes the user through a state machine
  4. Depending on the user's current state, n8n calls the backend REST API to read/write data
  5. n8n sends responses back to the user via Telegram
  6. For free-form questions (e.g. about the menu), n8n performs an agent handoff to the Gemini AI chatbot via the /bot endpoint
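In JavaScript terms, the routing in step 3 behaves roughly like a switch over the session state. The state names below match the workflow states documented in this README, but the menu text and option handling are hypothetical:

```javascript
// Illustrative sketch of the state routing n8n performs; the handlers
// and replies are assumptions, only the state names come from the workflow.
function renderMainMenu() {
  return "1) Order  2) Track an order";
}

function routeMessage(session, text) {
  switch (session.state) {
    case "main_menu":
      return { reply: renderMainMenu(), next: "awaiting_main_menu_option" };
    case "awaiting_main_menu_option":
      if (text === "1") return { reply: "Pick a category:", next: "order_menu" };
      if (text === "2") return { reply: "Which order number?", next: "get_order" };
      return { reply: "Please choose 1 or 2.", next: "awaiting_main_menu_option" };
    case "agent":
      // Free-form conversation: hand the message off to the Gemini chatbot
      return { handoff: true };
    default:
      // Unknown state: reset the session to the main menu
      return { reply: renderMainMenu(), next: "main_menu" };
  }
}
```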

Main Workflow States

| State | Description |
| --- | --- |
| main_menu | User is shown the main menu |
| awaiting_main_menu_option | Waiting for the user to pick an option |
| order_menu | User is viewing the order/category menu |
| awaiting_order_menu_option | Waiting for category or checkout selection |
| awaiting_product_menu_option | Waiting for product selection |
| checkout | User is providing a delivery address |
| get_order | User is querying an existing order |
| agent | User is interacting with the Gemini agent |

Workflow Features

  • Session management via PostgreSQL (user_sessions table with JSONB state)
  • Dynamic menus built at runtime from backend data (categories and products)
  • Options map stored in session to resolve numbered inputs
  • Order creation and item management via REST API calls
  • Order status notifications via n8n webhook → Telegram message
  • Agent handoff flow: unrecognized or free-form messages are forwarded to the Gemini AI chatbot
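The options-map feature can be illustrated with a small sketch; the `options_map` field name is an assumption about the session's JSONB shape:

```javascript
// Sketch of the numbered-options pattern: when a menu is rendered, a map
// from the displayed number to the underlying id is stored in the session.
function buildMenu(categories) {
  const optionsMap = {};
  const lines = categories.map((cat, i) => {
    optionsMap[String(i + 1)] = cat.id; // "1" -> first category's id
    return `${i + 1}) ${cat.name}`;
  });
  return { text: lines.join("\n"), optionsMap };
}

// Later, a numbered reply is resolved against the map saved in the session.
function resolveOption(session, input) {
  const map = session.options_map || {};
  return map[input.trim()] ?? null;
}
```

Storing the map alongside the state means a reply like "2" stays meaningful even though the menu itself was built dynamically from backend data.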

Order Status Webhook

The backend triggers a webhook to n8n when an order status changes. n8n then sends a Telegram notification to the user:

| Status | Message |
| --- | --- |
| pedido_recebido | 🧾 Order received |
| em_preparo | 👨‍🍳 Being prepared |
| a_caminho | 🛵 On the way |
| finalizado | ✅ Completed |
| cancelado | ❌ Cancelled |
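On the backend side, the trigger might look like the sketch below. `ORDER_STATUS_WEBHOOK_URL` comes from this README's environment variables, but the payload field names and the injectable fetch parameter are assumptions for illustration:

```javascript
// Sketch of the backend-side status notification. Field names are assumed.
function buildStatusPayload(order) {
  return {
    order_id: order.id,
    status: order.status, // e.g. "em_preparo"
    telegram_chat_id: order.telegram_chat_id,
  };
}

async function notifyStatusChange(order, fetchImpl = fetch) {
  const url = process.env.ORDER_STATUS_WEBHOOK_URL;
  if (!url) return false; // webhook not configured: skip silently
  await fetchImpl(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildStatusPayload(order)),
  });
  return true;
}
```

n8n receives this POST on its webhook node and maps the status to the Telegram message shown in the table above.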

🧠 Gemini AI Chatbot

PedeAi includes a Gemini-powered assistant accessible via the /bot REST endpoint. It handles natural language questions about the menu directly, acting as a fallback when the n8n state machine doesn't match a known command.

Features

  • Powered by Google Gemini (gemini-2.5-flash)
  • Conversation history stored in Redis (TTL: 10 minutes, last 6 turns)
  • Menu context cached in Redis (TTL: 5 minutes) and built dynamically from the database
  • Rate limiting: maximum of 8 messages per user per minute
  • Input sanitization: validates message length and blocks prompt-injection attempts
  • Strict system prompt: the bot only answers questions related to the menu and restaurant; off-topic, offensive, or injection attempts are refused
  • Empty/blocked response handling: gracefully falls back to a safe message when Gemini returns no candidates
  • Cache invalidation: the menu cache is cleared automatically when products are updated
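A per-user rate limit like this is commonly built from a Redis counter. The key naming and the fixed-window approach below are assumptions; the actual service may use a different strategy:

```javascript
// Sketch of an 8-messages-per-minute limit using a Redis counter per user.
// Key naming is an assumption.
async function isRateLimited(redis, userId, limit = 8, windowSec = 60) {
  const key = `ratelimit:bot:${userId}`;
  const count = await redis.incr(key);
  if (count === 1) {
    await redis.expire(key, windowSec); // first hit starts the window
  }
  return count > limit;
}
```

INCR and EXPIRE are standard Redis commands (exposed by ioredis as `incr()`/`expire()`); a fixed window is simple, though it allows brief bursts at window boundaries.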

Agent Handoff

When the n8n workflow receives a message it cannot match to a known state, it forwards it to the Gemini chatbot via a POST to /bot. The AI response is then sent back to the user through Telegram.

🗄 Database Schema

The database is initialized automatically via the ./database/init.sql file mounted into the PostgreSQL container.

Tables

  • users: Telegram users (identified by telegram_chat_id)
  • categories: Product categories
  • products: Products linked to categories, with price and active flag
  • orders: Orders linked to users, with status, total, address, and payment
  • order_items: Items within an order (quantity, unit price, total)
  • user_sessions: n8n chatbot session state per user (JSONB)

Order Status Enum

carrinho → pedido_recebido → em_preparo → a_caminho → finalizado | cancelado

Automatic Total Trigger

A PostgreSQL trigger (trigger_update_order_total) automatically recalculates orders.total_price whenever an order_item is inserted, updated, or deleted.

📑 REST API

The backend exposes a REST API on port 3000. Full interactive documentation is available via Swagger UI at:

http://localhost:3000/docs

🔌 Telegram + ngrok Setup

The Telegram bot is created via BotFather. Since n8n runs locally, ngrok is used to expose n8n's webhook endpoint to the internet so Telegram can reach it.

Set the WEBHOOK_URL environment variable to your ngrok public URL so n8n can correctly register the Telegram webhook.

📦 Installation

1. Clone the repository

2. Copy the environment file and fill in your values:

cp .env.example .env

3. Start all services with Docker Compose:

docker compose up -d --build

4. Import the n8n workflow JSON into your n8n instance at http://localhost:5678

5. Configure your Telegram bot credentials in n8n and activate the workflow

6. (Optional) Run the database seed script to populate sample data:

npm run seed

βš™οΈ Environment Variables

POSTGRES_USER=
POSTGRES_PASSWORD=
POSTGRES_DB=
DATABASE_URL=postgresql://user:password@postgres:5432/dbname

PORT=

N8N_BASIC_AUTH_USER=
N8N_BASIC_AUTH_PASSWORD=
N8N_HOST=localhost
N8N_PORT=5678
N8N_PROTOCOL=http
WEBHOOK_URL=https://your-ngrok-url.ngrok.io
ORDER_STATUS_WEBHOOK_URL=

REDIS_HOST=
REDIS_PORT=

GEMINI_API_KEY=

🎯 Purpose of the Project

PedeAi was built to:

  • Practice n8n workflow automation with state machines and dynamic menus
  • Integrate a Telegram chatbot with a real REST API backend
  • Manage containerized infrastructure with Docker Compose
  • Work with PostgreSQL including triggers, enums, JSONB, and indexing
  • Build a practical, end-to-end food ordering system
  • Implement Redis caching with stream-based cache invalidation using ioredis
  • Explore Gemini AI integration for natural language menu assistance
  • Practice agent handoff patterns between a rule-based state machine and an LLM

📌 Notes

  • Backend runs with Node.js ES modules ("type": "module")
  • n8n communicates with the backend via the internal Docker network using http://backend:3000
  • The database schema is auto-applied on first container startup via ./database/init.sql
  • Swagger docs available at <backend-address>/docs
  • The Gemini chatbot is scoped strictly to menu-related queries and refuses off-topic or adversarial input
