Trole is a role-based reverse proxy that controls access to decentralized services based on Hive blockchain credentials. It verifies structured messages signed by valid Hive posting or active keys before bridging requests to backend services.
This implementation controls upload access to decentralized IPFS nodes on the SPK Network and provides secure, authenticated gateway functionality.
- Blockchain Authentication: Verifies Hive blockchain signatures
- Role-Based Access Control: Controls access based on account permissions
- IPFS Gateway: Secure upload/download functionality
- Intelligent CDN: Smart content routing and performance optimization
- Docker Support: Multiple deployment configurations
- Health Monitoring: Built-in health checks and monitoring
Trole implements a sophisticated blockchain-verified storage system (api.js) that manages secure file uploads, storage contracts, and decentralized content distribution across the SPK Network.
The system uses a chunked upload approach with blockchain-based contracts to ensure secure, verifiable content storage:
- Contract Creation: Users establish storage contracts via SPK Network blockchain
- Chunked Upload: Files are uploaded in chunks with range headers for resumable transfers
- CID Verification: Content integrity verified by comparing calculated hash with expected CID
- IPFS Pinning: Successfully uploaded files are pinned to local IPFS node
- Contract Completion: Full contract details broadcasted to blockchain upon completion
- 🔗 Blockchain Contracts: All storage agreements recorded on SPK Network blockchain
- 📋 Chunked Uploads: Resumable uploads with precise chunk management
- 🔍 CID Verification: Cryptographic verification of content integrity
- 📌 IPFS Pinning: Automatic pinning to distributed storage network
- 💾 Local Staging: Temporary local storage during upload process
- 🗑️ Automated Cleanup: Smart cleanup of expired contracts and orphaned files
- 📊 Storage Analytics: Real-time monitoring of storage usage and capacity
Client Request → Contract Validation → Chunk Processing → CID Verification
↓ ↓
Blockchain Signature Verification IPFS Pinning
↓ ↓
Temporary File Storage → Upload Completion → Contract Broadcasting
Automated Inventory System:
- Periodic scanning of pinned content vs. active contracts
- Automatic re-pinning of missing content
- Cleanup of expired or invalid contracts
- Real-time storage statistics and disk usage monitoring
Contract Lifecycle Management:
- Real-time contract status monitoring via SPK Network API
- Automatic content removal when contracts expire or are cancelled
- Multi-node coordination for distributed storage agreements
- Reward distribution tracking for storage providers
Endpoint | Method | Purpose | Key Features |
---|---|---|---|
/upload | POST | Upload file chunks | Range headers, CID verification, resumable uploads |
/upload-contract | GET | Create storage contract | Blockchain verification, user validation |
/upload-check | GET | Check upload status | Progress tracking, chunk validation |
/upload-authorize | GET | Authorize file uploads | Multi-file authorization, signature verification |
/upload-stats | GET | Storage statistics | Real-time capacity, repo stats, contract counts |
/contracts | GET | List active contracts | Complete contract details and status |
/storage-stats | GET | Detailed storage metrics | Disk usage, IPFS repo stats, active contracts |
Multi-Layer Security:
- Hive Blockchain Signatures: All operations require valid posting/active key signatures
- Content Integrity: Hash-based verification ensures uploaded content matches expected CID
- Contract Validation: Cross-reference with SPK Network blockchain for authentic contracts
- Access Control: Role-based permissions tied to blockchain account ownership
- Automatic Flagging: Content moderation system with blockchain-verified flag operations
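To illustrate the first layer, a structured message signed with a Hive posting or active key can be verified off-chain with a Hive client library. The sketch below uses @hiveio/dhive purely as an example; the library choice, message format, and exact calls are assumptions, not Trole's actual verification code.

```js
// Illustrative only: check a hex-encoded Hive signature against an account's
// public key using @hiveio/dhive (an assumption - Trole's own code may differ).
import { PublicKey, Signature, cryptoUtils } from '@hiveio/dhive';

function verifyHiveSignature(message, hexSignature, publicKeyStr) {
  const digest = cryptoUtils.sha256(message);            // hash of the signed message
  const signature = Signature.fromString(hexSignature);  // 65-byte compact signature (hex)
  return PublicKey.fromString(publicKeyStr).verify(digest, signature);
}

// Example shape of a call (values are placeholders):
// verifyHiveSignature('alice:1699999999', sigHex, 'STM7...');
```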
Data Integrity Assurance:
- Pre-upload CID calculation and verification
- Post-upload hash comparison with IPFS-generated CID
- Periodic integrity checks of pinned content
- Automatic recovery and re-pinning of corrupted data
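As a concrete illustration of the pre-upload check, the expected CID can be recomputed without importing the data by using Kubo's only-hash option. This is a minimal sketch assuming a local Kubo HTTP API on 127.0.0.1:5001 and Node.js 18+; a real check must use the same CID parameters (version, chunker) the client used.

```js
// Sketch: recompute a file's CID (without adding it to the repo) and compare
// it to the CID the uploader claims. Assumes Kubo's HTTP API on 127.0.0.1:5001.
import { readFile } from 'node:fs/promises';

async function cidMatches(path, expectedCid) {
  const form = new FormData();
  form.append('file', new Blob([await readFile(path)]));

  // only-hash=true hashes the content without importing it into the IPFS repo.
  const res = await fetch('http://127.0.0.1:5001/api/v0/add?only-hash=true', {
    method: 'POST',
    body: form,
  });
  const { Hash } = await res.json();
  return Hash === expectedCid;
}

// cidMatches('./upload.tmp', expectedCid).then(ok => console.log(ok ? 'match' : 'mismatch'));
```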
Trole includes an optional promotional contract system that enables node operators to automatically issue storage contracts to new users, facilitating easy onboarding and uploading to the SPK Network.
Promotional contracts are special storage agreements that provide new users with free temporary storage allocations to encourage network participation. The system automatically evaluates user eligibility and issues contracts with appropriate storage grants. The user is responsible for maintaining sufficient resource credits (BROCA POWER) in their account to keep the storage active.
- 🎯 Automatic Issuance: Contracts automatically created for eligible users
- ⚡ Rate Limiting: Built-in debouncer prevents abuse (10-minute cooldown per user)
- 📊 Dynamic Allocation: Storage grants adjust based on network capacity and user history
- 🔐 Blockchain Verification: All contracts recorded on SPK Network blockchain
- 💰 Resource Management: Intelligent allocation based on available network resources
Environment Variables:
- PROMO_CONTRACT=true - Enable the promotional contract system
- BASE_GRANT=30000 - Base storage allocation in bytes
- SPK_API - SPK Network API endpoint for user verification
Eligibility Criteria:
- User must have valid SPK Network account
- User must not have existing contract with the node
- User account must have valid public key
- Rate limit: One request per user every 10 minutes
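The 10-minute cooldown can be pictured as a small in-memory debouncer; the snippet below is an illustrative sketch, not the exact code in api.js.

```js
// Illustrative per-user cooldown (not Trole's actual implementation).
const COOLDOWN_MS = 10 * 60 * 1000;   // one promo request per user every 10 minutes
const lastPromoRequest = new Map();    // username -> timestamp of last request

function promoRequestAllowed(user) {
  const now = Date.now();
  if (now - (lastPromoRequest.get(user) ?? 0) < COOLDOWN_MS) return false;
  lastPromoRequest.set(user, now);     // record the request and allow it
  return true;
}
```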
Endpoint | Method | Purpose | Parameters |
---|---|---|---|
/upload-promo-contract | GET | Create promotional contract | user - Target username |
/upload-contract | GET | Create standard contract | user - Target username |
The promotional system intelligently manages network resources:
Capacity Monitoring:
- Tracks available "broca" (network resource units)
- Monitors SPK power and network capacity
- Adjusts grant sizes based on real-time network utilization
Historical Allocation Tracking:
- Remembers previous grants to users
- Adjusts future allocations based on usage patterns
- Prevents resource hoarding through intelligent distribution
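One way to picture this allocation logic is a grant that scales with free capacity and shrinks for repeat recipients; the function below is purely hypothetical and does not reproduce Trole's actual formula.

```js
// Hypothetical grant sizing: scale BASE_GRANT by free network capacity and
// reduce repeat grants to the same user (not Trole's actual formula).
const BASE_GRANT = Number(process.env.BASE_GRANT || 30000); // bytes

function promoGrantBytes(freeBroca, totalBroca, priorGrants = 0) {
  const capacityFactor = Math.max(0.25, freeBroca / totalBroca); // floor at 25%
  const repeatFactor = 1 / (1 + priorGrants);                    // 1, 1/2, 1/3, ...
  return Math.floor(BASE_GRANT * capacityFactor * repeatFactor);
}
```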
- User Onboarding: Simplifies new user acquisition
- Network Growth: Encourages SPK Network participation
- Resource Utilization: Optimizes storage capacity usage
- Community Building: Enables easy content sharing for newcomers
- Rate Limiting: Prevents spam and abuse attempts
- Account Verification: Cross-references with SPK Network for valid accounts
- Resource Caps: Automatic limits prevent resource exhaustion
- Blockchain Audit Trail: All contracts recorded for transparency
Trole features an intelligent SPK IPFS CDN system (cdn.js) that creates a decentralized content delivery network by intelligently routing IPFS requests across multiple gateway nodes based on content ownership, health status, and performance metrics.
- 🎯 Smart Routing: Automatically routes requests to content claimants (SPK Storage Nodes) for optimal performance
- 💚 Health Monitoring: Continuous monitoring of gateway health with automatic failover
- 🔍 Content Integrity Verification: Real-time verification of content authenticity using hash validation
- 📊 Performance Analytics: Detailed statistics and reward scoring for gateway operators (a possible basis for future SPK Network reward mechanisms)
- 🔄 Load Balancing: Intelligent distribution across healthy gateways
- ⚡ Caching Strategy: Optimized caching headers for efficient content delivery
- Content Request: When a user requests /ipfs/QmHash..., the system analyzes the CID
- Claimant Discovery: Fetches file ownership information from the SPK Network API
- Gateway Selection: Routes to the content storer's gateway for optimal delivery
- Health Check: Validates gateway availability and content integrity
- Fallback Strategy: Uses backup gateways if primary nodes are unavailable
- Performance Tracking: Records metrics for reward calculations and network optimization
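A simplified view of steps 2 through 5 is a claimant-first probe with health-aware fallback; the sketch below is illustrative only (cdn.js's real routing, scoring, and caching logic is more involved).

```js
// Sketch of claimant-first routing with health-aware fallback (illustrative;
// gateway URLs and the probe strategy are assumptions, not cdn.js's exact code).
async function pickGateway(cid, claimantGateways, fallbackGateways) {
  for (const gw of [...claimantGateways, ...fallbackGateways]) {
    try {
      // HEAD keeps the probe cheap; a timeout guards against slow or dead nodes.
      const res = await fetch(`${gw}/ipfs/${cid}`, {
        method: 'HEAD',
        signal: AbortSignal.timeout(3000),
      });
      if (res.ok) return gw; // first healthy gateway that can serve the CID wins
    } catch { /* unreachable or timed out - try the next gateway */ }
  }
  return null; // no healthy gateway found; caller falls back to the local node
}
```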
Endpoint | Purpose | Response |
---|---|---|
GET /ipfs/:cid | Proxy IPFS content through intelligent routing | File content with optimized headers |
GET /ipfs-health | Gateway health status and integrity metrics | JSON health report |
GET /ipfs-stats | Detailed network statistics for rewards | Performance and uptime data |
The CDN system maintains rolling lists of recently accessed CIDs and performs periodic integrity checks to ensure content authenticity. Gateway performance is continuously monitored with scoring algorithms that factor in:
- Uptime: Connection reliability and response times
- Integrity: Content hash verification success rates
- Availability: Consistent service delivery metrics
- Network Participation: Active contribution to the decentralized network
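A hypothetical way to fold those factors into a single operator score is shown below; the weights are invented for illustration, and any actual reward formula is defined by the SPK Network rather than this sketch.

```js
// Hypothetical combination of the factors listed above into one 0..1 score.
function gatewayScore({ uptime, integrity, availability, participation }) {
  // each input is assumed to be a ratio in [0, 1]; weights are illustrative only
  return 0.35 * uptime + 0.35 * integrity + 0.2 * availability + 0.1 * participation;
}
```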
Choose the deployment method that best fits your needs:
Quick start for development and testing environments.
- Docker and Docker Compose
- Basic understanding of environment configuration
- Clone the repository
git clone https://github.com/spknetwork/trole
cd trole
- Configure environment
cp env.sample .env
# Edit .env with your Hive account credentials
nano .env
- Deploy services
# Standard deployment
docker-compose build
docker-compose up -d
# View logs
docker-compose logs -f --tail="200"
Host Network Mode (if you experience networking issues):
docker-compose -f docker-host.yml build
docker-compose -f docker-host.yml up -d
Development Mode (with full logging):
docker-compose -f full-docker.yml up -d
Secure, production-ready deployment with enhanced features.
- Docker Engine 20.10+ with Compose v2
- Reverse proxy (Nginx/Caddy) for SSL termination
- Domain name with DNS configuration
- Prepare environment
git clone https://github.com/spknetwork/trole
cd trole
cp env.sample .env
- Configure for production
# Edit .env with production credentials
nano .env
# Add production-specific settings
echo "NODE_ENV=production" >> .env
- Deploy with production settings
docker-compose -f docker-compose.prod.yml up -d
Production Features:
- ✅ Resource limits and reservations
- ✅ Security hardening (read-only containers, no-new-privileges)
- ✅ Enhanced logging and monitoring
- ✅ Health checks for all services
- ✅ Localhost-only binding for internal services
Complete SPK Network node with all components.
- Ubuntu 20.04+ or Debian 10+ (with sudo privileges)
- Domain name pointed to your server
- Minimum 4GB RAM, 50GB storage
- Clone and prepare
git clone https://github.com/spknetwork/trole
cd trole
- Run installation script
chmod +x install.sh
./install.sh
The installer will:
- ✅ Install Node.js and npm dependencies
- ✅ Install ProofOfAccess (via pre-built npm binaries - no Go required!)
- ✅ Install IPFS and Caddy web server
- ✅ Configure systemd services
- ✅ Set up SSL certificates (via Caddy)
- ✅ Configure firewall rules
- ✅ Register your node on the network
Note: The script will prompt you for configuration details including your Hive account credentials and whether to install optional components like SPK Node and Validator mode.
Component | Purpose | Port |
---|---|---|
IPFS Kubo | Decentralized storage | 4001, 5001, 8080 |
Trole API | Authentication gateway | 5050 |
ProofOfAccess | Storage validation | 8000-8001 |
SPK Node (optional) | Network participation | 3001 |
Caddy | Reverse proxy & SSL | 80, 443 |
To upgrade an existing installation to the latest versions:
cd trole
./upgrade.sh
This will:
- Pull latest code from git
- Update all npm dependencies
- Update ProofOfAccess to latest version
- Restart all services
- Create backups before making changes
Legacy Installation (builds from source): If you need to build ProofOfAccess from source (requires Go):
./install-legacy-from-source.sh
Migration from Go to npm-based ProofOfAccess: If you have an older installation using Go-compiled ProofOfAccess:
./upgrade-to-npm-poa.sh
For development and testing without full node setup.
- Node.js 18+
- IPFS node (local or remote)
git clone https://github.com/spknetwork/trole
cd trole
npm install
cp env.sample .env
nano .env # Configure IPFS endpoint
# Start development server
npm start
Variable | Description | Required | Default |
---|---|---|---|
ACCOUNT | Your Hive account name | ✅ | - |
ACTIVE | Hive active private key | ✅ | - |
DOMAIN | Your domain name | 🔶 | localhost |
BUILDSPK | Install SPK components | ❌ | false |
BUILDVAL | Enable validator mode | ❌ | false |
PORT | API server port | ❌ | 5050 |
ENDPOINT | IPFS host | ❌ | 127.0.0.1 |
ENDPORT | IPFS API port | ❌ | 5001 |
For Validator and Gateway deployments, configure these DNS records:
A @ YOUR_SERVER_IP
A ipfs YOUR_SERVER_IP
A spk YOUR_SERVER_IP # If running SPK node
CNAME www @
# Check service health
curl http://localhost:5050/upload-stats
# Docker service status
docker-compose ps
# System service status (native install)
sudo systemctl status trole ipfs caddy
# Docker logs
docker-compose logs -f trole_api
# System logs (native install)
sudo journalctl -fu trole -n 100
# IPFS logs
sudo journalctl -fu ipfs -n 50
Docker Network Problems
# Try host networking mode
docker-compose -f docker-host.yml up -d
# Reset Docker networks
docker-compose down
docker network prune -f
docker-compose up -d
IPFS Connection Issues
# Check IPFS connectivity
docker-compose exec ipfs ipfs swarm peers
# Restart IPFS
docker-compose restart ipfs
Permission Errors
# Fix volume permissions
docker-compose down
sudo chown -R 1001:1001 ./db
docker-compose up -d
- Check the issues page
- Join our Discord
- Review logs for error details
- Ensure your .env configuration is correct
- Endpoint: POST /upload
- Purpose: Uploads a file chunk to the server
- Headers:
  - x-contract: The contract ID
  - content-range: The range of the chunk being uploaded (e.g., bytes=0-999/10000)
  - x-cid: The CID of the file
- Body: The file chunk
- Response:
- 200 OK: Success
- 400 Bad Request: Missing headers or invalid format
- 401 Unauthorized: No file with such credentials
- 402 Payment Required: Invalid Content-Range
- 403 Forbidden: Bad chunk provided
- 405 Method Not Allowed: Missing Content-Range header
- 406 Not Acceptable: Missing x-contract header
- 500 Internal Server Error: Internal error
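For reference, a minimal client-side chunked upload against this endpoint might look like the sketch below (assuming a Trole node at http://localhost:5050, Node.js 18+, and an already-created contract; the chunk size and error handling are illustrative).

```js
// Upload a file to POST /upload in resumable chunks, using the headers above.
import { readFile } from 'node:fs/promises';

async function uploadFile(path, cid, contractId, api = 'http://localhost:5050') {
  const data = await readFile(path);
  const CHUNK = 1024 * 1024; // 1 MiB per chunk (illustrative size)

  for (let start = 0; start < data.length; start += CHUNK) {
    const end = Math.min(start + CHUNK, data.length) - 1;
    const res = await fetch(`${api}/upload`, {
      method: 'POST',
      headers: {
        'x-contract': contractId,
        'x-cid': cid,
        'content-range': `bytes=${start}-${end}/${data.length}`, // format from the spec above
      },
      body: data.subarray(start, end + 1),
    });
    if (!res.ok) throw new Error(`chunk ${start}-${end} failed: ${res.status}`);
  }
}
```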
- Endpoint: GET /upload-contract
- Purpose: Creates a new contract for the user
- Query Parameters:
  - user: The username
- Response:
- 200 OK: Contract sent successfully
- 400 Bad Request: Contract exists or user pubKey not found
- Endpoint: GET /upload-check
- Purpose: Checks the upload status of a file
- Headers:
  - x-cid: The CID of the file
  - x-files: The list of CIDs
  - x-account: The account name
  - x-sig: The signature
  - x-contract: The contract ID
- Response:
- 200 OK: Total chunks uploaded
- 400 Bad Request: Missing data or storage mismatch
- 401 Unauthorized: Access denied
- Endpoint: GET /upload-authorize
- Purpose: Authorizes the upload of files
- Headers:
  - x-cid: The CID
  - x-files: The list of CIDs
  - x-account: The account name
  - x-sig: The signature
  - x-contract: The contract ID
  - x-meta: The metadata
- Response:
- 200 OK: Authorized CIDs
- 400 Bad Request: Missing data
- 401 Unauthorized: Access denied
- Endpoint: GET /upload-stats
- Purpose: Provides live statistics of the node
- Response:
- 200 OK: JSON object with IPFS ID, pubKey, head block, node, API, storage max, repo size, and number of objects
- Endpoint: GET /flag-qry/:cid
- Purpose: Checks if a CID is flagged
- Path Parameters:
  - cid: The CID to check
- Response:
- 200 OK: JSON object with flag set to true or false
- Endpoint: GET /flag
- Purpose: Flags or unflags a CID
- Query Parameters:
  - cid: The CID to flag/unflag
  - sig: The signature
  - unflag: Optional, set to true to unflag
- Response:
- 200 OK: JSON object with a message indicating the flag status
- Endpoint: GET /contracts
- Purpose: Retrieves all active contracts
- Response:
- 200 OK: JSON object with an array of contracts
- Endpoint: GET /storage-stats
- Purpose: Provides storage statistics
- Response:
- 200 OK: JSON object with disk usage, IPFS repo stats, and active contracts
- 500 Internal Server Error: Error retrieving data
We welcome contributions! Please:
- Fork the repository
- Create a feature branch
- Follow existing code style
- Add tests for new functionality
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Discord: Join our community
- Documentation: Full docs