A collection of reference Kraken manifests that demonstrate how to provision virtual machines and applications using the Kraken infrastructure automation platform. These manifests serve as templates and examples for deploying various workloads on Scale Computing's HyperCore infrastructure.
Kraken manifests are YAML files that define infrastructure resources and configurations for automated deployment. They follow the Kraken Application specification and are processed by the Kraken Core API to create and manage virtual machines on HyperCore clusters.
- Manifest Creation: Define your application infrastructure using the Kraken manifest format
- Submission: Submit manifests to Kraken via the Fleet Manager UI or API
- Processing: The Kraken Core API validates and processes the manifest
- Deployment: Resources are provisioned on target HyperCore clusters using Pulumi
- Management: Monitor and manage deployed resources through the Kraken ecosystem
In production environments, manifests are typically submitted through the Fleet Manager UI's WYSIWYG editor, which delivers them to the Kraken Core API via Google Pub/Sub and the pubsub-ambassador service.
All Kraken manifests follow this basic structure:
```yaml
type: Application
version: "1.0.0"
metadata:
  name: "my-application"
  labels:
    - "environment:production"
    - "team:platform"
spec:
  assets:
    - name: "disk-image"
      type: "virtual_disk"
      format: "raw"
      url: "https://storage.example.com/images/ubuntu.img"
  resources:
    - type: "virdomain"
      name: "my-vm"
      spec:
        description: "My virtual machine"
        cpu: 2
        memory: "4294967296" # 4 GB in bytes
        machine_type: "uefi"
        storage_devices:
          - name: "disk1"
            type: "virtio_disk"
            source: "disk-image"
            boot: 1
            capacity: 50000000000 # 50 GB in bytes
        network_devices:
          - name: "eth0"
            type: "virtio"
        tags:
          - "production"
          - "web-server"
        state: "running"
        cloud_init_data:
          user_data: |
            #cloud-config
            package_update: true
            packages:
              - nginx
```
- `type`: Always `"Application"` for Kraken manifests
- `version`: Manifest schema version (typically `"1.0.0"`)
- `metadata`: Application name, labels, and other metadata
- `spec`: The main specification containing:
  - `assets`: Virtual disk images and other resources
  - `resources`: Virtual machines (VirDomain) and their configurations
  - `cloud_init_data`: Optional cloud-init configuration for VM initialization
A minimal virtual machine configuration demonstrating basic VirDomain resource creation with:
- 2 CPUs, 100MB memory
- UEFI machine type
- IDE CDROM and VirtIO disk storage
- VirtIO network interface
- External disk image asset
A Linux-based virtual machine template showing:
- Template VM in shutoff state (ready for cloning)
- 4GB memory, VirtIO disk with 50GB capacity
- Fedora-based disk image
- UEFI boot configuration
A Windows virtual machine template featuring:
- Windows OS configuration
- TPM machine type for Windows compatibility
- 4GB memory, 100GB VirtIO disk
- Windows Server 2022 disk image
Demonstrates deploying multiple VMs in a single manifest:
- Both Windows and Linux VMs
- Shared asset management
- Different machine types (TPM for Windows, UEFI for Linux)
- Multiple disk images in one deployment
A complete Kubernetes cluster deployment showcasing:
- Advanced cloud-init configuration
- Package installation and system configuration
- K3s installation and cluster setup
- Monitoring stack deployment (Prometheus, Grafana, Node Exporter)
- Service configuration and networking
- Root filesystem expansion
A GPU-accelerated machine learning application demonstrating:
- NVIDIA GPU driver installation
- Docker and NVIDIA Container Toolkit setup
- Automated container deployment
- Service management with systemd
- Template variables for dynamic naming
A minimal example showing ISO asset management:
- ISO format virtual disk
- External asset URL reference
- Asset-only manifest without VMs
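A minimal sketch of such an asset-only manifest, following the structure shown earlier (the asset name and URL here are placeholders, not real assets):

```yaml
type: Application
version: "1.0.0"
metadata:
  name: "iso-asset-example"
spec:
  assets:
    - name: "install-media"
      type: "virtual_disk"
      format: "iso"
      url: "https://storage.example.com/isos/installer.iso"
```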
Memory and capacity values are specified in bytes:

```yaml
memory: "4294967296"     # = 4 GB
capacity: 50000000000    # = 50 GB
```
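Note that the example memory value uses binary units (4 GiB = 4294967296 bytes) while the disk capacity uses decimal units (50 GB = 50000000000 bytes). These hypothetical helpers, not part of any Kraken tooling, show the arithmetic:

```python
# Illustrative helpers for computing the raw byte values Kraken
# manifests expect (these functions are not part of Kraken itself).

def gib_to_bytes(gib: int) -> int:
    """Memory sizes in the examples use binary units: 4 -> 4294967296."""
    return gib * 1024 ** 3

def gb_to_bytes(gb: int) -> int:
    """Disk capacities in the examples use decimal units: 50 -> 50000000000."""
    return gb * 1000 ** 3

print(gib_to_bytes(4))   # 4294967296
print(gb_to_bytes(50))   # 50000000000
```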
Storage devices support boot ordering:

```yaml
boot: 1   # Primary boot device
boot: 2   # Secondary boot device
```
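A multi-disk VM combines these in its `storage_devices` list, following the schema from the manifest structure above (device names and the second disk's settings here are illustrative):

```yaml
storage_devices:
  - name: "os-disk"
    type: "virtio_disk"
    source: "disk-image"
    boot: 1                  # primary boot device
    capacity: 50000000000    # 50 GB
  - name: "data-disk"
    type: "virtio_disk"
    boot: 2                  # secondary boot device
    capacity: 100000000000   # 100 GB
```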
Supported `machine_type` values:

- `uefi`: Modern UEFI boot (Linux, modern Windows)
- `bios`: Legacy BIOS boot (older systems)
- `tpm`: TPM-enabled for Windows security features
VirtIO network devices provide optimal performance:

```yaml
network_devices:
  - name: "eth0"
    type: "virtio"
```
Use cloud-init for VM initialization:

```yaml
cloud_init_data:
  user_data: |
    #cloud-config
    package_update: true
    packages:
      - nginx
  meta_data: |
    instance-id: my-vm-001
    local-hostname: my-vm
```
- Use descriptive names: Make resource names clear and meaningful
- Tag resources: Use tags for organization and management
- Optimize resource allocation: Right-size CPU and memory for your workload
- Leverage cloud-init: Use cloud-init for automated configuration
- Template management: Use shutoff state for template VMs
- Security considerations: Avoid hardcoded passwords in production
- Asset management: Use versioned, reliable asset URLs
- Documentation: Include descriptions for complex configurations
To deploy these manifests:

- Via Fleet Manager UI: Upload or paste manifest content into the WYSIWYG editor
- Via API: Submit manifests to the Kraken Core API `/v1/event` endpoint
- Via CLI: Use Kraken CLI tools (if available) for automated deployment
Common issues and solutions:
- Invalid memory/capacity values: Ensure values are in bytes, not human-readable formats
- Asset URL failures: Verify asset URLs are publicly hosted and accessible from target clusters
- Boot failures: Check boot order and device configuration
- Cloud-init errors: Validate YAML syntax in cloud-init data
- Network connectivity: Ensure network device types are supported
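Several of these issues can be caught locally before submission. Below is a minimal pre-flight check, a sketch assuming PyYAML is available; it only covers YAML syntax plus the top-level fields and byte-value rule described above, and the real Kraken Core API validation is stricter:

```python
import yaml  # PyYAML

def check_manifest(text: str) -> list[str]:
    """Return a list of problems found in a Kraken manifest string.

    Illustrative pre-submission check only: validates YAML syntax,
    the top-level fields from the manifest structure, and that memory
    values are raw byte counts rather than "4GB"-style strings.
    """
    try:
        doc = yaml.safe_load(text)
    except yaml.YAMLError as exc:
        return [f"invalid YAML: {exc}"]
    if not isinstance(doc, dict):
        return ["manifest must be a YAML mapping"]

    problems = []
    if doc.get("type") != "Application":
        problems.append('type must be "Application"')
    if "version" not in doc:
        problems.append("missing version")
    if "name" not in (doc.get("metadata") or {}):
        problems.append("missing metadata.name")
    if "spec" not in doc:
        problems.append("missing spec")

    # Memory must be a byte value, not a human-readable size
    for res in (doc.get("spec") or {}).get("resources", []):
        mem = (res.get("spec") or {}).get("memory")
        if mem is not None and not str(mem).isdigit():
            problems.append(f"memory must be bytes, got {mem!r}")
    return problems

good = "type: Application\nversion: '1.0.0'\nmetadata: {name: demo}\nspec: {}"
print(check_manifest(good))  # prints []
```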
For comprehensive guides, API reference, and interactive examples, visit our documentation site:
https://scalecomputing.github.io/kraken-applications/
When adding new examples:
- Follow the established naming conventions
- Include comprehensive descriptions
- Add appropriate tags and labels
- Test manifests before submission
- Update this README with new examples
To contribute to the documentation site, you'll need:

- `uv` for Python package management
- Basic familiarity with MkDocs and Markdown
```bash
# Quick development setup (installs dependencies and starts dev server)
make dev

# The development server will be available at http://localhost:8000

# Get help with all available commands
make help

# Install dependencies only
make sync

# Start development server with live reload
make serve

# Run quality checks (linting + strict build)
make check

# Build documentation site
make build

# Clean build artifacts
make clean

# Create a new documentation page
make new-page PAGE=path/to/page.md

# List all documentation pages
make list-pages

# Show documentation status
make status
```
All contributions must pass quality checks:

```bash
# Run linting on YAML files
make lint

# Build with strict mode (warnings become errors)
make build-strict

# Run comprehensive quality checks
make check
```
The GitHub Actions workflow automatically runs `make check` on all pull requests.