This directory contains ArgoCD configurations for managing the Spring Boot application across different environments, including ephemeral PR environments with CloudNativePG database management and integration with the 6-stage CI/CD pipeline.
```
argocd/
├── applications/                  # Individual Application manifests
│   ├── staging.yaml               # Staging environment
│   └── production.yaml            # Production environment
├── applicationsets/               # ApplicationSet for dynamic environments
│   └── pr-environments.yaml       # PR-based ephemeral environments
├── webhook/                       # GitHub integration
│   └── github-webhook-config.yaml # Webhook and notifications config
├── cleanup/                       # Cleanup workflows
│   └── pr-cleanup.yaml            # GitHub Actions for PR cleanup
└── README.md                      # This file
```
The deployment strategy follows GitHub Flow principles with simplified environment management:
- **Staging** (`spring-app-staging`)
  - Automatic sync enabled
  - Long-running integration testing environment
  - Single replica, minimal resources
  - Used for demos and integration testing
- **Production** (`spring-app-production`)
  - Automatic sync with approval gates
  - Connected to the `main` branch
  - 5+ replicas, full resources, HA setup
  - Canary deployment strategy
- **PR Environments** (ephemeral)
  - Namespace: `spring-app-pr-{number}`
  - URL: `https://pr-{number}.dev.domain.local`
  - Lifecycle: created when the PR gets the `preview` label, destroyed when the PR closes
  - Resources: minimal (1 replica, 256Mi memory)
  - Image tag: semantic versioning `v1.0.0-pr-{number}` from the 6-stage CI/CD pipeline
  - Database: shared development CloudNativePG instance
```bash
# Install ArgoCD
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Install the ArgoCD ApplicationSet controller (if not already included)
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/applicationset/v0.4.1/manifests/install.yaml
```

```bash
# Create a GitHub personal access token with repo permissions
kubectl create secret generic github-token \
  --from-literal=token=ghp_your_token_here \
  -n argocd
```

- Create a GitHub App in your organization
- Generate and download the private key
- Note the App ID and Installation ID
- Update the webhook configuration with these values
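The downloaded private key and IDs have to reach the cluster somehow; as a minimal sketch, they could be stored in a plain Secret like the one below. The Secret name, key names, IDs, and key material here are all placeholders — the key names must match whatever the webhook configuration actually references:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: github-app-credentials
  namespace: argocd
stringData:
  appID: "123456"            # placeholder App ID
  installationID: "7890123"  # placeholder Installation ID
  privateKey: |              # paste the downloaded PEM key here
    -----BEGIN RSA PRIVATE KEY-----
    ...
    -----END RSA PRIVATE KEY-----
```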
```bash
# Deploy static environment applications
kubectl apply -f applications/

# Deploy the ApplicationSet for PR environments
kubectl apply -f applicationsets/

# Configure GitHub webhooks and notifications
kubectl apply -f webhook/
```

```bash
# Add the config repository to ArgoCD
argocd repo add https://github.com/ariaskeneth/spring-app-config \
  --username your-username \
  --password your-token

# Or using an SSH key
argocd repo add git@github.com:ariaskeneth/spring-app-config.git \
  --ssh-private-key-path ~/.ssh/id_rsa
```

When a PR is created:
- Developer creates a pull request
- The 6-stage CI/CD pipeline executes:
  - `scan-and-lint`: TruffleHog, Trivy, Checkstyle/PMD
  - `build-and-sast`: OWASP, CodeQL, SonarQube quality gates
  - `pre-tests`: JUnit/JaCoCo, Testcontainers with PostgreSQL 18
  - `image-and-push`: multi-arch build, container scanning, registry push
  - `deploy`: adds the `preview` label automatically
- The ArgoCD ApplicationSet detects the labeled PR
- ArgoCD creates a new Application with image tag `v1.0.0-pr-{number}`
- Kubernetes deploys the PR environment
- GitHub Actions comments on the PR with the preview URL and monitoring links
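The naming conventions in this flow can be sketched in a few lines of shell; PR number 123 is only an example, and the patterns follow this README:

```shell
# Derive the names ArgoCD and the pipeline use for a labeled PR
PR_NUMBER=123
APP_NAME="spring-app-pr-${PR_NUMBER}"                    # ArgoCD Application / namespace
IMAGE_TAG="v1.0.0-pr-${PR_NUMBER}"                       # semantic version from CI/CD
PREVIEW_URL="https://pr-${PR_NUMBER}.dev.domain.local"   # ingress URL
echo "${APP_NAME} ${IMAGE_TAG} ${PREVIEW_URL}"
```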
When commits are pushed to the PR:
- The 6-stage CI/CD pipeline re-executes with security gates:
  - Security gates: 0 critical, ≤5 high vulnerabilities
  - Quality gates: ≥80% code coverage requirement
  - Multi-architecture build (AMD64/ARM64)
- A new container image is built with the updated tag `v1.0.0-pr-{number}`
- ArgoCD automatically syncs the updated image
- The environment is updated with the new code after passing all quality gates
When a PR is closed or merged:
- GitHub Actions removes the `preview` label
- The ArgoCD ApplicationSet no longer finds the PR
- ArgoCD automatically deletes the Application
- Kubernetes removes all PR environment resources
The ArgoCD deployment is tightly integrated with the 6-stage GitHub Actions pipeline:
```mermaid
graph LR
    A[scan-and-lint] --> B[build-and-sast]
    B --> C[pre-tests]
    C --> D[image-and-push]
    D --> E[deploy]
    E --> F[ArgoCD Sync]
    E --> E1[Add 'preview' Label]
    E --> E2[GitHub PR Comment]
    F --> F1[ApplicationSet Detection]
    F --> F2[Environment Creation]
```
ArgoCD only deploys applications that have passed:
- Security Gates: 0 critical vulnerabilities, ≤5 high vulnerabilities
- Quality Gates: ≥80% code coverage with JUnit/JaCoCo
- Integration Tests: Testcontainers with real PostgreSQL 18
- Container Security: Trivy scanning of multi-architecture images
- Static Analysis: SonarQube quality gate with 300s timeout (executed in pre-tests stage after test completion)
Development Environment:
- Shared CloudNativePG cluster for all PR environments
- Automatic service discovery: `postgres-app-cluster-rw:5432`
- No dedicated database per PR (cost optimization)

Production Environment:

- Dedicated CloudNativePG cluster with 3 instances
- Automated failover and backup to S3-compatible storage
- Read replicas for scaling: `postgres-app-cluster-ro:5432`
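To illustrate how an application pod consumes these services, here is a minimal Spring Boot datasource sketch; the database name `app` and the credential environment variable names are assumptions, with the actual values injected via the External Secrets Operator:

```yaml
spring:
  datasource:
    url: jdbc:postgresql://postgres-app-cluster-rw:5432/app  # read-write Service
    username: ${DB_USERNAME}  # injected from the per-PR secret
    password: ${DB_PASSWORD}
```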
```yaml
generators:
  - pullRequest:
      github:
        owner: ariaskeneth
        repo: spring-app
        tokenRef:
          secretName: github-token
          key: token
        labels:
          - preview # Only PRs with this label (added by the CI/CD pipeline)
      requeueAfterSeconds: 30
template:
  spec:
    source:
      kustomize:
        images:
          - name: ghcr.io/ariaskeneth/spring-app
            newTag: 'v1.0.0-pr-{{number}}' # Semantic versioning from CI/CD
```

- Namespace: `spring-app-pr-{{number}}`
- Ingress: `pr-{{number}}.dev.domain.local`
- Resources: minimal allocation (256Mi memory, 100m CPU)
- Database: shared CloudNativePG development cluster (no dedicated DB per PR)
- Image: multi-architecture (AMD64/ARM64) with semantic versioning
- Secrets: separate Vault path per PR environment
- Quality assurance: only deployed after passing all 6 CI/CD stages
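For illustration, the overlay rendered for a single PR might look like the following sketch, assuming a conventional kustomize base layout (with `{{number}}` already substituted to the example value 123; the base path is an assumption):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: spring-app-pr-123   # from spring-app-pr-{{number}}
resources:
  - ../../base                 # assumed location of the shared base manifests
images:
  - name: ghcr.io/ariaskeneth/spring-app
    newTag: v1.0.0-pr-123      # tag produced by the CI/CD pipeline
```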
Each PR environment includes:
- Prometheus metrics with PR-specific labels and ServiceMonitor
- Grafana dashboard filtered by PR number with links in GitHub comments
- Loki logs with PR context and structured logging
- Tempo traces tagged with PR information for distributed tracing
- Mimir long-term metrics storage integration
- ArgoCD Application status visible in GitHub PR comments
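A per-PR ServiceMonitor could look like the sketch below; the label names, port, and metrics path are assumptions (the actual manifests live in the config repo), though `/actuator/prometheus` is the standard Spring Boot Actuator endpoint:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: spring-app
  namespace: spring-app-pr-123
  labels:
    pr-number: "123"              # PR-specific label for dashboard filtering
spec:
  selector:
    matchLabels:
      app: spring-app
  endpoints:
    - port: http
      path: /actuator/prometheus  # Spring Boot Actuator metrics endpoint
```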
```yaml
# Enforced in the PR template
resources:
  requests:
    memory: "256Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "250m"
```

PR environments are isolated with NetworkPolicies:

- Can access the shared development database
- Cannot access other PR environments
- Limited egress to essential services only
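A sketch of the isolation rules described above — the namespace labels and the database namespace name are assumptions and would need to match the actual cluster layout:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: pr-isolation
  namespace: spring-app-pr-123
spec:
  podSelector: {}                  # applies to every pod in the PR namespace
  policyTypes: [Ingress, Egress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx  # assumed ingress namespace
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: databases      # assumed shared DB namespace
    - ports:                       # allow DNS resolution
        - protocol: UDP
          port: 53
```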
```yaml
# ServiceAccount with minimal permissions
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spring-app-pr
  namespace: spring-app-pr-{{number}}
```

Each PR environment gets its own Vault path with the External Secrets Operator:

```
secret/spring-app/pr-123/database
secret/spring-app/pr-123/api
```
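An ExternalSecret wiring the database path above into the PR namespace might look like this sketch; the SecretStore name and the `username`/`password` property keys are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: spring-app-database
  namespace: spring-app-pr-123
spec:
  secretStoreRef:
    name: vault-backend           # assumed ClusterSecretStore name
    kind: ClusterSecretStore
  target:
    name: spring-app-database     # Kubernetes Secret created by ESO
  data:
    - secretKey: DB_USERNAME
      remoteRef:
        key: secret/spring-app/pr-123/database
        property: username
    - secretKey: DB_PASSWORD
      remoteRef:
        key: secret/spring-app/pr-123/database
        property: password
```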
CloudNativePG Integration:
- Shared development cluster for all PR environments
- Automatic connection string generation
- Database credentials managed via External Secrets Operator
```bash
# Check ApplicationSet status
kubectl get applicationset spring-app-pr-environments -n argocd -o yaml

# Check ApplicationSet controller logs
kubectl logs -f deployment/argocd-applicationset-controller -n argocd

# Verify GitHub token permissions
kubectl get secret github-token -n argocd -o yaml
```

```bash
# Check Application status
kubectl get application spring-app-pr-123 -n argocd

# Check namespace and resources
kubectl get all -n spring-app-pr-123

# Check ingress configuration
kubectl get ingress -n spring-app-pr-123
```

```bash
# Manually delete a stuck Application
kubectl delete application spring-app-pr-123 -n argocd

# Force-clean the namespace
kubectl delete namespace spring-app-pr-123 --force --grace-period=0
```

- Set appropriate resource limits for PR environments (256Mi/512Mi memory)
- Use horizontal pod autoscaling with conservative limits (1-3 pods for PR)
- Monitor resource usage across all PR environments with Prometheus
- Leverage CloudNativePG shared development cluster for cost efficiency
- Implement automatic cleanup after PR closure via ApplicationSet lifecycle
- Use shared CloudNativePG development cluster for all PR environments
- Set TTL for PR environments (automatic cleanup when PR closes)
- Multi-architecture images reduce infrastructure costs across different node types
- Automatic PR comments with environment URLs and monitoring links
- Include ArgoCD Application status and sync information
- Provide direct links to Grafana dashboards filtered by PR number
- Real-time deployment status via GitHub Actions integration
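As a sketch of how the preview comment might be assembled in CI — the PR number, Grafana URL, and dashboard parameter are assumptions for illustration:

```shell
# Build the comment body posted to the PR after a successful deploy
PR_NUMBER=123
BODY="Preview: https://pr-${PR_NUMBER}.dev.domain.local
Dashboard: https://grafana.domain.local/d/spring-app?var-pr=${PR_NUMBER}"
echo "$BODY"
# In CI this could be posted with the GitHub CLI, e.g.:
#   gh pr comment "$PR_NUMBER" --body "$BODY"
```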
- Only deploy after passing all 6 CI/CD pipeline stages
- Enforce security gates: 0 critical, ≤5 high vulnerabilities
- Network isolation between PR environments via NetworkPolicies
- External Secrets Operator integration with Vault for secure secret management
- Regular container scanning with Trivy in CI/CD pipeline
- CloudNativePG operator handles database lifecycle automatically
- Shared development cluster reduces resource overhead
- Production cluster with automated failover and backup strategies
- Connection string management via Kubernetes services
This ArgoCD setup provides a robust, scalable solution for managing both static and ephemeral environments with full GitOps automation, comprehensive security scanning, and automated database management through CloudNativePG.