The ProxySQL agent is a small, statically compiled Go binary (Go 1.25) used to maintain the state of a ProxySQL cluster, designed primarily to run as a Kubernetes sidecar container. The repo includes a Dockerfile to build a Debian-based image, or you can use the image published to the GitHub Container Registry.
There exists relatively little tooling around ProxySQL, so we hope that this is useful to others out there, even if it's just to learn how to maintain a cluster.
This is mainly useful in a Kubernetes deployment if you have a horizontal pod autoscaler defined for the satellite and/or core pods; as these pods scale in and out, the state of the ProxySQL cluster needs to be maintained. If you are running a static cluster on VMs where the hosts rarely change, or you don't use an HPA, this probably won't be as useful to you (though there are some features coming that might help with even that).
Some examples of where this is necessary:
- As satellite pods scale in, one of the core pods needs to run `LOAD PROXYSQL SERVERS TO RUNTIME` in order to accept the new pods into the cluster; until that is done, the satellite pod will not receive configuration from the core pods (see the sketch after this list)
- As core pods recycle (or all core pods are recycled) and their IPs change, the satellites need to run some commands to load the new core pods into runtime
- If all core pods recycle, the satellite pods will run `LOAD PROXYSQL SERVERS FROM CONFIG`, which points them to the `proxysql-core` service, and once the core pods are up the satellites should receive configuration again
  - Note that if your cluster is running fine and the core pods all go away, the satellites will continue to function with the settings they already had; in other words, even if the core pods vanish, you will still serve proxied MySQL traffic as long as the satellites have fetched the configuration once
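For illustration, here is a minimal sketch of what issuing one of these admin commands from Go could look like, using `database/sql` with the go-sql-driver/mysql driver. The admin address and credentials shown are assumptions for the example, not the agent's actual implementation:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // driver used to talk to the ProxySQL admin interface
)

func main() {
	// Admin interface address and credentials are assumptions for this example.
	db, err := sql.Open("mysql", "radmin:radmin@tcp(127.0.0.1:6032)/")
	if err != nil {
		log.Fatalf("open admin connection: %v", err)
	}
	defer db.Close()

	// The command a core pod runs so newly scaled-in satellite pods are
	// accepted into the cluster (per the first bullet above).
	if _, err := db.Exec("LOAD PROXYSQL SERVERS TO RUNTIME"); err != nil {
		log.Fatalf("load proxysql servers to runtime: %v", err)
	}
}
```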
We looked into using Ruby, and in fact the "agents" we are currently running are written in Ruby, but there have been some issues:
- If the ProxySQL admin interface gets wedged, the Ruby and mysql processes still continue to spawn and spin, which will eventually lead to either inode exhaustion or a container OOM
- The scheduler spawns a new Ruby process every 10s
  - Each Ruby process shells out to the mysql binary several times per script invocation
- In addition to the scheduler process, the health probes are a separate Ruby script that also spawns several mysql processes per run
  - Two script invocations every 10s, one for liveness and one for readiness
We wanted to avoid having to install a bunch of Ruby gems in the container, so we decided shelling out to mysql was fine; we got most of the patterns from existing ProxySQL tooling and figured it would work in the short term. It has worked fine, though there have been enough instances of OOM'd containers that it's become worrisome. This usually happens if someone is in a pod doing any kind of work (modifying mysql query rules, etc.), but we haven't been able to figure out what causes the admin interface to become wedged.
Because Kubernetes tooling is generally written in Go, the Ruby Kubernetes gems didn't seem to be as well maintained or as easy to use as the Go libraries. And because the Go binary is statically compiled, we don't need to deal with a bunch of external dependencies at runtime.
The agent supports three run modes:
- `core` - Core pods maintain cluster state and configuration
- `satellite` - Satellite pods receive configuration from core pods
- `dump` - One-time export of ProxySQL statistics data to CSV files (see the sketch below)
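As a rough idea of what the `dump` mode's CSV export could look like, here is a hedged sketch with generic column handling; the package name, function name, and query are illustrative, not the agent's actual implementation:

```go
// Package dump sketches a one-time CSV export from the ProxySQL admin
// interface; it is illustrative, not the agent's actual code.
package dump

import (
	"database/sql"
	"encoding/csv"
	"os"
)

// ToCSV runs a query against the ProxySQL admin interface and writes the
// result, header row first, to a CSV file. Column handling is generic so it
// works for any stats table.
func ToCSV(db *sql.DB, query, path string) error {
	rows, err := db.Query(query)
	if err != nil {
		return err
	}
	defer rows.Close()

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	w := csv.NewWriter(f)
	defer w.Flush()

	cols, err := rows.Columns()
	if err != nil {
		return err
	}
	if err := w.Write(cols); err != nil {
		return err
	}

	// Scan every column as raw bytes and stringify it for the CSV record.
	vals := make([]sql.RawBytes, len(cols))
	scan := make([]any, len(cols))
	for i := range vals {
		scan[i] = &vals[i]
	}
	for rows.Next() {
		if err := rows.Scan(scan...); err != nil {
			return err
		}
		record := make([]string, len(cols))
		for i, v := range vals {
			record[i] = string(v)
		}
		if err := w.Write(record); err != nil {
			return err
		}
	}
	return rows.Err()
}
```

In practice this would be invoked once, against the query digest stats table mentioned in the roadmap below, after which the pod exits.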
In the example repo, there are two separate deployments: the core deployment and the satellite deployment. The agent is responsible for maintaining the state of this cluster.
On boot, the agent will connect to the ProxySQL admin interface on 127.0.0.1:6032 (default address). It will maintain the connection throughout the life of the pod, and will periodically run the commands necessary to maintain the cluster, depending on the run mode specified on boot.
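Conceptually, the maintenance loop looks something like the following sketch; the package layout, interval, and the exact commands run per mode are placeholders, not the agent's actual code:

```go
// Package agentloop sketches the periodic maintenance loop; names here are
// illustrative and not the agent's actual package layout.
package agentloop

import (
	"context"
	"database/sql"
	"log/slog"
	"time"
)

// Maintain runs the per-mode maintenance command on a fixed interval until
// the context is cancelled.
func Maintain(ctx context.Context, db *sql.DB, mode string, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			var err error
			switch mode {
			case "core":
				// accept newly scaled-in satellite pods into the cluster
				_, err = db.ExecContext(ctx, "LOAD PROXYSQL SERVERS TO RUNTIME")
			case "satellite":
				// re-point at the proxysql-core service if the core pods have changed
				_, err = db.ExecContext(ctx, "LOAD PROXYSQL SERVERS FROM CONFIG")
			}
			if err != nil {
				slog.Error("cluster maintenance failed", "mode", mode, "error", err)
			}
		}
	}
}
```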
The agent exposes several HTTP endpoints (default port 8080):
- `/healthz/started` - Startup probe (simple ping to the ProxySQL admin interface)
- `/healthz/ready` - Readiness probe (comprehensive health checks, returns 503 when draining)
- `/healthz/live` - Liveness probe (health checks, remains healthy during graceful shutdown)
- `/shutdown` - Graceful shutdown endpoint for `container.lifecycle.preStop.httpGet` hooks
All health endpoints return JSON responses with detailed status information including backend server states, connection counts, and draining status.
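To make the probe behavior concrete, here is a hedged sketch of how a readiness endpoint along these lines can be wired up with net/http; the JSON field names and the interaction with `/shutdown` shown here are assumptions about the shape, not the agent's exact responses:

```go
package main

import (
	"encoding/json"
	"net/http"
	"sync/atomic"
)

// draining would be flipped by the /shutdown handler so that the readiness
// probe starts failing and Kubernetes stops routing traffic to the pod.
var draining atomic.Bool

func readyHandler(w http.ResponseWriter, r *http.Request) {
	resp := map[string]any{
		"status":   "ok",
		"draining": draining.Load(),
		// The real agent also reports backend server states and connection
		// counts gathered from the ProxySQL admin interface.
	}

	code := http.StatusOK
	if draining.Load() {
		resp["status"] = "draining"
		code = http.StatusServiceUnavailable // 503 while draining
	}

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(code)
	_ = json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/healthz/ready", readyHandler)
	http.HandleFunc("/shutdown", func(w http.ResponseWriter, r *http.Request) {
		draining.Store(true) // preStop hook flips the pod into draining
		w.WriteHeader(http.StatusOK)
	})
	_ = http.ListenAndServe(":8080", nil)
}
```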
The agent responds to Unix signals for operational control:
- `SIGTERM` / `SIGINT` - Initiates graceful shutdown
- `SIGUSR1` - Dumps current ProxySQL status and statistics to logs
- `SIGUSR2` - Reserved for future config reload functionality
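Handling these signals in Go generally looks like the following sketch; the log messages and shutdown steps are placeholders:

```go
package main

import (
	"log/slog"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT, syscall.SIGUSR1, syscall.SIGUSR2)

	for sig := range sigs {
		switch sig {
		case syscall.SIGTERM, syscall.SIGINT:
			slog.Info("received signal, starting graceful shutdown", "signal", sig.String())
			// drain, close the admin connection, then exit
			return
		case syscall.SIGUSR1:
			slog.Info("dumping current ProxySQL status and statistics to logs")
		case syscall.SIGUSR2:
			slog.Info("config reload not implemented yet") // reserved for future use
		}
	}
}
```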
The agent uses a configuration system built on Viper with multiple configuration sources, applied from lowest to highest precedence:

- Defaults set in code
- Configuration file (YAML format, `config.yaml` or specified via the `AGENT_CONFIG_FILE` env var)
- Environment variables (prefixed with `AGENT_`, e.g., `AGENT_PROXYSQL_ADDRESS`)
- Command-line flags
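A minimal sketch of wiring up that precedence with Viper; the key names (other than `AGENT_PROXYSQL_ADDRESS` and `AGENT_CONFIG_FILE`, which appear above) are assumptions, so check the real config.yaml for the actual keys:

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"github.com/spf13/pflag"
	"github.com/spf13/viper"
)

func loadConfig() (*viper.Viper, error) {
	v := viper.New()

	// 1. Defaults set in code (lowest precedence).
	v.SetDefault("proxysql.address", "127.0.0.1:6032")
	v.SetDefault("run_mode", "satellite")

	// 2. Configuration file: config.yaml, or the path in AGENT_CONFIG_FILE.
	if path := os.Getenv("AGENT_CONFIG_FILE"); path != "" {
		v.SetConfigFile(path)
	} else {
		v.SetConfigName("config")
		v.SetConfigType("yaml")
		v.AddConfigPath(".")
	}
	if err := v.ReadInConfig(); err != nil {
		// A missing file is fine; defaults, env vars, and flags still apply.
		if _, ok := err.(viper.ConfigFileNotFoundError); !ok {
			return nil, err
		}
	}

	// 3. Environment variables: AGENT_PROXYSQL_ADDRESS -> proxysql.address, etc.
	v.SetEnvPrefix("AGENT")
	v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
	v.AutomaticEnv()

	// 4. Command-line flags (highest precedence).
	pflag.String("run_mode", "satellite", "agent run mode: core, satellite, or dump")
	pflag.Parse()
	if err := v.BindPFlags(pflag.CommandLine); err != nil {
		return nil, err
	}

	return v, nil
}

func main() {
	v, err := loadConfig()
	if err != nil {
		panic(err)
	}
	fmt.Println("run mode:", v.GetString("run_mode"))
}
```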
Key configuration options include:
- ProxySQL admin interface connection details
- Run mode (`core`, `satellite`, or `dump`)
- Kubernetes pod selector for cluster discovery
- HTTP API port and settings
- Logging configuration (level, format, structured logging)
- Graceful shutdown timeouts
See the included config.yaml for a complete configuration example.
```sh
# Build binary
make build

# Run tests with race detection
make test

# Full validation pipeline
make check

# Generate coverage report
make coverage

# Run in satellite mode with debug logging
make run
```

Note: Local development requires a running ProxySQL instance. See the docker-compose.yml for a local development setup.
- P3 - Expand HTTP API for operational control
  - Enhanced cluster status endpoints
  - Force satellite resync capability
  - Runtime configuration updates
- P1 - Dump the contents of `stats_mysql_query_digests` to a file on disk; will be used to get the data into Snowflake. File format is CSV
- P1 - Health checks; replace the Ruby health probe with this
- P2 - Replace the pre-stop Ruby script with this
- P2 - Better test coverage
- ✅ Cluster management (i.e. core and satellite agents)
- ✅ Health checks via an HTTP endpoint, specifically for the ProxySQL container
- ✅ Pre-stop hook replacement
The project uses automated releases with goreleaser. Current version: 1.1.7
To create a new release:
```sh
git tag vX.X.X
git push origin vX.X.X
```
This triggers goreleaser to build and publish:
- Linux AMD64 binary
- Multi-architecture Docker images
- GitHub release with changelog
See CHANGELOG.md for version history and recent updates.
- go-sql-driver/mysql - MySQL driver for ProxySQL admin interface
- k8s-client-go - Kubernetes API client for cluster discovery
- slog - Structured logging with JSON and text output formats
- tint - Pretty console logging for development
- viper - Configuration management with file, ENV, and flag support
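As an illustration of how the logging pieces fit together, here is a small sketch of a logger setup that switches between JSON output and tint's console output; the format and level knobs are assumptions about the agent's configuration, not its actual option names:

```go
package main

import (
	"log/slog"
	"os"
	"time"

	"github.com/lmittmann/tint"
)

// newLogger returns a JSON logger for production or a tint-colored console
// logger for development. The format/level switch mirrors the agent's logging
// configuration options in spirit; the exact keys may differ.
func newLogger(format string, level slog.Level) *slog.Logger {
	var handler slog.Handler
	switch format {
	case "text":
		handler = tint.NewHandler(os.Stderr, &tint.Options{
			Level:      level,
			TimeFormat: time.Kitchen,
		})
	default:
		handler = slog.NewJSONHandler(os.Stderr, &slog.HandlerOptions{Level: level})
	}
	return slog.New(handler)
}

func main() {
	logger := newLogger("text", slog.LevelDebug)
	slog.SetDefault(logger)
	slog.Info("proxysql-agent starting", "mode", "satellite")
}
```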
