6 changes: 3 additions & 3 deletions docs/ReleaseNotes/Kubernetes-Operator-for-PS-RN0.6.0.md
@@ -24,12 +24,12 @@ Percona Operator for MySQL allows users to deploy MySQL clusters with both async

## Improvements

* {{ k8spsjira(162) }}: Now [MySQL X protocol](https://www.percona.com/blog/understanding-mysql-x-all-flavors) can be used with HAProxy load balancing
* {{ k8spsjira(163) }}: Percona Monitoring and Management (PMM) is now able to gather HAProxy metrics
* {{ k8spsjira(205) }}: Update user passwords on a per-user basis instead of a cumulative update so that if an error occurs while changing a user's password, other system users are not affected
* {{ k8spsjira(270) }}: Use clearer [Controller](https://kubernetes.io/docs/concepts/architecture/controller/) names in log messages to ease troubleshooting
* {{ k8spsjira(280) }}: Full cluster crash recovery with Group Replication now uses MySQL Shell built-in checks to detect the member with the latest transactions and reboots from it, making the cluster less prone to data loss
* {{ k8spsjira(281) }}: The Operator [can now be run locally](https://github.com/percona/percona-server-mysql-operator/blob/main/CONTRIBUTING.md#1-contributing-to-the-source-tree) against a remote Kubernetes cluster, which simplifies the development process, substantially shortening the way to make and try minor code improvements

## Bugs Fixed

4 changes: 2 additions & 2 deletions docs/ReleaseNotes/Kubernetes-Operator-for-PS-RN0.7.0.md
@@ -27,11 +27,11 @@ With our latest release, we put an all-hands-on-deck approach towards fine-tunin
## New features

* {{ k8spsjira(275) }}: The Operator now checks if the needed Secrets exist and connects to the storage to check the existence of a backup before starting the restore process
* {{ k8spsjira(277) }}: The new `topologySpreadConstraints` Custom Resource option allows you to use [Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods) to achieve even distribution of Pods across the Kubernetes cluster

## Improvements

* {{ k8spsjira(129) }}: The documentation on how to build and test the Operator [is now available](https://github.com/percona/percona-server-mysql-operator/blob/main/e2e-tests/README.md)
* {{ k8spsjira(295) }}: Certificate issuer errors are now reflected in the Custom Resource status message and can be easily checked with the `kubectl get ps -o yaml` command
* {{ k8spsjira(326) }}: The mysql-monit Orchestrator sidecar container now inherits Orchestrator resources in the same way that the HAProxy mysql-monit container does (thanks to SlavaUtesinov for the contribution)

1 change: 1 addition & 0 deletions docs/System-Requirements.md
@@ -30,3 +30,4 @@ Choose how you wish to install the Operator:
* [on Google Kubernetes Engine (GKE)](gke.md)
* [on Amazon Elastic Kubernetes Service (AWS EKS)](eks.md)
* [in a Kubernetes-based environment](kubernetes.md)
* [on OpenShift](openshift.md)
217 changes: 136 additions & 81 deletions docs/TLS.md
@@ -3,29 +3,21 @@
The Percona Operator for MySQL uses the Transport Layer
Security (TLS) cryptographic protocol for the following types of communication:

* Internal - communication between Percona Server for MySQL instances. The internal certificate is also used as an authorization method.
> Reviewer note: there's no internal and external in the PS Operator; there's a single certificate.


TLS security can be configured in several ways.

* By default, the Operator **generates long-term certificates** automatically during cluster creation if there are no certificate secrets available. The Operator's self-signed issuer is local to the Operator Namespace. This self-signed issuer is created because Percona Distribution for MySQL requires all certificates to be issued by the same source.

* The Operator can use *cert-manager*, which will
automatically **generate and renew short-term TLS certificates**. You must explicitly install cert-manager for this scenario.

    The *cert-manager* acts as a self-signed issuer and generates certificates, allowing you to deploy and use the
    Percona Operator without a separate certificate issuer.

* You can generate TLS certificates manually or obtain them from another issuer and provide them to the Operator.

## Install and use the *cert-manager*

@@ -41,84 +33,147 @@ deploy the Operator, and the Operator will request a certificate from the

### Installation of the *cert-manager*

The cert-manager requires its own namespace.

The steps to install the *cert-manager* are the following:

1. Create a namespace:

    ```{.bash data-prompt="$"}
    $ kubectl create namespace cert-manager
    ```

2. Disable resource validations on the cert-manager namespace:

    ```{.bash data-prompt="$"}
    $ kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
    ```

3. Install the cert-manager:

    ```{.bash data-prompt="$"}
    $ kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v{{ certmanagerrecommended }}/cert-manager.yaml
    ```

4. Verify the *cert-manager* by running the following command:

    ```{.bash data-prompt="$"}
    $ kubectl get pods -n cert-manager
    ```

    The result should display the *cert-manager* and webhook active and running:

    ```{.text .no-copy}
    NAME                                       READY   STATUS    RESTARTS   AGE
    cert-manager-69f748766f-6chvt              1/1     Running   0          65s
    cert-manager-cainjector-7cf6557c49-l2cwt   1/1     Running   0          66s
    cert-manager-webhook-58f4cff74d-th4pp      1/1     Running   0          65s
    ```

Once you create the database with the Operator, it will automatically trigger the cert-manager to create certificates. Whenever you [check certificates for expiration](#), you will find that they are valid and short-term.
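You can eyeball those expiration dates with plain `openssl`. The following is a minimal sketch: extracting the certificate from the cluster's TLS Secret is shown only as a comment, and the Secret name `ps-cluster1-ssl` is an assumption for a cluster named `ps-cluster1`; the throwaway certificate generated below just makes the commands runnable anywhere:

```bash
# Hypothetical extraction from a live cluster (Secret name is an assumption):
#   kubectl get secret ps-cluster1-ssl -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
# Generate a throwaway 90-day certificate instead, so the inspection
# command can be tried without a cluster.
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
  -days 90 -subj "/CN=demo" 2>/dev/null

# Print the validity window of the certificate
openssl x509 -in tls.crt -noout -dates
```

cert-manager-issued certificates typically show a `notAfter` date roughly 90 days out, while the Operator's own long-term certificates are valid for much longer.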

## Generate certificates manually

You can generate TLS certificates manually instead of using the Operator's automatic certificate generation. This approach gives you full control over certificate properties and is useful for production environments with specific security requirements.

### What you'll create

When you follow the steps from this guide, you'll generate these certificate files:

* `server.pem` - Server certificate for the Percona Server for MySQL nodes
* `server-key.pem` - Private key for the server certificate
* `ca.pem` - Certificate Authority certificate
* `ca-key.pem` - Certificate Authority private key

Next, you create the server TLS certificates using the CA key, certificate, and server details, package them into a Kubernetes Secret, and then reference this Secret in the Custom Resource.

### Prerequisites

Before you start, make sure you have:

* `cfssl` and `cfssljson` tools installed on your system
* Your cluster name and namespace ready
* Access to your Kubernetes cluster

### Generate certificates

1. Set the environment variables used in the commands below, replacing `ps-cluster1` and `my-namespace` with your actual cluster name and namespace:

```bash
CLUSTER_NAME=ps-cluster1
NAMESPACE=my-namespace
```


2. Generate a Certificate Authority (CA). You will use it to sign your server certificates.

```bash
cat <<EOF | cfssl gencert -initca - | cfssljson -bare ca
{
"CN": "Root CA",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF
```

The output is two files: `ca.pem` (the CA certificate) and `ca-key.pem` (the CA private key).

3. Generate the Server Certificate using the CA. This command generates a server certificate and key, signed by your newly-created CA. The certificate will be valid for all hosts required by your cluster components.

```bash
cat <<EOF | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem - | cfssljson -bare server
{
"hosts": [
"*.${CLUSTER_NAME}-mysql",
"*.${CLUSTER_NAME}-mysql.${NAMESPACE}",
"*.${CLUSTER_NAME}-mysql.${NAMESPACE}.svc",
"*.${CLUSTER_NAME}-orchestrator",
"*.${CLUSTER_NAME}-orchestrator.${NAMESPACE}",
"*.${CLUSTER_NAME}-orchestrator.${NAMESPACE}.svc",
"*.${CLUSTER_NAME}-router",
"*.${CLUSTER_NAME}-router.${NAMESPACE}",
"*.${CLUSTER_NAME}-router.${NAMESPACE}.svc"
],
"CN": "${CLUSTER_NAME}-mysql",
"key": {
"algo": "rsa",
"size": 2048
}
}
EOF
```

The outputs are `server.pem` (the server certificate) and `server-key.pem` (the server private key).
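Before packaging the files into a Secret, it is worth confirming that the server certificate actually verifies against the CA and that key and certificate belong together. A sketch with `openssl` follows; the demo CA and server certificate it generates are only stand-ins for the cfssl-produced `ca.pem` and `server.pem`, so run the same two checks against your real files:

```bash
# Stand-ins for the cfssl output, so the checks below are runnable anywhere
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -days 365 -subj "/CN=Root CA" 2>/dev/null
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
  -subj "/CN=ps-cluster1-mysql" 2>/dev/null
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -days 365 -out server.pem 2>/dev/null

# The server certificate must chain to the CA that will be stored as ca.crt
openssl verify -CAfile ca.pem server.pem   # prints: server.pem: OK

# The key and certificate must belong together: their moduli must match
openssl x509 -in server.pem -noout -modulus | openssl md5
openssl rsa -in server-key.pem -noout -modulus | openssl md5
```

If the two MD5 digests differ, the certificate was not generated from that private key and MySQL will reject the pair.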

### Create the Kubernetes Secret from the generated certificates

This command packages the generated certificate and key files into a Kubernetes secret named `my-cluster-ssl` in your chosen namespace.

```bash
kubectl create secret generic my-cluster-ssl -n $NAMESPACE \
--from-file=tls.crt=server.pem \
--from-file=tls.key=server-key.pem \
--from-file=ca.crt=ca.pem \
--type=kubernetes.io/tls
```

### Configure your cluster

After creating the Secret, reference it in your cluster configuration in the `deploy/cr.yaml` file:

```yaml
spec:
sslSecretName: my-cluster-ssl
```

Apply the configuration to update the cluster:

```bash
kubectl apply -f deploy/cr.yaml -n $NAMESPACE
```

This triggers your Pods to restart.

14 changes: 10 additions & 4 deletions docs/assets/fragments/monitor-db.txt
@@ -16,9 +16,11 @@ PMM Server and PMM Client are installed separately.

## Install PMM Server

You must have PMM Server up and running. You can run PMM Server as a *Docker container*, a *virtual appliance*, or on an *AWS instance*.
Please refer to the [official PMM documentation :octicons-link-external-16:](https://docs.percona.com/percona-monitoring-and-management/3/install-pmm/install-pmm-server/index.html)
for the installation instructions.

For a Kubernetes environment, we recommend installing PMM from the [Helm chart :octicons-link-external-16:](https://docs.percona.com/percona-monitoring-and-management/3/install-pmm/install-pmm-server/deployment-options/helm/index.html).

## Install PMM Client

@@ -27,7 +29,9 @@ PMM Client is installed as a side-car container in the database Pods in your Kub

1. Authorize PMM Client within PMM Server.

1. PMM3 uses Grafana service accounts to control access to PMM server components and resources. To authenticate in PMM server, you need a service account token. Use PMM documentation to [generate a service account with the **Admin** role and token :octicons-link-external-16:](https://docs.percona.com/percona-monitoring-and-management/3/api/authentication.html?h=authe#generate-a-service-account-and-token).

The token must have the format `glsa_*************************_9e35351b`.
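    Before patching the Secret, you can sanity-check the token shape and produce the base64 value yourself; this is a sketch, and the token below is a placeholder rather than a real credential:

    ```bash
    # Placeholder token - substitute the real service account token
    TOKEN="glsa_EXAMPLEEXAMPLEEXAMPLEEXAMPLE_9e35351b"

    # Check the expected glsa_..._... shape before storing it
    case "$TOKEN" in
      glsa_*_*) echo "token format OK" ;;
      *)        echo "unexpected token format" >&2; exit 1 ;;
    esac

    # -n matters: a trailing newline would corrupt the base64 value in the Secret
    echo -n "$TOKEN" | base64
    ```

    The base64 output is exactly the value the `kubectl patch` commands below embed under the `pmmservertoken` key.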

!!! warning

@@ -45,7 +49,7 @@
=== ":simple-apple: in macOS"

```{.bash data-prompt="$"}
$ kubectl patch secret/ps-cluster1-secrets -p "$(echo -n '{"data":{"pmmservertoken":"'$(echo -n <my-token> | base64)'"}}')"
```

2. Update the `pmm` section in the [deploy/cr.yaml :octicons-link-external-16:](https://github.com/percona/percona-server-mysql-operator/blob/v{{ release }}/deploy/cr.yaml) file:
@@ -65,6 +69,8 @@ PMM Client is installed as a side-car container in the database Pods in your Kub
``` {.bash data-prompt="$"}
$ kubectl apply -f deploy/cr.yaml -n <namespace>
```

This triggers the Operator to restart your cluster Pods.

4. Check that corresponding Pods are not in a cycle of stopping and restarting.
This cycle occurs if there are errors on the previous steps: