4 changes: 2 additions & 2 deletions docs/ReleaseNotes/Kubernetes-Operator-for-PS-RN0.6.0.md
@@ -24,10 +24,10 @@ Percona Operator for MySQL allows users to deploy MySQL clusters with both async

## Improvements

* {{ k8spsjira(162) }}: Now [MySQL X protocol :octicons-link-external-16:](https://www.percona.com/blog/understanding-mysql-x-all-flavors) can be used with HAProxy load balancing
* {{ k8spsjira(162) }}: Now [MySQL X protocol](https://www.percona.com/blog/understanding-mysql-x-all-flavors) can be used with HAProxy load balancing
* {{ k8spsjira(163) }}: Percona Monitoring and Management (PMM) is now able to gather HAProxy metrics
* {{ k8spsjira(205) }}: Update user passwords on a per-user basis instead of a cumulative update so that if an error occurs while changing a user's password, other system users are not affected
* {{ k8spsjira(270) }}: Use more clear [Controller :octicons-link-external-16:](https://kubernetes.io/docs/concepts/architecture/controller/) names in log messages to ease troubleshooting
* {{ k8spsjira(270) }}: Use clearer [Controller](https://kubernetes.io/docs/concepts/architecture/controller/) names in log messages to ease troubleshooting
* {{ k8spsjira(280) }}: Full cluster crash recovery with Group Replication now uses MySQL Shell built-in checks to detect the member with the latest transactions and reboots the cluster from it, making the cluster less prone to data loss
* {{ k8spsjira(281) }}: The Operator [can now be run locally :octicons-link-external-16:](https://github.com/percona/percona-server-mysql-operator/blob/v{{release}}/CONTRIBUTING.md#1-contributing-to-the-source-tree) against a remote Kubernetes cluster, which simplifies the development process, substantially shortening the way to make and try minor code improvements
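The X protocol access noted above can be sanity-checked with MySQL Shell; a sketch, assuming the default X protocol port 33060 and an illustrative HAProxy Service name and user (none of these are taken from the release notes):

```shell
# Connect over the MySQL X protocol through the HAProxy Service.
# The host name, user, and port below are illustrative assumptions.
mysqlsh --mysqlx -u root -h cluster1-haproxy -P 33060
```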

2 changes: 1 addition & 1 deletion docs/ReleaseNotes/Kubernetes-Operator-for-PS-RN0.7.0.md
@@ -27,7 +27,7 @@ With our latest release, we put an all-hands-on-deck approach towards fine-tunin
## New features

* {{ k8spsjira(275) }}: The Operator now checks if the needed Secrets exist and connects to the storage to check the existence of a backup before starting the restore process
* {{ k8spsjira(277) }}: The new `topologySpreadConstraints` Custom Resource option allows to use [Pod Topology Spread Constraints :octicons-link-external-16:](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods) to achieve even distribution of Pods across the Kubernetes cluster
* {{ k8spsjira(277) }}: The new `topologySpreadConstraints` Custom Resource option allows using [Pod Topology Spread Constraints](https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods) to achieve an even distribution of Pods across the Kubernetes cluster
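As an illustration, such a constraint might be sketched in the Custom Resource as follows (the exact field placement and label values are assumptions, not taken from these release notes):

```yaml
# Hypothetical sketch of the topologySpreadConstraints option;
# verify field placement and labels against your Operator version.
spec:
  mysql:
    topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app.kubernetes.io/name: percona-server
```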

## Improvements

119 changes: 7 additions & 112 deletions docs/TLS.md
@@ -1,124 +1,19 @@
# Transport Layer Security (TLS)

The Percona Operator for MySQL uses Transport Layer
Security (TLS) cryptographic protocol for the following types of communication:
Security (TLS) cryptographic protocol for the communication between the client application and the cluster.

* Internal - communication between Percona Server for MySQL instances,
* External - communication between the client application and the cluster.
You can configure TLS security in several ways.

The internal certificate is also used as an authorization method.

TLS security can be configured in several ways.

* By default, the Operator *generates long-term certificates* automatically if
there are no certificate secrets available.

??? note "The Operator's self-signed issuer is local to the Operator Namespace"
This self-signed issuer is created because Percona Distribution for MySQL
* By default, the Operator **generates long-term certificates** automatically during the cluster creation if there are no certificate secrets available. The Operator's self-signed issuer is local to the Operator Namespace. This self-signed issuer is created because Percona Distribution for MySQL
requires all certificates issued by the same source.

* The Operator can use a specifically installed *cert-manager*, which will
automatically *generate and renew short-term TLS certificate*
* The Operator can use a *cert-manager*, which will
automatically **generate and renew short-term TLS certificates**. You must explicitly install cert-manager for this scenario.

??? note "The *cert-manager* acts as a self-signed issuer and generates certificates"
It is still a self-signed issuer which allows you to deploy and use the
The *cert-manager* acts as a self-signed issuer and generates certificates allowing you to deploy and use the
Percona Operator without a separate certificate issuer.

* Certificates can be generated manually: obtained from some other issuer and
provided to the Operator.

## Install and use the *cert-manager*

### About the *cert-manager*

A [cert-manager :octicons-link-external-16:](https://cert-manager.io/docs/) is a Kubernetes certificate
management controller which is widely used to automate the management and
issuance of TLS certificates. It is community-driven, and open source.

When you have already installed *cert-manager*, nothing else is needed: just
deploy the Operator, and the Operator will request a certificate from the
*cert-manager*.

### Installation of the *cert-manager*

The steps to install the *cert-manager* are the following:

* Create a namespace,

* Disable resource validations on the cert-manager namespace,

* Install the cert-manager.

The following commands perform all the needed actions:

```bash
kubectl create namespace cert-manager
kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v{{ certmanagerrecommended }}/cert-manager.yaml
```

After the installation, you can verify the *cert-manager* by running the following command:

```bash
kubectl get pods -n cert-manager
```

The output should show the *cert-manager* and webhook Pods as active and running.

## Generate certificates manually

To generate certificates manually, follow these steps:

1. Provision a Certificate Authority (CA) to generate TLS certificates

2. Generate a CA key and certificate file with the server details

3. Create the server TLS certificates using the CA keys, certs, and server
details

The following set of commands generates certificates with these attributes:

* `server.pem` - the server certificate

* `server-key.pem` - the server private key

* `ca.pem` - the Certificate Authority certificate

The name of the created Secret must be set in the `spec.sslSecretName` option of the `cr.yaml` file.

```bash
cat <<EOF | cfssl gencert -initca - | cfssljson -bare ca
{
  "CN": "Root CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF
* You can generate TLS certificates manually or obtain them from some other issuer and provide to the Operator.

cat <<EOF | cfssl gencert -ca=ca.pem -ca-key=ca-key.pem - | cfssljson -bare server
{
  "hosts": [
    "*.${CLUSTER_NAME}-mysql",
    "*.${CLUSTER_NAME}-mysql.${NAMESPACE}",
    "*.${CLUSTER_NAME}-mysql.${NAMESPACE}.svc",
    "*.${CLUSTER_NAME}-orchestrator",
    "*.${CLUSTER_NAME}-orchestrator.${NAMESPACE}",
    "*.${CLUSTER_NAME}-orchestrator.${NAMESPACE}.svc",
    "*.${CLUSTER_NAME}-router",
    "*.${CLUSTER_NAME}-router.${NAMESPACE}",
    "*.${CLUSTER_NAME}-router.${NAMESPACE}.svc"
  ],
  "CN": "${CLUSTER_NAME}-mysql",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF

kubectl create secret generic my-cluster-ssl \
  --from-file=tls.crt=server.pem \
  --from-file=tls.key=server-key.pem \
  --from-file=ca.crt=ca.pem \
  --type=kubernetes.io/tls
```
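The resulting Secret is then referenced from the Custom Resource. A minimal sketch, assuming the Secret name used in the command above:

```yaml
# Reference the manually created Secret from the cluster's cr.yaml.
spec:
  sslSecretName: my-cluster-ssl
```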
12 changes: 9 additions & 3 deletions docs/assets/fragments/monitor-db.txt
@@ -16,9 +16,11 @@ PMM Server and PMM Client are installed separately.

## Install PMM Server

You must have PMM Server up and running. You can run PMM Server as a *Docker image*, a *virtual appliance*, or on an *AWS instance*.
You must have PMM Server up and running. You can run PMM Server as a *Docker container*, a *virtual appliance*, or on an *AWS instance*.
Please refer to the [official PMM documentation :octicons-link-external-16:](https://docs.percona.com/percona-monitoring-and-management/3/install-pmm/install-pmm-server/index.html)
for the installation instructions.
for the installation instructions.

For Kubernetes environments, we recommend installing PMM Server from the [Helm chart :octicons-link-external-16:](https://docs.percona.com/percona-monitoring-and-management/3/install-pmm/install-pmm-server/deployment-options/helm/index.html).

## Install PMM Client

@@ -27,7 +29,9 @@ PMM Client is installed as a sidecar container in the database Pods in your Kube

1. Authorize PMM Client within PMM Server.

1. PMM3 uses Grafana service accounts to control access to PMM server components and resources. To authenticate in PMM server, you need a service account token. [Generate a service account and token :octicons-link-external-16:](https://docs.percona.com/percona-monitoring-and-management/3/api/authentication.html?h=authe#generate-a-service-account-and-token). Specify the Admin role for the service account.
1. PMM3 uses Grafana service accounts to control access to PMM Server components and resources. To authenticate in PMM Server, you need a service account token. Follow the PMM documentation to [generate a service account with the **Admin** role and a token :octicons-link-external-16:](https://docs.percona.com/percona-monitoring-and-management/3/api/authentication.html?h=authe#generate-a-service-account-and-token).

The token must have the format `glsa_*************************_9e35351b`.
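The token is then typically stored in the Operator's users Secret. A sketch, assuming the default Secret name `cluster1-secrets` and the key name `pmmservertoken` (both are assumptions; check them against your deployment and the Operator documentation):

```shell
# Store the PMM service account token in the users Secret.
# The Secret name and key below are illustrative assumptions.
kubectl patch secret cluster1-secrets -n <namespace> --type merge \
  -p '{"stringData":{"pmmservertoken":"<your-token>"}}'
```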

!!! warning

@@ -65,6 +69,8 @@ PMM Client is installed as a sidecar container in the database Pods in your Kube
```bash
kubectl apply -f deploy/cr.yaml -n <namespace>
```

This triggers the Operator to restart your cluster Pods.

4. Check that the corresponding Pods are not in a cycle of stopping and restarting.
This cycle occurs if there were errors in the previous steps: