diff --git a/docs/en/install-upgrade/images/install-stack-metrics-dashboard.png b/docs/en/install-upgrade/images/install-stack-metrics-dashboard.png new file mode 100644 index 000000000..890b4d465 Binary files /dev/null and b/docs/en/install-upgrade/images/install-stack-metrics-dashboard.png differ diff --git a/docs/en/install-upgrade/images/stack-install-final-state.png b/docs/en/install-upgrade/images/stack-install-final-state.png new file mode 100644 index 000000000..1b651cace Binary files /dev/null and b/docs/en/install-upgrade/images/stack-install-final-state.png differ diff --git a/docs/en/install-upgrade/index.asciidoc b/docs/en/install-upgrade/index.asciidoc index f3192cd5f..962c0f702 100644 --- a/docs/en/install-upgrade/index.asciidoc +++ b/docs/en/install-upgrade/index.asciidoc @@ -17,6 +17,10 @@ include::overview.asciidoc[] include::installing-stack.asciidoc[] +include::installing-stack-demo-self.asciidoc[] + +include::installing-stack-demo-secure.asciidoc[] + include::air-gapped-install.asciidoc[] include::upgrading-stack.asciidoc[] diff --git a/docs/en/install-upgrade/installing-stack-demo-secure.asciidoc b/docs/en/install-upgrade/installing-stack-demo-secure.asciidoc new file mode 100644 index 000000000..7dfb93b6b --- /dev/null +++ b/docs/en/install-upgrade/installing-stack-demo-secure.asciidoc @@ -0,0 +1,899 @@ +//For testing on currently available builds: +//:version: 8.11. + +[[install-stack-demo-secure]] +=== Tutorial 2: Securing a self-managed {stack} + +This tutorial is a follow-on to <>. The first tutorial describes how to configure a multi-node {es} cluster and then set up {kib}, followed by {fleet-server} and {agent}. In a production environment, it's recommended after completing the {kib} setup to proceed directly to this tutorial to configure your SSL certificates. These steps guide you through that process, and then describe how to configure {fleet-server} and {agent} with the certificates in place. + +**Securing the {stack}** + +Beginning with Elastic 8.0, security is enabled in the {stack} by default, meaning that traffic between {es} nodes and between {kib} and {es} is SSL-encrypted. While this is suitable for testing non-production viability of the Elastic platform, most production networks have requirements for the use of trusted CA-signed certificates. These steps demonstrate how to update the out-of-the-box self-signed certificates with your own trusted CA-signed certificates. + +For traffic to be encrypted between {es} cluster nodes and between {kib} and {es}, SSL certificates must be created for the transport ({es} inter-node communication) and HTTP (for the {es} REST API) layers. Similarly, when setting up {fleet-server} you'll generate and configure a new certificate bundle, and then {elastic-agent} uses the generated certificates to communicate with both {fleet-server} and {es}. The process to set things up is as follows: + +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> +* <> + +It should take between one and two hours to complete these steps. + +[discrete] +[[install-stack-demo-secure-prereqs]] +== Prerequisites and assumptions + +Before starting, you'll need to have set up an on-premises {es} cluster with {kib}, following the steps in <>. + +The examples in this guide use RPM packages to install the {stack} components on hosts running Red Hat Enterprise Linux 8. The steps for other install methods and operating systems are similar, and can be found in the documentation linked from each section. 
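If you'd like to inspect the out-of-the-box certificates before replacing them, you can view the details of the default HTTP CA that {es} generated at install time (an optional check, assuming the default RPM installation paths):

["source","shell"]
----
sudo openssl x509 -in /etc/elasticsearch/certs/http_ca.crt -noout -subject -issuer -enddate
----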
Special considerations such as firewalls and proxy servers are not covered here.

[discrete]
[[install-stack-demo-secure-ca]]
== Step 1: Generate a new self-signed CA certificate

In a production environment you would typically use the CA certificate from your own organization, along with the certificate files generated for the hosts where the {stack} is being installed. For demonstration purposes, we'll use the Elastic certificate utility to configure a self-signed CA certificate.

. On the first node in your {es} cluster, stop the {es} service:
+
["source","shell"]
----
sudo systemctl stop elasticsearch.service
----

. Generate a CA certificate using the provided certificate utility, `elasticsearch-certutil`. Note that the location of the utility depends on the installation method you used to install {es}. Refer to {ref}/certutil.html[elasticsearch-certutil] for the command details and to {ref}/update-node-certs-different.html[Update security certificates with a different CA] for details about the procedure as a whole.
+
Run the following command. When prompted, specify a unique name for the output file, such as `elastic-stack-ca.zip`:
+
["source","shell"]
----
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil ca -pem
----

. Move the output file to the `/etc/elasticsearch/certs` directory. This directory is created automatically when you install {es}.
+
["source","shell"]
----
sudo mv /usr/share/elasticsearch/elastic-stack-ca.zip /etc/elasticsearch/certs/
----

. Unzip the file:
+
["source","shell"]
----
sudo unzip -d /etc/elasticsearch/certs /etc/elasticsearch/certs/elastic-stack-ca.zip
----

. View the files that were unpacked into a new `ca` directory:
+
["source","shell"]
----
sudo ls /etc/elasticsearch/certs/ca/
----
+
`ca.crt`:: The generated certificate (or you can substitute this with your own certificate, signed by your organization's certificate authority)
`ca.key`:: The certificate authority's private key
+
These steps to generate new self-signed CA certificates need to be done only on the first {es} node. The other {es} nodes will use the same `ca.crt` and `ca.key` files.

. From the `/etc/elasticsearch/certs/ca/` directory, import the newly created CA certificate into the {es} truststore. This step ensures that your cluster trusts the new CA certificate.
+
NOTE: On a new installation a new keystore and truststore are created automatically. If you're running these steps on an existing {es} installation and you know the password to the keystore and the truststore, follow the instructions in {ref}/update-node-certs-different.html[Update security certificates with a different CA] to import the CA certificate.
+
Run the `keytool` command as shown, replacing `<password>` with a unique password for the truststore, and store the password securely:
+
["source","shell"]
----
sudo /usr/share/elasticsearch/jdk/bin/keytool -importcert -trustcacerts -noprompt -keystore /etc/elasticsearch/certs/elastic-stack-ca.p12 -storepass <password> -alias new-ca -file /etc/elasticsearch/certs/ca/ca.crt
----

. Ensure that the new CA certificate was added to the truststore:
+
["source","shell"]
----
keytool -keystore elastic-stack-ca.p12 -list
----
+
NOTE: The keytool utility is provided as part of the {es} installation and is located at `/usr/share/elasticsearch/jdk/bin/keytool` on RPM installations.
+
Enter your password when prompted.
The result should show the details for your newly added key: ++ +["source","shell"] +---- +Keystore type: jks +Keystore provider: SUN +Your keystore contains 1 entry +new-ca, Jul 12, 2023, trustedCertEntry, +Certificate fingerprint (SHA-256): F0:86:6B:57:FC... +---- + +[discrete] +[[install-stack-demo-secure-transport]] +== Step 2: Generate a new certificate for the transport layer + +This guide assumes the use of self-signed certificates, but the process to import CA-signed certificates is the same. If you're using a CA provided by your organization, you need to generate Certificate Signing Requests (CSRs) and then use the signed certificates in this step. Once the certificates are generated, whether self-signed or CA-signed, the steps are the same. + +. From the {es} installation directory, using the newly-created CA certificate and private key, create a new certificate for your elasticsearch node: ++ +["source","shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca-cert /etc/elasticsearch/certs/ca/ca.crt --ca-key /etc/elasticsearch/certs/ca/ca.key +---- ++ +When prompted, choose an output file name (you can use the default `elastic-certificates.p12`) and a password for the certificate. + +. Move the generated file to the `/etc/elasticsearch/certs` directory: ++ +["source","shell"] +---- +sudo mv /usr/share/elasticsearch/elastic-certificates.p12 /etc/elasticsearch/certs/ +---- + ++ +[IMPORTANT] +==== +If you're running these steps on a production cluster that already contains data: + +* In a cluster with multiple {es} nodes, before proceeding you first need to perform a {ref}/restart-cluster.html#restart-cluster-rolling[Rolling restart] beginning with the node where you're updating the keystore. Stop at the `Perform any needed changes` step, and then proceed to the next step in this guide. +* In a single node cluster, always stop {es} before proceeding. +==== + ++ +. Because you've created a new truststore and keystore, you need to update the `/etc/elasticsearch/elasticsearch.yml` settings file with the new truststore and keystore filenames. ++ +Open the {es} configuration file in a text editor and adjust the following values to reflect the newly created keystore and truststore filenames and paths: ++ +["source","shell"] +---- +xpack.security.transport.ssl: + ... + keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12 + truststore.path: /etc/elasticsearch/certs/elastic-stack-ca.p12 +---- + +[discrete] +[[install-stack-demo-secure-transport-es-keystore]] +=== Update the {es} keystore + +{es} uses a separate keystore to hold the passwords of the keystores and truststores holding the CA and node certificates created in the previous steps. Access to this keystore is through the use of a utility called `elasticsearch-keystore`. + +. 
From the {es} installation directory, list the contents of the existing keystore: ++ +["source","shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore list +---- ++ +The results should be like the following: ++ +["source","yaml"] +---- +keystore.seed +xpack.security.http.ssl.keystore.secure_password +xpack.security.transport.ssl.keystore.secure_password +xpack.security.transport.ssl.truststore.secure_password +---- ++ +Notice that there are entries for: ++ +* The `transport.ssl.truststore` that holds the CA certificate +* The `transport.ssl.keystore` that holds the CA-signed certificates +* The `http.ssl.keystore` for the HTTP layer ++ +These entries were created at installation and need to be replaced with the passwords to the newly-created truststore and keystores. + +. Remove the existing keystore values for the default transport keystore and truststore: ++ +["source","shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.keystore.secure_password + +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.truststore.secure_password +---- + +. Update the `elasticsearch-keystore` with the passwords for the new keystore and truststore created in the previous steps. This ensures that {es} can read the new stores: ++ +["source","shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password + +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password +---- + +[discrete] +[[install-stack-demo-secure-http]] +== Step 3: Generate new certificate(s) for the HTTP layer + +Now that communication between {es} nodes (the transport layer) has been secured with SSL certificates, the same must be done for the communications that use the REST API, including {kib}, clients, and any other components on the HTTP layer. + +. Similar to the process for the transport layer, on the first node in your {es} cluster use the certificate utility to generate a CA certificate for HTTP communications: ++ +["source","shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-certutil http +---- ++ +Respond to the command prompts as follows: + +* When asked if you want to generate a CSR, enter `n`. +* When asked if you want to use an existing CA, enter `y`. ++ +NOTE: If you're using your organization's CA certificate, specify that certificate and key in the following two steps. ++ +* Provide the absolute path to your newly created CA certificate: `/etc/elasticsearch/certs/ca/ca.crt`. +* Provide the absolute path to your newly created CA key: `/etc/elasticsearch/certs/ca/ca.key`. +* Enter an expiration value for your certificate. You can enter the validity period in years, months, or days. For example, enter `1y` for one year. +* When asked if you want to generate one certificate per node, enter `y`. ++ +You'll be guided through the creation of certificates for each node. Each certificate will have its own private key, and will be issued for a specific hostname or IP address. + +.. Enter the hostname for your first {es} node, for example `mynode-es1`. ++ +["source","shell"] +---- +mynode-es1 +---- +.. When prompted, confirm that the settings are correct. +.. Add the network IP address that clients can use to connect to the first {es} node. This is the same value that's described in Step 2 of <>, for example `10.128.0.84`: ++ +["source","shell"] +---- +10.128.0.84 +---- +.. 
When prompted, confirm that the settings are correct.
.. When prompted, choose to generate additional certificates, and then repeat the previous steps to add hostname and IP settings for each node in your {es} cluster.
.. Provide a password for the generated `http.p12` keystore file.
.. The generated files will be included in a zip archive. At the prompt, provide a path and filename for where the archive should be created.
+
For this example we use `/etc/elasticsearch/certs/elasticsearch-ssl-http.zip`:
+
["source","shell"]
----
What filename should be used for the output zip file? [/usr/share/elasticsearch/elasticsearch-ssl-http.zip] /etc/elasticsearch/certs/elasticsearch-ssl-http.zip
----

. Earlier, when you generated the certificate for the transport layer, the default filename was `elastic-certificates.p12`. Now, when generating a certificate for the HTTP layer, the default filename is `http.p12`. This matches the name of the existing HTTP layer certificate file from when the initial {es} cluster was first installed.
+
To avoid any possible name collisions, rename the existing `http.p12` file to distinguish it from the newly-created keystore:
+
["source","shell"]
----
sudo mv /etc/elasticsearch/certs/http.p12 /etc/elasticsearch/certs/http-old.p12
----

. Unzip the generated `elasticsearch-ssl-http.zip` archive:
+
["source","shell"]
----
sudo unzip -d /usr/share/elasticsearch/ /etc/elasticsearch/certs/elasticsearch-ssl-http.zip
----

. When the archive is unpacked, the certificate files are located in separate directories for each {es} node and for the {kib} node.
+
You can run a recursive `ls` command to view the file structure:
+
["source","shell"]
----
ls -lR /usr/share/elasticsearch/{elasticsearch,kibana}
----
+
["source","shell"]
----
elasticsearch:
total 0
drwxr-xr-x. 2 root root 56 Dec 12 19:13 mynode-es1
drwxr-xr-x. 2 root root 72 Dec 12 19:04 mynode-es2
drwxr-xr-x. 2 root root 72 Dec 12 19:04 mynode-es3

elasticsearch/mynode-es1:
total 8
-rw-r--r--. 1 root root 1365 Dec 12 19:04 README.txt
-rw-r--r--. 1 root root 845 Dec 12 19:04 sample-elasticsearch.yml

elasticsearch/mynode-es2:
total 12
-rw-r--r--. 1 root root 3652 Dec 12 19:04 http.p12
-rw-r--r--. 1 root root 1365 Dec 12 19:04 README.txt
-rw-r--r--. 1 root root 845 Dec 12 19:04 sample-elasticsearch.yml

elasticsearch/mynode-es3:
total 12
-rw-r--r--. 1 root root 3652 Dec 12 19:04 http.p12
-rw-r--r--. 1 root root 1365 Dec 12 19:04 README.txt
-rw-r--r--. 1 root root 845 Dec 12 19:04 sample-elasticsearch.yml

kibana:
total 12
-rw-r--r--. 1 root root 1200 Dec 12 19:04 elasticsearch-ca.pem
-rw-r--r--. 1 root root 1306 Dec 12 19:04 README.txt
-rw-r--r--. 1 root root 1052 Dec 12 19:04 sample-kibana.yml
----

. Replace your existing keystore with the new keystore. The location of your certificate directory may be different than what is shown here, depending on the installation method you chose.
+
Run the `mv` command, replacing `<hostname>` with the hostname of your initial {es} node:
+
["source","shell"]
----
sudo mv /usr/share/elasticsearch/elasticsearch/<hostname>/http.p12 /etc/elasticsearch/certs/
----
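+
Optionally, confirm that the relocated keystore is intact and lists the node's certificate (an optional check; the password is the one you chose when generating the HTTP certificates):
+
["source","shell"]
----
sudo /usr/share/elasticsearch/jdk/bin/keytool -keystore /etc/elasticsearch/certs/http.p12 -list
----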
. Because this is a new keystore, the {es} configuration file needs to be updated with the path to its location. Open `/etc/elasticsearch/elasticsearch.yml` and update the HTTP SSL settings with the new path:
+
["source","yaml"]
----
xpack.security.http.ssl:
  enabled: true
  #keystore.path: certs/http.p12
  keystore.path: /etc/elasticsearch/certs/http.p12
----

. Since you also generated a new keystore password, the {es} keystore needs to be updated as well. From the {es} installation directory, first remove the existing HTTP keystore entry:
+
["source","shell"]
----
sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.http.ssl.keystore.secure_password
----

. Add the updated HTTP keystore password, using the password you generated for this keystore:
+
["source","shell"]
----
sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
----

. Because we've added files to the {es} configuration directory during this tutorial, we need to ensure that the permissions and ownership are correct before restarting {es}.

.. Change the files to be owned by `root:elasticsearch`:
+
["source","shell"]
----
sudo chown -R root:elasticsearch /etc/elasticsearch/certs/
----

.. Set the files in `/etc/elasticsearch/certs` to have read and write permissions for the owner (`root`) and read permission for the `elasticsearch` group:
+
["source","shell"]
----
sudo chmod 640 /etc/elasticsearch/certs/elastic-certificates.p12
sudo chmod 640 /etc/elasticsearch/certs/elastic-stack-ca.p12
sudo chmod 640 /etc/elasticsearch/certs/http_ca.crt
sudo chmod 640 /etc/elasticsearch/certs/http.p12
----

.. Change the `/etc/elasticsearch/certs` and `/etc/elasticsearch/certs/ca` directories to be executable by the owner and the `elasticsearch` group:
+
["source","shell"]
----
sudo chmod 750 /etc/elasticsearch/certs
sudo chmod 750 /etc/elasticsearch/certs/ca
----

. Restart the {es} service:
+
["source","shell"]
----
sudo systemctl start elasticsearch.service
----

. Run the status command to confirm that {es} is running:
+
["source","shell"]
----
sudo systemctl status elasticsearch.service
----
+
In the event of any problems, you can also monitor the {es} logs for any issues by tailing the {es} log file:
+
["source","shell"]
----
sudo tail -f /var/log/elasticsearch/elasticsearch-demo.log
----
+
A line in the log file like the following indicates that SSL has been properly configured:
+
["source","shell"]
----
[2023-07-12T13:11:29,154][INFO ][o.e.x.s.Security ] [es-ssl-test] Security is enabled
----

[discrete]
[[install-stack-demo-secure-second-node]]
== Step 4: Configure security on additional {es} nodes

Now that security is configured for the first {es} node, some steps need to be repeated on each additional {es} node.

. To avoid filename collisions, on each additional {es} node rename the existing `http.p12` file in the `/etc/elasticsearch/certs/` directory:
+
["source","shell"]
----
sudo mv /etc/elasticsearch/certs/http.p12 /etc/elasticsearch/certs/http-old.p12
----

. Copy the CA and truststore files that you generated on the first {es} node so that they can be reused on all other nodes:

* Copy the `/ca` directory (that contains `ca.crt` and `ca.key`) from `/etc/elasticsearch/certs/` on the first {es} node to the same path on all other {es} nodes.

* Copy the `elastic-stack-ca.p12` file from `/etc/elasticsearch/certs/` on the first {es} node to the `/etc/elasticsearch/certs/` directory on all other {es} nodes, as shown in the sketch that follows.
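+
A minimal sketch for both copies using `scp` (hypothetical hostnames and user account; adjust to your environment):
+
["source","shell"]
----
# Run from the first Elasticsearch node, once per additional node:
sudo scp -r /etc/elasticsearch/certs/ca user@mynode-es2:/tmp/
sudo scp /etc/elasticsearch/certs/elastic-stack-ca.p12 user@mynode-es2:/tmp/
# Then, on the receiving node, move the files into place:
sudo mv /tmp/ca /tmp/elastic-stack-ca.p12 /etc/elasticsearch/certs/
----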
+ +* Copy the `http.p12` file from each node directory in `/usr/share/elasticsearch/elasticsearch` (that is, `elasticsearch/mynode-es1`, `elasticsearch/mynode-es2` and `elasticsearch/mynode-es3`) to the `/etc/elasticsearch/certs/` directory on each corresponding cluster node. + +. On each {es} node, repeat the steps to generate a new certificate for the transport layer: + +.. Stop the {es} service: ++ +["source","shell"] +---- +sudo systemctl stop elasticsearch.service +---- + +.. From the `/etc/elasticsearch/certs` directory, create a new certificate for the {es} node: ++ +["source","shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca-cert /etc/elasticsearch/certs/ca/ca.crt --ca-key /etc/elasticsearch/certs/ca/ca.key +---- ++ +When prompted, choose an output file name and specify a password for the certificate. For this example, we'll use `/etc/elasticsearch/certs/elastic-certificates.p12`. + +.. Update the `/etc/elasticsearch/elasticsearch.yml` settings file with the new truststore and keystore filename and path: ++ +["source","shell"] +---- +xpack.security.transport.ssl: + ... + keystore.path: /etc/elasticsearch/certs/elastic-certificates.p12 + truststore.path: /etc/elasticsearch/certs/elastic-stack-ca.p12 +---- + +.. List the content of the {es} keystore: ++ +["source","shell"] +---- +/usr/share/elasticsearch/bin/elasticsearch-keystore list +---- ++ +The results should be like the following: ++ +["source","yaml"] +---- +keystore.seed +xpack.security.http.ssl.keystore.secure_password +xpack.security.transport.ssl.keystore.secure_password +xpack.security.transport.ssl.truststore.secure_password +---- + +.. Remove the existing keystore values for the default transport keystore and truststore: ++ +["source","shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.keystore.secure_password + +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.transport.ssl.truststore.secure_password +---- + +.. Update the `elasticsearch-keystore` with the passwords for the new keystore and truststore: ++ +["source","shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password + +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password +---- + +. For the HTTP layer, the certificates have been generated already on the first {es} node. Each additional {es} node just needs to be configured to use the new certificates. + +.. Update the `/etc/elasticsearch/elasticsearch.yml` settings file with the new truststore and keystore filenames: ++ +["source","shell"] +---- +xpack.security.http.ssl: + enabled: true + #keystore.path: certs/http.p12 + keystore.path: /etc/elasticsearch/certs/http.p12 +---- + +.. Remove the existing HTTP keystore entry: ++ +["source","shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore remove xpack.security.http.ssl.keystore.secure_password +---- + +.. Add the updated HTTP keystore password: ++ +["source","shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password +---- + +.. Change the certificate files to be owned by the `root:elasticsearch` group: ++ +["source","shell"] +---- +sudo chown -R root:elasticsearch /etc/elasticsearch/certs/ +---- + +.. 
Set the files in `/etc/elasticsearch/certs` to have read and write permissions for the owner (`root`) and read permission for the `elasticsearch` group:
+
["source","shell"]
----
sudo chmod 640 /etc/elasticsearch/certs/*
----

.. Change the `/etc/elasticsearch/certs` and `/etc/elasticsearch/certs/ca` directories to be executable by the owner and the `elasticsearch` group:
+
["source","shell"]
----
sudo chmod 750 /etc/elasticsearch/certs
sudo chmod 750 /etc/elasticsearch/certs/ca
----

. Restart the {es} service:
+
["source","shell"]
----
sudo systemctl start elasticsearch.service
----

. Run the status command to confirm that {es} is running:
+
["source","shell"]
----
sudo systemctl status elasticsearch.service
----

[discrete]
[[install-stack-demo-secure-kib-es]]
== Step 5: Generate a certificate for {kib} to access {es}

Now that the transport and HTTP layers are configured with encryption using the new certificates, we'll set up certificates for encryption between {kib} and {es}. For additional details about any of these steps, refer to {kibana-ref}/elasticsearch-mutual-tls.html[Mutual TLS authentication between {kib} and {es}].

. In Step 3, when you generated a new certificate for the HTTP layer, the process created an archive `elasticsearch-ssl-http.zip`.
+
From the `kibana` directory in the expanded archive, copy the `elasticsearch-ca.pem` CA certificate file to the {kib} host machine.

. On the {kib} host machine, copy `elasticsearch-ca.pem` to the {kib} configuration directory (depending on the installation method that you used, the location of the configuration directory may be different from what's shown):
+
["source","shell"]
----
sudo mv elasticsearch-ca.pem /etc/kibana
----

. Stop the {kib} service:
+
["source","shell"]
----
sudo systemctl stop kibana.service
----

. Update the `/etc/kibana/kibana.yml` settings file to reflect the location of the `elasticsearch-ca.pem`:
+
["source","sh",subs="attributes"]
----
elasticsearch.ssl.certificateAuthorities: [/etc/kibana/elasticsearch-ca.pem]
----

. Restart the {kib} service:
+
["source","shell"]
----
sudo systemctl start kibana.service
----

. Confirm that {kib} is running:
+
["source","shell"]
----
sudo systemctl status kibana.service
----
+
If everything is configured correctly, the connection to {es} will be established and {kib} will start normally.

. You can also view the {kib} log file to gather more detail:
+
["source","shell"]
----
tail -f /var/log/kibana/kibana.log
----
+
In the log file you should find a `Kibana is now available` message.

. Open a web browser to the external IP address of the {kib} host machine: `https://<kibana-host>:5601`. Note that the URL should use `https` and not `http`.

. Log in using the `elastic` user and password that you configured in Step 1 of <>.

Congratulations! You've successfully updated the SSL certificates between {es} and {kib}.
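As an optional check, you can confirm from the {kib} host that {es} now presents a certificate signed by the new CA (a sketch; replace the address with your own {es} node and enter the `elastic` password when prompted):

["source","shell"]
----
curl --cacert /etc/kibana/elasticsearch-ca.pem -u elastic https://10.128.0.84:9200
----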
[discrete]
[[install-stack-demo-secure-fleet]]
== Step 6: Install {fleet} with SSL certificates configured

Now that {kib} is up and running, you can proceed to install {fleet-server}, which will manage the {agent} that we'll set up in a later step.

If you'd like to learn more about these steps, refer to {fleet-guide}/add-fleet-server-on-prem.html[Deploy on-premises and self-managed] in the {fleet} and {agent} Guide. You can find detailed steps to generate and configure certificates in {fleet-guide}/secure-connections.html[Configure SSL/TLS for self-managed Fleet Servers].

. Log in to the first {es} node and use the certificate utility to generate a certificate bundle for {fleet-server}. In the command, replace `<hostname>` and `<ip-address>` with the name and IP address of your {fleet-server} host:
+
["source","shell"]
----
sudo /usr/share/elasticsearch/bin/elasticsearch-certutil cert --name fleet-server --ca-cert /etc/elasticsearch/certs/ca/ca.crt --ca-key /etc/elasticsearch/certs/ca/ca.key --dns <hostname> --ip <ip-address> --pem
----
+
When prompted, specify a unique name for the output file, such as `fleet-cert-bundle.zip`.

. On your {fleet-server} host, create a directory for the certificate files:
+
["source","shell"]
----
sudo mkdir /etc/fleet
----

. Copy the generated archive over to your {fleet-server} host and unpack it into `/etc/fleet/`:
** `/etc/fleet/fleet-server.crt`
** `/etc/fleet/fleet-server.key`

. From the first {es} node, copy the `ca.crt` file, and paste it into the `/etc/fleet/` directory on the {fleet-server} host. Just to help identify the file we'll also rename it to `es-ca.crt`:
** `/etc/fleet/es-ca.crt`

. Update the permissions on the certificate files to ensure that they're readable. From inside the `/etc/fleet` directory, run:
+
["source","shell"]
----
sudo chmod 640 *.crt
sudo chmod 640 *.key
----
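+
Optionally, you can verify that the unpacked {fleet-server} certificate was issued by your CA (an optional check, assuming `openssl` is available on the host):
+
["source","shell"]
----
sudo openssl verify -CAfile /etc/fleet/es-ca.crt /etc/fleet/fleet-server.crt
----
+
The command should print `/etc/fleet/fleet-server.crt: OK`.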
. Now that the certificate files are in place, on the {fleet-server} host create a working directory for the installation package:
+
["source","shell"]
----
mkdir fleet-install-files
----

. Change into the new directory:
+
["source","shell"]
----
cd fleet-install-files
----

. In the terminal, run the `ifconfig` command and copy the value for the host inet IP address (for example, `10.128.0.84`). You'll need this value later.

. Back in your web browser, open the {kib} menu and go to **Management -> Fleet**. {fleet} opens with a message that you need to add a {fleet-server}.

. Click **Add Fleet Server**. The **Add a Fleet Server** flyout opens.

. In the flyout, select the **Advanced** tab.

. On the **Create a policy for Fleet Server** step, keep the default {fleet-server} policy name and all advanced options at their defaults.
+
Leave the option to collect system logs and metrics selected. Click *Create policy*. The policy takes a minute or so to create.

. On the **Choose a deployment mode for security** step, select the **Production** option. This enables you to provide your own certificates.

. On the **Add your Fleet Server host** step:

.. Specify a name for your {fleet-server} host, for example `Fleet Server`.
.. Specify the host URL where {agents} will reach {fleet-server}, including the default port `8220`. For example, `https://10.128.0.203:8220`.
+
The URL is the inet value that you copied from the `ifconfig` output.
+
For details about default port assignments, refer to {fleet-guide}/add-fleet-server-on-prem.html#default-port-assignments-on-prem[Default port assignments] in the on-premises {fleet-server} install documentation.

.. Click **Add host**.

. On the **Generate a service token** step, generate the token and save the output. The token will also be propagated automatically to the command to install {fleet-server}.

. On the **Install Fleet Server to a centralized host** step, for this example we select the **Linux Tar** tab, but you can instead select the tab appropriate to the host operating system where you're setting up {fleet-server}.
+
Note that TAR/ZIP packages are recommended over RPM/DEB system packages, since only the former support upgrading {fleet-server} using {fleet}.

. Run the first three commands one-by-one in the terminal on your {fleet-server} host.
+
These commands will, respectively:

.. Download the {fleet-server} package from the {artifact-registry}.
.. Unpack the package archive.
.. Change into the directory containing the install binaries.

. Before running the provided `elastic-agent install` command, you'll need to make a few changes:

.. Update the paths to the correct file locations:
** The {es} CA file (`es-ca.crt`)
** The {fleet-server} certificate (`fleet-server.crt`)
** The {fleet-server} key (`fleet-server.key`)

.. The `fleet-server-es-ca-trusted-fingerprint` also needs to be updated. On any of your {es} hosts, run the following command to get the correct fingerprint to use:
+
["source","shell"]
----
grep -v ^- /etc/elasticsearch/certs/ca/ca.crt | base64 -d | sha256sum
----
+
Save the fingerprint value. You'll need it in a later step.
+
Replace the `fleet-server-es-ca-trusted-fingerprint` setting with the returned value. Your updated command should be similar to the following:
+
["source","shell"]
----
sudo ./elastic-agent install --url=https://10.128.0.203:8220 \
  --fleet-server-es=https://10.128.0.84:9200 \
  --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmPyL6Rva2VuLTE5OTg4NzAxOTM4NDU6X1I0Q1RrRHZTSWlyNHhkSXQwNEJoQQ \
  --fleet-server-policy=fleet-server-policy \
  --fleet-server-es-ca-trusted-fingerprint=92b51cf91e7fa311f8c84849224d448ca44824eb \
  --certificate-authorities=/etc/fleet/es-ca.crt \
  --fleet-server-cert=/etc/fleet/fleet-server.crt \
  --fleet-server-cert-key=/etc/fleet/fleet-server.key \
  --fleet-server-port=8220
----
+
For details about all of the install command options, refer to {fleet-guide}/elastic-agent-cmd-options.html#elastic-agent-install-command[`elastic-agent install`] in the {agent} command reference.

. After you've made the required updates, run the `elastic-agent install` command to install {fleet-server}.
+
When prompted, confirm that {agent} should run as a service. If everything goes well, the install will complete successfully:
+
["source","shell"]
----
Elastic Agent has been successfully installed.
----
+
TIP: Wondering why the command refers to {agent} rather than {fleet-server}? {fleet-server} is actually a subprocess that runs inside {agent} with a special {fleet-server} policy. Refer to {fleet-guide}/fleet-server.html[What is {fleet-server}] to learn more.

. Return to the {kib} **Add a Fleet Server** flyout and wait for confirmation that {fleet-server} has connected.

. Once the connection is confirmed, ignore the *Continue enrolling Elastic Agent* option and close the flyout.

{fleet-server} is now fully set up!
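As an optional check, you can also confirm the service state directly on the {fleet-server} host (exact output varies by version):

["source","shell"]
----
sudo elastic-agent status
----

The output should report the agent and its {fleet-server} component as healthy.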
Before proceeding to install {agent}, there are a few steps needed to update the `kibana.yml` settings file with the {es} CA fingerprint:

.. On your {kib} host, stop the {kib} service:
+
["source","shell"]
----
sudo systemctl stop kibana.service
----
.. Open `/etc/kibana/kibana.yml` for editing.
.. Find the `xpack.fleet.outputs` setting.
.. Update `ca_trusted_fingerprint` to the value you captured earlier, when you ran the `grep` command on the {es} `ca.crt` file.
+
The updated entry in `kibana.yml` should be like the following:
+
["source","yaml"]
----
xpack.fleet.outputs: [{id: fleet-default-output, name: default, is_default: true, is_default_monitoring: true, type: elasticsearch, hosts: ['https://10.128.0.84:9200'], ca_trusted_fingerprint: 92b51cf91e7fa311f8c84849224d448ca44824eb}]
----
.. Save your changes.
.. Restart {kib}:
+
["source","shell"]
----
sudo systemctl start kibana.service
----
+
{kib} is now configured with the correct fingerprint for {agent} to access {es}. You're now ready to set up {agent}!

[discrete]
[[install-stack-demo-secure-agent]]
== Step 7: Install {agent}

Next, we'll install {agent} on another host and use the System integration to monitor system logs and metrics. You can find additional details about these steps in {fleet-guide}/secure-connections.html[Configure SSL/TLS for self-managed Fleet Servers].

. Log in to the host where you'd like to set up {agent}.

. Create a directory for the {es} certificate file:
+
["source","shell"]
----
sudo mkdir /etc/agent
----
. From the first {es} node, copy the `ca.crt` file, and paste it into the `/etc/agent/` directory on the {agent} host. Just to help identify the file we'll also rename it to `es-ca.crt`:
** `/etc/agent/es-ca.crt`

. Create a working directory for the installation package:
+
["source","shell"]
----
mkdir agent-install-files
----

. Change into the new directory:
+
["source","shell"]
----
cd agent-install-files
----

. Open {kib} and go to **Management -> Fleet**.

. On the **Agents** tab, you should see your new {fleet-server} policy running with a healthy status.

. Click **Add agent**. The **Add agent** flyout opens.

. In the flyout, choose an agent policy name, for example `Demo Agent Policy`.

. Leave **Collect system logs and metrics** enabled. This will add the link:https://docs.elastic.co/integrations/system[System integration] to the {agent} policy.

. Click **Create policy**.

. For the **Enroll in Fleet?** step, leave **Enroll in Fleet** selected.

. On the **Install Elastic Agent on your host** step, for this example we select the **Linux Tar** tab, but you can instead select the tab appropriate to the host operating system where you're setting up {agent}.
+
As with {fleet-server}, note that TAR/ZIP packages are recommended over RPM/DEB system packages, since only the former support upgrading {agent} using {fleet}.

. Run the first three commands one-by-one in the terminal on your {agent} host.
+
These commands will, respectively:

.. Download the {agent} package from the {artifact-registry}.
.. Unpack the package archive.
.. Change into the directory containing the install binaries.

. Before running the provided `elastic-agent install` command, you'll need to make a few changes:

.. For the `--url` parameter, confirm that the port number is `8220` (this is the default port for on-premises {fleet-server}).
.. Add a `--certificate-authorities` parameter with the full path of your CA certificate file. For example, `--certificate-authorities=/etc/agent/es-ca.crt`.
+
The result should be like the following:
+
["source","shell"]
----
sudo ./elastic-agent install \
--url=https://10.128.0.203:8220 \
--enrollment-token=VWCobFhKd0JuUnppVYQxX0VKV5E6UmU3BGk0ck9RM2HzbWEmcS4Bc1YUUM== \
--certificate-authorities=/etc/agent/es-ca.crt
----

. Run the `elastic-agent install` command.
+
At the prompt, enter `Y` to install {agent} and run it as a service. Wait for the installation to complete.
+
If everything goes well, the install will complete successfully:
+
["source","shell"]
----
Elastic Agent has been successfully installed.
----

. In the {kib} **Add agent** flyout, wait for confirmation that {agent} has connected.

. Wait for the **Confirm incoming data** step to complete. This may take a couple of minutes.

. Once data is confirmed to be flowing, close the flyout.

Your new {agent} is now installed and enrolled with {fleet-server}.

[discrete]
[[install-stack-demo-secure-view-data]]
== Step 8: View your system data

Now that all of the components have been installed, it's time to view your system data.

View your system log data:

. Open the {kib} menu and go to **Analytics -> Dashboard**.
. In the query field, search for `Logs System`.
. Select the `[Logs System] Syslog dashboard` link. The {kib} Dashboard opens with visualizations of Syslog events, hostnames and processes, and more.

View your system metrics data:

. Open the {kib} menu and return to **Analytics -> Dashboard**.
. In the query field, search for `Metrics System`.
. Select the `[Metrics System] Host overview` link. The {kib} Dashboard opens with visualizations of host metrics including CPU usage, memory usage, running processes, and more.
+
image::images/install-stack-metrics-dashboard.png["The System metrics host overview showing CPU usage, memory usage, and other visualizations"]

Congratulations! You've successfully configured security for {es}, {kib}, {fleet}, and {agent} using your own trusted CA-signed certificates.

Now that you're all set up, visit our link:https://www.elastic.co/guide/index.html[Documentation landing page] to learn how to start using your new cluster.
\ No newline at end of file
diff --git a/docs/en/install-upgrade/installing-stack-demo-self.asciidoc b/docs/en/install-upgrade/installing-stack-demo-self.asciidoc
new file mode 100644
index 000000000..0a7073a29
--- /dev/null
+++ b/docs/en/install-upgrade/installing-stack-demo-self.asciidoc
@@ -0,0 +1,723 @@
//for testing on currently available builds:
//:version: 8.11.1

[[installing-stack-demo-self]]
=== Tutorial 1: Installing a self-managed {stack}

This tutorial demonstrates how to install and configure the {stack} in a self-managed environment. Following these steps, you'll set up a three-node {es} cluster, with {kib}, {fleet-server}, and {agent}, each on separate hosts. The {agent} will be configured with the System integration, enabling it to gather local system logs and metrics and deliver them into the {es} cluster. Finally, you'll learn how to view the system data in {kib}.

It should take between one and two hours to complete these steps.

* <>
* <>
* <>
* <>
* <>
* <>
* <>
* <>
* <>
* <>
* <>
* <>

[IMPORTANT]
====
If you're using these steps to configure a production cluster that uses trusted CA-signed certificates for secure communications, after completing Step 6 to install {kib} we recommend jumping directly to <>.

The second tutorial includes steps to configure security across the {stack}, and then to set up {fleet-server} and {agent} with SSL certificates enabled.
====

[discrete]
[[install-stack-self-prereqs]]
== Prerequisites and assumptions

To get started, you'll need the following:

* A set of virtual or physical hosts on which to install each stack component.
* On each host, a superuser account with `sudo` privileges.
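Before moving on, a quick sanity check on each host can confirm both prerequisites (a sketch; the release file path assumes an RHEL-like distribution):

["source","shell"]
----
sudo -v                  # confirms that your account can run commands with sudo
cat /etc/redhat-release  # confirms the operating system release
----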
+ +The examples in this guide use RPM packages to install the {stack} components on hosts running Red Hat Enterprise Linux 8. The steps for other install methods and operating systems are similar, and can be found in the documentation linked from each section. The packages that you'll install are: + +* https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-x86_64.rpm + +* https://artifacts.elastic.co/downloads/kibana/kibana-{version}-x86_64.rpm + +* https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-{version}-linux-x86_64.tar.gz + +NOTE: For {agent} and {fleet-server} (both of which use the elastic-agent-{version}-linux-x86_64.tar.gz package) we recommend using TAR/ZIP packages over RPM/DEB system packages, since only the former support upgrading using {fleet}. + +Special considerations such as firewalls and proxy servers are not covered here. + +For the basic ports and protocols required for the installation to work, refer to the following overview section. + +[discrete] +[[install-stack-self-overview]] +== {stack} overview + +Before starting, take a moment to familiarize yourself with the {stack} components. + +image::images/stack-install-final-state.png[Image showing the relationships between stack components] + +To learn more about the {stack} and how each of these components are related, refer to {estc-welcome-current}/stack-components.html[An overview of the {stack}]. + +[discrete] +[[install-stack-self-elasticsearch-first]] +== Step 1: Set up the first {es} node + +To begin, use RPM to install {es} on the first host. This initial {es} instance will serve as the master node. + +. Log in to the host where you'd like to set up your first {es} node. + +. Create a working directory for the installation package: ++ +["source","shell"] +---- +mkdir elastic-install-files +---- + +. Change into the new directory: ++ +["source","shell"] +---- +cd elastic-install-files +---- + +. Download the {es} RPM and checksum file from the {artifact-registry}. You can find details about these steps in the section {ref}/rpm.html#install-rpm[Download and install the RPM manually]. ++ +["source","sh",subs="attributes"] +---- +wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-x86_64.rpm +wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-x86_64.rpm.sha512 +---- + +. Confirm the validity of the downloaded package by checking the SHA of the downloaded RPM against the published checksum: ++ +["source","sh",subs="attributes"] +---- +shasum -a 512 -c elasticsearch-{version}-x86_64.rpm.sha512 +---- ++ +The command should return: `elasticsearch-{version}-x86_64.rpm: OK`. + +. Run the {es} install command: ++ +["source","sh",subs="attributes"] +---- +sudo rpm --install elasticsearch-{version}-x86_64.rpm +---- ++ +The {es} install process enables certain security features by default, including the following: + +* Authentication and authorization are enabled, including a built-in `elastic` superuser account. +* Certificates and keys for TLS are generated for the transport and HTTP layer, and TLS is enabled and configured with these keys and certificates. + +. Copy the terminal output from the install command to a local file. In particular, you'll need the password for the built-in `elastic` superuser account. The output also contains the commands to enable {es} to run as a service, which you'll use in the next step. + +. Run the following two commands to enable {es} to run as a service using `systemd`. 
This enables {es} to start automatically when the host system reboots. You can find details about this and the following steps in {ref}/starting-elasticsearch.html#start-es-deb-systemd[Running {es} with `systemd`].
+
["source","sh",subs="attributes"]
----
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
----

[discrete]
[[install-stack-self-elasticsearch-config]]
== Step 2: Configure the first {es} node for connectivity

Before moving ahead to configure additional {es} nodes, you'll need to update the {es} configuration on this first node so that other hosts are able to connect to it. This is done by updating the settings in the `elasticsearch.yml` file. For details about all available settings refer to {ref}/settings.html[Configuring {es}].

. In a terminal, run the `ifconfig` command and copy the value for the host inet IP address (for example, `10.128.0.84`). You'll need this value later.

. Open the {es} configuration file in a text editor, such as `vim`:
+
["source","sh",subs="attributes"]
----
sudo vim /etc/elasticsearch/elasticsearch.yml
----

. In a multi-node {es} cluster, all of the {es} instances need to use the same cluster name.
+
In the configuration file, uncomment the line `#cluster.name: my-application` and give the cluster any name that you'd like:
+
[source,"yaml"]
----
cluster.name: elasticsearch-demo
----

. By default, {es} runs on `localhost`. In order for {es} instances on other nodes to be able to join the cluster, you'll need to set up {es} to run on a routable, external IP address.
+
Uncomment the line `#network.host: 192.168.0.1` and replace the default address with the value that you copied from the `ifconfig` command output. For example:
+
[source,"yaml"]
----
network.host: 10.128.0.84
----

. {es} also needs to be enabled to listen for connections from other, external hosts.
+
Uncomment the line `#transport.host: 0.0.0.0`. The `0.0.0.0` setting enables {es} to listen for connections on all available network interfaces. Note that in a production environment you might want to restrict this by setting this value to match the value set for `network.host`.
+
[source,"yaml"]
----
transport.host: 0.0.0.0
----
+
TIP: You can find details about the `network.host` and `transport.host` settings in the {es} {ref}/modules-network.html[Networking] documentation.

. Save your changes and close the editor.

[discrete]
[[install-stack-self-elasticsearch-start]]
== Step 3: Start {es}

. Now, it's time to start the {es} service:
+
["source","sh",subs="attributes"]
----
sudo systemctl start elasticsearch.service
----
+
If you need to, you can stop the service by running `sudo systemctl stop elasticsearch.service`.

. Make sure that {es} is running properly.
+
["source","sh",subs="attributes"]
----
sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
----
+
In the command, replace `$ELASTIC_PASSWORD` with the `elastic` superuser password that you copied from the install command output.
++ +If all is well, the command returns a response like this: ++ +["source","js",subs="attributes,callouts"] +---- +{ + "name" : "Cp9oae6", + "cluster_name" : "elasticsearch", + "cluster_uuid" : "AT69_C_DTp-1qgIJlatQqA", + "version" : { + "number" : "{version_qualified}", + "build_type" : "{build_type}", + "build_hash" : "f27399d", + "build_flavor" : "default", + "build_date" : "2016-03-30T09:51:41.449Z", + "build_snapshot" : false, + "lucene_version" : "{lucene_version}", + "minimum_wire_compatibility_version" : "1.2.3", + "minimum_index_compatibility_version" : "1.2.3" + }, + "tagline" : "You Know, for Search" +} +---- + +. Finally, check the status of {es}: ++ +[source,"shell"] +---- +sudo systemctl status elasticsearch +---- ++ +As with the previous `curl` command, the output should confirm that {es} started successfully. Type `q` to exit from the `status` command results. + +[discrete] +[[install-stack-self-elasticsearch-second]] +== Step 4: Set up a second {es} node + +To set up a second {es} node, the initial steps are similar to those that you followed for <>. + +. Log in to the host where you'd like to set up your second {es} instance. + +. Create a working directory for the installation package: ++ +["source","shell"] +---- +mkdir elastic-install-files +---- + +. Change into the new directory: ++ +["source","shell"] +---- +cd elastic-install-files +---- + +. Download the {es} RPM and checksum file: ++ +["source","sh",subs="attributes"] +---- +wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-x86_64.rpm +wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-{version}-x86_64.rpm.sha512 +---- + +. Check the SHA of the downloaded RPM: ++ +["source","sh",subs="attributes"] +---- +shasum -a 512 -c elasticsearch-{version}-x86_64.rpm.sha512 +---- + +. Run the {es} install command: ++ +["source","sh",subs="attributes"] +---- +sudo rpm --install elasticsearch-{version}-x86_64.rpm +---- ++ +Unlike the setup for the first {es} node, in this case you don't need to copy the output of the install command, since these settings will be updated in a later step. + +. Enable {es} to run as a service: ++ +["source","sh",subs="attributes"] +---- +sudo systemctl daemon-reload +sudo systemctl enable elasticsearch.service +---- + +IMPORTANT: Don't start the {es} service yet! There are a few more configuration steps to do before restarting. + +. To enable this second {es} node to connect to the first, you need to configure an enrollment token. ++ +[IMPORTANT] +==== +Be sure to run all of these configuration steps before starting the {es} service. + +You can find additional details about these steps in {ref}/rpm.html#_reconfigure_a_node_to_join_an_existing_cluster_2[Reconfigure a node to join an existing cluster] and also in {ref}/add-elasticsearch-nodes.html#_enroll_nodes_in_an_existing_cluster_5[Enroll nodes in an existing cluster]. +==== ++ +Return to your terminal shell on the first {es} node and generate a node enrollment token: ++ +[source,"shell"] +---- +sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node +---- + +. Copy the generated enrollment token from the command output. ++ +[TIP] +==== +Note the following tips about enrollment tokens: + +. An enrollment token has a lifespan of 30 minutes. In case the `elasticsearch-reconfigure-node` command returns an `Invalid enrollment token` error, try generating a new token. +. 
Be sure not to confuse an {ref}/starting-elasticsearch.html#_enroll_nodes_in_an_existing_cluster_3[{es} enrollment token] (for enrolling {es} nodes in an existing cluster) with a {kibana-ref}/start-stop.html#_run_kibana_from_the_command_line[{kib} enrollment token] (to enroll your {kib} instance with {es}, as described in the next section). These two tokens are not interchangeable.
====

. In the terminal shell for your second {es} node, pass the enrollment token as a parameter to the `elasticsearch-reconfigure-node` tool:
+
[source,"shell"]
----
sudo /usr/share/elasticsearch/bin/elasticsearch-reconfigure-node --enrollment-token <enrollment-token>
----
+
In the command, replace `<enrollment-token>` with the token that you copied from the first {es} node, and confirm the changes when prompted.

. Start the {es} service on the new node, and then check the {es} log for a message like the following:
+
[source,"shell"]
----
[hostname2] master node changed {previous [], current [{hostname1}...]}
----
+
Here, `hostname1` is your first {es} instance node, and `hostname2` is your second {es} instance node.
+
The message indicates that the second {es} node has successfully contacted the initial {es} node and joined the cluster.

. As a final check, run the following `curl` request on the new node to confirm that {es} is still running properly and viewable at the new node's `localhost` IP address. Note that you need to replace `$ELASTIC_PASSWORD` with the same `elastic` superuser password that you used on the first {es} node.
+
["source","sh",subs="attributes"]
----
sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD https://localhost:9200
----
+
["source","js",subs="attributes,callouts"]
----
{
  "name" : "Cp9oae6",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "AT69_C_DTp-1qgIJlatQqA",
  "version" : {
    "number" : "{version_qualified}",
    "build_type" : "{build_type}",
    "build_hash" : "f27399d",
    "build_flavor" : "default",
    "build_date" : "2016-03-30T09:51:41.449Z",
    "build_snapshot" : false,
    "lucene_version" : "{lucene_version}",
    "minimum_wire_compatibility_version" : "1.2.3",
    "minimum_index_compatibility_version" : "1.2.3"
  },
  "tagline" : "You Know, for Search"
}
----

[discrete]
[[install-stack-self-elasticsearch-third]]
== Step 5: Set up additional {es} nodes

To set up your next {es} node, follow exactly the same steps as you did previously in <>. The process is identical for each additional {es} node that you would like to add to the cluster. As a recommended best practice, create a new enrollment token for each new node that you add.

[discrete]
[[install-stack-self-kibana]]
== Step 6: Install {kib}

As with {es}, you can use RPM to install {kib} on another host. You can find details about all of the following steps in the section {kibana-ref}/rpm.html#install-rpm[Install {kib} with RPM].

. Log in to the host where you'd like to install {kib} and create a working directory for the installation package:
+
["source","shell"]
----
mkdir kibana-install-files
----

. Change into the new directory:
+
["source","shell"]
----
cd kibana-install-files
----

. Download the {kib} RPM and checksum file from the Elastic website.
+
["source","sh",subs="attributes"]
----
wget https://artifacts.elastic.co/downloads/kibana/kibana-{version}-x86_64.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-{version}-x86_64.rpm.sha512
----

. Confirm the validity of the downloaded package by checking the SHA of the downloaded RPM against the published checksum:
+
["source","sh",subs="attributes"]
----
shasum -a 512 -c kibana-{version}-x86_64.rpm.sha512
----
+
The command should return: `kibana-{version}-x86_64.rpm: OK`.
. Run the {kib} install command:
+
["source","sh",subs="attributes"]
----
sudo rpm --install kibana-{version}-x86_64.rpm
----

. As with each additional {es} node that you added, to enable this {kib} instance to connect to the first {es} node, you need to configure an enrollment token.
+
Return to your terminal shell on the first {es} node.

. Run the `elasticsearch-create-enrollment-token` command with the `-s kibana` option to generate a {kib} enrollment token:
+
[source,"shell"]
----
sudo /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana
----

. Copy the generated enrollment token from the command output.

. Run the following two commands to enable {kib} to run as a service using `systemd`, enabling {kib} to start automatically when the host system reboots.
+
["source","sh",subs="attributes"]
----
sudo systemctl daemon-reload
sudo systemctl enable kibana.service
----

. Before starting the {kib} service there's one configuration change to make, to set {kib} to run on the {es} host IP address. This is done by updating the settings in the `kibana.yml` file. For details about all available settings refer to {kibana-ref}/settings.html[Configure {kib}].

. In a terminal, run the `ifconfig` command and copy the value for the host inet IP address.

. Open the {kib} configuration file for editing:
+
["source","sh",subs="attributes"]
----
sudo vim /etc/kibana/kibana.yml
----

. Uncomment the line `#server.host: localhost` and replace the default address with the inet value that you copied from the `ifconfig` command. For example:
+
[source,"yaml"]
----
server.host: 10.128.0.28
----

. Save your changes and close the editor.

. Start the {kib} service:
+
["source","sh",subs="attributes"]
----
sudo systemctl start kibana.service
----
+
If you need to, you can stop the service by running `sudo systemctl stop kibana.service`.

. Run the `status` command to get details about the {kib} service.
+
["source","sh",subs="attributes"]
----
sudo systemctl status kibana
----

. In the `status` command output, a URL is shown with:
** A host address to access {kib}
** A six digit verification code
+
For example:
+
["source","sh",subs="attributes"]
----
Kibana has not been configured.
Go to http://10.128.0.28:5601/?code=<code> to get started.
----
+
Make a note of the verification code.

. Open a web browser to the external IP address of the {kib} host machine, for example: `http://<kibana-host>:5601`.
+
It can take a minute or two for {kib} to start up, so refresh the page if you don't see a prompt right away.

. When {kib} starts you're prompted to provide an enrollment token. Paste in the {kib} enrollment token that you generated earlier.

. Click **Configure Elastic**.

. If you're prompted to provide a verification code, copy and paste in the six digit code that was returned by the `status` command. Then, wait for the setup to complete.

. When you see the **Welcome to Elastic** page, provide `elastic` as the username, along with the password that you copied in Step 1 from the `install` command output when you set up your first {es} node.

. Click **Log in**.

{kib} is now fully set up and communicating with your {es} cluster!

**IMPORTANT: Stop here if you intend to configure SSL certificates.**

[IMPORTANT]
====
For simplicity, in this tutorial we're setting up all of the {stack} components without configuring security certificates.
+
+**IMPORTANT: Stop here if you intend to configure SSL certificates.**
+
+[IMPORTANT]
+====
+For simplicity, in this tutorial we're setting up all of the {stack} components without configuring security certificates. You can proceed to configure {fleet}, {agent}, and then confirm that your system data appears in {kib}.
+
+However, in a production environment, before going further to install {fleet-server} and {agent} it's recommended to update your security settings to use trusted CA-signed certificates as described in <>.
+
+After new security certificates are configured, any installed {agent}s need to be reinstalled. If you're currently setting up a production environment, we recommend that you jump directly to Tutorial 2, which includes steps to secure the {stack} using certificates and then to set up {fleet} and {agent} with those certificates already in place.
+====
+
+[discrete]
+[[install-stack-self-fleet-server]]
+== Step 7: Install {fleet-server}
+
+Now that {kib} is up and running, you can install {fleet-server}, which will manage the {agent} that you'll set up in a later step. If you need more detail about these steps, refer to {fleet-guide}/add-fleet-server-on-prem.html[Deploy on-premises and self-managed] in the {fleet} and {agent} Guide.
+
+. Log in to the host where you'd like to set up {fleet-server}.
+
+. Create a working directory for the installation package:
++
+["source","shell"]
+----
+mkdir fleet-install-files
+----
+
+. Change into the new directory:
++
+["source","shell"]
+----
+cd fleet-install-files
+----
+
+. In the terminal, run `ifconfig` and copy the value for the host inet IP address (for example, `10.128.0.203`). You'll need this value later.
+
+. Back in your web browser, open the {kib} menu and go to **Management -> Fleet**. {fleet} opens with a message that you need to add a {fleet-server}.
+
+. Click **Add Fleet Server**. The **Add a Fleet Server** flyout opens.
+
+. In the flyout, select the **Quick Start** tab.
+
+. Specify a name for your {fleet-server} host, for example `Fleet Server`.
+
+. Specify the host URL where {agent}s will reach {fleet-server}, for example: `https://10.128.0.203:8220`. This is the inet value that you copied from the `ifconfig` output.
++
+Be sure to include the port number. Port `8220` is the default used by {fleet-server} in an on-premises environment. Refer to {fleet-guide}/add-fleet-server-on-prem.html#default-port-assignments-on-prem[Default port assignments] in the on-premises {fleet-server} install documentation for a list of port assignments.
+
+. Click **Generate Fleet Server policy**. A policy is created that contains all of the configuration settings for the {fleet-server} instance.
+
+. On the **Install Fleet Server to a centralized host** step, for this example we select the **Linux Tar** tab, but you can instead select the tab appropriate to the host operating system where you're setting up {fleet-server}.
++
+Note that TAR/ZIP packages are recommended over RPM/DEB system packages, since only the former support upgrading {fleet-server} using {fleet}.
+
+. Copy the generated commands and then run them one by one in the terminal on your {fleet-server} host.
++
+These commands will, respectively:
+
+.. Download the {fleet-server} package from the {artifact-registry}.
+.. Unpack the package archive.
+.. Change into the directory containing the install binaries.
+.. Install {fleet-server}.
++
+If you'd like to learn about the install command options, refer to {fleet-guide}/elastic-agent-cmd-options.html#elastic-agent-install-command[`elastic-agent install`] in the {agent} command reference.
+
+. At the prompt, enter `Y` to install {agent} and run it as a service. Wait for the installation to complete.
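++
+Optionally, you can also verify from the host itself that {fleet-server} came up. {fleet-server} runs under the {agent} service, so this quick check (assuming the default Linux service name) inspects the `elastic-agent` systemd unit and asks the running agent for its health:
++
+["source","shell"]
+----
+# Fleet Server runs as the elastic-agent systemd service
+sudo systemctl status elastic-agent
+
+# Report the health of the locally running agent
+sudo elastic-agent status
+----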
+
+. In the {kib} **Add a Fleet Server** flyout, wait for confirmation that {fleet-server} has connected.
+
+. For now, ignore the **Continue enrolling Elastic Agent** option and close the flyout.
+
+{fleet-server} is now fully set up!
+
+[discrete]
+[[install-stack-self-elastic-agent]]
+== Step 8: Install {agent}
+
+Next, you'll install {agent} on another host and use the System integration to monitor system logs and metrics.
+
+. Log in to the host where you'd like to set up {agent}.
+
+. Create a working directory for the installation package:
++
+["source","shell"]
+----
+mkdir agent-install-files
+----
+
+. Change into the new directory:
++
+["source","shell"]
+----
+cd agent-install-files
+----
+
+. Open {kib} and go to **Management -> Fleet**.
+
+. On the **Agents** tab, you should see your new {fleet-server} policy running with a healthy status.
+
+. Open the **Settings** tab.
+
+. Reopen the **Agents** tab and select **Add agent**. The **Add agent** flyout opens.
+
+. In the flyout, specify a name for the new agent policy, for example `Demo Agent Policy`.
+
+. Leave **Collect system logs and metrics** enabled. This will add the link:https://docs.elastic.co/integrations/system[System integration] to the {agent} policy.
+
+. Click **Create policy**.
+
+. For the **Enroll in Fleet?** step, leave **Enroll in Fleet** selected.
+
+. On the **Install Elastic Agent on your host** step, for this example we select the **Linux Tar** tab, but you can instead select the tab appropriate to the host operating system where you're setting up {agent}.
++
+As with {fleet-server}, note that TAR/ZIP packages are recommended over RPM/DEB system packages, since only the former support upgrading {agent} using {fleet}.
+
+. Copy the generated commands.
+
+. In the `sudo ./elastic-agent install` command, make two changes:
+.. For the `--url` parameter, check that the port number is set to `8220` (used for on-premises {fleet-server}).
+.. Append an `--insecure` flag at the end.
++
+TIP: If you want to set up secure communications using SSL certificates, refer to <>.
++
+The result should be like the following:
++
+["source","shell"]
+----
+sudo ./elastic-agent install --url=https://10.128.0.203:8220 --enrollment-token=VWCobFhKd0JuUnppVYQxX0VKV5E6UmU3BGk0ck9RM2HzbWEmcS4Bc1YUUM== --insecure
+----
+
+. Run the commands one by one in the terminal on your {agent} host. The commands will, respectively:
+
+.. Download the {agent} package from the {artifact-registry}.
+.. Unpack the package archive.
+.. Change into the directory containing the install binaries.
+.. Install {agent}.
+
+. At the prompt, enter `Y` to install {agent} and run it as a service. Wait for the installation to complete.
++
+If everything goes well, the install will complete successfully:
++
+["source","shell"]
+----
+Elastic Agent has been successfully installed.
+----
+
+. In the {kib} **Add agent** flyout, wait for confirmation that {agent} has connected.
+
+. Close the flyout.
+
+Your new {agent} is now installed and enrolled with {fleet-server}.
+
+[discrete]
+[[install-stack-self-view-data]]
+== Step 9: View your system data
+
+Now that all of the components have been installed, it's time to view your system data.
+
+View your system log data:
+
+. Open the {kib} menu and go to **Analytics -> Dashboard**.
+. In the query field, search for `Logs System`.
+. Select the `[Logs System] Syslog dashboard` link. The {kib} Dashboard opens with visualizations of Syslog events, hostnames and processes, and more.
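+
+If you'd like to double-check at the {es} level that the System integration data is arriving, you can count the documents in its backing data streams. This optional check is a sketch: it assumes you run it on an {es} node, reusing the certificate path and `$ELASTIC_PASSWORD` from the earlier verification steps, and that the agent policy writes to the default System integration data streams:
+
+["source","shell"]
+----
+# Count syslog documents shipped by the System integration
+sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD \
+  "https://localhost:9200/logs-system.syslog-default/_count"
+
+# Count CPU metrics documents
+sudo curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic:$ELASTIC_PASSWORD \
+  "https://localhost:9200/metrics-system.cpu-default/_count"
+----
+
+Each call returns a JSON body with a `count` field; a `404` response usually means that data stream hasn't received any documents yet.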
+
+View your system metrics data:
+
+. Open the {kib} menu and return to **Analytics -> Dashboard**.
+. In the query field, search for `Metrics System`.
+. Select the `[Metrics System] Host overview` link. The {kib} Dashboard opens with visualizations of host metrics including CPU usage, memory usage, running processes, and others.
++
+image::images/install-stack-metrics-dashboard.png["The System metrics host overview showing CPU usage, memory usage, and other visualizations"]
+
+Congratulations! You've successfully set up a three-node {es} cluster, with {kib}, {fleet-server}, and {agent}.
+
+[discrete]
+[[install-stack-self-next-steps]]
+== Next steps
+
+Now that you've configured an on-premises {stack}, you can learn how to configure the {stack} in a production environment using trusted CA-signed certificates. Refer to <> to learn more.
diff --git a/docs/en/install-upgrade/installing-stack.asciidoc b/docs/en/install-upgrade/installing-stack.asciidoc
index cee58d9c5..3018ed488 100644
--- a/docs/en/install-upgrade/installing-stack.asciidoc
+++ b/docs/en/install-upgrade/installing-stack.asciidoc
@@ -8,6 +8,8 @@
 Kibana {version}, and Logstash {version}. If you're upgrading an existing
 installation, see <> for information about how to
 ensure compatibility with {version}.
 
+For an example of installing and configuring the {stack}, you can try out our <>. After that, you can also learn how to secure your installation for production by following the steps in <>.
+
 [discrete]
 [[network-requirements]]
 === Network requirements