Enable verified SSL support for the Elasticsearch/Opensearch cluster #23
@MoisesGSalas do you know if it is possible to use the certbot process to create these certs instead of relying on self-signed certs?
I just figured out that Tutor 16 includes a newer version of dockerize. One option for that could be:

```shell
# ...
WAIT_SEARCH_SERVER="-wait $SEARCH_SERVER"
if [[ -n $(curl --verbose --silent "${SEARCH_SERVER}" 2>&1 | grep "TLS alert, unknown CA") ]]; then
    CA_CERT_PATH="...somehow get the path..."
    WAIT_SEARCH_SERVER="${WAIT_SEARCH_SERVER} -cacert ${CA_CERT_PATH}"
fi
# ...
dockerize $WAIT_MONGODB $WAIT_SEARCH_SERVER -wait-retry-interval 5s -timeout 600s
```
@felipemontoya, @MoisesGSalas, @bradenmacdonald I checked the certbot approach and was working on it for a while to make it proper. However, I have some findings that I would love to discuss with you before going on.

Indeed, it is possible to use certbot to issue a certificate for the Elasticsearch cluster. However, the Kubernetes internal resources are not running on a proper, publicly resolvable domain name. Pointing a public domain at the cluster would let us issue the certificate and configure it for Elasticsearch. The server will pick up the certificate with ease, though the question arises: how do we communicate with the server? The main concern is that we have to either a) reach out to the internet to get back to the server, or b) intervene in the certificate validation process at some point.

Going with the first option would mean we have to create an additional ingress controller to route the traffic to the Elasticsearch cluster. This introduces extra latency that could have been avoided, and it feels off to do so.

The second option makes the Let's Encrypt-signed certificate pointless. The cert would not contain the cluster-internal hostname the clients actually connect to, so the connection parameters of the instances would still have to disable certificate verification.

So, to summarize: we either have to use self-signed certificates and attach the CA (and update the containers' CA cache), or we use trusted CA-signed certs, but the clients will still need to skip certificate verification anyway.

But there is one thing that bugs me: do we really want to struggle with this? The communication is not leaving the cluster; it is not even exposed to the internet. Hence, I'm not convinced by now that this is necessary at all. What are your thoughts?
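For illustration, the "attach the CA and update the containers' CA cache" option could look roughly like this. This is only a sketch: the file names, the CN, and the mount paths in the comments are placeholders, not what the helm chart actually produces.

```shell
# Create a throwaway self-signed CA, standing in for the helm-generated one.
# (Names "ca.key"/"ca.crt" and CN are illustrative assumptions.)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=cluster-local-ca" \
  -keyout ca.key -out ca.crt

# Clients only trust a server certificate signed by this CA if it is supplied
# explicitly, e.g.:
#   curl --cacert ca.crt "https://<search-server-host>:9200"
# or appended to the container's CA cache (Debian-based images):
#   cp ca.crt /usr/local/share/ca-certificates/ && update-ca-certificates

# Sanity-check that the CA cert verifies against itself:
openssl verify -CAfile ca.crt ca.crt
```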
Can't you use DNS within the cluster to point that domain name to a cluster-local endpoint, so we don't have to reach out over the internet? i.e. Let's Encrypt sees the public DNS record when issuing the certificate, but inside the cluster the same hostname resolves to the internal service.

I'm also fine with using self-signed certs or some simpler option though; we have other priorities at the moment.
@bradenmacdonald I was trying to dig into this. In the Kubernetes cluster, the DNS resolution of *.svc.local addresses is handled automatically by CoreDNS, and *.svc.local records are added/removed as needed. I did not find a way, though, to extend it with custom records. Also, I feel it would be a bit of a hack to "fake" the DNS record we want to get resolved.

I'm fine with self-signed certs as well, though my question still stands: do we really want to make the effort to have the pods trust the cluster-local CA? The whole traffic is happening within the cluster.
Shared Elasticsearch was implemented in #13. While SSL support was implemented there, it could not be completed due to overhangio/tutor#791. That PR has now been merged, so this can be tackled.
A `Secret` is used to store the CA certificate generated by the helm chart, and the certificate is mounted at `/etc/ssl/certs/elasticsearch-ca.pem`.
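The Secret-plus-mount setup could be sketched like this. The resource names, the key name, and the pod-spec fragment are assumptions for illustration, not the plugin's actual manifests:

```yaml
# Hypothetical manifest sketch; names are not taken from the real chart.
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-ca
type: Opaque
stringData:
  elasticsearch-ca.pem: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
# In the consuming pod spec, the Secret key would then be mounted at the path
# the clients read, e.g.:
#   volumes:
#     - name: elasticsearch-ca
#       secret:
#         secretName: elasticsearch-ca
#   volumeMounts:
#     - name: elasticsearch-ca
#       mountPath: /etc/ssl/certs/elasticsearch-ca.pem
#       subPath: elasticsearch-ca.pem
```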