
Conversation

qdm12 (Owner) commented Nov 17, 2025

Description

Run this with image tag :pr-2988. No setting change needed.

Following the #2970 plan with a few adjustments:

  • Remove DNS_SERVER (aka DOT) option: the DNS server forwarder part is now always enabled (see below for why)
  • Remove DNS_KEEP_NAMESERVER: the container will always use the built-in DNS server forwarder, because it can now handle local names with local resolvers (see feat(dns): resolve network-local names #2970), it can use the plain upstream type (see 5ed6e82) and you can use DNS_UPSTREAM_PLAIN_ADDRESSES (see below)
  • Replace DNS_ADDRESS with DNS_UPSTREAM_PLAIN_ADDRESSES:
    • New CSV format with port, for example ip1:port1,ip2:port2
    • requires DNS_UPSTREAM_TYPE=plain to be set to use DNS_UPSTREAM_PLAIN_ADDRESSES (unless using retro DNS_ADDRESS)
    • retrocompatibility with DNS_ADDRESS. If set, force upstream type to plain and empty user-picked providers. 127.0.0.1 is now ignored since it's always set to this value internally.
    • Warning log on using private upstream resolvers updated

All in all, this greatly simplifies the code and the available options (fewer options for the same features is a win). It also allows you to specify multiple plain DNS resolvers on ports other than 53 if needed, as in the example below.
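
For illustration, a hypothetical configuration using the new variables (the addresses and ports below are placeholders, not defaults):

  DNS_UPSTREAM_TYPE: "plain"
  DNS_UPSTREAM_PLAIN_ADDRESSES: "192.168.1.10:53,192.168.1.11:5353"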

Issue

Assertions

  • I am aware that we do not accept manual changes to the servers.json file
  • I am aware that any changes to settings should be reflected in the wiki

peterfour commented Nov 22, 2025

Adding to this PR: I've been pulling my hair out for a while with #2970. I had the container set up as a sidecar and it just wouldn't work, yet it was somehow fine as a normal container. However, with this PR it seems to work perfectly fine as a sidecar container.

But I'm still having an issue where, from within the pod with the VPN, the other container can't seem to resolve service names from other pods.

Env used:
  FIREWALL_INPUT_PORTS: "8112,9696"
  HTTP_CONTROL_SERVER_AUTH_DEFAULT_ROLE: '{"auth":"apikey","apikey":"<redacted>"}'
  VPN_SERVICE_PROVIDER: "private internet access"
  VPN_TYPE: "openvpn"
  PORT_FORWARD_ONLY: "true"
  VPN_PORT_FORWARDING: "on"
  VPN_PORT_FORWARDING_STATUS_FILE: "/gluetun/forwarded_port"

mglazenborg commented Nov 23, 2025

EDIT:
Using the image with tag latest seems to have fixed this.


I've tried this PR on Kubernetes, but didn't get it fully working. All requests still seem to go to Cloudflare, as that is what Gluetun sets in /etc/resolv.conf. When using nslookup and specifying the local DNS resolver, it does properly resolve the DNS record.

root@gluetun-868f9f88c-ct84d:/# nslookup kubernetes.default.svc.cluster.local 127.0.0.1
Server:         127.0.0.1
Address:        127.0.0.1:53

Name:   kubernetes.default.svc.cluster.local
Address: 10.96.0.1

I tried overriding this using both DNS_ADDRESS and DNS_UPSTREAM_PLAIN_ADDRESSES, but without success. This is likely due to 127.0.0.1 being ignored:

retrocompatibility with DNS_ADDRESS. If set, force upstream type to plain and empty user-picked providers. 127.0.0.1 is now ignored since it's always set to this value internally.

Deployment used:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gluetun
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: gluetun
          image: ghcr.io/qdm12/gluetun:pr-2988
          env:
            - name: VPN_SERVICE_PROVIDER
              value: "private internet access"
            - name: FIREWALL_OUTBOUND_SUBNETS
              value: "10.244.0.0/16,10.96.0.0/12"
          envFrom:
            - secretRef:
                name: openvpn-auth
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          volumeMounts:
            - name: dev-net-tun
              mountPath: /dev/net/tun
      volumes:
        - name: dev-net-tun
          hostPath:
            path: /dev/net/tun
            type: CharDevice

peterfour commented Nov 23, 2025

@mglazenborg well I'll be damned, I just tried with :latest as well and now everything is working. 🎉

The sidecar VPN in the pod is all good, the other containers are behind it, and I can use the shorthand internal service names.

EDIT: Spoke too soon; most of it is actually working, but it looks like I'm somehow missing DNS resolution in the other container in the pod.

Indeed, after some more tests inside the pod with the sidecar, I have access to the internal domains, but seemingly no external name resolution.

peterfour commented Nov 25, 2025

@mglazenborg Are you also using gluetun as a sidecar or as a normal container in a pod?


After some more digging I can see the same result as @mglazenborg.

I'm using gluetun as a sidecar container in my pod to leverage the readiness probe feature.

With the pr-2988 tag the DNS is set to 1.1.1.1 in /etc/resolv.conf, but this same DNS appears to be set in the other container in the pod too.
And when I try the latest tag I do get the DNS set to 127.0.0.1 and it works in the VPN container, but the other container in the pod also has that same 127.0.0.1 DNS set and that won't work for it.

mglazenborg commented

@peterfour I'm using it as a sidecar as well.

... but the other container in the pod also has that same 127.0.0.1 DNS set and that won't work for it

What do you mean by this? From my testing this should be working, both in the gluetun container and in the other container running in the pod.

Since both containers share the same network namespace, the DNS queries will always go to the DNS resolver running in the gluetun container. If this isn't working I'd almost suspect a deeper underlying issue, but I'm not sure.

peterfour commented Nov 25, 2025

Since both containers share the same network namespace, the DNS queries will always go to the DNS resolver running in the gluetun container. If this isn't working I'd almost suspect a deeper underlying issue, but I'm not sure.

@mglazenborg I thought that was the case as well, but I just can't get it to work; I either have external domains or internal ones, but somehow never both. When I try nslookup in the container (VPN or other) it works, but not with the container application for some reason.

The only way I made it work was using DNS_KEEP_NAMESERVER: "on", since that makes it go to kube-dns, whose upstream is my locally set up unbound.

qdm12 (Owner, Author) commented Nov 25, 2025

Interesting, thanks for the feedback!
Alright, let's dig into this.
To clarify, the Gluetun DNS server now acts as a proxy (see the sketch after this list) such that:

  • local requests get forwarded to whatever private IP addresses were found in /etc/resolv.conf at container start
  • public names get forwarded over TLS/HTTPS to upstream public providers
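
To illustrate the idea only, here is a minimal sketch of such a split forwarder using github.com/miekg/dns. This is NOT Gluetun's actual code: the addresses are placeholders, the local-name check is simplified, and the public upstream uses plain DNS here for brevity instead of DoT/DoH.

package main

import (
	"strings"

	"github.com/miekg/dns"
)

// forward relays the query to the given upstream and writes back its answer.
func forward(w dns.ResponseWriter, req *dns.Msg, upstream string) {
	client := new(dns.Client)
	resp, _, err := client.Exchange(req, upstream)
	if err != nil {
		dns.HandleFailed(w, req)
		return
	}
	_ = w.WriteMsg(resp)
}

func main() {
	localResolver := "10.96.0.10:53" // e.g. kube-dns address found in /etc/resolv.conf at start (placeholder)
	publicUpstream := "1.1.1.1:53"   // public provider (placeholder; plain DNS for brevity)

	dns.HandleFunc(".", func(w dns.ResponseWriter, req *dns.Msg) {
		name := req.Question[0].Name
		// Crude "is this a network-local name?" check, for the example only.
		if strings.HasSuffix(name, ".cluster.local.") || strings.Count(name, ".") <= 1 {
			forward(w, req, localResolver)
			return
		}
		forward(w, req, publicUpstream)
	})

	server := &dns.Server{Addr: "127.0.0.1:53", Net: "udp"}
	_ = server.ListenAndServe()
}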

With the pr-2988 tag the DNS is set to 1.1.1.1 in /etc/resolv.conf, but this same DNS appears to be set in the other container in the pod too.

The nameserver should now only be 127.0.0.1 since the gluetun dns now proxies things itself. How is it set to 1.1.1.1??

And are you saying with gluetun as a sidecar container, the main container can't access the gluetun DNS server?

peterfour commented

With the pr-2988 tag the DNS is set to 1.1.1.1 in /etc/resolv.conf, but this same DNS appears to be set in the other container in the pod too.

The nameserver should now only be 127.0.0.1 since the gluetun dns now proxies things itself. How is it set to 1.1.1.1??

Not sure where this comes from, but this is the IP I have in resolv.conf when I use pr-2988. If I then go to latest without changing any env var, it's correctly set to 127.0.0.1.

And are you saying with gluetun as a sidecar container, the main container can't access the gluetun DNS server?

Well, when I tested on latest (so using 127.0.0.1 as the nameserver), nslookup was working both in the sidecar and in the other container. But for some reason the actual application was throwing a resolve error.

So indeed the gluetun DNS appears to be available in the other container when using the terminal, but somehow not for the apps themselves.

And when I use DNS_KEEP_NAMESERVER set to on, the nameserver is kept as the k8s one and then everything works.

mglazenborg commented

Not sure where this comes from, but this is the IP I have in resolv.conf when I use pr-2988. If I then go to latest without changing any env var, it's correctly set to 127.0.0.1.

To add to this, my assumption is that this is due to the Dockerfile in the master branch still having ENV DNS_ADDRESS=127.0.0.1 set, while it has been removed in this PR.

I did take a look through the code changes, but didn't spot where the DNS in resolv.conf is set to 127.0.0.1.

peterfour commented

Wonder if it might be because of this?

u.DNSAddress = gosettings.DefaultComparable(u.DNSAddress, "1.1.1.1:53")

hitem commented Nov 29, 2025

image: qmcgaw/gluetun:pr-2988
DNS_UPSTREAM_TYPE: 'plain'
DNS_UPSTREAM_PLAIN_ADDRESSES: '192.168.1.10'
(I also tried a specific port :53, and I tried using multiple ip:port entries as I have multiple DNS servers in my network.)
Unfortunately it's not working for me: it does not use my local DNS at 192.168.1.10 (i.e. a local address), and I see no requests coming through. I have not configured or touched anything within the container, such as the resolver, since such adjustments were not mentioned.
In the startup terminal log I see:
├── DNS settings:
| ├── Keep existing nameserver(s): no
| ├── DNS server address to use: 127.0.0.1
| └── DNS forwarder server enabled: no
2025-YY-XXTXX:XX:XX+XX:00 INFO [dns] using plaintext DNS at address 1.1.1.1
2025-YY-XXTXX:XX:XX+XX:00 INFO [vpn] There is a new release v3.40.3 (v3.40.3) created 10 days ago

Going back to setting the "soon to be legacy":
DNS_KEEP_NAMESERVER: 'on'
DNS_SERVER: '192.168.1.10'

├── DNS settings:
| └── Keep existing nameserver(s): yes
2025-YY-XXTXX:XX:XX+XX:00 WARN DNS address is set to 192.168.1.10 so the local forwarding DNS server will not be used. The default value changed to 127.0.0.1 so it uses the internal DNS server. If this server fails to start, the IPv4 address of the first plaintext DNS server corresponding to the first DNS provider chosen is used
2025-YY-XXTXX:XX:XX+XX:00 WARN [dns] ⚠️⚠️⚠️ keeping the default container nameservers, this will likely leak DNS traffic outside the VPN and go through your container network DNS outside the VPN tunnel!
2025-YY-XXTXX:XX:XX+XX:00 INFO [vpn] You are running on the bleeding edge of latest!

I once again see my DNS requests coming through my DNS server on 192.168.1.10.

If it helps at all, I use a Mullvad config and I use DoH through my own DNS servers. I run Gluetun through Docker Desktop on a PC, not on its own server in my network.


Labels

Status: 🔒 After next release (will be done after the next release)


Development

Successfully merging this pull request may close these issues.

Bug: Panic when using DNS_KEEP_NAMESERVER

5 participants