chore(dns): remove DNS_SERVER, DNS_KEEP_NAMESERVER and replace DNS_ADDRESS with DNS_UPSTREAM_PLAIN_ADDRESSES
#2988
Conversation
- New CSV format with port, for example `ip1:port1,ip2:port2`
- Retro-compatibility with `DNS_ADDRESS`: if set, it forces the upstream type to plain and empties the user-picked providers. `127.0.0.1` is now ignored since it's always set to this value internally.
- `DNS_UPSTREAM_TYPE=plain` must be set to use `DNS_UPSTREAM_PLAIN_ADDRESSES` (unless using the retro `DNS_ADDRESS`)
- Warning log on using private upstream resolvers updated
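For example, with a Docker Compose service, the new settings would look something like this minimal sketch; the resolver addresses and ports are placeholders, not recommendations:

```yaml
# Sketch only: resolver addresses/ports below are placeholders.
services:
  gluetun:
    image: ghcr.io/qdm12/gluetun:pr-2988
    environment:
      DNS_UPSTREAM_TYPE: "plain" # required to use DNS_UPSTREAM_PLAIN_ADDRESSES
      DNS_UPSTREAM_PLAIN_ADDRESSES: "1.1.1.1:53,9.9.9.9:9953" # CSV of ip:port
```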
Adding to that PR, I've been pulling my hair out for a while now with #2970. I had the container set up as a sidecar and it just wouldn't work, yet somehow it was fine as a normal container. But I'm still having an issue where, from within the pod with the VPN, the other container can't seem to resolve service names from other pods.

Env used:

```yaml
FIREWALL_INPUT_PORTS: "8112,9696"
HTTP_CONTROL_SERVER_AUTH_DEFAULT_ROLE: '{"auth":"apikey","apikey":"<redacted>"}'
VPN_SERVICE_PROVIDER: "private internet access"
VPN_TYPE: "openvpn"
PORT_FORWARD_ONLY: "true"
VPN_PORT_FORWARDING: "on"
VPN_PORT_FORWARDING_STATUS_FILE: "/gluetun/forwarded_port"
```
EDIT: I've tried this PR on Kubernetes, but didn't get it fully working. All requests seem to still go to Cloudflare, as that is being set by Gluetun in …. I tried overwriting this using both `DNS_ADDRESS` and `DNS_UPSTREAM_PLAIN_ADDRESSES`, but with no success (a sketch of such an override follows the deployment below). This is likely due to 127.0.0.1 being ignored.
Deployment used:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gluetun
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: gluetun
          image: ghcr.io/qdm12/gluetun:pr-2988
          env:
            - name: VPN_SERVICE_PROVIDER
              value: "private internet access"
            - name: FIREWALL_OUTBOUND_SUBNETS
              value: "10.244.0.0/16,10.96.0.0/12"
          envFrom:
            - secretRef:
                name: openvpn-auth
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
          volumeMounts:
            - name: dev-net-tun
              mountPath: /dev/net/tun
      volumes:
        - name: dev-net-tun
          hostPath:
            path: /dev/net/tun
            type: CharDevice
```
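A sketch of the override attempt mentioned above, as extra entries under the container's `env` list; the value is a placeholder (for instance a cluster DNS service IP), not the actual setting used:

```yaml
# Hypothetical override sketch; the address is a placeholder.
- name: DNS_UPSTREAM_TYPE
  value: "plain" # required for DNS_UPSTREAM_PLAIN_ADDRESSES
- name: DNS_UPSTREAM_PLAIN_ADDRESSES
  value: "10.96.0.10:53" # placeholder, e.g. a cluster DNS service IP
```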
EDIT: Spoke too soon; actually most of it is working, but it looks like I'm missing DNS resolution in the container in the pod somehow. Indeed, after some more tests inside the pod with the sidecar, I have access to the internal domains but no external name resolution, it seems.
@mglazenborg Are you also using gluetun as a sidecar or as a normal container in a pod? After some more digging I can see the same result as @mglazenborg. I'm using gluetun as a sidecar container in my pod to leverage the readiness probe feature. With the …
@peterfour I'm using it as a sidecar as well.
What do you mean by this? From my testing this should be working, both in the gluetun container as well as in the other container running in the pod. Since both containers share the same network namespace, the DNS queries will always go to the DNS resolver running in the gluetun container. If this isn't working, I'd almost suspect a deeper underlying issue, but I'm not sure.
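For context, the shared-namespace setup being discussed is roughly the following; this is a minimal sketch assuming Kubernetes 1.28+ native sidecars (an init container with `restartPolicy: Always`), with the app container name and image made up:

```yaml
# Minimal native-sidecar sketch (assumes k8s 1.28+; app image is made up).
# Both containers share the pod network namespace, so DNS queries from
# the app container to 127.0.0.1 reach the resolver running in gluetun.
spec:
  initContainers:
    - name: gluetun
      image: ghcr.io/qdm12/gluetun:pr-2988
      restartPolicy: Always # this is what makes it a sidecar
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]
  containers:
    - name: app
      image: example/app:latest # placeholder
```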
@mglazenborg I thought that was the case as well, but I just can't get it to work; I either have external domains or internal ones, but never both somehow. When I try the … The only way I made it work was using the …
Interesting, thanks for the feedback!
The nameserver should now only be 127.0.0.1 since the gluetun DNS now proxies things itself. How is it set to 1.1.1.1? And are you saying that with gluetun as a sidecar container, the main container can't access the gluetun DNS server?
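In other words, the expected resolv.conf inside the pod would then contain only:

```
nameserver 127.0.0.1
```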
Not sure where this comes from, but this is the IP I have in resolv.conf when I use …
Well, when I did test on latest (so using the …). So indeed the gluetun DNS appears to be available in the other container when using the terminal, but somehow not for the apps themselves. And when I use …
To add to this, my assumption is that this is due to the Dockerfile in the master branch still having the ENV …. I did take a look through the code changes but didn't spot where it sets the DNS in resolv.conf to 127.0.0.1.
Wonder if it might be because of that?
Description
Run this with image tag `:pr-2988`. No setting change needed. Following the #2970 plan with a few adjustments:
- You can now pick the plain upstream type (see 5ed6e82) AND you can use `DNS_UPSTREAM_PLAIN_ADDRESSES` (see below)
- Replace `DNS_ADDRESS` with `DNS_UPSTREAM_PLAIN_ADDRESSES`: new CSV format with port, for example `ip1:port1,ip2:port2`
- Require `DNS_UPSTREAM_TYPE=plain` to be set to use `DNS_UPSTREAM_PLAIN_ADDRESSES` (unless using the retro `DNS_ADDRESS`)
- Retro-compatibility with `DNS_ADDRESS`: if set, force the upstream type to plain and empty the user-picked providers. `127.0.0.1` is now ignored since it's always set to this value internally.

All in all, this greatly simplifies the code and the available options (fewer options for the same features is a win). It also allows you to specify multiple plain DNS resolvers on ports other than 53 if needed.
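As a sketch of the retro-compatible path, setting only the old variable should still work as described above; the address is a placeholder:

```yaml
# Retro-compat sketch: DNS_ADDRESS alone forces the plain upstream type
# and empties the user-picked providers (address is a placeholder).
environment:
  DNS_ADDRESS: "9.9.9.9"
```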
Issue
Assertions