routing conflict because of duplicate routes on container interfaces set by connman #801
Replies: 7 comments
-
I was trying to set up a static IP with connman through a custom script run at boot time, but it just doesn't work; the VM keeps using DHCP. Maybe it should be run as a runtime command instead, I don't know, but I spent at least an hour trying to make it work without success. I'll give your solution a try to see if it works for me as well.

```yaml
write_files:
  - encoding: ""
    content: |-
      #!/bin/bash
      write_log () {
        local message="${1}"
        logger -t "boot-cmd" "${message}"
        echo "${message}"
      }
      write_log "Getting the service using eth0..."
      # Column 3 of `connmanctl services` is the service ID; keep the one whose details mention eth0
      ETH0=$(connmanctl services | awk '{ print $3 }' | while read -r s1; do connmanctl services $s1 | grep -q "eth0" && echo "$s1"; done)
      write_log "eth0 is bound to: ${ETH0}"
      write_log "Setting up manual net config..."
      connmanctl config $ETH0 --ipv4 manual 10.0.0.88 255.255.255.0 10.0.0.1 --nameservers 10.0.0.2 10.0.0.3
      connmanctl config $ETH0 --domains my.local
      connmanctl config $ETH0 --timeservers 10.0.0.4 # keeps .1 as optional ntp server but prefers .4
      connmanctl config $ETH0 --ipv6 off
      write_log "Restarting connman..."
      service connman restart
      write_log "$(connmanctl services $ETH0)"
      write_log "Network setup done."
    owner: root:root
    path: /etc/boot-cmd.sh
    permissions: '0755'
boot_cmd:
  - "/etc/boot-cmd.sh"
```
-
Not to mention that the log messages are nowhere to be found; it would have been too easy to debug otherwise 😀 not in
-
@immanuelfodor actually I can see a lot of
Is it different on your system?
-
It's not different; it's that there are no logs from the script at all. But look, here is another new bug report, #93: maybe my file isn't being run on boot at all. It could explain why there are no
-
Okay, so I had some time and replaced every "boot" occurrence with "run" in my example above, and now it works for me as expected. Logs are in

```yaml
write_files:
  - encoding: ""
    content: |-
      #!/bin/bash
      write_log () {
        local message="${1}"
        logger -t "run-cmd" "${message}"
        echo "${message}"
      }
      write_log "Getting the service using eth0..."
      # Column 3 of `connmanctl services` is the service ID; keep the one whose details mention eth0
      ETH0=$(connmanctl services | awk '{ print $3 }' | while read -r s1; do connmanctl services $s1 | grep -q "eth0" && echo "$s1"; done)
      write_log "eth0 is bound to: ${ETH0}"
      write_log "Setting up manual net config..."
      connmanctl config $ETH0 --ipv4 manual 10.0.0.88 255.255.255.0 10.0.0.1 --nameservers 10.0.0.2 10.0.0.3
      connmanctl config $ETH0 --domains my.local
      connmanctl config $ETH0 --timeservers 10.0.0.4 # keeps .1 as optional ntp server but prefers .4
      connmanctl config $ETH0 --ipv6 off
      write_log "Restarting connman..."
      service connman restart
      write_log "$(connmanctl services $ETH0)"
      write_log "Network setup done."
    owner: root:root
    path: /etc/run-cmd.sh
    permissions: '0755'
run_cmd:
  - "/etc/run-cmd.sh"
```

Edit: forgot to replace boot->run, now done :D
-
This config seems to work, but the startup banner says that interface eth0 is down. It would be cool if we could see the IP address of the node there.
-
That is a different issue with the init scripts not actually waiting for the network to be up.
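If anyone wants to paper over that from the script side, a small wait loop at the top of the run-cmd script is one option. This is only a sketch, assuming `connmanctl state` reports `State = ready` or `State = online` once the interface is up; the 30-second timeout is arbitrary:

```bash
# Wait up to ~30 seconds for connman to report the network as ready/online
for i in $(seq 1 30); do
    connmanctl state | grep -qE 'State = (ready|online)' && break
    sleep 1
done
```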
-
I stumbled upon this unexpected behaviour:

I have created a k3os cluster with static network config for the nodes (i.e. no DHCP). I'm setting all of the interface config in a `connman` service file, including the default gateway and the DNS servers. The network config in `/k3os/system/config.yaml` looks like:

This seems to work fine for the host's main interface, but it has an unexpected side effect on the virtual container interfaces that get dynamically created for every running container (`veth########`). For every container interface, a static route to the default gateway and to the IPs of the DNS servers gets created (which is just wrong and can't work).

This leads to a routing conflict: the default gateway and the DNS servers become unreachable both from the host and from inside the containers. (Routing in general still continues to work, though.) The major issue this causes is that the `CoreDNS` pod can't reach the internal DNS server. As CoreDNS is set up to fall back to the Internet root DNS servers, it can still query public domains, but it can't resolve any local DNS zones known only to the intranet DNS server (i.e. zone `mynetwork.local` on DNS server `192.168.100.2`).

I believe this is a `connman` issue, related to the fact that connman creates a config file for every container interface (`/var/lib/connman/ethernet_<id>_cable/settings`) with settings derived from the main interface.

The workaround for me is to not have connman set routes and DNS servers. When I use the following network config in `/k3os/system/config.yaml`, no duplicate routes appear on the container interfaces:

With this, the routes created for a container interface look like:

and everything works as expected (i.e. `CoreDNS` can reach the internal DNS server and resolve the zone `mynetwork.local`).
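The following is only a rough sketch of the kind of change described above, assuming the interface is provisioned through a connman service file written via `write_files`; the file path, addresses, and the `k3os.dns_nameservers`/`run_cmd` parts are illustrative assumptions, not the author's actual config:

```yaml
write_files:
  # Hypothetical connman provisioning file; connman reads *.config files from /var/lib/connman.
  - path: /var/lib/connman/eth0.config
    owner: root:root
    permissions: '0644'
    content: |-
      [service_eth0]
      Type = ethernet
      # Address and netmask only: no gateway here, so connman has no default
      # route to copy onto the per-container veth interfaces.
      IPv4 = 192.168.100.10/255.255.255.0
      IPv6 = off
      # Nameservers are omitted for the same reason.
k3os:
  # One possible place to supply DNS instead of connman (illustrative).
  dns_nameservers:
    - 192.168.100.2
run_cmd:
  # One possible way to still get a host default route without connman managing it
  # (purely illustrative; the comment above does not show how this is actually handled).
  - "ip route replace default via 192.168.100.1 dev eth0"
```

The key point, per the comment above, is simply that the gateway and nameservers are no longer handed to connman.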