
First of all, it cracks me up how many articles are out there on forcing IPv6 OFF on Linux servers. Come on folks, get with the new! :D

root@hodor:~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster
root@hodor:~# uname -a
Linux hodor 4.19.0-8-amd64 #1 SMP Debian 4.19.98-1 (2020-01-26) x86_64 GNU/Linux

I have a repeatable problem where, after a reboot, one of my bridge interfaces and all child/slave interfaces of that bridge have IPv6 disabled. Among other things, this causes a failure when setting the host's IPv6 address. This is what I see:

net.ipv6.conf.br0.disable_ipv6 = 1
net.ipv6.conf.enp175s0f0.disable_ipv6 = 1
net.ipv6.conf.enp175s0f1.disable_ipv6 = 1
net.ipv6.conf.hostveth0.disable_ipv6 = 1
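
(For reference, the values above come from sysctl; a one-liner along these lines, which is just how I happen to check, lists every disable_ipv6 flag at once.)

sysctl -a 2>/dev/null | grep disable_ipv6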

I couldn't find anything of relevance in /etc/sysctl.d/*. Here is my sysctl.conf:

root@hodor:~# grep -v ^\# /etc/sysctl.conf






net.ipv4.ip_forward=1

net.ipv6.conf.all.forwarding=1



net.ipv6.conf.br0.disable_ipv6 = 0
net.ipv6.conf.br0/5.disable_ipv6 = 0
net.ipv6.conf.br0/90.disable_ipv6 = 0
net.ipv6.conf.enp175s0f0.disable_ipv6 = 0
net.ipv6.conf.enp175s0f1.disable_ipv6 = 0
net.ipv6.conf.hostveth0.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.disable_ipv6 = 0

After I run sysctl -p I can then manually set my IPv6 address and fix all the other little nuances, but that sucks.
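
For completeness, this is roughly the manual cleanup I end up doing after each reboot (addresses obfuscated the same way as above); it works, but having to do it by hand is exactly the problem:

sysctl -p
ip -6 addr add 2600:####:####:###0::face/64 dev br0
ip -6 route add default via 2600:####:####:###0::1 dev br0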

I also thought that maybe GRUB was the culprit, but I see nothing there that refers to disabling IPv6 via a kernel parameter.

root@hodor:~# grep -v ^\# /etc/default/grub

GRUB_DEFAULT=0
GRUB_TIMEOUT=1
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX="console=tty1 console=ttyS0,115200 intel_iommu=on"
GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
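
The running kernel's command line tells the same story; ipv6.disable=1 is the parameter that would switch IPv6 off globally at boot, and it isn't there:

cat /proc/cmdline    # no ipv6.disable=1 present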

Here is /etc/network/interfaces (obfuscated); there is nothing in /etc/network/interfaces.d/:

source /etc/network/interfaces.d/*

auto lo
auto enp5s0
auto enp6s0
iface lo inet loopback
iface enp5s0 inet manual
iface enp6s0 inet manual


auto enp175s0f0
iface enp175s0f0 inet manual


auto enp175s0f1
iface enp175s0f1 inet manual

auto br0
iface br0 inet static
bridge_ports enp175s0f1 enp175s0f0 hostveth0
bridge_stp off
bridge_maxwait 5
address 172.16.10.35
netmask 255.255.254.0
gateway 172.16.10.1
dns-nameservers 172.16.10.1
hwaddress ether 9e:7d:01:6c:32:1b
        pre-up ip link add name hostveth0 type veth peer name dockerveth0
        pre-up ip link set hostveth0 up
        pre-up ip link set dockerveth0 up

iface br0 inet6 static
        address 2600:####:####:###0::face/64
        dns-nameservers 2600:####:####:###0::1
        gateway 2600:####:####:###0::1

auto virttap0
iface virttap0 inet manual
        pre-up modprobe dummy
        pre-up ip link add name virttap0 type dummy
        post-up ip link set virttap0 arp on multicast on

iface br0.5 inet manual
        vlan-raw-device br0

iface br0.90 inet manual
        vlan-raw-device br0

auto br5
iface br5 inet manual
bridge_ports br0.5
bridge_stp off
bridge_maxwait 5

auto br90
iface br90 inet manual
bridge_ports br0.90
bridge_stp off
bridge_maxwait 5

Hopefully this is an easy one. Please help if you can!

Lon Kaut

2 Answers


I'm assuming that you are using these three packages to provide the options in use: ifupdown, bridge-utils, vlan. The latter two provide the commands brctl and vconfig, both obsolete, but more importantly they provide Debian-specific plugin scripts for ifupdown. While brctl is still used in these scripts, vconfig is not used at all (it has been replaced by modern ip link commands).

The problem is caused by the fact that br0 is parent to a VLAN sub-interface that gets created by bridge-utils scripts (not by scripts from the vlan package).

The bridge-utils ifupdown plugin scripts prevent bridge ports from participating in routing:

# ls -l /etc/network/if-pre-up.d/bridge
lrwxrwxrwx. 1 root root 29 Jan 28  2019 bridge -> /lib/bridge-utils/ifupdown.sh

which is a Debian-specific script belonging to the bridge-utils package. Here's the relevant content (sorry this is a rare package that doesn't appear to be on https://salsa.debian.org, so no link):

      if [ -f /proc/sys/net/ipv6/conf/$port/disable_ipv6 ]
      then
        echo 1 > /proc/sys/net/ipv6/conf/$port/disable_ipv6
      fi

This is a desired setting for bridge ports.
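
(If you want to confirm on your own system where the value gets written, a recursive grep over the ifupdown hook directories and the bridge-utils helpers should point straight at these scripts; this is only a way to look, not part of the fix.)

grep -Rl disable_ipv6 /etc/network /lib/bridge-utils 2>/dev/null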

But in OP's setup the bridge interface is intended to receive an address and participate in routing, and also to be the parent interface of a VLAN sub-interface that is itself enslaved to a bridge. That's a topology bridge-utils does not expect.

The previous script calls /lib/bridge-utils/bridge-utils.sh which includes:

create_vlan_port()
{
# port doesn't yet exist
if [ ! -e "/sys/class/net/$port" ]
then
  local dev="${port%.*}"
  # port is a vlan and the device exists?
  if [ "$port" != "$dev" ] && [ -e "/sys/class/net/$dev" ]
  then
    if [ -f /proc/sys/net/ipv6/conf/$dev/disable_ipv6 ]
    then
      echo 1 > /proc/sys/net/ipv6/conf/$dev/disable_ipv6
    fi
    ip link set "$dev" up
    ip link add link "$dev" name "$port" type vlan id "${port#*.}"
  fi
fi
}

When the sub-interface doesn't exist (it doesn't even need a configuration stanza to be created at all by this script), its parent interface gets IPv6 disabled (while the port itself gets it disabled by the previous script), for reasons similar to the bridge case: the parent interface is supposed to carry only VLAN-tagged traffic, so it is prevented from interfering with any routing, for example by acquiring automatic IPv6 addresses. This is also usually a desired setting, but not in OP's case, where the same interface is intended to carry both tagged and untagged traffic.

In OP's setup the sub-interfaces are defined in the configuration and are intended to be created on the system by the plugin scripts from the vlan package, but since there is no auto br0.5 nor auto br0.90, the interfaces did not yet exist at the system level when bridge-utils's script checked, so it executes the # port doesn't yet exist block: it creates them, but disables IPv6 on their parent interface first. It's important here not to confuse the logical interface as seen by ifupdown with the real interface on the system, despite them having the same name in almost all setups.
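
A quick way to see that mismatch on a freshly booted system is to compare ifupdown's logical state with what actually exists (just a sketch of how to look; ifquery ships with ifupdown):

ifquery --state                                  # interfaces ifupdown considers configured
ip -d link show br0.5                            # errors out if the VLAN sub-interface doesn't exist yet
cat /proc/sys/net/ipv6/conf/br0/disable_ipv6     # 1 after the bridge-utils script has run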

Solutions

Any of the first three methods below should get the intended result. I'm also suggesting a 4th, but integrating it with applications like Docker wouldn't be simple.

  • work around this by adapting to the peculiarities of the (quite obsolete) bridge-utils package: bring up the configured sub-interfaces in advance, so they exist at the system level. Then the script above won't disable IPv6 on their parent interface (it won't match # port doesn't yet exist), and neither will the scripts from the vlan package, which this time will have created the VLAN sub-interfaces.

    auto br0.5
    iface br0.5 inet manual
            vlan-raw-device br0
    
    auto br0.90
    iface br0.90 inet manual
            vlan-raw-device br0
    

    and make sure it happens before the configuration of br5 and br90 (which is already the case). After this, only these interfaces will have IPv6 disabled, as it should be: br0.5 and br0.90, as well as enp175s0f1, enp175s0f0 and hostveth0.

    While this is a simple change, it won't prevent problems later if ifup and ifdown are used in the "wrong order": br0 can get IPv6 disabled again, or some interfaces (ports) which should have it disabled won't. The only order guaranteed to work is the one from the configuration:

    ifdown br90
    ifdown br5
    ifdown br0.90 # even if they have now disappeared from the system
    ifdown br0.5  # they are still up for ifupdown's logic
    ifdown br0
    ifup br0
    ifup br0.5
    ifup br0.90
    ifup br5
    ifup br90
    
  • keep the bridge as a bridge only and use an additional pair of veth interfaces, with one end on the bridge and the other end participating in routing. This gives a clean separation between bridging and routing (and won't be subject to any side effects, for example when using Docker, but at the same time it might require changes to your current Docker setup):

    auto routing0
    iface routing0 inet static
        pre-up ip link add name routing0 address 9e:7d:01:6c:32:1b type veth peer name br0routing0 || :
        address 172.16.10.35
        netmask 255.255.254.0
        gateway 172.16.10.1
        dns-nameservers 172.16.10.1
    
    iface routing0 inet6 static
        address 2600:####:####:###0::face/64
        dns-nameservers 2600:####:####:###0::1
        gateway 2600:####:####:###0::1
    
    auto br0
    iface br0 inet manual
    bridge_ports br0routing0 enp175s0f1 enp175s0f0 hostveth0
    bridge_stp off
    bridge_maxwait 5
            pre-up ip link add name hostveth0 type veth peer name dockerveth0 || :
            pre-up ip link set hostveth0 up
            pre-up ip link set dockerveth0 up
    

    I don't know if the hardware address is a new one (assumed in the configuration above) or belongs to enp175s0f1 and is needed for some reason (in this case routing0 must not use it, and to avoid complexity don't use this solution). You'll possibly have to adapt the configuration of any unrelated service having br0 in its configuration and use routing0 instead.

  • switch to ifupdown2, a complete re-implementation of ifupdown made by Cumulus Networks, a company that builds switches and routers running Linux:

    ifupdown2 is a new implementation of debian’s network interface manager ifupdown. It understands interface dependency relationships, simplifies interface configuration, extends ifquery to support interface config validation, supports JSON and more.

    It has built-in bridge and VLAN handling and doesn't rely on the bridge-utils or vlan packages anymore.

    As usual, switching the tool that manages the network might cause connectivity issues, so have remote console access available.

    Keeping your configuration as-is should work correctly, but from this comment in ifupdown2's version of interfaces(5):

    BUILTIN INTERFACES

    iface sections for some interfaces like physical interfaces or vlan interfaces in dot notation (like eth1.100) are understood by ifupdown. These interfaces do not need an entry in the interfaces file if they are dependents of other interfaces and don't need any specific configurations like addresses etc.

    you should completely remove the definitions for br0.5 and br0.90 from the configuration (except of course in the bridge_ports entries).

    Such a configuration will again get IPv6 disabled only on the bridge ports: br0.5 and br0.90, as well as enp175s0f1, enp175s0f0 and hostveth0. I still expect possible issues when using arbitrary ifdown/ifup commands.

  • suggestion only: ifupdown2 can also be configured to use a VLAN-aware bridge, turning the setup into one bridge and zero VLAN sub-interfaces (a rough configuration sketch follows this list).

    This should be the best setup, but not many applications currently support configuring VLAN IDs on a bridge port (e.g. using the bridge vlan command). I don't think Docker supports this, so it would not be useful for OP's setup.
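
    For completeness, a rough sketch of what this VLAN-aware variant could look like with ifupdown2 (attribute names per the ifupdown2/Cumulus documentation, untested against OP's setup; each guest- or container-facing port would additionally need its VLAN membership configured, e.g. with bridge-access, which is exactly the part most applications don't handle):

    auto br0
    iface br0
        bridge-vlan-aware yes
        bridge-ports enp175s0f1 enp175s0f0 hostveth0
        bridge-vids 5 90
        bridge-pvid 1
        bridge-stp off
        address 172.16.10.35/23
        address 2600:####:####:###0::face/64
        gateway 172.16.10.1
        gateway 2600:####:####:###0::1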

A.B
  • Love these answers. Thank you very much for providing such detail! Recommendations #2 & #3 might be my desired solutions. I will report back with results. – Lon Kaut Oct 19 '20 at 19:46
  • "switch to ifupdown2 which is an ifupdown complete re-implementation made by Cumulus Networks which provides switches and routers running Linux:" is what I ultimately went with. It was a journey of trial and error but reached the desired outcome. – Lon Kaut Oct 21 '20 at 15:00

I ultimately got this working with the suggestion from @A.B above:

"switch to ifupdown2 which is an ifupdown complete re-implementation made by Cumulus Networks which provides switches and routers running Linux:"

Lots of lessons learned when switching from ifupdown to ifupdown2:

  1. As warned by @A.B, there were immediate network issues when upgrading from ifupdown to ifupdown2. The main one was that my interfaces were renamed (swapped): what was enp175s0f0 became enp175s0f1 and vice versa. About 45 minutes of tcpdump etc. led me to the resolution here.
  2. As of 10/21/2020 the Debian repos provide an old version of ifupdown2:
# apt-cache madison ifupdown2
 ifupdown2 |    1.2.5-1 | http://deb.debian.org/debian buster/main amd64 Packages
 ifupdown2 |    1.2.5-1 | http://deb.debian.org/debian buster/main i386 Packages
 ifupdown2 |    1.2.5-1 | http://deb.debian.org/debian buster/main Sources

My trials with this version resulted in a lot of frustration: I still could not get the config in /etc/network/interfaces to assign an IPv6 address to my bridge, or any interface for that matter. I'm not getting into syntax here, because the same syntax worked on the later version. Please, just compile the latest .deb from here: Cumulus Github. After using that version (ifupdown2 v3), my /etc/network/interfaces config produced the desired IPv6 address on my interface.
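
For reference, building it followed the standard Debian packaging route against the Cumulus GitHub repo; roughly the following (exact build-dependency package names may differ slightly on your system):

apt install git build-essential devscripts equivs
git clone https://github.com/CumulusNetworks/ifupdown2.git
cd ifupdown2
mk-build-deps -i -r debian/control    # install the build dependencies listed by the package
dpkg-buildpackage -us -uc -b          # builds ../ifupdown2_*.deb
apt install ../ifupdown2_*.deb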

  3. It was important to heed the warning from @A.B about "BUILTIN INTERFACES". Keeping stanzas for interfaces that need no specific configuration, such as auto enp175s0f0 and iface enp175s0f0 inet manual, caused weird issues, particularly my KVM guests being prevented from auto-booting; and since one of them used PCI passthrough for a NIC, those NICs went and pulled IPv4 and IPv6 addresses from the infrastructure, confusing me further.
  4. DNS... My DNS entries in /etc/network/interfaces were being completely ignored, and I had a lot of trouble finding the RIGHT WAY to set DNS with ifupdown2.
  • I went down the path of messing with NetworkManager and ultimately removed it, but that still wouldn't allow me to set DNS using /etc/network/interfaces...
  • I've always known that on modern Linux systems you don't manually edit /etc/resolv.conf, because the entries would ultimately get overwritten by NetworkManager, ifupdown(2), or something else. Debian Docs on the matter
  • I learned that ifupdown2 leverages the resolvconf package to interpret the DNS settings in /etc/network/interfaces and deploy them to /etc/resolv.conf. Just because you have the directory /etc/resolvconf/ doesn't mean you have the resolvconf package installed! Ya gotta install it. After this, I was in business.
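
In concrete terms, once I understood that, this is all it took (ifreload -a is ifupdown2's command to re-apply the whole configuration):

apt install resolvconf
ifreload -a                # re-apply /etc/network/interfaces via ifupdown2
cat /etc/resolv.conf       # now shows the dns-nameservers / dns-search entries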

Here was my final /etc/network/interfaces (much simpler):

grep -v ^\# /etc/network/interfaces

source /etc/network/interfaces.d/*

auto lo
iface lo inet loopback




auto br0
iface br0 inet manual
bridge_ports enp175s0f1 enp175s0f0 hostveth0
bridge_stp off
bridge_maxwait 5
        up echo $IFACE is up;
        address 172.16.10.35/23
        address 2600:####:####:###0::face/64
        gateway 172.16.10.1
        gateway 2600:####:####:###0::1
        dns-nameservers 172.16.10.1 2600:####:####:###0::1
        dns-search ####.tld
        hwaddress ether 9e:7d:01:6c:32:1b
        pre-up ip link add name hostveth0 type veth peer name dockerveth0
        pre-up ip link set hostveth0 up
        pre-up ip link set dockerveth0 up


auto virttap0
iface virttap0 inet manual
        pre-up modprobe dummy
        pre-up ip link add name virttap0 type dummy
        post-up ip link set virttap0 arp on multicast on

auto br5
iface br5 inet manual
bridge_ports br0.5
bridge_stp off
bridge_maxwait 5


auto br90
iface br90 inet manual
bridge_ports br0.90
bridge_stp off
bridge_maxwait 5

Lon Kaut