
I was working on a server today running Debian Squeeze. After testing this on two staging servers, I added a virtual network interface to /etc/network/interfaces like this:

# The primary network interface
auto lo
iface lo inet loopback
allow-hotplug eth0
iface eth0 inet static
    address 10.100.2.70
    netmask 255.255.0.0
    gateway 10.100.0.1

# adding this one
auto eth0:1
allow-hotplug eth0:1
iface eth0:1 inet static
    address 10.100.2.77
    netmask 255.255.0.0
    gateway 10.100.0.1

Keepalived is managing a virtual IP on the machine:

ip addr show
....
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:24:81:81:e5:54 brd ff:ff:ff:ff:ff:ff
inet 10.100.2.70/16 brd 10.100.255.255 scope global eth0
inet 10.100.2.72/32 scope global eth0

With the new interface stanza in place, a sudo service networking restart brought down the network on the box. From a console via iDRAC, the network wouldn't come back up even with the new lines removed from the file, and it required a reboot. I know I could have done ifup eth0:1, but I wanted to see the whole thing work.
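In other words, these are the two commands in question, using the eth0:1 name from the stanza above:

# what I ran, which took the whole box offline
sudo service networking restart

# what should have brought up only the new alias, leaving eth0 and the keepalived VIP alone
sudo ifup eth0:1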

Is there some feature of keepalived that would have caused my problem? The only log messages I see in syslog are

Sep 30 15:48:17 pgpool01 Keepalived_vrrp: Kernel is reporting: interface eth0 DOWN
Sep 30 15:48:17 pgpool01 Keepalived_vrrp: VRRP_Instance(VI_1) Entering FAULT STATE
Sep 30 15:48:17 pgpool01 Keepalived_vrrp: VRRP_Instance(VI_1) Now in FAULT state

As I said, I did not see these problems on the staging boxes.

Any tips would be helpful, thanks.

Edit: adding ip link and ifconfig output

$ ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:24:81:81:e5:54 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 00:24:81:81:e5:55 brd ff:ff:ff:ff:ff:ff

$ /sbin/ifconfig
eth0  Link encap:Ethernet  HWaddr 00:24:81:81:e5:54  
      inet addr:10.100.2.70  Bcast:10.100.255.255  Mask:255.255.0.0
      inet6 addr: fe80::224:81ff:fe81:e554/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:2080027816 errors:0 dropped:0 overruns:0 frame:0
      TX packets:2498837332 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000 
      RX bytes:683029542202 (636.1 GiB)  TX bytes:710577938507 (661.7 GiB)
      Interrupt:16 Memory:fc4c0000-fc4e0000 

lo    Link encap:Local Loopback  
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:16436  Metric:1
      RX packets:45564 errors:0 dropped:0 overruns:0 frame:0
      TX packets:45564 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0 
      RX bytes:2279497 (2.1 MiB)  TX bytes:2279497 (2.1 MiB)

and keepalived.conf

vrrp_script chk_pgpool {           # Requires keepalived-1.1.13
    script "killall -0 pgpool"     # cheaper than pidof
    interval 2                      # check every 2 seconds
    weight 2                        # add 2 points of prio if OK
}
vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 72
    priority 101                    # 101 on master, 100 on backup
    virtual_ipaddress {
        10.100.2.72
    }
    track_script {
        chk_pgpool
    }
}
Kevin G.
  • Do you happen to have the output of `ip link`? What may be happening is keepalived was trying to control the IP on the same VIF. Can you put your keepalived configuration up as well? – Gene Oct 01 '14 at 01:25
  • 1
    @Gene, I added those to the original question. All that suggest anything to you? – Kevin G. Oct 01 '14 at 18:15
  • Thank you for the additional information. I believe I have a solution for the problem, so I answered it below. – Gene Oct 02 '14 at 08:22
  • Normally keepalived works fine on a physical interface. If you are using a virtual interface, its link id changes on every network restart, but keepalived still holds the old link id, so even when the MASTER server comes back the transition won't happen (the VIP won't move back from the backup server to the master). You have to restart the keepalived service so it picks up the new link id of the virtual interface. – Arunraj M Jul 13 '18 at 14:09

2 Answers


Thank you for posting the additional information. I forgot that keepalived doesn't assign vrrp instances to virtual interfaces (e.g. eth0:0).

Since you did a service networking restart, keepalived flipped out when eth0 disappeared.

What you'll need to do is configure your interfaces before starting keepalived, or bring new ones up manually rather than restarting networking. You can do this by adding the new interface to /etc/network/interfaces and then running ifup eth0:#.
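For example, with the eth0:1 stanza from the question already in /etc/network/interfaces, something along these lines (names taken from the question) brings the alias up and lets you verify it without disturbing eth0 or the VRRP instance:

sudo ifup eth0:1      # bring up only the alias defined in /etc/network/interfaces
ip addr show eth0     # 10.100.2.77 should now appear alongside 10.100.2.70 and the VIP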

Gene

Keepalived has a built-in feature for adding a label to each VIP after the device name. The same label value can be used for multiple VIPs if needed:

virtual_ipaddress {
    192.168.200.18/24 dev eth2 label eth2:1
}

The label eth2:1 does not need any ifup preparation and shows up in ifconfig the way the original poster (probably) intended:

eth2:1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
    inet 192.168.200.18 netmask 255.255.255.0  broadcast 0.0.0.0
    ether xx:xx:xx:xx:xx:d8  txqueuelen 1000  (Ethernet)

(I am using the labelled subinterface to keep SSSD from picking up those VIPs on the main interface, and thus to keep the VIPs out of DNS updates, which I do not want.)
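For the setup in the question, a sketch of the corresponding block might look like this (interface and VIP taken from the question's keepalived.conf; the label name is arbitrary and is chosen here as eth0:2 to avoid clashing with the eth0:1 alias the question adds for 10.100.2.77):

virtual_ipaddress {
    10.100.2.72 dev eth0 label eth0:2
}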

tourendal