
We have a pair of Red Hat Enterprise Linux servers in a cluster.

uname -a:

Linux deda-ora1 2.6.18-194.el5 #1 SMP Mon Mar 29 22:10:29 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux 

rpm -qf /etc/redhat-release 

enterprise-release-5-0.0.22

Each node has four NICs:

eth0: bonding slave

eth2: bonding slave

eth1: unused

bond0: bonding master
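
For context, this layout corresponds to ifcfg files along these lines — a sketch of the usual RHEL 5 convention, not our exact configuration (the bonding module alias in /etc/modprobe.conf is assumed separately):

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (ifcfg-eth2 is analogous)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=172.19.19.65
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
```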

ifconfig on the first node: 
bond0     Link encap:Ethernet  HWaddr D8:D3:85:B5:B6:AE   
      inet addr:172.19.19.65  Bcast:172.19.19.255  Mask:255.255.255.0 
      inet6 addr: fe80::dad3:85ff:feb5:b6ae/64 Scope:Link 
      UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1 
      RX packets:62794748 errors:0 dropped:28 overruns:0 frame:0 
      TX packets:67609557 errors:0 dropped:0 overruns:0 carrier:0 
      collisions:0 txqueuelen:0 
      RX bytes:17019400666 (15.8 GiB)  TX bytes:48301294532 (44.9 GiB) 


eth0      Link encap:Ethernet  HWaddr D8:D3:85:B5:B6:AE   
      UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1 
      RX packets:60616622 errors:0 dropped:28 overruns:0 frame:0 
      TX packets:67609557 errors:0 dropped:0 overruns:0 carrier:0 
      collisions:0 txqueuelen:1000 
      RX bytes:16815386111 (15.6 GiB)  TX bytes:48301294532 (44.9 GiB) 
      Interrupt:82 Memory:fa000000-fa012800 


eth2      Link encap:Ethernet  HWaddr D8:D3:85:B5:B6:AE   
      UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1 
      RX packets:2178126 errors:0 dropped:0 overruns:0 frame:0 
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 
      collisions:0 txqueuelen:1000 
      RX bytes:204014555 (194.5 MiB)  TX bytes:0 (0.0 b) 
      Interrupt:90 Memory:f8000000-f8012800 


lo        Link encap:Local Loopback   
      inet addr:127.0.0.1  Mask:255.0.0.0 
      inet6 addr: ::1/128 Scope:Host 
      UP LOOPBACK RUNNING  MTU:16436  Metric:1 
      RX packets:32107580 errors:0 dropped:0 overruns:0 frame:0 
      TX packets:32107580 errors:0 dropped:0 overruns:0 carrier:0 
      collisions:0 txqueuelen:0 
      RX bytes:2185420255 (2.0 GiB)  TX bytes:2185420255 (2.0 GiB) 

ip addr on the first node: 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue 
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 
inet 127.0.0.1/8 scope host lo 
inet6 ::1/128 scope host 
   valid_lft forever preferred_lft forever 

2: __tmp92808343: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000 
link/ether f4:ce:46:87:86:50 brd ff:ff:ff:ff:ff:ff 

3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000 
link/ether f4:ce:46:87:86:51 brd ff:ff:ff:ff:ff:ff 

4: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 qlen 1000 
link/ether d8:d3:85:b5:b6:ae brd ff:ff:ff:ff:ff:ff 

5: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 qlen 1000 
link/ether d8:d3:85:b5:b6:ae brd ff:ff:ff:ff:ff:ff 

6: sit0: <NOARP> mtu 1480 qdisc noop 
link/sit 0.0.0.0 brd 0.0.0.0 

7: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue 
link/ether d8:d3:85:b5:b6:ae brd ff:ff:ff:ff:ff:ff 
inet 172.19.19.65/24 brd 172.19.19.255 scope global bond0 
inet 172.19.19.164/24 scope global secondary bond0 
inet6 fe80::dad3:85ff:feb5:b6ae/64 scope link 
   valid_lft forever preferred_lft forever 


cat /etc/sysconfig/network 
NETWORKING=yes 
NETWORKING_IPV6=no 
HOSTNAME=deda-ora1 
GATEWAY=172.19.19.5 

The cluster services are:

one Oracle database

one virtual IP address, "172.19.19.164"

For the past five years everything worked perfectly, with the following routing table:

Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface 

172.19.19.0     *               255.255.255.0   U         0 0          0 bond0 

169.254.0.0     *               255.255.0.0     U         0 0          0 bond0 

default         172.19.19.5     0.0.0.0         UG        0 0          0 bond0 

Last night somebody enabled the RIP protocol on a pfSense firewall appliance, and the Oracle server stopped responding.

Eventually we discovered that, a few seconds after enabling the virtual IP cluster service, a new default route appeared.

The routing table became:

172.19.19.0     0.0.0.0         255.255.255.0   U     0      0        0 bond0

169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 bond0

0.0.0.0         172.19.19.11    0.0.0.0         UG    0      0        0 bond0

0.0.0.0         172.19.19.5     0.0.0.0         UG    0      0        0 bond0

172.19.19.11 is the pfSense appliance's IP.

We resolved it with:

route del default gw 172.19.19.11
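
A quick way to spot this condition is to count the gateway-flagged default entries in the routing table; a healthy table has exactly one, and two or more means something injected an extra default route. A small sketch (assuming `route -n`-style output; `count_default_routes` is a helper name of my own, not a standard command):

```shell
# count_default_routes: count gateway (G-flagged) default entries in
# `route -n`-style output read from stdin. More than one suggests an
# injected route, as in the table above.
count_default_routes() {
  awk '$1 == "0.0.0.0" && $4 ~ /G/' | wc -l
}

# Live usage on the node:
#   route -n | count_default_routes
# Removing the injected route, as we did:
#   route del default gw 172.19.19.11
```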

Neither routed nor gated was installed or running on the two cluster nodes.

It seems that the cluster node listens to the routing table broadcast by pfSense.
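
As far as I know, the kernel itself does not process RIP; some userspace process would have to listen on UDP port 520 and install the route. One way to check for such a listener (assuming RHEL 5's net-tools `netstat`; `rip_listeners` is a helper name of my own):

```shell
# rip_listeners: filter `netstat -anu`-style output (read from stdin)
# for UDP sockets bound to port 520, the RIP port. An empty result
# means no userspace daemon (routed/gated/ripd) is receiving RIP here.
rip_listeners() {
  awk '$1 ~ /^udp/ && $4 ~ /:520$/'
}

# Live usage on the node:
#   netstat -anu | rip_listeners
# To see whether pfSense is actually broadcasting RIP on the wire:
#   tcpdump -ni bond0 udp port 520
```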

Is it possible?

Thank you for help
