
I configured a bridge that connects the guests to the outside network, and in each guest I set the gateway to the bridge's IP (currently done via DHCP).

eth0 <--> bridge <--> guest

This surely works. But when I do a live migration, a problem arises: the guest's network stops working, because the gateway inside the guest has not changed and the guest keeps pointing at the bridge of the source host. If I could restart the guest it would be easy (edit the libvirt XML or adjust DHCP), but what I want is live migration. Moreover, I can't use SSH to run a command in the guest, because I am assumed not to know the guest's username and password. How can I automatically change the guest's gateway to the appropriate one when I do a live migration? I thought Open vSwitch could solve this issue, but it looks like Open vSwitch can only change the routing and can't change the gateway inside the guest, so the problem is still there. A way to run a command in the guest without SSH would also work: then I could force the guest's DHCP client to send a DHCPDISCOVER. Sadly, that seems to be available only in VMware (the VIX API), and I can't find a corresponding feature in KVM.
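
For reference, this is roughly how I drive the migration - a minimal sketch using the libvirt Python bindings; the connection URIs and the guest name are placeholders, not my real values:

```python
import libvirt

# Placeholder connection URIs for the source and destination hosts.
src = libvirt.open("qemu:///system")
dst = libvirt.open("qemu+ssh://dest-host/system")

dom = src.lookupByName("guest01")  # placeholder guest name

# Live-migrate the running guest. Only the hypervisor-side state moves;
# nothing inside the guest (its routes, its DHCP lease) is touched,
# which is exactly why the old gateway sticks around.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
```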

Added: the bridge has the actual IP, and the bridge IP is the gateway for the guests on that host. So there are multiple gateways in one subnet, and each host has one gateway per subnet. I have configured each host to handle NAT for its own guests; a guest's public IP is handled by its host. I used this approach to avoid a single point of failure and to distribute the NAT workload across the hosts (a sketch of the per-host NAT rule is below). Do I have to throw out this structure to achieve live migration? Is it a bad approach for constructing a virtual machine cluster?
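
To make the per-host NAT concrete, here is a minimal sketch, assuming a guest subnet of 192.168.122.0/24 and eth0 as the uplink interface (both are assumptions for illustration, not my actual values):

```python
import subprocess

# Each host masquerades traffic from its own guests as it leaves the uplink.
GUEST_SUBNET = "192.168.122.0/24"  # assumed guest subnet
UPLINK = "eth0"                    # assumed uplink interface

subprocess.run(
    ["iptables", "-t", "nat", "-A", "POSTROUTING",
     "-s", GUEST_SUBNET, "-o", UPLINK, "-j", "MASQUERADE"],
    check=True,
)
```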

jinhwan

1 Answer


Is the bridge IP (the host's, actually) the real gateway on this subnet? If not, there should be no need to use the bridge IP as the gateway in the VM; use the subnet gateway instead. Treat the bridge as a dumb switch or even a hub: just a logical object your VM is plugged into that passes its traffic to the real network out there. This means the VM is as much on the subnet as the physical host is, so it should use the same network infrastructure definitions - gateways, DNS, DHCP... everything is just the same as with physical hosts.
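
For example, a bridged guest's definition only names the bridge it is plugged into; there is no gateway or other host-specific addressing in it that would need to change on migration. A minimal sketch with the libvirt Python bindings (the guest name is a placeholder):

```python
import xml.etree.ElementTree as ET

import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("guest01")  # placeholder guest name

# The domain XML for a bridged NIC only points at the bridge device;
# gateways, DNS and addresses come from the subnet, not from the host.
root = ET.fromstring(dom.XMLDesc(0))
for iface in root.findall("./devices/interface[@type='bridge']"):
    print(iface.find("source").get("bridge"))
```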

dyasny
  • Thanks for the answer. It looks like my question was not clear; I'll fix it. The bridge has the actual IP, and the bridge IP is the gateway, so there are multiple gateways in one subnet and each host has one gateway per subnet. I have configured each host to handle NAT for its own guests; a guest's public IP is handled by its host. I used this approach to avoid a single point of failure and to distribute the NAT workload across the hosts. Is it a bad approach for constructing a virtual machine cluster? – jinhwan Apr 03 '13 at 08:33
  • Exactly. If you need HA for the gateway, you can cluster Cisco routers (or use some other solution, of course), but when you tie anything down to a hypervisor, you effectively cancel live migration for everything that uses the hardcoded host resource. That is true for PCI passthrough devices, host-level infrastructure settings, etc. – dyasny Apr 03 '13 at 12:26