
I'm trying to enable IPv6 on my Debian nodes (OpenVZ) using venet; my host is under Proxmox 2.2 (kernel 2.6.32-16-pve), and it seems the routing fails.

My host correctly pings all my nodes, and all my nodes ping my host, but none of the nodes can reach another node or the outside world. When I run a traceroute to a node from my computer, it stops before reaching my host (a traceroute to my host works fine).

Here's my network configuration:

  • Netmask provided by my ISP: 2001:41d0:2:52ae::/56
  • Host Netmask: 2001:41d0:2:52ae::/64
  • Host IPv6: 2001:41d0:2:52ae::1
  • NodeX netmask: 2001:41d0:2:520X::/64
  • NodeX IPv6: 2001:41d0:2:520X::1
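
As a quick sanity check on this addressing plan, Python's ipaddress module can confirm that both the host /64 and the node /64s do fall inside the provider's /56 (using X=1 as an illustrative node number; note that the /56 as written has host bits set, its canonical base is 2001:41d0:2:5200::):

```python
import ipaddress

# strict=False because 2001:41d0:2:52ae:: has host bits set for a /56
isp = ipaddress.ip_network("2001:41d0:2:52ae::/56", strict=False)
host = ipaddress.ip_network("2001:41d0:2:52ae::/64")
node1 = ipaddress.ip_network("2001:41d0:2:5201::/64")  # X=1, an assumption

print(isp)                    # 2001:41d0:2:5200::/56
print(host.subnet_of(isp))    # True
print(node1.subnet_of(isp))   # True
```

So the prefixes themselves are consistent; the problem described below is purely one of routing.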

On my host, the vmbr0 config is (from /etc/network/interfaces):

iface vmbr0 inet6 static
    address 2001:41d0:2:52ae::1
    netmask 64
    gateway 2001:41d0:2:52ff:ff:ff:ff:ff
    post-up ip -6 route add 2001:41d0:2:52ff:ff:ff:ff:ff/128 dev vmbr0 #gateway
    post-up ip -6 route add default via 2001:41d0:2:52ff:ff:ff:ff:ff #gateway
    post-up ip -6 route add 2001:41d0:2:520X::/64 dev vmbr1 # node X
    post-up ip -6 neigh add proxy 2001:41d0:2:52ff:ff:ff:ff:ff dev vmbr1
    post-up ip -6 neigh add proxy 2001:41d0:2:520X::1 dev vmbr0 # node X

On each node (from /etc/network/interfaces, automatically generated by proxmox):

iface venet0 inet6 manual
    up ifconfig venet0 add 2001:41d0:2:520X::1/128
    down ifconfig venet0 del 2001:41d0:2:520X::1/128
    up route -A inet6 add default dev venet0
    down route -A inet6 del default dev venet0
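
For reference, the same node configuration expressed with iproute2 instead of the deprecated ifconfig/route tools (a sketch only; Proxmox generates the ifconfig form automatically, so this is equivalent rather than what it ships):

```
iface venet0 inet6 manual
    up ip -6 addr add 2001:41d0:2:520X::1/128 dev venet0
    down ip -6 addr del 2001:41d0:2:520X::1/128 dev venet0
    up ip -6 route add default dev venet0
    down ip -6 route del default dev venet0
```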

Am I missing something, or is it simply not possible via venet?

Edit: here's the output of ip -6 route show on my host:

2001:41d0:2:520X::1 dev venet0  metric 1024  mtu 1500 advmss 1440 hoplimit 4294967295
2001:41d0:2:52ae::/64 dev vmbr0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
2001:41d0:2:5200::/56 dev vmbr0  proto kernel  metric 256  expires 0sec mtu 1500 advmss 1440 hoplimit 4294967295
fe80::1 dev venet0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
fe80::/64 dev dummy0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
fe80::/64 dev vmbr1  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
fe80::/64 dev vmbr0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
fe80::/64 dev eth0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
fe80::/64 dev venet0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 4294967295
default via fe80::5:73ff:fea0:0 dev vmbr0  proto kernel  metric 1024  expires 0sec mtu 1500 advmss 1440 hoplimit 64

And on my node:

2001:41d0:2:520X::1 dev venet0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
fe80::/64 dev venet0  proto kernel  metric 256  mtu 1500 advmss 1440 hoplimit 0
default dev venet0  metric 1  mtu 1500 advmss 1440 hoplimit 0

Edit2: I switched from venet to veth, and it works without a glitch. Still, I'm interested in finding a way to make it work via venet... With veth, ip -6 route show displays a gateway route that was not present with venet. Maybe this is the reason...
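
If that missing gateway route is indeed the difference, one hypothetical thing to try inside a venet container is pointing the default route at the host's link-local address instead of using a bare device route. fe80::1 below is an assumption (OpenVZ traditionally assigns it to the venet peer; check what ip -6 addr show venet0 reports on the host), so treat this as a sketch:

```
iface venet0 inet6 manual
    up ifconfig venet0 add 2001:41d0:2:520X::1/128
    up ip -6 route add default via fe80::1 dev venet0   # assumed host-side link-local
    down ifconfig venet0 del 2001:41d0:2:520X::1/128
```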

tmuguet
  • That ping you mentioned, is this _ping6_? _traceroute6_ (on fedora 18 beta anyway)? Show us the output of 'ip -6 route show' – ArrowInTree Dec 27 '12 at 04:13
  • Yes, I'm using _ping6_ and _traceroute6_. I edited my question with the output of 'ip -6 route show' – tmuguet Dec 27 '12 at 12:40
  • Try turning on IPv6 forwarding: `for i in /proc/sys/net/ipv6/conf/*/forwarding; do echo 1 > "$i"; done` – 0xFF Dec 27 '12 at 13:57
  • Thanks, but it doesn't work: turning IPv6 forwarding on _all_ breaks my connectivity, and forwarding was already activated on the other interfaces. – tmuguet Dec 27 '12 at 14:32
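
A possible explanation for the connectivity loss mentioned in the last comment: on Linux, setting IPv6 forwarding=1 on an interface also makes it stop accepting Router Advertisements unless accept_ra is raised to 2, and the host's default route shown above is RA-learned (proto kernel, with an expiry). A hedged sysctl sketch, with interface names taken from the question:

```
# /etc/sysctl.conf — a sketch, not from the original post
net.ipv6.conf.all.forwarding = 1
# forwarding=1 normally disables RA acceptance; accept_ra=2 keeps the
# RA-learned default route on the uplink bridge alive
net.ipv6.conf.vmbr0.accept_ra = 2
```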

1 Answer


The /etc/network/interfaces configuration you show cannot possibly work.

You are pointing your default gateway at a local address you are configuring on the host. Your default gateway (almost certainly) needs to be pointed to some address on your provider's network.

You haven't put any public IPv6 address on your eth0 interface to talk to your provider. Most likely this is where your 2001:41d0:2:52ae::1/64 address should go, and most likely your default gateway should be 2001:41d0:2:52ff:ff:ff:ff:ff via device eth0.

Once you've got the basic networking working on eth0, then you can work on routing the other /64s in your /56 to your other VMs.
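
Following that advice, the host configuration might look something like the sketch below. The addresses are taken from the question, but whether eth0 or the vmbr0 bridge actually faces the provider depends on the Proxmox setup, so treat this as a starting point rather than a drop-in fix:

```
iface eth0 inet6 static
    address 2001:41d0:2:52ae::1
    netmask 64
    gateway 2001:41d0:2:52ff:ff:ff:ff:ff
    # on-link route to the provider's gateway, then route each node /64 internally
    post-up ip -6 route add 2001:41d0:2:52ff:ff:ff:ff:ff/128 dev eth0
    post-up ip -6 route add 2001:41d0:2:520X::/64 dev vmbr1   # node X
```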