4

We have a dedicated server at OVH, assigned 2001:41d0:a:72xx::/64.

I have set the machines up on a segment bridged to the WAN, as described in IPv6 public routing of virtual machines from host.

The gateway is 2001:41d0:a:72ff:ff:ff:ff:ff, which is outside our /64.

We're running a bunch of virtual Debian servers.

Some of our (older) servers are happy to route IPv6 via the gateway, but the new ones I'm trying to set up report "Destination unreachable: Address unreachable" when pinging the gateway.

The firewall is set up identically (rules for the /64, not per host), and /etc/network/interfaces is the same; the IPv6 addresses are set statically (different addresses, of course).
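For reference, the static configuration is along these lines (a sketch with a placeholder address following the masking above, not the exact file; the post-up routes are the usual trick for making OVH's off-subnet gateway reachable):

auto eth1
iface eth1 inet6 static
    address 2001:41d0:a:72xx::10/64
    # the OVH gateway sits outside the /64, so make it on-link before pointing the default route at it
    post-up /sbin/ip -6 route add 2001:41d0:a:72ff:ff:ff:ff:ff dev eth1
    post-up /sbin/ip -6 route add default via 2001:41d0:a:72ff:ff:ff:ff:ff dev eth1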

On both the working and the non-working machines, netstat -rn6 | grep eth1 shows:

2001:41d0:a:72xx::/64          ::                         U    256 2    40 eth1
2001:41d0:a:7200::/56          ::                         UAe  256 2    71 eth1
2001:41d0:a1:72xx::/64         ::                         UAe  256 0     0 eth1
2000::/3                       2001:41d0:a:72ff:ff:ff:ff:ff UG   1024 2 63479 eth1
fe80::/64                      ::                         U    256 0     0 eth1
::/0                           fe80::205:73ff:fea0:1      UGDAe 1024 1     2 eth1
::/0                           fe80::20c:29ff:fe22:60f8   UGDAe 1024 0     0 eth1
ff00::/8                       ::                         U    256 2 108951 eth1

On the non-working machines, pinging the gateway or the outside world returns "Destination unreachable."

The machines can all reach each other on the local LAN.

I don't know if it is relevant, but on a working machine:

ping -c3 ff02::2%eth1
64 bytes from fe80::20c:29ff:fedb:a137%eth1: icmp_seq=1 ttl=64 time=0.240 ms
64 bytes from fe80::20c:29ff:fe22:60f8%eth1: icmp_seq=1 ttl=64 time=0.250 ms (DUP!)
64 bytes from fe80::2ff:ffff:feff:fffd%eth1: icmp_seq=1 ttl=64 time=3.57 ms (DUP!)
64 bytes from fe80::2ff:ffff:feff:fffe%eth1: icmp_seq=1 ttl=64 time=5.97 ms (DUP!)

On the non-working machine:

ping -c3 ff02::2%ens34
PING ff02::2%ens34(ff02::2%ens34) 56 data bytes
64 bytes from fe80::20c:29ff:fedb:a137%ens34: icmp_seq=1 ttl=64 time=0.130 ms
64 bytes from fe80::20c:29ff:fe22:60f8%ens34: icmp_seq=1 ttl=64 time=0.138 ms (DUP!)

The :fffd and :fffe addresses are missing.

All the IPv6 addresses have been assigned in the OVH control panel.

TL;DR: Something must be different between the old and new servers, but I can't find it.

UPDATE: A clone of a working machine does not work.

On the outside of the pfSense (which is set up as a bridge), the machine sends this:

12:33:23.087778 IP6 test1.example.org > fe80::2ff:ffff:feff:fffe: ICMP6, neighbor advertisement, tgt is test1.example.org, length 32
12:33:24.106302 IP6 test1.example.org > par10s28-in-x0e.1e100.net: ICMP6, echo request, seq 451, length 64

But nothing ever comes back. Pings from outside don't get through either.
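A quick way to see whether the OVH gateway ever answers neighbor solicitation for the new address is something like this (the pfSense WAN interface name em0 is just an example):

# on the failing guest: does the gateway's neighbor entry ever resolve, or does it stay FAILED?
ip -6 neigh show dev ens34

# on the pfSense WAN side: watch neighbor solicitations (type 135) and advertisements (type 136)
tcpdump -ni em0 'icmp6 and (ip6[40] == 135 or ip6[40] == 136)'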

As the machine is an exact clone of a working machine, except for the IP addresses, it must be an upstream problem at OVH.

UPDATE 2: Now OVH claims that to get traffic routed to an IPv6 address, the MAC needs to be associated with an IPv4 address. OMG. The working IPv6 addresses are not.

Lenne
  • Can you update the question details to provide the routing table for the non-working host as well? – guzzijason Sep 24 '18 at 15:02
  • A gateway must be on the same network as the host because it is the host on the network that knows how to forward traffic off the network. You would need a gateway to get to the gateway, and it doesn't work that way. – Ron Maupin Sep 25 '18 at 00:14
  • Tell that to OVH, that's the way they have set it up. It works for some of my hosts. – Lenne Sep 25 '18 at 00:17
  • @guzzijason, actually, even if I set the routing address on the non-working machine to the same as on the working one, it doesn't help. – Lenne Sep 25 '18 at 00:21
  • 1
    Actually, you can have a gateway address that's on a "different" subnet, as long as the gateway is directly attached, and you have an interface route for that destination. In fact, you *do* appear to have such an interface route, for the prefix `2001:41d0:a:7200::/56`, which your desired gateway actually falls into. If those routes exists on both hosts, then I'm not sure what the difference might be yet. – guzzijason Sep 25 '18 at 02:27
  • How exactly did you set up virtual networking? – Michael Hampton Sep 25 '18 at 12:46
  • A pfSense bridging the WAN to a separate network, with a second NIC on each machine running static IPv6 and the gateway set to the OVH router. But it seems it's the OVH router not answering for all my IPs. I've opened a ticket. – Lenne Sep 25 '18 at 13:38

2 Answers

2

OVH does not know how to do IPv6 properly; their setup only works in certain situations and is not applicable everywhere.

It only works without special hoop-jumping when the servers are exposed directly to the world and also have public IPv4 addresses.

They can't supply one public IPv6 address and a subnet routed to it, which is what you need if you want to run VMs behind your own firewall.

Until they get their stuff working, it is better to look elsewhere if you are interested in IPv6.

Lenne
1

OVH runs switch port security on their switches, so that only whitelisted MAC addresses can use any given port. This doesn't apply to vRack, where port security is disabled. But OVH won't let you route IPv6 subnets to vRack yet, nor can you fail over an IPv6 subnet to another server. This is a critical oversight; until both of these capabilities exist, OVH's IPv6 support has to be considered limited.

So this is how I've set up an OVH server running a few dozen virtual machines:

On the host server, br3 is a bridge containing eno3 and virtual network interfaces on which I route IPv6. The host is configured as:

# cat /etc/sysconfig/network-scripts/ifcfg-br3
DEVICE="br3"
TYPE="Bridge"
STP="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_FAILURE_FATAL="no"
NAME="br3"
ONBOOT="yes"
ZONE="public"
BOOTPROTO="static"
IPADDR="203.0.113.24"
PREFIX="24"
GATEWAY="203.0.113.1"
IPV6_AUTOCONF="no"
IPV6ADDR="2001:db8:1f3:c187::/64"

I have static routes configured as such:

# cat /etc/sysconfig/network-scripts/route6-br3 
2001:db8:1f3:c187::/64 dev br3
2001:db8:1f3:c1ff:ff:ff:ff:ff dev br3
default via 2001:db8:1f3:c1ff:ff:ff:ff:ff dev br3
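For clarity, those two files boil down to roughly these runtime routes; the host route makes the off-subnet gateway on-link, and the default route then points at it:

# roughly what ifcfg-br3 plus route6-br3 produce at runtime (illustration only, not extra configuration)
ip -6 route add 2001:db8:1f3:c187::/64 dev br3
ip -6 route add 2001:db8:1f3:c1ff:ff:ff:ff:ff dev br3
ip -6 route add default via 2001:db8:1f3:c1ff:ff:ff:ff:ff dev br3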

I then run ndppd, which answers NDP neighbor solicitation queries for any address in my /64. It's configured as such:

# cat /etc/ndppd.conf 
route-ttl 30000
proxy br3 {
   router yes
   timeout 500   
   ttl 30000
   rule 2001:db8:1f3:c187::/64 {
      static
   }
}
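One thing not shown above, but assumed throughout, is that IPv6 forwarding is enabled on the host; without it the routed /80 networks below won't pass traffic. Something like:

# /etc/sysctl.d/90-ipv6-forwarding.conf
net.ipv6.conf.all.forwarding = 1

Load it with sysctl --system (or reboot).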

This causes the host's MAC address to be used for all IPv6 addresses in the subnet. I then route those addresses to virtual networks in libvirt, split into /80s. One example is configured as such:

# cat /etc/libvirt/qemu/networks/v6bridge_1.xml 
<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh net-edit v6bridge_1
or other application using the libvirt API.
-->

<network>
  <name>v6bridge_1</name>
  <uuid>7007a2b2-08b8-4cd5-a4aa-49654ae0829b</uuid>
  <forward mode='route'/>
  <bridge name='v6bridge_1' stp='on' delay='0'/>
  <mac address='52:54:00:fc:d4:da'/>
  <ip family='ipv6' address='2001:db8:1f3:c187:1::' prefix='80'>
  </ip>
</network>
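If you are creating such a network from scratch rather than editing an existing one, the usual virsh workflow is along these lines (the XML file path is just an example):

virsh net-define /tmp/v6bridge_1.xml
virsh net-start v6bridge_1
virsh net-autostart v6bridge_1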

All VMs in this particular network are assigned manual IPv6 addresses, but you could set up DHCPv6 if you wanted. That would look like:

  <ip family='ipv6' address='2001:db8:1f3:c187:1::' prefix='80'>
    <dhcp>
      <range start="2001:db8:1f3:c187:1::100" end="2001:db8:1f3:c187:1::1ff"/>
    </dhcp>
  </ip>
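Inside a VM attached to v6bridge_1, the guest then just needs a static address from its /80 with the host's bridge address as the gateway. On a Debian-style guest that might look like this (interface name and address are examples):

# /etc/network/interfaces on a guest (sketch)
auto ens3
iface ens3 inet6 static
    address 2001:db8:1f3:c187:1::10/80
    gateway 2001:db8:1f3:c187:1::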

I then route IPv4 failover addresses to the vRack, which is bridged to a single bridge br4 on eno4, from which all my VMs get a second virtual NIC. Thus they have IPv6 on one interface and IPv4 on the other. This is optional; you could just keep IPv4 failover addresses on your main interface (if you don't have a vRack, for instance).

Michael Hampton
  • So the host server is the ESXi host or my pfSense (running as a VM on the ESXi)? – Lenne Oct 02 '18 at 07:50
  • @Lenne Oops, I see nothing about your hypervisor in your question, and I (apparently wrongly) assumed you were using KVM. The general comments about OVH's network design and how to work around it still apply though. Though I don't think you can run ndppd on any hypervisor other than KVM, and it's a critical part of making this work. – Michael Hampton Oct 02 '18 at 12:27
  • I finally gave up. When I made OVH look into why some hosts worked and some didn't, they made them all equal, so now none are working?!? So I have made an IPv6 tunnel to HE instead. – Lenne Oct 02 '18 at 14:33