Force dnsmasq to send router advertisements on a specific interface


I have the following setup:

  • a remote server that has a static IPv6 /64
  • a local, for now IPv4-only, home network with a server
  • an IPv4 OpenVPN connection that tunnels the upper half of the IPv6 /64 as a /65 between the two servers

Thanks to the tunnel, my server can now successfully connect to the internet via IPv6, but I cannot get dnsmasq to provide my other devices with IPv6.

Here is the relevant part of my /etc/dnsmasq.conf:

except-interface=tun0
# pick up prefix from tun0
dhcp-range=::2,::500,constructor:tun0,slaac,12h
enable-ra
# try to force advertisement on br0
ra-param=br0,30

When starting dnsmasq, I get the following output (translated to English, with parts not related to IPv6/router advertisement omitted):

Compile options: IPv6 GNU-getopt DBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify dumpfile
DHCP, IP-range 192.168.0.2 -- 192.168.0.100, Lease time 12h
DHCPv6, IP-range ::2 -- ::500, Lease Time 12h, template for tun0
Router-Advertisement on tun0
IPv6-Router-Advertisement enabled

By default the br0 interface only has a link-local address and none from the range used by dnsmasq. However, even after giving it an address from this range, the advertisement is still only reported for tun0.

How do I get dnsmasq to send router advertisements via br0?

The redacted IP addresses are:

On the remote server:

eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet xx.xx.xx.xx brd xx.xx.xx.xx scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a01:xxxx:xxxx:xxxx::1/64 scope global deprecated 
       valid_lft forever preferred_lft 0sec
    inet6 fe80::xxxx:xx:xxxx:xxxx/64 scope link 
       valid_lft forever preferred_lft forever
tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none 
    inet 10.8.0.1 peer 10.8.0.2/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 2a01:xxxx:xxxx:xxxx:8000::1/65 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::xxxx:xx:xxxx:xxxx/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

On my local server:

br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 2a01:xxxx:xxxx:xxxx:8000::500/65 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::xx:xxxx:xxxx:xxxx/64 scope link 
       valid_lft forever preferred_lft forever


tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none 
    inet 10.8.0.6 peer 10.8.0.5/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 2a01:xxxx:xxxx:xxxx:8000::1000/65 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::xxxx:xxxx:xxxx/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

The 64-bit prefix is the same for all 2a01 addresses.

EDIT

I tried the following settings from grawity's answer:

On the remote server, /etc/openvpn/server.conf:

server-ipv6 fc00::/96 
# use low metric to override existing route
route-ipv6 2a01:xxxx:xxxx:xxxx::/64 ::1 1
# enable routing to remote on local server
push "route-ipv6 2a01:xxxx:xxxx:xxxx::1/128 ::1 1"
$ ip addr
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 96:00:00:27:b7:14 brd ff:ff:ff:ff:ff:ff
    inet xx.xx.xx.xx brd xx.xx.xx.xx scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2a01:xxxx:xxxx:xxxx::1/64 scope global deprecated 
       valid_lft forever preferred_lft 0sec
    inet6 fe80::xxxx:xxxx:xxxx:xxxx/64 scope link 
       valid_lft forever preferred_lft forever
tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none 
    inet 10.8.0.1 peer 10.8.0.2/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fc00::1/96 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::xxxx:xxxx:xxxx:xxxx/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever

$ ip -6 route
2a01:xxxx:xxxx:xxxx::/64 dev tun0 metric 1 pref medium
2a01:xxxx:xxxx:xxxx::/64 dev eth0 proto kernel metric 256 pref medium
fc00::/96 dev tun0 proto kernel metric 256 pref medium
fe80::/64 dev eth0 proto kernel metric 256 pref medium
fe80::/64 dev tun0 proto kernel metric 256 pref medium
default via fe80::1 dev eth0 metric 1024 pref medium

On my local server, /etc/dnsmasq.conf:

# start with 3 to avoid assigning the remote eth0 and local br0 addresses
dhcp-range=::3,constructor:br0,slaac,12h
enable-ra
$ ip addr
br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 0c:c4:7a:02:09:cc brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.1/24 brd 192.168.0.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 2a01:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:9cc/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 6930sec preferred_lft 6930sec
    inet6 2a01:xxxx:xxxx:xxxx::2/64 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::xxxx:xxxx:xxxx:xxxx/64 scope link 
       valid_lft forever preferred_lft forever
tun0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 100
    link/none 
    inet 10.8.0.6 peer 10.8.0.5/32 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fc00::1000/96 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::xxxx:xxxx:xxxx:xxxx/64 scope link stable-privacy 
       valid_lft forever preferred_lft forever
$ ip -6 route
::1 dev lo proto kernel metric 256 pref medium
2a01:xxxx:xxxx:xxxx::1 dev tun0 metric 1024 pref medium
2a01:xxxx:xxxx:xxxx::/64 dev br0 proto kernel metric 256 pref medium
2a01:xxxx:xxxx:xxxx::/64 dev br0 proto ra metric 1024 expires 6243sec pref medium
fc00::/96 dev tun0 proto kernel metric 256 pref medium
fe80::/64 dev br0 proto kernel metric 256 pref medium
fe80::/64 dev tun0 proto kernel metric 256 pref medium
default dev tun0 metric 1024 pref medium

Using this config, my LAN devices get an IPv6 address from 2a01:xxxx:xxxx:xxxx::/64. I can successfully ping these addresses within my LAN, but crossing the tunnel seems to be broken.

Given the following IPs:

  1. Remote Server eth0 2a01:xxxx:xxxx:xxxx::1
  2. Remote Server tun0 fc00::1
  3. Local Server br0 2a01:xxxx:xxxx:xxxx::2
  4. Local Server tun0 fc00::1000

From the remote server I can ping all but the third (local server br0). From the local server I can ping all. From my LAN I can ping all local addresses but no remote ones.

So half of it seems to work. Additionally, via tcpdump on the remote and a ping from another IPv6 host, I could verify that all traffic for 2a01:xxxx:xxxx:xxxx::/64 is routed to eth0 on the remote.

Nobody

Posted 2019-06-06T00:07:13.983

Reputation: 685

Exactly what addresses (or rather, what prefixes and prefix lengths) are you assigning to br0 and tun0? That part seems a bit suspicious. (Would be great if you provided the actual ip -6 addr output.) – user1686 – 2019-06-06T04:38:18.273

Additionally: Does the server have a /64 routed to it, or does it have a /64 available on-link? (If it's on-link, are you already using something like proxy_ndp on the server?) – user1686 – 2019-06-06T04:49:51.837

@grawity I am not sure what you mean. I posted the ip addr output above and have the /64 on eth0. I am not aware of proxy_ndp so I assume I don't use it. – Nobody – 2019-06-06T07:18:22.603

Answers


Does the server have a /64 routed to it, or does it have a /64 available on-link? (If it's on-link, are you already using something like proxy_ndp on the server?)

I asked this because many VPS providers assign on-link ranges (instead of the recommended routed prefixes), which means the provider's local gateway thinks all addresses in the /64 are local: it expects to send NDP (ARP) queries for them, and expects your server to respond for any of them.

Normally a system only responds to Neighbour Solicitations for individual addresses assigned to its interfaces – in your case the server would only respond for 2a01:xxxx:xxxx:xxxx::1, but would do nothing about all the addresses you use for the VPN. The result is that the provider's gateway thinks those addresses don't exist in the LAN and reports them as unreachable.

This can be hacked around by enabling "NDP proxy" on the server, to make it send 'spoofed' responses for the /65 that you're using on the VPN. In theory this should be indistinguishable from just having the addresses locally, from the provider's perspective.
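
For illustration, a minimal sketch of the kernel-level variant of this, assuming eth0 is the provider-facing interface; the proxied address is the client's tun0 address from this setup, and the kernel needs one entry per proxied address:

# enable proxy-NDP on the provider-facing interface
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1
# answer Neighbour Solicitations for this VPN-side address as if it were local
ip -6 neigh add proxy 2a01:xxxx:xxxx:xxxx:8000::1000 dev eth0

Since the kernel wants one entry per address, a daemon like ndppd (see below) is the practical choice for proxying whole prefixes.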

IPv4 OpenVPN connection that tunnels the upper half of IPv6 /64 as /65 between the two servers

Generally you won't be able to use SLAAC with anything other than a /64 prefix. (Originally this was because autoconfiguration used EUI-64-based addresses.) So your first step should be to obtain a shorter prefix, e.g. a /56 or at least a /60, which you could divide into several networks.

If you must use a non-/64 prefix for your LAN, then you'll only be able to use static IP configuration or perhaps DHCPv6 (for devices which support it; not all do).

If you have no option but to share a /64 and you require SLAAC to work... well, it must be configured and advertised as a /64 on your br0 interface, and you may have to use proxy-NDP to patch both sides together. (It's like proxy-ARP but for IPv6.) That is, you'd have to use ndppd to respond to ND queries for the lower /65.
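
A hedged /etc/ndppd.conf sketch of that arrangement, assuming ndppd runs on the local router and br0 is the LAN-facing interface; the 'static' rule makes it answer every solicitation in the range unconditionally:

proxy br0 {
    rule 2a01:xxxx:xxxx:xxxx::/65 {
        static
    }
}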

By default the br0 interface only has a link-local address and none from the range used by dnsmasq.

Your system is acting as a router between two networks, and each interface should normally have an address from the network it belongs to. (Just like a router that handles 192.168.1.0/24 will itself have an address from that range.)

But more importantly, each link needs to have a unique prefix assigned to it. In your 'local' system, you have two interfaces using the same 2a01:xxxx:xxxx:xxxx:8000::/65 prefix – so even if your LAN hosts get addresses configured, the router will not be able to correctly forward packets. I.e. it won't know whether any given address is reachable via tun0 vs via br0 – it only has two blanket /65 routes and it'll always pick the same route for all traffic.

If you're lucky, it'll pick the br0 route and your LAN hosts just won't be able to reach the server itself at 2a01:xxxx:xxxx:xxxx:8000::1, but everything else may still work. If you're unlucky, it'll pick the tun0 route, and the server won't be able to send anything to LAN hosts as packets will get reflected back.
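
As a quick check of which blanket route wins, ip -6 route get prints the route the kernel would actually use for a given destination (hypothetical host address substituted into the redacted prefix):

$ ip -6 route get 2a01:xxxx:xxxx:xxxx:8000::42

The 'dev' field in the output shows directly whether that address would be sent out br0 or reflected back into tun0.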

In this situation, the OpenVPN tunnel doesn't actually need to use addressing from the /65 at all; e.g. it could use private addresses. In any case the /65 should be dedicated to your local br0 interface, and you should just use constructor:br0 in dnsmasq.

Example server OpenVPN configuration:

server-ipv6 fd6a:d884:2a8b:11b::/96
route-ipv6 2a01:xxxx:xxxx:xxxx:8000::/65

And in the ccd for the client:

iroute-ipv6 2a01:xxxx:xxxx:xxxx:8000::/65

Example server interfaces:

eth0:  2a01:xxxx:xxxx:xxxx:0000::1/65  (note prefix length)
tun0:  fd6a:d884:2a8b:11b::1/96

Example client interfaces:

tun0:  fd6a:d884:2a8b:11b::1000/96
br0:   2a01:xxxx:xxxx:xxxx:8000::1/65

Example dnsmasq configuration:

enable-ra
dhcp-range=::2,::500,constructor:br0,12h
# you can't get SLAAC with a non-/64 prefix, so it's DHCPv6 only

user1686

Posted 2019-06-06T00:07:13.983

Reputation: 283 655

Thanks for the suggestions. It seems I can't get a shorter prefix so I will try the other way. – Nobody – 2019-06-06T07:52:58.210

Thanks, adapting your ideas I got halfway there. I have IPv6 addresses assigned in my LAN now. However, traffic across the VPN tunnel seems to be stuck in one direction. I updated my question with more information. – Nobody – 2019-06-07T10:08:29.270

You still have colliding routes on the server – two identical /64's. Either make the VPN use a /65 again, or keep the VPN as /64 and remove it from the server's eth0. (Really the eth0 address can be configured as a /128, though I'm not sure how that'll work with the kernel's proxy ND feature.) – user1686 – 2019-06-07T10:10:29.560

Also, please provide more details about how you verified the eth0 routing with tcpdump. What specific packets were you looking for, what did you make the conclusion from? – user1686 – 2019-06-07T10:11:47.170

Regarding the verification: I used an external IPv6 host and did ping 2a01:xxxx:xxxx:xxxx::12:12 (an unused IP); on the remote I ran tcpdump -i eth0 -u ip6 and got lines like 11:52:12.071881 IP6 2001:638:xxx:xxx:xxx:xxx:xxx:xxx > 2a01:xxxx:xxxx:xxxx::12:12: ICMP6, echo request, seq 1, length 64, so the ping packet was routed to the remote server. – Nobody – 2019-06-07T10:31:53.990

Regarding the colliding routes on the server: I'd hoped that the metric would ensure that one takes priority over the other? – Nobody – 2019-06-07T10:33:21.957

The metric should help, yes, but I just feel like it's a fragile and unnecessary method. As for tcpdump, the results seem to be good – as long as the router went straight to delivering the ICMPv6 Echo Request, and didn't try to discover the address using an ICMPv6 Neighbour Solicitation. (If you see a Neighbour Solicitation, then the address is on-link, not routed.) – user1686 – 2019-06-07T10:41:22.090
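
For reference, one hedged way to watch specifically for Neighbour Solicitations is a tcpdump filter on the ICMPv6 type byte (135 = Neighbour Solicitation; the ip6[40] offset assumes no IPv6 extension headers precede the ICMPv6 header):

$ tcpdump -i eth0 -n 'icmp6 and ip6[40] == 135'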

Let us continue this discussion in chat. – Nobody – 2019-06-07T10:45:52.370

Okay, now I got it. I had tried to put the iroute into the wrong configuration file (it should go in the ccd file for the respective client), and I obviously had to use iroute-ipv6. – Nobody – 2019-06-10T08:09:09.420

Ah yes, that might be the problem. Though I somehow remember iroute accepting a nexthop, but I might have confused it with route. – user1686 – 2019-06-10T08:42:30.490

iroute seems to be IPv4-only and takes the netmask as an optional second parameter. – Nobody – 2019-06-12T08:54:56.223