Fixed: It turned out to be a combination of what BatchyX said (the missing 172.16.101.0/24 route on the remote end), and tinc on the remote side failing to run its -up script (the script wasn't executable).
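For reference, the fix on the remote server boiled down to roughly this (assuming the net is called vpn like on the home box, and that tun0 is the tinc interface there):
chmod +x /etc/tinc/vpn/tinc-up
ip route add 172.16.101.0/24 dev tun0
i.e. make the -up script executable so tinc actually runs it, and give the remote end a return route for the VLAN20 subnet over the tunnel.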
So now everything works super, thanks for the help everyone :)
=============================================================================
My problem is surprisingly difficult to explain, so I'll break it up into smaller pieces; sorry in advance for the long text :)
I have a server at a hosting provider that has a public IP; this server is running tinc (VPN software).
At home I have two VLANs: VLAN1 (my normal subnet for PCs etc., sitting behind NAT) and VLAN20 for my VMware lab environment.
What I would like to set up is for my VLAN20 network to use the server at the hosting provider (via its external IP) as its gateway to the internet, instead of the external gateway I have at home.
For that I have a server at home with two network interfaces: one NIC on VLAN1 and one NIC on VLAN20.
Let's say I have the following IPs:
Server at hosting provider:
Public IP: 123.123.123.123 (eth0)
Private IP: 10.1.0.1/24 (tun0)
Network at home:
VLAN1 - 192.168.1.0/24 (.1 is the gateway)
VLAN20 - 172.16.101.0/24
Network on server at home:
NIC1 (VLAN1) - 192.168.1.50/24 (eth0)
NIC2 (VLAN20) - 172.16.101.1/24 (eth1)
Tunnel - 10.1.0.2/24 (tun0)
I have set up tinc so that my server at home works over the tunnel: I can ping 10.1.0.1 from the server at home, and 10.1.0.2 from the server at the hosting provider.
In addition to this, I have set it up so the server at home uses the tunnel as its default gateway. This all works from the actual server at home; my problem is that I can't get clients on the VLAN20 network to access the internet.
So the problem is that I can't figure out how to set up routing so that the 172.16.101.0/24 network uses the default gateway on the tunnel.
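(For completeness: the server at home also needs IP forwarding enabled, otherwise it won't forward packets between eth1 and tun0 at all; that's something like:
sysctl -w net.ipv4.ip_forward=1
or net.ipv4.ip_forward = 1 in /etc/sysctl.conf to make it permanent.)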
The routes on the server at home are these:
root@home:/etc/tinc/vpn/hosts# ip route
0.0.0.0/1 dev tun0 scope link
default via 192.168.1.1 dev eth0
10.1.0.0/24 dev tun0 proto kernel scope link src 10.1.0.2
123.123.123.123 via 192.168.1.1 dev eth0
128.0.0.0/1 dev tun0 scope link
172.16.101.0/24 dev eth1 proto kernel scope link src 172.16.101.1
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.50
The /1 routes get added when the tunnel comes up, with:
ip route add 0.0.0.0/1 dev $INTERFACE
ip route add 128.0.0.0/1 dev $INTERFACE
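So, as a rough sketch, the tinc-up on the server at home ends up doing something like this (the exact commands are in the pastebin linked at the bottom); the host route to 123.123.123.123 is what keeps the encrypted tinc traffic itself going out via the normal home gateway, and matches the entry you can see in the routing table above:
#!/bin/sh
# /etc/tinc/vpn/tinc-up - must be executable, or tinc won't run it
ip addr add 10.1.0.2/24 dev $INTERFACE
ip link set $INTERFACE up
ip route add 123.123.123.123 via 192.168.1.1 dev eth0
ip route add 0.0.0.0/1 dev $INTERFACE
ip route add 128.0.0.0/1 dev $INTERFACE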
Doing a traceroute from the server at home to 8.8.8.8:
root@home:/etc/tinc/vpn/hosts# traceroute -s 10.1.0.2 8.8.8.8 -n
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 10.1.0.1 33.681 ms 33.698 ms 33.658 ms
2 Router_At_Hosting_Provider 34.930 ms 34.907 ms 34.875 ms
So the "tunnel" subnet (10.1.0.0) works fine with the default gateway over the tunnel.
This also works fine:
root@home:/etc/tinc/vpn/hosts# traceroute -s 172.16.101.1 10.1.0.2 -n
traceroute to 10.1.0.2 (10.1.0.2), 30 hops max, 60 byte packets
1 10.1.0.2 0.032 ms 0.003 ms 0.005 ms
But my problem is this:
root@home:/etc/tinc/vpn/hosts# traceroute -s 172.16.101.1 8.8.8.8 -n
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 60 byte packets
1 * * *
2 * * *
3 * * *
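The next thing I plan to try is running tcpdump on tun0 on the server at the hosting provider while doing that traceroute, to see whether the packets sourced from 172.16.101.1 even make it across the tunnel:
tcpdump -ni tun0 host 172.16.101.1
If they show up there but nothing comes back, I guess that would point at the remote server missing a return route for 172.16.101.0/24 (or a NAT rule covering it).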
If anyone has any pointers to where I should be looking, it would be greatly appreciated.
(The full list of changes made to both servers after a plain Debian install is here: http://pastebin.com/r3Vsvycq)
Edit: I suck badly at Visio, but here's my attempt at showing what I'm trying to set up: http://i.stack.imgur.com/ff2R6.png (can't embed it inline yet because my rep isn't high enough).