
I'm probably not going to use this for the long term, but I still want to know why this isn't working.

I've set up a simple SSH tunnel from my Linode instance to one AWS EC2 instance within my VPC. I have three running instances; one of them has a public IP (not an Elastic IP, just a public IP). The tunnel is brought up via ifup, using an /etc/network/interfaces stanza something like:

manual tun0
iface tun0 inet static
    pre-up /usr/bin/ssh -i /root/.ssh/awsvpn -S /var/run/ssh-aws-vpn-control -NMfw 0:0 jump.aws.mydomain.org
    pre-up sleep 5
    address     192.168.11.2
    pointopoint 192.168.11.1
    broadcast   255.255.255.0
    up  route add -net 10.11.0.0 netmask 255.255.0.0 gw 192.168.11.2
    post-down /usr/bin/ssh -i /root/.ssh/awsvpn -S /var/run/ssh-aws-vpn-control -O exit jump.aws.mydomain.org

... run from the Linode to the AWS gateway/jump host (jump.aws.mydomain.org). The hostnames are resolved via /etc/hosts.
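
For reference, bringing it up and sanity-checking looks something like this (the control-socket path and hostname match the stanza above):

    # on the Linode
    ifup tun0

    # confirm the tun device, its point-to-point addressing, and its routes
    ip addr show tun0
    ip route show dev tun0

    # confirm the ssh master connection is alive via its control socket
    ssh -S /var/run/ssh-aws-vpn-control -O check jump.aws.mydomain.org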

After this I can see the tunnel from both sides. I can ping the far end of the tunnel (192.168.11.) and the eth0 address on the far side (10.11.0.xx on the AWS side, 123.45.67. on the Linode side). (Note: all addresses have been sanitized here.) I can also ssh through the tunnel from either side.

The part that is NOT working is any attempt to reach the other two nodes within my VPC through that tunnel. I can ping and ssh from jump.aws.* to either of them (anode and bnode).
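
Concretely, from the Linode it looks roughly like this (10.11.0.35 is jump and 10.11.0.39 is anode, as in the output below; addresses sanitized as above):

    # works: far end of the tunnel, and jump's eth0 through it
    ping -c 3 192.168.11.1
    ping -c 3 10.11.0.35
    ssh root@10.11.0.35 hostname

    # does NOT work: the other VPC nodes behind jump
    ping -c 3 10.11.0.39      # no replies
    ssh root@10.11.0.39       # hangs / times out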

On the jump system I have done:

net.ipv4.ip_forward = 1

... and added this NAT rule (the nat table's POSTROUTING chain on jump):

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  all  --  10.0.0.0/8           0.0.0.0/0
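
(In other words, roughly the following was done on jump; the MASQUERADE line above is what iptables -t nat -L POSTROUTING prints for a rule added along these lines.)

    # enable IPv4 forwarding (the net.ipv4.ip_forward = 1 line above
    # is the /etc/sysctl.conf form of the same setting)
    sysctl -w net.ipv4.ip_forward=1

    # masquerade anything sourced from 10.0.0.0/8 going out of jump
    iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -j MASQUERADE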

... and on the other two nodes I've set jump (10.11.0.35) as the default router/gateway:

## From anode:
root@anode:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.11.0.35      0.0.0.0         UG    0      0        0 eth0
10.11.0.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
10.11.0.0       0.0.0.0         255.255.255.0   U     0      0        0 eth1
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
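
(That default route was pointed at jump, 10.11.0.35, with something along these lines; the exact command isn't shown here:)

    # on anode and bnode: send everything via the jump host
    ip route replace default via 10.11.0.35 dev eth0
    # equivalently, with the older net-tools syntax:
    #   route del default ; route add default gw 10.11.0.35 eth0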

On jump the routing table looks like this:

## From jump:
root@ip-10-11-0-35:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.11.0.1       0.0.0.0         UG    0      0        0 eth0
10.11.0.0       0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
192.168.11.2    0.0.0.0         255.255.255.255 UH    0      0        0 tun0

... and, finally, on the Linode it looks like this:

## From linode
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         123.45.67.89    0.0.0.0         UG    0      0        0 eth0
10.11.0.0       0.0.0.0         255.255.0.0     U     0      0        0 tun0
192.168.11.1    0.0.0.0         255.255.255.255 UH    0      0        0 tun0
123.45.67.89    0.0.0.0         255.255.255.0   U     0      0        0 eth0

There are no other iptables rules currently active on any of these nodes; the only active rule(s) are on the jump box in my VPC. The AWS security group allows all traffic among the VPC nodes (and I've already established that I can reach anode and bnode from jump, and the Linode from there).

If I run tcpdump -n -v dst 10.11.0.39 (the anode) on jump (the NAT router I'm trying to configure) and try to ssh from the Linode to anode through it, I do see the traffic hitting the jump/router, and I see successful ARP traffic as well. But I don't see any traffic going out to anode, nor does a tcpdump on anode itself show anything being received (other than the ssh session between jump and anode).
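
Spelled out, the capture looks roughly like this, run while attempting the ssh from the Linode:

    # on jump: anything addressed to anode, on any interface
    tcpdump -n -v -i any dst 10.11.0.39

    # on anode: anything from the tunnel network or from jump itself
    tcpdump -n -v -i eth0 'net 192.168.11.0/24 or host 10.11.0.35'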

For that matter, the NAT/masquerading isn't working at all. Tunnel or no tunnel, it's as if the default routes on anode (and bnode) are being ignored.

Is this some artifact of Amazon's AWS VPC SDN (software defined networking)? How is it that I can configure a default route to point at one of my nodes ... a node I can ping and ssh to ... and my traffic isn't being routed to that node?

(All of the AWS EC2 nodes in this configuration are running the Debian Wheezy 7.1 AMI with the "shellshock" patches ... and the VPC is in us-west-2 (Oregon).)

What am I missing?

Jim Dennis

1 Answer


This sounds like a routing problem, since the tunnel itself is up and fine. I did this once and it worked... it does not work anymore :-). Either I have not yet adapted to the new way tun devices are handled, or my server is just borked.

The routing part should still be ok.

Get the SSH presentation here; it has a section on SSH VPNs, with the whole setup (iptables, routing, etc.) in one script blob inside .ssh/config: https://wiki.hackerspace.lu/wiki/SSH_-_Secure_Shell
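
Roughly, the shape of that setup is an ssh_config Host block that requests a tun device and configures the interface/routes via LocalCommand, with PermitTunnel enabled in sshd_config on the far end. This is only a sketch with a placeholder host alias and the addresses from the question, not the exact script from that page:

    # client-side ~/.ssh/config (sketch)
    Host aws-vpn
        HostName jump.aws.mydomain.org
        User root
        IdentityFile ~/.ssh/awsvpn
        Tunnel yes
        TunnelDevice 0:0
        PermitLocalCommand yes
        # configure the local end of the tunnel once the connection is up
        LocalCommand ip addr add 192.168.11.2 peer 192.168.11.1 dev tun0 && ip link set tun0 up && ip route add 10.11.0.0/16 dev tun0

    # the server side needs "PermitTunnel yes" in /etc/ssh/sshd_config,
    # the mirror-image addressing (192.168.11.1 peer 192.168.11.2) on its tun0,
    # and the forwarding/NAT setup described in the question.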

Gunstick