I am currently building a private cloud cluster using Proxmox. My cluster contains a head node and two compute nodes.

My head node hosts NAT and an OpenVPN server, and has three NICs: one for outbound traffic and one per compute node. The NAT allows me to reach both compute nodes. On each compute node I am hosting a router with ~30 VLANs.

My goal is for the routers to see the VPN client's address when I connect to them. Currently, I connect to the head node over the VPN, then ping or SSH to a router, and the connection shows up as coming from the head node's IP address. Any help is greatly appreciated!

My routes are as follows:

default via *.*.*.1 dev eno1 onlink 
10.10.1.0/24 via 10.10.1.2 dev tun0 
10.10.1.2 dev tun0 proto kernel scope link src 10.10.1.1 
*.*.*.0/25 dev eno1 proto kernel scope link src *.*.*.46 
192.168.0.0/19 via 192.168.77.1 dev vmbr0 
192.168.32.0/19 via 192.168.76.6 dev vmbr1 
192.168.76.0/24 dev vmbr1 proto kernel scope link src 10.10.1.1
192.168.77.0/24 dev vmbr0 proto kernel scope link src 192.168.77.1 

And the NAT rule (I am currently using firewalld):

-A POST_public_allow ! -o lo -j MASQUERADE

1 Answer


The NAT rule you showed provides almost no information on its own, because it modifies a custom chain, which is presumably called from a standard chain in another table (most likely POSTROUTING of the nat table; you can check this with iptables -t nat -L POSTROUTING).
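For reference, you can dump that chain and see where MASQUERADE rules actually live with standard iptables invocations, e.g.:

# list the nat table's POSTROUTING chain with counters and rule numbers
iptables -t nat -L POSTROUTING -n -v --line-numbers
# or print every nat rule in iptables-save syntax and filter for masquerading
iptables -t nat -S | grep -i MASQUERADE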

The problem you're experiencing is presumably that masquerading gets applied on the interfaces connecting the head node to the compute nodes, not just on the one facing the Internet.

A way to deal with that is to apply SNAT (masquerading) only on the interface the head node uses to connect to the Internet.
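A minimal sketch of what that could look like with plain iptables, assuming eno1 is your Internet-facing interface (as your route table suggests). Note that firewalld will restore its own rules on reload, so a permanent fix belongs in the firewalld configuration (e.g. masquerading enabled only on the external zone) rather than in ad-hoc iptables commands:

# drop the blanket rule that masquerades everything not leaving via lo
iptables -t nat -D POST_public_allow ! -o lo -j MASQUERADE
# masquerade only traffic that actually leaves through the Internet-facing NIC
iptables -t nat -A POSTROUTING -o eno1 -j MASQUERADE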

Note one issue, though. The fact that you're using a VPN to access the head node (if I read the question correctly) means that once you disable the excessive masquerading on the head node, the compute nodes will see packets sent over the VPN as coming from whatever network your client is on. As that is presumably a private subnet, you must make sure that:

- your VPN client's source network, the network connecting the head node to the compute nodes, and the subnet the tunnel uses are all distinct from one another;
- the compute nodes have a route for your source network (sending packets destined there back to the head node);
- the head node has IP forwarding enabled.
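A hedged sketch of the last two points, using the addresses visible in your route table (10.10.1.0/24 assumed to be the VPN subnet and 192.168.77.1 the head node's address on vmbr0; the compute node behind vmbr1 needs the analogous route via the head node's address on that bridge):

# on the head node: enable IP forwarding (persist it via /etc/sysctl.conf or a sysctl.d drop-in)
sysctl -w net.ipv4.ip_forward=1

# on the compute node's router behind vmbr0: send traffic for the VPN subnet back to the head node
ip route add 10.10.1.0/24 via 192.168.77.1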

kostix
  • I corrected this by removing firewalld, whose masquerade rule was operating on the loopback interface. – CybeSSK Jan 27 '20 at 22:23