
I am attempting to set up a CentOS 7 VM with firewalld to route traffic between 2 different subnets.

I have 2 network interfaces, ens192 for the external network and ens224 for the internal network:

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:62:88:ba brd ff:ff:ff:ff:ff:ff
    inet 10.212.21.26/16 brd 10.212.255.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe62:88ba/64 scope link
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:62:88:c4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.1/16 brd 192.168.255.255 scope global ens224
       valid_lft forever preferred_lft forever
    inet6 fe80::d301:8174:1d11:d550/64 scope link
       valid_lft forever preferred_lft forever

The internal interface is in the internal zone:

$ sudo firewall-cmd --list-all --zone=internal
internal (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens224
  sources:
  services: dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:

The external interface is in the external zone with masquerading enabled:

$ sudo firewall-cmd --list-all --zone=external
external (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens192
  sources:
  services: ssh
  ports:
  protocols:
  masquerade: yes
  forward-ports:
  sourceports:
  icmp-blocks:
  rich rules:
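
(For reference, this interface-to-zone assignment would typically be made with something like the commands below; the interface and zone names are the ones shown above.)

sudo firewall-cmd --permanent --zone=internal --change-interface=ens224
sudo firewall-cmd --permanent --zone=external --change-interface=ens192
sudo firewall-cmd --reload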

The default gateway of the internal interface is set to the IP address of the external interface:

$ ip ro
default via 10.212.0.10 dev ens192  proto static  metric 100
default via 10.212.21.26 dev ens224  proto static  metric 101
10.212.0.0/16 dev ens192  proto kernel  scope link  src 10.212.21.26  metric 100
10.212.21.26 dev ens224  proto static  scope link  metric 100
192.168.0.0/16 dev ens224  proto kernel  scope link  src 192.168.99.1  metric 100

Packet forwarding is turned on:

$ sudo sysctl -a | grep net.ipv4.ip_forward
net.ipv4.ip_forward = 1
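
To keep forwarding enabled across reboots, it can also be written to a sysctl drop-in (the file name below is arbitrary):

echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system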

From the internal network I can access the external network, and from the external network I can ping an IP address on the internal network. However, I am unable to ssh to that same internal IP address from the external network, even though the ssh service is enabled in both zones.

I've tried a number of different rich rules/passthrough with no luck. Would someone please be so kind as to give me a steer in the right direction?

Thanks.

EDIT:

I removed the 10.212.21.26 route and set SELinux mode to permissive:

sudo ip ro del 10.212.21.26
sudo setenforce permissive

I can ping:

$ ping 192.168.99.100

Pinging 192.168.99.100 with 32 bytes of data:
Reply from 192.168.99.100: bytes=32 time<1ms TTL=63

But I can't ssh:

$ ssh -vvv 192.168.99.100
OpenSSH_6.8p1, OpenSSL 1.0.2a 19 Mar 2015
debug1: Reading configuration data /home/clay.rowland/.ssh/config
debug2: ssh_connect: needpriv 0
debug1: Connecting to 192.168.99.100 [192.168.99.100] port 22.
debug1: connect to address 192.168.99.100 port 22: Connection timed out
ssh: connect to host 192.168.99.100 port 22: Connection timed out
rowlanch
  • If I understand you correctly, you are able to ping a host from `external` to `internal` but not `ssh` to it? What does `ssh -vvv host_internal_network` tell you? This line `10.212.21.26 dev ens224 proto static scope link metric 100` isn't needed and causes problems. Furthermore, try to set `SELinux` to `permissive` by using `setenforce 0` as root. – Valentin Bajrami Feb 28 '17 at 09:00
  • @val0x00ff, thanks for the comment. I removed the route and setenforce permissive. The results are the same. I can ping but cannot ssh. See the edits above for results. – rowlanch Feb 28 '17 at 13:11
  • Alright, so you need some debugging. Check if `sshd` is listening on port `22`. The command `lsof -i :22` could help. What is your default zone? `firewall-cmd --get-default-zone` and see if ssh is enabled on your default zone. Also how is your `/etc/ssh/sshd_config` configured? – Valentin Bajrami Feb 28 '17 at 13:25
  • sshd is definitely up and running on 192.168.99.100. I am able to successfully connect from inside the internal network. – rowlanch Feb 28 '17 at 13:39
  • And the default zone is `internal` with the ssh service added. – rowlanch Feb 28 '17 at 13:46
  • Within your ssh server (internal zone), try to run: `firewall-cmd --permanent --zone=internal --add-source=10.212.21.0/24` This should allow the external zone (10.212.21.26) to reach hosts within the internal zone. – Valentin Bajrami Feb 28 '17 at 13:50
  • Thanks again @val0x00ff, I was successful after adding direct filter FORWARD rules. Unfortunately, adding the source to the internal zone did not work. – rowlanch Feb 28 '17 at 15:45

2 Answers


After much digging and keyboard smashing, I found that the following direct rules on the FORWARD chain enable a successful ssh connection. Someone wiser may be able to provide a more elegant solution.

sudo firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i ens224 -o ens192 -p tcp --sport 22 -j ACCEPT
sudo firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i ens192 -o ens224 -p tcp --dport 22 -j ACCEPT
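
These --direct rules only change the runtime configuration, so they are lost on a firewalld reload or reboot. One way to keep them (assuming the same interface names) is to repeat them with --permanent and then reload:

sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i ens224 -o ens192 -p tcp --sport 22 -j ACCEPT
sudo firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -i ens192 -o ens224 -p tcp --dport 22 -j ACCEPT
sudo firewall-cmd --reload
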
rowlanch

The importance of this post/question cannot be overstated. In fact, this situation arises generically in a setup where we have a private (NAT-ed) network behind a Linux router and some of the machines in the private network have been assigned public IP addresses.

In other words, we have:

[PCs with local 10.0.0.0/24 and some public 20.0.0.0/24] --- [router with a 10... and a 20... address on the internal side] --- WAN

The "internal" interface ens224 of the router (of the OP) has thus two ip's, say 10.0.0.1 and 20.0.0.1. Now, having masquerade: yeson the egress of ens192 is inconvenient, as it would mask the true origin of the packets coming from public IP's of the interfaces with 20.0.0.0/24 addresses, and can be replaced by a direct rule of the sort:

firewall-cmd --direct --add-rule ipv4 nat POSTROUTING 0 -s 10.0.0.0/24 -j MASQUERADE
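
A possible sequence to make that swap, assuming the zone layout from the question and additionally restricting the NAT rule to the external interface, would be:

firewall-cmd --permanent --zone=external --remove-masquerade
firewall-cmd --permanent --direct --add-rule ipv4 nat POSTROUTING 0 -s 10.0.0.0/24 -o ens192 -j MASQUERADE
firewall-cmd --reload
firewall-cmd --direct --get-all-rules   # check what ended up in the direct ruleset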

But what is central to the question is that once we enable firewalld it starts to act as... well... a firewall. That is, it will inspect all ens192-inbound and all ens224-inbound packets. In the absence of direct rules, as indicated by the accepted answer, the packets are rejected with a verbose explanation (CentOS 7):

  • on the router, tcpdump shows ICMP host 20.0.0.2 unreachable - admin prohibited;
  • on the external peer, telnet 20.0.0.2 22 reports Unable to connect to remote host: No route to host.

Unless somebody comes up with a better rule, the solution provided by the OP seems the most elegant one, though it is quite verbose:

  • all traffic to chosen ports in the internal network must explicitly be allowed by direct rules of the sort: firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i ens192 -o ens224 -p tcp --dport 22 -j ACCEPT, firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i ens192 -o ens224 -p tcp --dport 80 -j ACCEPT (etc.)
  • all traffic outbound from the internal network (in my case, from the 20.0.0.0/24 network) must explicitly be allowed; here I'm using firewall-cmd --direct --add-rule ipv4 filter FORWARD 0 -i ens224 -j ACCEPT (a consolidated sketch follows this list)
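
A possible consolidation of the two points above, written as permanent rules and with the blanket egress rule narrowed to connection tracking plus newly initiated outbound connections (interface names and port list as assumed above; this is a sketch, not a tested recipe):

firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 0 -m state --state ESTABLISHED,RELATED -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -i ens192 -o ens224 -p tcp -m multiport --dports 22,80 -j ACCEPT
firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -i ens224 -o ens192 -j ACCEPT
firewall-cmd --reload

With the state rule in place, a separate --sport 22 rule for the return traffic (as in the accepted answer) is no longer needed.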

What we are missing here (or at least I am) is the possibility to define an IP-range-based "zone" with a way to pass egress traffic destined only to selected ports. I don't know whether this is possible with firewalld at all.

P Marecki