I think I'm using the wrong technique, but not sure of the right one.
Machine: Red Hat Enterprise Linux 7.2
firewalld.noarch: 0.3.9-14.el7
I've been asked to close two ports but ensure that all other ports are open. The solution needs to be easy to turn on and off. To that end I have done:
- bring up firewalld
- set "trusted" as the default zone # "trusted" opens all ports
- firewall-cmd --zone=trusted --add-interface=eno16780032 # the only Ethernet interface on this server
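The setup steps above, as one snippet (runtime-only, no --permanent, since I'm not trying to persist anything yet):

```shell
# Start firewalld and make "trusted" the default zone
# (the trusted zone accepts all traffic by default).
systemctl start firewalld
firewall-cmd --set-default-zone=trusted

# Assign the server's only Ethernet interface to the trusted zone.
firewall-cmd --zone=trusted --add-interface=eno16780032
```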
For testing purposes, I run nc -l port_number on the server so that something is answering on that port.
To test: from a different machine, run "telnet machine_name port_number" and confirm that I get a response. (I restart nc after each test.)
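Concretely, the test looks like this (12345 stands in for the real port number; RHEL 7's nc is nmap-ncat, so -l takes the port directly):

```shell
# On the server: listen on the test port. Plain nc exits after the
# client disconnects, so re-run it before each test.
nc -l 12345
```

```shell
# On a different machine: attempt to connect.
telnet machine_name 12345
```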
Turn off the port:
- firewall-cmd --zone=trusted --remove-port=port_number/tcp
Verify:
- firewall-cmd --zone=trusted --query-port=port_number/tcp
This returns "no".
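The block-and-verify step, as one snippet (port_number is the placeholder from above):

```shell
# Attempt to close the port in the runtime configuration of the
# trusted zone.
firewall-cmd --zone=trusted --remove-port=port_number/tcp

# Ask whether the port is listed as open in the trusted zone;
# this prints "no" after the removal.
firewall-cmd --zone=trusted --query-port=port_number/tcp
```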
At this point, nc should be listening on port_number, but it should be blocked by firewalld. I shouldn't be able to connect to it.
However, "telnet machine_name port_number" from a different machine still connects.
I'm not even trying to make it persistent at this point, just trying to get the rule to work. What am I doing wrong?
The application: We have a homegrown back end service that runs as a master/slave configuration. The slave is up at all times, to sync data with the master. Only the system designated "master" can be used by the front end. (To make it a true cluster would involve too much work, the developers tell me.)
There's a load balancer in "the cloud" (over which we don't have direct control) that points to both machines. The objective is to block two key ports on the slave so the load balancer always goes to the master. When we fail over, the ports on the "slave" (now master) are unblocked and the ports on the "master" (now slave) are blocked, forcing the load balancer to go to the new master.
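To make the intended failover concrete, here is a sketch using the same add/remove-port mechanism from above (8001 and 8002 are hypothetical stand-ins for the two key ports, and this assumes the removal actually blocks traffic, which is the very thing that isn't working yet):

```shell
# On the node being promoted to master: open the two key ports so
# the load balancer's health checks succeed.
firewall-cmd --zone=trusted --add-port=8001/tcp
firewall-cmd --zone=trusted --add-port=8002/tcp
```

```shell
# On the node being demoted to slave: close the same two ports so
# the load balancer stops sending it traffic.
firewall-cmd --zone=trusted --remove-port=8001/tcp
firewall-cmd --zone=trusted --remove-port=8002/tcp
```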
This is probably not a good use of the load balancer or of firewalld, but it's an odd application and we're just trying to find something that works that doesn't involve either mucking with the load balancer or shutting down services on the slave.
Any ideas?