I have a VPS running CentOS 6.8 on which I installed SoftEther VPN Server with a local bridge.
The VPN IP range is 192.168.86.0/24. Client VPN IP addresses are assigned by SoftEther VPN Server's built-in DHCP server. A virtual network adapter is used for SoftEther's local bridge because I would like to access some services on the VPS through the VPN (in SoftEther this is not possible without a local bridge). The VPN's default gateway address is 192.168.86.3.
The VPS has a public IPv4 address on the eth0 interface, plus I created an alias eth0:0 with the IPv4 address 192.168.86.2 (which is outside the range handed out by the VPN's DHCP and differs from the VPN's default gateway).
When I connect from a Windows PC, everything seems to be right. I can ping both 192.168.86.3 (the SoftEther VPN server's network interface for connected VPN clients) and 192.168.86.2 (which is outside the VPN server, being a "physical" network interface on the VPS).
However, I cannot connect to any service running on the VPS via the VPN connection - neither SSH on port 22 (on either address, .2 or .3), nor a simple web server running as root on port 80 on the VPS (using Node.js). Direct connections to the public IPv4 address, however, work.
What exactly did I miss? Should I look into the sshd configuration for the interfaces, or could the problem be in the iptables setup, or is it something to be fixed in SELinux? I am afraid I have no idea where to look for the problem.
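To narrow it down, these are the checks I intend to run on the VPS (standard CentOS 6 diagnostics; the interface name and subnet are from my setup, and the iptables rule at the end is only a guess at what might be needed, not something I have verified):

```shell
# Is sshd bound to all addresses, or only to specific ListenAddress lines?
grep -i 'ListenAddress' /etc/ssh/sshd_config
# Which addresses are ports 22 and 80 actually listening on?
netstat -tlnp | grep -E ':(22|80) '
# Current firewall rules that could be dropping VPN traffic
iptables -L INPUT -n -v --line-numbers
# SELinux mode (Enforcing/Permissive/Disabled)
getenforce
```

If the INPUT chain ends in a REJECT/DROP and has no rule matching the VPN subnet or tap_tap01, my first test would be inserting something like: iptables -I INPUT -s 192.168.86.0/24 -j ACCEPT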
The only thing I am fairly sure of is that this is not directly related to SoftEther VPN Server itself: before I activated the local bridge function, I could not ping any of the VPN IP addresses except the default gateway; now the local alias 192.168.86.2 has become visible and responds to pings.
ip addr

on the VPS returns this:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 52:54:00:00:46:07 brd ff:ff:ff:ff:ff:ff
        inet 46.28.111.205/24 brd 46.28.111.255 scope global eth0
        inet 192.168.86.2/24 brd 192.168.86.255 scope global eth0:0
        inet6 2a02:2b88:2:1::4607:1/64 scope global
           valid_lft forever preferred_lft forever
        inet6 fe80::5054:ff:fe00:4607/64 scope link
           valid_lft forever preferred_lft forever
    4: tap_tap01: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
        link/ether 00:ac:56:ec:e5:3c brd ff:ff:ff:ff:ff:ff
        inet6 fe80::2ac:56ff:feec:e53c/64 scope link
           valid_lft forever preferred_lft forever
ip route

on the VPS returns this:

    192.168.86.0/24 dev eth0 proto kernel scope link src 192.168.86.2
    46.28.111.0/24 dev eth0 proto kernel scope link src 46.28.111.205
    169.254.0.0/16 dev eth0 scope link metric 1002
    default via 46.28.111.1 dev eth0
It seems that SoftEther VPN Server does not configure an IPv4 address on tap_tap01 (SoftEther's virtual network interface for the bridge). Interestingly, it is possible to ping both IPv4 addresses from within a VPN session, yet the VPN network is unreachable/invisible from the VPS, which is the opposite of what I would expect from a local bridge.
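To see whether packets from VPN clients actually make it to the VPS side of the bridge, I suppose one could capture on both interfaces while a client attempts an SSH connection (a diagnostic sketch, assuming tcpdump is installed; interface names are from my ip addr output above):

```shell
# Watch the SoftEther bridge interface for VPN-subnet traffic
tcpdump -ni tap_tap01 'net 192.168.86.0/24'
# In a second terminal: does any of it cross over to eth0?
tcpdump -ni eth0 'net 192.168.86.0/24 and not arp'
```

If TCP SYN packets show up on tap_tap01 but never appear on eth0 (and get no reply), that would point at the bridging/routing between the two interfaces rather than at sshd or the web server itself.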