Apache virtual hosts not reachable publicly after port forwarding

0

I decided to install Proxmox on my new server to host Web, Email and VPS servers. That way, I can set up multiple VMs for each server type.

I went with Debian 9 for my Apache web server. I have also already imported my WordPress sites using the Duplicator plugin, and that worked flawlessly. I edited the site URL in my wp_options table in phpMyAdmin from http to https, then set up my Apache virtual host like this (mydomain.com.conf):

<VirtualHost *:80>
    ServerName mydomain.com
    ServerAdmin root@localhost
    Redirect "/" "https://mydomain.com"
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin root@localhost
    DocumentRoot /var/www/mydomain.com
    ServerName mydomain.com
    ServerAlias www.mydomain.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/mydomain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.com/privkey.pem

    <Directory /var/www/mydomain.com>
        AllowOverride All
        DirectoryIndex index.php
        Require all granted
    </Directory>

</VirtualHost>
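I enabled the site and sanity-checked the configuration with the usual Debian commands (roughly; from memory):

```shell
# Enable the SSL module and the site, verify syntax, then reload.
a2enmod ssl
a2ensite mydomain.com.conf
apachectl configtest     # expect "Syntax OK"
systemctl reload apache2
```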

I can access my website locally by setting up a rule in my Windows hosts file like this:

192.168.10.104      mydomain.com

I also set up a static IP to avoid the web server getting a new IP address. I did that in /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
allow-hotplug ens18
iface ens18 inet static
address 192.168.10.104
netmask 255.255.255.0
gateway 192.168.10.1

# This is an autoconfigured IPv6 interface
iface ens18 inet6 auto

I do want to point out that I made a virtual host for my [public IP address][1].

To avoid conflicts, I disabled the UFW firewall and removed fail2ban, just for now.

I believe my DNS, nameservers and port forwarding are set up correctly, but I could be wrong.

[My domain registrar NS configurations][2]

[My DNS configurations on DigitalOcean][3]

[My Portforwarding configurations][4]

If I try to reach my website using [UpTrends][5], I get "TCP Connection Failed". There are also no issues with my SSL certificate from my local point of view on the website.

Any advice?

Fiskebullar

Posted 2019-07-03T11:10:07.487

Reputation: 3

A direct virtual host for your IP seems to work. Does the site reliably work if you use the local DNS entry? What's in the apache logs if anything? – Seth – 2019-07-03T11:59:12.950

Answers

0

Your DNS configuration is correct, and visiting http://example.com successfully connects to publicip:80 and serves an HTTP redirect.

However, connecting to publicip:443 (i.e. https://example.com) returns ICMP "Host unreachable", which in this case has to be coming either from your router (the device with the public IP address, whether it's the Proxmox host or a dedicated router doesn't matter), or from the web server itself.

Since the ICMP error comes with a delay, it most likely comes from the router and indicates that the port-forwarding rule for :443 points to the wrong IP address (one that isn't in use, i.e. ARP reply timeout).

(If it were a firewall block, the connection attempt would either return a RST or an ICMP error immediately, or nothing whatsoever.)
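You can reproduce this three-way distinction yourself from any outside machine with a few lines of Python (a sketch; the host and port are placeholders for your own domain):

```python
import socket

def check_port(host: str, port: int, timeout: float = 5.0) -> str:
    """Classify a TCP connection attempt the way a monitoring probe would."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "refused"       # RST came back: host reachable, port closed/blocked
    except socket.timeout:
        return "timed_out"     # nothing came back: silently dropped somewhere
    except OSError:
        return "unreachable"   # e.g. ICMP host/net unreachable from a router
```

If `check_port("mydomain.com", 80)` is "open" while port 443 is "unreachable" or "timed_out", that matches the symptom described above.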

If the settings look right, use tcpdump to take a look at what is actually happening. Running tcpdump -e -n -i <interface> "arp or port 443" on the Proxmox host (on the interface facing your web VM) will show the actual IP address that the port-forwarding rule is trying to reach.
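For reference, the capture could look like this (vmbr0 is an assumption for the bridge interface name; substitute yours):

```shell
# On the Proxmox host, on the bridge facing the web VM.
# After an outside client hits https://mydomain.com, watch for ARP
# "who-has" requests that never receive a reply -- the requested IP
# is where the port-forward is actually pointing.
tcpdump -e -n -i vmbr0 "arp or port 443"
```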


The same applies in general: it is not enough to just turn things off. You also need to look at the current firewall rules (the actual firewall is either iptables or nft), i.e. check whether the current state matches the requested state. You don't know for sure that disabling ufw has left the firewall completely empty until you check with iptables-save, iptables-legacy-save, or nft list ruleset.
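That is, dump the complete live ruleset regardless of which frontend wrote it:

```shell
iptables-save          # iptables view (nf_tables or legacy backend)
iptables-legacy-save   # only present where both backends are installed
nft list ruleset       # native nftables view
# An empty firewall prints little more than default ACCEPT policies.
```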

user1686

Posted 2019-07-03T11:10:07.487

Reputation: 283 655

Okay, thanks for reaching out. I ran tcpdump on my Proxmox server terminal; you can see it for yourself here.

My IP addresses are as follows:

192.168.10.110 = Proxmox server
192.168.10.104 = Web server (Apache on Debian)
192.168.10.172 = Old web server on Raspberry Pi (Apache on Raspbian)

– Fiskebullar – 2019-07-03T12:32:55.333

I do want to point out that the old server is not operational anymore. I can turn it on if necessary though. – Fiskebullar – 2019-07-03T12:47:48.843

The server itself doesn't matter. If the port-forwarding rule points to the wrong address, then just change the port-forwarding rule. – user1686 – 2019-07-03T12:54:01.187

(Note that the only ARP requests which matter are those that show up immediately after someone tries to connect to your website on 443. The rest likely belong to other traffic.) – user1686 – 2019-07-03T12:54:47.570

Okay, so I changed the IP addresses on both my Proxmox server and the Web server. Still not working. – Fiskebullar – 2019-07-03T19:43:16.733

It might have something to do with the router building the device list and firewall rules based on the DHCP leases it has stored. What do your port-forwarding rules look like? – user1686 – 2019-07-03T20:29:11.347

This is how my port forwarding looks now. Image

– Fiskebullar – 2019-07-03T21:54:07.330

0

Finally!

Turns out the fix was to just restart the router (with power disconnected for 10 seconds), in addition to a server restart.

Sometimes problems are fixable in an easy but unexpected way. :)

Fiskebullar

Posted 2019-07-03T11:10:07.487

Reputation: 3

0

Here is my solution, as I've done this with my Proxmox host:

Set up nginx as a reverse proxy on the host, then install whatever containers you like with Apache2 behind it. nginx will forward the requests and cache them, to improve speed.

Example nginx site config (the upstream address 10.10.200.4 is the container running Apache):

server {
    server_name domain.de *.domain.de;
    listen 80;
    return 301 https://$host$request_uri;
}

server {
    proxy_read_timeout 3600;
    listen 443 ssl http2;
    server_name domain.de *.domain.de;

    location / {
        proxy_cache hd_cache;
        proxy_set_header X-Cache-Status $upstream_cache_status;
        proxy_cache_valid 200 1w;
        proxy_pass https://10.10.200.4;
        proxy_set_header Host $http_host;
        proxy_buffers 16 8m;
        proxy_buffer_size 2m;
        gzip on;
        gzip_vary on;
        gzip_comp_level 9;
        gzip_proxied any;
    }
}

Note that hd_cache is defined in nginx.conf:

cat /etc/nginx/nginx.conf | grep hd_cache
proxy_cache_path /var/cache/nginx/cache levels=1:2 keys_zone=hd_cache:10m max_size=10g inactive=2d use_temp_path=off;

The cache path has to be created with mkdir first.

So what's the benefit? Let's Encrypt only needs to run on the host; nginx caches every request; and I can set up any number of sites in containers and enable them with a copy-and-paste entry on the Proxmox host.

I hope I could help you. If someone could format this answer a bit I would be happy, since I'm writing on my mobile.

djdomi

Posted 2019-07-03T11:10:07.487

Reputation: 1