
This is the network design: https://imagebin.ca/v/5NNDpuDwq9PT

It's hosted on Hetzner: one pfSense firewall with a public-facing interface, and one Docker host with two interfaces, one connected to the private network and one facing the internet. The public-facing interface on the Docker host is DOWN and is required to stay that way. The Docker host runs Ubuntu 20.04, though the same behaviour happened with 18.04.

The Docker host reaches the internet correctly via the pfSense box; the containers do not when using bridged networking. They do reach the internet and can make DNS requests and ping, but as soon as I make a slightly bigger HTTP/S request, packets are dropped and it doesn't go through.

This is a packet capture of a "curl https://www.google.com":

https://imagebin.ca/v/5NPvwMusFW5i

The left side is a packet capture on the pfSense box, between the pfSense box and the Docker host; the right side is a packet capture on the host, between the host and the container.
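For reference, captures like these can be reproduced with tcpdump on each segment. This is a sketch: the pfSense interface name (vtnet1) is an assumption, while the bridge name and container IP come from the docker network inspect output below.

# On the pfSense box, capture the leg towards the docker host:
tcpdump -i vtnet1 -w pfsense-side.pcap host 10.0.10.3

# On the docker host, capture the leg between the host and the container:
tcpdump -i customername-br -w container-side.pcap host 192.168.1.2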

If I set up Docker networking in host mode instead of bridged mode, it all works.

It's as if packets greater than 1288 bytes do not go through from the host to the container, which seems pretty weird. What could be the cause of such behaviour? It only happens in bridged networking, not in host networking. The really weird thing is that bridged networking works if I bring up the public-facing interface on the host and let it reach the internet without going through the private network/pfSense box.
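One quick way to confirm an MTU ceiling is to probe the path with the Don't Fragment bit set. A sketch, run from the docker host with iputils ping; the gateway address 10.0.10.1 is a guess based on the host_binding_ipv4 of 10.0.10.3 shown below:

# -M do sets DF, -s is the ICMP payload; payload + 28 bytes of headers = size on the wire.
ping -M do -s 1422 -c 1 10.0.10.1    # 1450 on the wire: should pass on a 1450-MTU link
ping -M do -s 1472 -c 1 10.0.10.1    # 1500 on the wire: should fail if the link MTU is 1450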

The private network interface and the public-facing interface use different drivers; one shows up as eth, the other as ens.

Any ideas?

docker network inspect (on the bridged network)

[
    {
        "Name": "customername",
        "Id": "fb5efb66279ac3d33e7b671f542a017d7bb13bf935444df43b0a7d95da60dc75",
        "Created": "2020-05-22T15:19:23.03700718+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "192.168.1.0/24",
                    "Gateway": "192.168.1.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "a09dfb38562b518d9c4dbf44f6f5ea6ac43904c17b6714e5aa13fe35cc7d5f5a": {
                "Name": "romantic_dijkstra",
                "EndpointID": "516ddfd29d82b161e197109d83ee0ccdfb52257dc2874ef11aa1f1ae15adbcd3",
                "MacAddress": "02:42:c0:a8:01:02",
                "IPv4Address": "192.168.1.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "false",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "10.0.10.3",
            "com.docker.network.bridge.name": "customername-br",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

1 Answer


A bit late to your problem, but I'm currently facing a similar issue. What I've found is that the private interface at Hetzner has an MTU of 1450, as opposed to the public interface with an MTU of 1500. This is a serious problem with Docker, as its default MTU is 1500.
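The mismatch can be seen directly on the docker host. A sketch; the interface names are examples (Hetzner private interfaces often show up as ens10):

ip link show ens10             # Hetzner private network interface: mtu 1450
ip link show eth0              # public interface: mtu 1500
ip link show customername-br   # the docker bridge from the question: mtu 1500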

There is a partial solution, but not for docker-compose, which I'm using: https://rahulait.wordpress.com/2016/02/28/modifying-default-mtu-for-docker-containers/
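For the default docker0 bridge, the daemon-level option also works; a minimal sketch (note this does not affect user-defined networks such as the ones docker-compose creates):

# /etc/docker/daemon.json
{
    "mtu": 1450
}

# then restart the daemon:
systemctl restart docker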

twk
  • The solution I applied was slightly different: modify the Docker network. You can control the MTU with which Docker creates the bridge like so:

    docker network create --driver bridge --attachable \
        --subnet 192.168.1.0/24 --gateway 192.168.1.1 \
        --opt com.docker.network.bridge.default_bridge=false \
        --opt com.docker.network.bridge.enable_icc=true \
        --opt com.docker.network.bridge.enable_ip_masquerade=true \
        --opt com.docker.network.bridge.host_binding_ipv4=10.0.10.3 \
        --opt com.docker.network.bridge.name=private-br1 \
        --opt com.docker.network.driver.mtu=1450 \
        private-br1

    – dada216 Feb 26 '21 at 09:34
  • Thanks for sharing. Does it work for the default network for containers built using docker-compose? Because typically it does not. What I did: I set the environment variable DOCKER_MTU=1450 in /etc/environment, then used it as the parameter ${DOCKER_MTU} in my docker-compose.yml files, like:

    networks:
      default:
        driver: bridge
        driver_opts:
          com.docker.network.driver.mtu: ${DOCKER_MTU}

    – twk Feb 27 '21 at 10:18
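Put together, a minimal docker-compose.yml using that approach could look like this; the nginx service is a placeholder, not from the original comments:

# /etc/environment
DOCKER_MTU=1450

# docker-compose.yml
version: "3.7"
services:
  web:
    image: nginx    # placeholder service
networks:
  default:
    driver: bridge
    driver_opts:
      com.docker.network.driver.mtu: ${DOCKER_MTU}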