10

On a Debian Stretch host (connected to the physical LAN) I have a new docker installation (v18.09) with one database container (port mapped to the host), and I run KVM/libvirt with some Debian Stretch VMs. I can access the docker container and the VMs from the LAN (depending on the configuration, through an SSH tunnel or directly), but I am struggling to access the docker container from the VMs.


# brctl show
bridge name         bridge id           STP enabled interfaces
br-f9f3ccd64037     8000.0242b3ebe3a0   no      
docker0             8000.024241f39b89   no      veth35454ac
virbr0              8000.525400566522   yes     virbr0-nic

After reading for days, I found one very compelling solution in this post, Docker and KVM with a bridge (original), that I did not get to work. The solution suggests starting the docker daemon with a one-line daemon.json config so that it uses the KVM "default" bridge. How nice would that be! Is there any hope?

I tried two different network configurations for the KVM VMs. In both cases the communication between the VMs and to the LAN+router+cloud is flawless, but I just don't know how to get over the fence - to the greener grass... :)

Conf 1 - KVM default bridge with NAT: I can SSH to the Debian host and access the docker container port, but is there a setup with a direct route?

Conf 2 - macvtap adapter in bridge mode to the LAN: I cannot ping the host's LAN IP from the VM, although both are connected to the same router. The response on the VM is Destination Host Unreachable. Any thought why?
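(For what it's worth, this is a known limitation of macvtap in bridge mode: the guests can reach the LAN, but guest-to-host traffic leaves through the physical NIC and a normal switch does not send it back, so the host and its macvtap guests cannot talk to each other directly. A common workaround, sketched below with an assumed NIC name eno1 and a spare LAN address, is to give the host its own macvlan interface on the same NIC and reach the guests through it.)

# sketch only; eno1 and 192.168.1.250/24 are assumptions
ip link add link eno1 name macvlan0 type macvlan mode bridge
ip addr add 192.168.1.250/24 dev macvlan0
ip link set macvlan0 up
# the host can then reach the macvtap guests via macvlan0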

Would it be better to run the docker daemon in a separate VM rather than directly on the Debian host? That way both the container and the VMs could access the KVM default bridge. But I thought it is kinda strange to run docker in a VM on a KVM host.

Any clear guidance would be appreciated!

Btw, the bridge br-f9f3ccd64037 is a user-defined bridge I created with docker for future inter-container communication. It is not used.

Update:

I just realized that with the first configuration I can simply connect to the docker container by its IP address (172.17.0.2) from the VM guests.

My initial setup was the second configuration because I wanted to RDP into the VMs, which is easier since the macvtap driver connects the VMs directly to the LAN and no SSH link is needed. That's when I could not reach the container.

sdittmar

3 Answers

5

The solution was as simple as stated in the linked article. I am not sure why my configuration did not change the first time I restarted the docker daemon.

After I found evidence in the Docker daemon documentation for the bridge argument in daemon.json, I gave it another try and the docker daemon picked up the KVM default bridge on startup.

First I created the configuration file /etc/docker/daemon.json as suggested in the documentation with the following content (the iptables line may not even be needed):

{
  "bridge": "virbr0",
  "iptables": false
}

Then all that was needed was:

docker stop mysql
systemctl stop docker
systemctl start docker
docker start mysql

And the existing docker container was running on the KVM bridge. The IP address of the container can be checked with:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' mysql
192.168.122.2
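As a quick sanity check from one of the VMs on the same virbr0 network, a probe like the following should connect (the port 3306 is an assumption here, the MySQL default):

# run from a VM on the 192.168.122.0/24 network; 3306 assumed (MySQL default)
nc -zv 192.168.122.2 3306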

I am not sure if I can remove the docker0 bridge now, but the container is listed under virbr0 together with the three VMs.

brctl show
bridge name bridge id           STP enabled interfaces
docker0     8000.024241f39b89   no      
virbr0      8000.068ff2a4a56e   yes         veth2abcff1
                                            virbr0-nic
                                            vnet0
                                            vnet1
                                            vnet2
sdittmar
  • I tried this, but it did not work after a restart, as I wanted the docker containers to have the main host's IP, not get a new one from the network like the VMs do. There was some more info here https://bbs.archlinux.org/viewtopic.php?id=233727. The best I can do is start docker after boot completes, not as a service in systemd - then everything works for me, but it isn't ideal. – Richard Feb 21 '19 at 06:26
  • I just set up another Debian box and it worked again for me. Best to check the configuration with `docker network inspect bridge`. If the json config file does not work, then it may be because the docs state: "Note: You cannot set options in daemon.json that have already been set on daemon startup as a flag." – sdittmar Jan 20 '20 at 11:55
2

When I read the question, I was looking to see if there was a way to connect virbr0 to a Docker network. The image below is my modification of what I think was being asked:

(image: diagram of virbr0 connected to a Docker network)

If that is the case, the answer is to use a macvlan network, which allows you to attach a docker network directly to a host device. So something like the following would get you what you want:

docker network create --driver=macvlan --subnet=192.168.0.0/16 -o parent=virbr0 mynet
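As a usage sketch (the addresses are assumptions, based on the libvirt default network 192.168.122.0/24, which falls inside the /16 above), a container attached to that network gets an address that the VMs on virbr0 can reach directly:

# sketch: run a test container on mynet with a fixed IP and ping a VM on virbr0
# 192.168.122.100 (container) and 192.168.122.50 (a VM) are assumed addresses
docker run --rm -it --network mynet --ip 192.168.122.100 alpine ping 192.168.122.50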

Agricola
2

I usually implement that using the following setup:

  • I create a br0 bridge with the physical NIC inside (see the bridge config sketch after this list)

  • KVM machines are connected to the bridge using the qemu XML config snippet below

    <interface type='bridge'>
      <mac address='52:54:00:a9:28:0a'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
  • docker stacks all run the same way: I reserve a public routable IP for each stack. I connect this public IP to the bridge br0 using the opensvc service config snippet below.

    • the [ip#0] section says that we want the IP 1.2.3.4 configured into the container with resource id container#0, which is a docker google/pause container, connected to bridge br0

    • all other dockers in the stack inherit the network config from container#0 due to the netns = container#0 setting in their docker declaration

    • when the opensvc service starts, the network setup is done by the agent, producing all the commands reported in the logs below
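For the first bullet, a minimal sketch of such a br0 on Debian via /etc/network/interfaces (the interface name eno1 and the host address 1.2.3.10 are assumptions; the bridge_* options come from the bridge-utils package):

# /etc/network/interfaces sketch; eno1 and 1.2.3.10 are assumptions
auto br0
iface br0 inet static
    address 1.2.3.10
    netmask 255.255.255.224
    gateway 1.2.3.1
    bridge_ports eno1
    bridge_stp off
    bridge_fd 0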

opensvc service configuration

[DEFAULT]
docker_daemon_args = --log-opt max-size=1m --storage-driver=zfs --iptables=false
docker_data_dir = /{env.base_dir}/docker
env = PRD
nodes = srv1.acme.com srv2.acme.com
orchestrate = start
id = 4958b24d-4d0f-4c30-71d2-bb820e043a5d

[fs#1]
dev = {env.pool}/{namespace}-{svcname}
mnt = {env.base_dir}
mnt_opt = rw,xattr,acl
type = zfs

[fs#2]
dev = {env.pool}/{namespace}-{svcname}/docker
mnt = {env.base_dir}/docker
mnt_opt = rw,xattr,acl
type = zfs

[fs#3]
dev = {env.pool}/{namespace}-{svcname}/data
mnt = {env.base_dir}/data
mnt_opt = rw,xattr,acl
type = zfs

[ip#0]
netns = container#0
ipdev = br0
ipname = 1.2.3.4
netmask = 255.255.255.224
gateway = 1.2.3.1
type = netns

[container#0]
hostname = {svcname}
image = google/pause
rm = true
run_command = /bin/sh
type = docker

[container#mysvc]
image = mysvc/mysvc:4.1.3
netns = container#0
run_args = -v /etc/localtime:/etc/localtime:ro
    -v {env.base_dir}/data/mysvc:/home/mysvc/server/data
type = docker

[env]
base_dir = /srv/{namespace}-{svcname}
pool = data

startup log

2019-01-04 11:27:14,617 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - checking 1.2.3.4 availability
2019-01-04 11:27:18,565 - srv1.acme.com.appprd.mysvc.fs#1 - INFO - mount -t zfs -o rw,xattr,acl data/appprd-mysvc /srv/appprd-mysvc
2019-01-04 11:27:18,877 - srv1.acme.com.appprd.mysvc.fs#2 - INFO - mount -t zfs -o rw,xattr,acl data/appprd-mysvc/docker /srv/appprd-mysvc/docker
2019-01-04 11:27:19,106 - srv1.acme.com.appprd.mysvc.fs#3 - INFO - mount -t zfs -o rw,xattr,acl data/appprd-mysvc/data /srv/appprd-mysvc/data
2019-01-04 11:27:19,643 - srv1.acme.com.appprd.mysvc - INFO - starting docker daemon
2019-01-04 11:27:19,644 - srv1.acme.com.appprd.mysvc - INFO - dockerd -H unix:///var/lib/opensvc/namespaces/appprd/services/mysvc/docker.sock --data-root //srv/appprd-mysvc/docker -p /var/lib/opensvc/namespaces/appprd/services/mysvc/docker.pid --exec-root /var/lib/opensvc/namespaces/appprd/services/mysvc/docker_exec --log-opt max-size=1m --storage-driver=zfs --iptables=false --exec-opt native.cgroupdriver=cgroupfs
2019-01-04 11:27:24,669 - srv1.acme.com.appprd.mysvc.container#0 - INFO - docker -H unix:///var/lib/opensvc/namespaces/appprd/services/mysvc/docker.sock run --name=appprd..mysvc.container.0 --detach --hostname mysvc --net=none --cgroup-parent /opensvc.slice/appprd.slice/mysvc.slice/container.slice/container.0.slice google/pause /bin/sh
2019-01-04 11:27:30,965 - srv1.acme.com.appprd.mysvc.container#0 - INFO - output:
2019-01-04 11:27:30,965 - srv1.acme.com.appprd.mysvc.container#0 - INFO - f790e192b5313d7c3450cb257d075620f40c2bad3d69d52c8794eccfe954f250
2019-01-04 11:27:30,987 - srv1.acme.com.appprd.mysvc.container#0 - INFO - wait for up status
2019-01-04 11:27:31,031 - srv1.acme.com.appprd.mysvc.container#0 - INFO - wait for container operational
2019-01-04 11:27:31,186 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - bridge mode
2019-01-04 11:27:31,268 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - /sbin/ip link add name veth0pl20321 mtu 1500 type veth peer name veth0pg20321 mtu 1500
2019-01-04 11:27:31,273 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - /sbin/ip link set veth0pl20321 master br0
2019-01-04 11:27:31,277 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - /sbin/ip link set veth0pl20321 up
2019-01-04 11:27:31,281 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - /sbin/ip link set veth0pg20321 netns 20321
2019-01-04 11:27:31,320 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - /usr/bin/nsenter --net=/var/lib/opensvc/namespaces/appprd/services/mysvc/docker_exec/netns/fc2fa9b2eaa4 ip link set veth0pg20321 name eth0
2019-01-04 11:27:31,356 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - /usr/bin/nsenter --net=/var/lib/opensvc/namespaces/appprd/services/mysvc/docker_exec/netns/fc2fa9b2eaa4 ip addr add 1.2.3.4/27 dev eth0
2019-01-04 11:27:31,362 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - /usr/bin/nsenter --net=/var/lib/opensvc/namespaces/appprd/services/mysvc/docker_exec/netns/fc2fa9b2eaa4 ip link set eth0 up
2019-01-04 11:27:31,372 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - /usr/bin/nsenter --net=/var/lib/opensvc/namespaces/appprd/services/mysvc/docker_exec/netns/fc2fa9b2eaa4 ip route replace default via 1.2.3.1
2019-01-04 11:27:31,375 - srv1.acme.com.appprd.mysvc.ip#0 - INFO - /usr/bin/nsenter --net=/var/lib/opensvc/namespaces/appprd/services/mysvc/docker_exec/netns/fc2fa9b2eaa4 /usr/bin/python3 /usr/share/opensvc/lib/arp.py eth0 1.2.3.4
2019-01-04 11:27:32,534 - srv1.acme.com.appprd.mysvc.container#mysvc - INFO - docker -H unix:///var/lib/opensvc/namespaces/appprd/services/mysvc/docker.sock run --name=appprd..mysvc.container.mysvc -v /etc/localtime:/etc/localtime:ro -v /srv/appprd-mysvc/data/mysvc:/home/mysvc/server/data --detach --net=container:appprd..mysvc.container.0 --cgroup-parent /opensvc.slice/appprd.slice/mysvc.slice/container.slice/container.mysvc.slice mysvc/mysvc:4.1.3
2019-01-04 11:27:37,776 - srv1.acme.com.appprd.mysvc.container#mysvc - INFO - output:
2019-01-04 11:27:37,777 - srv1.acme.com.appprd.mysvc.container#mysvc - INFO - 1616cade9257d0616346841c3e9f0d639a9306e1af6fd750fe70e17903a11011
2019-01-04 11:27:37,797 - srv1.acme.com.appprd.mysvc.container#mysvc - INFO - wait for up status
2019-01-04 11:27:37,833 - srv1.acme.com.appprd.mysvc.container#mysvc - INFO - wait for container operational
Chaoxiang N
  • Thank you for your answer! I am not familiar with opensvc and I will have to chew on your suggestion for a while... Also, for a connection between KVM and docker, would I need a physical nic connected to the bridge?? Apart from the nic, is your bridge any different from the KVM default bridge virbr0? – sdittmar Jan 10 '19 at 09:02
  • You should not need a physical nic in the bridge in your case. As far as I know, there is no difference with the kvm virbr0. You may have missed it, but there is no need for docker & iptables magic stuff in my setup. Easy to troubleshoot! – Chaoxiang N Jan 10 '19 at 10:15
  • Looking at OpenSVC, I just feel I am cracking a nut with a sledgehammer... even though it is really interesting... – sdittmar Jan 10 '19 at 10:26