We're currently using the CentOS libvirt-LXC tool set to create and manage containers under CentOS 7.1. That tool set is deprecated, however, so we plan to migrate our containers to the linuxcontainers.org framework instead. I'll refer to the latter simply as LXC, as opposed to libvirt-LXC.
Under libvirt-LXC, our containers are configured to use host bridging and so are connected to the host network. Each container has its own static IP and appears as a physical machine on the network. The containers can see each other as well as other systems running on the same network.
So far I've been unable to get host bridging to work with LXC. There is a fair amount of information available on LXC networking, but the sources describe slightly different ways to set things up, and nothing I've tried works the way I'd expect. I can get the containers to see each other, but I have not been able to get them to see the host network. The config I am using for my containers looks like this:
lxc.utsname = test1
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
The br0 interface is the same bridge interface that I have configured for use with my libvirt-LXC containers. Some of the sites I've come across that discuss setting up host bridging for LXC say to configure rules in iptables. However, we do not need any such rules with libvirt-LXC, and in fact iptables (or more accurately, firewalld under CentOS 7) isn't even enabled.
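For reference, a typical host-side bridge setup on CentOS 7 uses a pair of ifcfg files like the following, with the IP address on the bridge and the physical (or bond) interface enslaved to it. The addresses and interface names below are illustrative, not copied from my actual setup:

```
# /etc/sysconfig/network-scripts/ifcfg-br0  (illustrative)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.110.1
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-bond0  (illustrative)
DEVICE=bond0
ONBOOT=yes
BOOTPROTO=none
BRIDGE=br0
NM_CONTROLLED=no
```

The key points are TYPE=Bridge on the bridge device and BRIDGE=br0 on the enslaved interface; the uplink itself carries no IP address.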
In addition to this config I'm using, I have also created /etc/sysconfig/network-scripts/ifcfg-eth0 with the following entries:
DEVICE=eth0
NM_CONTROLLED=no
ONBOOT=yes
BOOTPROTO=none
IPADDR=172.16.110.222
NETMASK=255.255.0.0
GATEWAY=172.16.0.1
This is the exact same file I use for my libvirt-LXC based containers. As I stated, the containers can see each other, but they cannot reach the host network; they can't even ping their own host. The routing table, though, is the same in both my LXC and libvirt-LXC containers:
# route
Kernel IP routing table
Destination   Gateway      Genmask       Flags  Metric  Ref  Use  Iface
default       172.16.0.1   0.0.0.0       UG     0       0    0    eth0
link-local    0.0.0.0      255.255.0.0   U      1021    0    0    eth0
172.16.0.0    0.0.0.0      255.255.0.0   U      0       0    0    eth0
I'm not sure what LXC magic I am missing to open the containers up to the outside network. I'm using the same template for both my LXC and libvirt-LXC tests, and I am using the same host for both. What am I missing?
The output of "bridge link show br0" with one container running is:
# bridge link show br0
3: bond0 state UP : <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 19
6: virbr0-nic state DOWN : <BROADCAST,MULTICAST> mtu 1500 master virbr0 state disabled priority 32 cost 100
22: veth5BJDXU state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master virbr0 state forwarding priority 32 cost 2
The veth name is chosen automatically by LXC. The equivalent setup using libvirt-LXC with one container produces essentially the same output except the generated name is veth0.
The virbr0-nic interface, by the way, is created by libvirt and is used with libvirt-LXC containers and VMs that are configured to use NAT instead of bridging. Interestingly, if I use NAT addressing with my libvirt-LXC containers, they behave the same as my LXC containers that are supposed to be using bridged networking through br0. It makes me wonder whether I am somehow using NAT addressing with my LXC containers instead of bridged networking.
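The pasted "bridge link show" output actually seems consistent with that suspicion: the container's veth interface is listed with "master virbr0", not br0. One way to check which bridge a veth is enslaved to is to parse that output; the sketch below runs against the sample line copied from above, and the same awk works on a live "bridge link show":

```shell
# Extract the "master" device from a `bridge link show` line.
# The sample line is copied verbatim from the output above.
line='22: veth5BJDXU state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master virbr0 state forwarding priority 32 cost 2'
master=$(echo "$line" | awk '{for (i = 1; i < NF; i++) if ($i == "master") print $(i + 1)}')
echo "$master"   # virbr0
```

On a live host you could run it as `bridge link show | grep veth`, and if the master is virbr0 rather than br0, the interface can be reattached with `ip link set veth5BJDXU master br0` (interface name from the output above; the veth name changes each time the container starts).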
So you have a bridge with nothing but veth interfaces connected to it, right? Where is the traffic supposed to go? Do you want to route it? Or bridge it? Please provide the output of "bridge link show br0". – Daniel B – 2015-08-28T15:34:13.097

I've updated the post to include this command. Regarding your question, I believe I want to bridge it, although I confess I'm not entirely sure. It's not something I had to consider in setting up the bridged interfaces with libvirt-LXC. – user3280383 – 2015-08-28T18:30:59.177
I discovered the problem, and it was simple user error. When I created my container I specified the --dir option to point to an alternative directory to store the rootfs. I assume that also meant that the container's config file would be moved there. So the config file I was creating to set up bridged networking didn't work like I'd expect because it wasn't even being processed. As soon as I realized my mistake and modified the correct config file, my bridged networking worked as expected. – user3280383 – 2015-08-30T20:03:03.433
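To illustrate the mistake above: LXC reads a container's config from under its lxcpath, so edits to a copy of the config stored elsewhere (e.g. next to a relocated rootfs) are silently ignored. A quick sanity check of the path LXC actually uses, assuming the stock CentOS 7 lxcpath (paths illustrative; `lxc-config lxc.lxcpath` prints the compiled-in default on a real system):

```shell
# Build the path of the config file LXC actually loads for a container.
lxcpath=/var/lib/lxc   # stock default; override with lxc-create/lxc-start -P
name=test1
config="$lxcpath/$name/config"
echo "$config"   # /var/lib/lxc/test1/config
```

If the file you've been editing isn't the one at that path, your networking settings never reach the container.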
I see. Well, you should either answer your own question (and then accept this answer) or delete it, if still possible. – Daniel B – 2015-08-30T21:57:55.177
Don't delete. This info you have pasted is still useful – Otheus – 2017-01-03T15:11:12.747