
I recently bought a physical server and I am trying to create containers that have their own IP addresses.

The physical machine has both IPv4 and IPv6 addresses. I also have another IPv4 address and some additional IPv6 addresses available, which I would like to assign to the container. I managed to assign the IPv4 address as follows:

# vzctl set 101 --ipadd 144.76.195.252 --save

I can ping the container from the physical machine, but not from the outside world. The same applies to the IPv6 address I assigned.

This is the ifconfig output on the physical machine:

eth0      Link encap:Ethernet  HWaddr d4:3d:7e:ec:e0:04
          inet addr:144.76.195.232  Bcast:144.76.195.255  Mask:255.255.255.224
          inet6 addr: 2a01:4f8:200:71e7::2/64 Scope:Global
          inet6 addr: fe80::d63d:7eff:feec:e004/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:217895 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16779 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:322481419 (307.5 MiB)  TX bytes:1672628 (1.5 MiB)

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet6 addr: fe80::1/128 Scope:Link
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:3 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1108 (1.0 KiB)  TX bytes:1108 (1.0 KiB)

This is the ifconfig output inside the OpenVZ container:

# ifconfig

venet0    Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:127.0.0.2  P-t-P:127.0.0.2  Bcast:0.0.0.0  Mask:255.255.255.255
          inet6 addr: 2a01:4f8:200:71e7::3/64 Scope:Global
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1108 (1.0 KiB)  TX bytes:1108 (1.0 KiB)

venet0:0  Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:144.76.195.252  P-t-P:144.76.195.252  Bcast:144.76.195.252  Mask:255.255.255.255
          UP BROADCAST POINTOPOINT RUNNING NOARP  MTU:1500  Metric:1

What do I need to do to have the container accessible from the outside world? What could I have forgotten?

Thanks.

Vojtěch
  • Maybe your host is configured to bridge your internal virtual network? What does the command brctl show return on the host? If you can't reach your VM from the outside, do you still have access from the VM to the outside? Do you have any particular configuration? (Is the host itself virtualized?) – philippe Nov 04 '13 at 13:03
  • I haven't configured any bridges; do I have to? `brctl show` shows no bridges configured. Yes, you are right: I can confirm I can't access the outside world from the container. What do I have to do? – Vojtěch Nov 04 '13 at 13:58
  • It depends on whether you prefer routing your traffic (traffic goes through the HN and is routed by it) or bridging it (traffic reaches the containers directly at layer 2). As the HN and the container seem to share the same subnet, I would advise creating a bridge; you can find documentation here: http://openvz.org/VEs_and_HNs_in_same_subnets. Note that the venet interfaces will be removed (they do not support bridging). Hope it helps :) – philippe Nov 04 '13 at 14:09
  • And if I wanted to route the traffic, how would that be done? What would be the advantage? Which is better for IPv6 as well? If you post it as an answer, I will accept it. – Vojtěch Nov 04 '13 at 14:32
  • According to this: http://openvz.org/Common_Networking_HOWTOs, it should be sufficient to add the IP as described in `Public VEs (with their own IP addresses)`, but in my case it is not. – Vojtěch Nov 04 '13 at 14:45

1 Answer


The decision between routing and bridging is functional more than technical; there are pros and cons, and it is a choice. I prefer routing when I have only one interface, because it gives me a single point of control (the HN) on which I can put iptables rules or extra protection for containers that should not be reachable from the Internet by default. If you prefer routing, make sure that net.ipv4.conf.all.forwarding is set to 1 (check with sysctl -a | grep forward). If it is not, run echo 1 > /proc/sys/net/ipv4/ip_forward (this will not survive a reboot), or add the line

net.ipv4.conf.all.forwarding = 1

to /etc/sysctl.conf and run sysctl -p afterwards. People usually route instead of bridge because routing allows NAT, which helps when you are short of IPv4 addresses; that is not your case, since you have at least two of them.
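To recap the forwarding part as commands on the HN (a minimal sketch; /etc/sysctl.conf is the usual default location, adjust to your distribution):

# sysctl net.ipv4.conf.all.forwarding                      # check the current value
# echo 1 > /proc/sys/net/ipv4/ip_forward                   # enable immediately (lost on reboot)
# echo "net.ipv4.conf.all.forwarding = 1" >> /etc/sysctl.conf
# sysctl -p                                                # reload /etc/sysctl.conf to make it persistent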

On the other hand, bridging puts your HN and your containers on an equal footing. You can do this directly on the Internet because you seem to have enough IP addresses, but you may then need extra protection on each container (iptables on each container and on the host, for instance).
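If you go the bridging route, the setup from the OpenVZ wiki page linked in the comments roughly looks like this (vmbr0 is just an example bridge name I am using here; the container then gets a veth interface instead of venet):

# brctl addbr vmbr0                                        # create a bridge on the HN
# ifconfig eth0 0.0.0.0 up                                 # remove the IP from eth0 ...
# brctl addif vmbr0 eth0                                   # ... and enslave eth0 to the bridge
# ifconfig vmbr0 144.76.195.232 netmask 255.255.255.224 up # the HN address moves to the bridge
# vzctl set 101 --netif_add eth0,,,,vmbr0 --save           # veth for CT 101, attached to vmbr0

Inside the container, 144.76.195.252 is then configured on its eth0 like on any ordinary machine.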

To come back to your (routing) problem: if setting ip_forward to 1 does not help, run arp -an on both the HN and the container to see whether addresses resolve at that point, and use tcpdump to get more details on where the packets are lost, at layer 2 or layer 3.
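For example, while pinging 144.76.195.252 from the outside, you could check on the HN (interface names taken from your ifconfig output above):

# arp -an                                                  # is the gateway resolved?
# tcpdump -ni eth0 icmp and host 144.76.195.252            # do the pings reach the HN at all?
# tcpdump -ni venet0 icmp and host 144.76.195.252          # are they forwarded towards the container?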

About IPv6, I really don't know :/

philippe