
I need to write a CoreOS Cloud Config file in which I need to set up networking. I have a problem, though, because my environment is a bit unusual. What I'm trying to accomplish:

I have a Sophos UTM that acts as a router/firewall. Because at the moment I don't have enough space for a switch, the UTM has a lot of NICs attached, so that the servers I need to run CoreOS on can attach directly to the UTM. The ports on the UTM that are used for the server connections are set up as VLAN interfaces. As stated by the Sophos KB (section Ethernet VLAN):

the port the UTM is connected to must be configured as a trunk port, and it must be a TAGGED member of each VLAN that you want the UTM to use.

This is where my problem comes in: how would I create such an interface on the CoreOS box? I want it done via the Cloud Config file, but if that isn't possible, a manual way is also appreciated.

I hope someone can help.

radriaanse

1 Answer


Put something like this (adjusted for your NIC) in your cloud-config file:

- name: 00-vlan1.netdev
  runtime: true
  content: |
    [NetDev]
    Name=vlan1
    Kind=vlan

    [VLAN]
    Id=1
- name: 00-vlan2.netdev
  runtime: true
  content: |
    [NetDev]
    Name=vlan2
    Kind=vlan

    [VLAN]
    Id=2
- name: 30-ens192.network
  runtime: true
  content: |
    [Match]
    Name=ens192

    [Network]
    DHCP=yes
    VLAN=vlan1
    VLAN=vlan2
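
These .netdev and .network snippets are unit entries, so in the full cloud-config they sit under the coreos: units: section (CoreOS writes units whose names end in .netdev or .network out for systemd-networkd; with runtime: true they are, if I remember correctly, not persisted across reboots). A minimal sketch of the surrounding structure, shown here with only the first entry:

#cloud-config

coreos:
  units:
    - name: 00-vlan1.netdev
      runtime: true
      content: |
        [NetDev]
        Name=vlan1
        Kind=vlan

        [VLAN]
        Id=1
    # ...the other .netdev entry and the .network entry from above go here, indented the same way...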

This information comes from two sources: the systemd-networkd man pages and the CoreOS blog: https://coreos.com/blog/intro-to-systemd-networkd

I tried this on one of my own systems since I will also need VLANs soon. It works very well. Here is the result:

core@localhost ~ $ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:48:9e:49 brd ff:ff:ff:ff:ff:ff
    inet 10.9.8.123/24 brd 10.9.8.255 scope global dynamic ens192
       valid_lft 3200sec preferred_lft 3200sec
    inet6 fe80::20c:29ff:fe48:9e49/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
    link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.42.1/16 scope global docker0
       valid_lft forever preferred_lft forever
4: vlan1@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 42:cf:24:c7:c6:bd brd ff:ff:ff:ff:ff:ff
    inet6 fe80::40cf:24ff:fec7:c6bd/64 scope link 
       valid_lft forever preferred_lft forever
5: vlan2@ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether da:0d:c5:73:91:1d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d80d:c5ff:fe73:911d/64 scope link 
       valid_lft forever preferred_lft forever
core@localhost ~ $ 
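
Note that in the output above, vlan1 and vlan2 only carry link-local IPv6 addresses. If you want the VLAN interfaces themselves to pick up an address (for example via DHCP), you can add a .network unit per VLAN in the same way; a minimal sketch, assuming DHCP on vlan1 (the 40- prefix and file name are just my choice):

- name: 40-vlan1.network
  runtime: true
  content: |
    [Match]
    Name=vlan1

    [Network]
    DHCP=yes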
mike.schmidt
  • I forgot one additional caveat: since you are running on bare metal, make sure that the NIC and its drivers support VLANs, otherwise it will of course not work. You can test this manually by just doing the following: "ip link add link ens192 name ens192.10 type vlan id 10" (adjusted for your NIC, of course) – mike.schmidt Dec 30 '14 at 18:22
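
To expand on that manual test a bit: the following is a rough sketch of how you could create, inspect, and remove a test VLAN interface by hand (run with root privileges; the interface name and VLAN id are just examples, adjust them for your setup):

# create a tagged interface on top of ens192 for VLAN 10
ip link add link ens192 name ens192.10 type vlan id 10
ip link set ens192.10 up

# show the interface details, including the vlan id
ip -d link show ens192.10

# remove the test interface again
ip link delete ens192.10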