
I'm fairly new to OpenStack and have very limited insight into advanced networking concepts. I'm trying to set up a simple OpenStack Mitaka deployment for work, following the official installation guides. I thought I was making pretty good progress until I reached the networking part.

I have the following configuration:

  1. Single controller node on a VMware virtual machine on the 10.105.166.XXX subnet - 2 NICs present
  2. Single compute node on a bare-metal server on the 10.105.167.XXX subnet - 2 NICs present

I've deployed the Keystone, Nova, Glance and Neutron components on the controller, and the nova-compute service and linuxbridge-agent on the compute node. I'm trying to get networking working with the ML2 plugin, which was recommended in the installation guide.

After following all the instructions listed, I can't ping any of the launched VM instances from either my compute or my controller node. This is probably also why I'm not able to get a VNC console connection to the instances. I can clearly see that my networking setup is wrong, since my Ubuntu instance gets stuck at boot waiting for the network interfaces to come up.
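In case it's relevant: I assume the default security group would also need ICMP and SSH rules before pings could work at all. These commands are my guess from the Mitaka guide, not something I've verified fixes my issue:

```shell
# Hypothetical: open ICMP and SSH in the default security group
# so instances are pingable/reachable once networking itself works.
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
```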

I have recreated the network setup within OpenStack multiple times without success. Right now I've cleared everything out and have no provider or other networks defined within Neutron. I'd really appreciate it if somebody could guide me through this process.

I also have doubts about whether it is even possible to have the controller and compute nodes on different LAN segments; I haven't been able to find a clear answer on this.

The setup seems to be good so far judging by this output:

neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 11ff8610-8eb2-45d5-91e8-d7905beb668c | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| 370794ac-0091-4908-8293-00d007f7f8be | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-agent |
| 444eca8d-3a34-4018-97ab-f23925e65713 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| 90f70ca7-afd2-4127-97f1-f623fac26e29 | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| ad2a6012-a348-47c7-8ee9-f41401fb048f | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent          |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+

I've added configs below that might help with this problem.

Controller Node:

/etc/network/interfaces

# The loopback network interface
auto lo
iface lo inet loopback

# The provider network interface
auto ens192
iface ens192 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

# primary network interface
auto ens160
iface ens160 inet dhcp

Output of ifconfig:

ens160    Link encap:Ethernet  HWaddr 00:50:56:99:c5:74
          inet addr:10.105.166.87  Bcast:10.105.166.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:fe99:c574/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:995476 errors:0 dropped:0 overruns:0 frame:0
          TX packets:639007 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:578268831 (578.2 MB)  TX bytes:522577815 (522.5 MB)

ens192    Link encap:Ethernet  HWaddr 00:50:56:99:14:d4
          inet6 addr: fe80::250:56ff:fe99:14d4/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:23276 errors:0 dropped:292 overruns:0 frame:0
          TX packets:28 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1697881 (1.6 MB)  TX bytes:1988 (1.9 KB)
          Interrupt:19 Memory:fd3a0000-fd3c0000

route -n:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.105.166.251  0.0.0.0         UG    0      0        0 ens160
10.105.166.0    0.0.0.0         255.255.255.0   U     0      0        0 ens160

/etc/neutron/plugins/ml2/linuxbridge_agent.ini: [linux_bridge]

#
# From neutron.ml2.linuxbridge.agent
#

# Comma-separated list of <physical_network>:<physical_interface> tuples
# mapping physical network names to the agent's node-specific physical network
# interfaces to be used for flat and VLAN networks. All physical networks
# listed in network_vlan_ranges on the server should have mappings to
# appropriate interfaces on each agent. (list value)
physical_interface_mappings = provider:ens192

# List of <physical_network>:<physical_bridge> (list value)
#bridge_mappings =


[vxlan]

#
# From neutron.ml2.linuxbridge.agent
#

# Enable VXLAN on the agent. Can be enabled when agent is managed by ml2 plugin
# using linuxbridge mechanism driver (boolean value)
enable_vxlan = True
local_ip = 10.105.166.87
l2_population = True
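Since my two nodes sit on different subnets, I assume the VXLAN `local_ip` endpoints have to be routable to each other, with VXLAN's UDP port 4789 not filtered between them. I've been sanity-checking that from the controller with something like this (the compute node's IP below is from my setup; the nmap check assumes nmap is installed):

```shell
# Check the compute node's local_ip is reachable from the controller
ping -c 3 10.105.167.134
# Check VXLAN's UDP port 4789 isn't filtered between the nodes
nmap -sU -p 4789 10.105.167.134
```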

Compute Node:

/etc/network/interfaces:

# The provider network interface
auto eno1
iface eno1 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down

# main network
auto enp4s0f3
iface enp4s0f3 inet dhcp

ifconfig:

eno1      Link encap:Ethernet  HWaddr 00:1e:67:d8:ae:36
          inet6 addr: fe80::21e:67ff:fed8:ae36/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:62075 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10075695 (10.0 MB)  TX bytes:648 (648.0 B)
          Memory:91920000-9193ffff

enp4s0f3  Link encap:Ethernet  HWaddr 00:1e:67:d8:ae:37
          inet addr:10.105.167.134  Bcast:10.105.167.255  Mask:255.255.255.0
          inet6 addr: fe80::21e:67ff:fed8:ae37/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:358572 errors:0 dropped:0 overruns:0 frame:0
          TX packets:243401 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:203461227 (203.4 MB)  TX bytes:75659105 (75.6 MB)
          Memory:91900000-9191ffff

route -n:

root@compute1:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.105.167.251  0.0.0.0         UG    0      0        0 enp4s0f3
10.105.167.0    0.0.0.0         255.255.255.0   U     0      0        0 enp4s0f3

/etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]


#
# From neutron.ml2.linuxbridge.agent
#

# Comma-separated list of <physical_network>:<physical_interface> tuples
# mapping physical network names to the agent's node-specific physical network
# interfaces to be used for flat and VLAN networks. All physical networks
# listed in network_vlan_ranges on the server should have mappings to
# appropriate interfaces on each agent. (list value)
physical_interface_mappings = provider:eno1

# List of <physical_network>:<physical_bridge> (list value)
#bridge_mappings =


[securitygroup]

#
# From neutron.ml2.linuxbridge.agent
#

# Driver for security groups firewall in the L2 agent (string value)
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

These are my goals as far as the network setup is concerned:

1) I don't want to use any more IPs on the physical network
2) VM instances will be allocated IPs on a virtualized network segment
3) Some kind of overlay network allowing them to connect to one another
4) Network connectivity to the Internet is required
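Based on the self-service network section of the Mitaka guide, I believe the commands below roughly match what I'm aiming for (a VXLAN tenant network, plus a router to a flat provider network for Internet access), but I haven't managed to get this working, so treat the names and address ranges as placeholders from my reading, not a verified setup:

```shell
# Flat provider network for external/Internet access
neutron net-create provider --shared --provider:network_type flat \
    --provider:physical_network provider --router:external
neutron subnet-create provider 10.105.166.0/24 --name provider-sub \
    --gateway 10.105.166.251

# VXLAN overlay network for the instances (CIDR is a placeholder)
neutron net-create selfservice
neutron subnet-create selfservice 172.16.1.0/24 --name selfservice-sub \
    --gateway 172.16.1.1

# Router connecting the overlay network to the provider network
neutron router-create router
neutron router-interface-add router selfservice-sub
neutron router-gateway-set router provider
```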

I've been reading about Open vSwitch and how it could be used here, but it seems fairly complex and I'm not sure it's worth the effort for this setup. I'd appreciate some pointers on how this can be done.

Really appreciate the help. Thanks!
