
Background

After installing OpenStack with PackStack we are left with a networking problem. We wanted a super-simple network with one subnet where all the virtual machines reside, as this looked like the simplest solution. To begin with, we have three nodes running Nova.

The answer file we have used is this: answer file (pastebin)

Our setup

Three nodes with CentOS 6.5, each connected to two switches.

  • eth0: public
  • eth1: internal network, 10.0.20.0/24, where node1 is 10.0.20.1, node2 is 10.0.20.2... (a sample interface config sketch follows this list)
  • The switch ports are not VLAN tagged or trunked (unfortunately).
  • We want the instances to be able to communicate with each other, and also to access the internet.
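For reference, a minimal sketch of the internal interface config on node1 (from memory; the actual /etc/sysconfig/network-scripts/ifcfg-eth1 may differ):

DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
IPADDR=10.0.20.1
NETMASK=255.255.255.0
# node2 uses IPADDR=10.0.20.2, and so on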

Security group rules

Direction  Ether Type  IP Protocol  Port Range   Remote
Ingress    IPv4        TCP          1 - 65535    0.0.0.0/0 (CIDR)
Ingress    IPv4        TCP          22 (SSH)     0.0.0.0/0 (CIDR)
Ingress    IPv6        Any          -            default
Ingress    IPv4        TCP          22 (SSH)     10.0.20.0/24 (CIDR)
Ingress    IPv4        TCP          53 (DNS)     0.0.0.0/0 (CIDR)
Egress     IPv4        Any          -            0.0.0.0/0 (CIDR)
Ingress    IPv4        ICMP         -            0.0.0.0/0 (CIDR)
Egress     IPv6        Any          -            ::/0 (CIDR)
Ingress    IPv4        Any          -            default
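As an illustration, a rule like the ICMP one can also be created from the CLI, roughly like this (a sketch, not necessarily how we created them):

# allow ICMP from anywhere into the default security group
neutron security-group-rule-create --direction ingress --ethertype IPv4 --protocol icmp --remote-ip-prefix 0.0.0.0/0 default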

Neutron

(neutron) agent-list
+--------------------------------------+--------------------+-------+-------+----------------+
| id                                   | agent_type         | host  | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 09add8dd-0328-4c63-8a79-5c61322a8314 | L3 agent           | host3 | :-)   | True           |
| 0d0748a9-4289-4a5d-b1d9-d06a764a8d25 | Open vSwitch agent | host2 | :-)   | True           |
| 258c92fe-8e3a-4760-864e-281a47523e85 | Open vSwitch agent | host1 | :-)   | True           |
| 2e886dc1-af93-4f4f-b66c-61177a6c9dba | L3 agent           | host1 | :-)   | True           |
| 50f37a33-2bfc-43f2-9d2f-4f42564d234d | Open vSwitch agent | host3 | :-)   | True           |
| 535bf0a3-06aa-4072-ae5a-1b1ba1d377ab | L3 agent           | host2 | :-)   | True           |
| 9b17ef73-a602-4b5d-a4e9-e97445e594b4 | DHCP agent         | host1 | :-)   | True           |
+--------------------------------------+--------------------+-------+-------+----------------+

ovs-vsctl

Host1

ovs-vsctl show

43da814e-223c-4f66-ba2d-c3c9de91e1f8
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "tap3e0d3121-32"
            tag: 4
            Interface "tap3e0d3121-32"
                type: internal
        Port "tap4a397755-29"
            tag: 4
            Interface "tap4a397755-29"
    ovs_version: "1.11.0"

Host2
afa75816-6a40-4f0c-842f-236a3a94cd63
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "tap46f55af8-73"
            tag: 1
            Interface "tap46f55af8-73"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "1.11.0"

Our problem

Instances are not able to communicate with each other, and are not able to reach the internet. Frankly, we are not sure what the requirements are for a multi-node Nova setup when the "internal" network between the nodes only uses one link. I think this is a routing problem, since we cannot connect between instances on different nodes, but after having read a LOT of documentation I am still not sure how to proceed. If I tcpdump the br-int interface I can see the ARP requests, but nothing more (that is, when I ping from an instance on the respective host).
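For reference, this is roughly how I captured the traffic (interface name from our setup above):

# capture ARP on the integration bridge while pinging from an instance;
# only the who-has requests show up, never the replies
tcpdump -n -e -i br-int arp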

The question

So the question is: how can we proceed in finding the solution to this multi-node network problem, and what do we need to think about? Could it be the routing, or a misconfiguration in the host OS or OpenStack? (Running CentOS.)

Any feedback is highly appreciated, since we have been stuck at this point for a couple of weeks. Sorry for the long post, but I hope the needed information is in here. If not, don't be shy :)

Update

I have been able to fix the internal network between the nodes, so that instances on different physical nodes can now communicate.

- Changed /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:
[database]
connection = mysql://neutron:password@127.0.0.1:3306/ovs_neutron

[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.20.1
enable_tunneling = True

- Restarted the services on the controller:
cd /etc/init.d/; for i in $( ls neutron-* ); do sudo service $i restart; done
service openvswitch restart

This was done on all the nodes and created the GRE tunnels. The flows did not work, though, so I needed to run ovs-ofctl add-flow br-tun action=normal.
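For reference, this is roughly how I verified the tunnels and applied that workaround on each node (exact port names differ per host):

# the GRE ports should now show up under br-tun
ovs-vsctl show

# inspect the flows on the tunnel bridge; in my case they were not passing
# traffic, so this catch-all flow makes br-tun behave like a normal learning switch
ovs-ofctl dump-flows br-tun
ovs-ofctl add-flow br-tun action=normal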

The current problem is routing the internal subnet out to the internet, so that all the instances get internet access. Do I need floating IPs to be able to connect to the internet? There is no patch port between br-int and br-ex, or to the routers, so is one needed to route the traffic to the internet?

Can I add a default route with ip netns exec ... ip route add default via (IP of br-ex), or do I need to add some new interfaces?
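Concretely, what I have in mind is roughly this (the qrouter name below is a placeholder; the real namespace id comes from ip netns list):

ip netns list
ip netns exec qrouter-<uuid> ip route
ip netns exec qrouter-<uuid> ip route add default via <IP of br-ex>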

larhauga

2 Answers


Sit down and watch this.

It walks through setting up a simple multi-node cluster and I found it very clear. The last bit about setting up NAT will not apply, because he is running his cluster on VirtualBox.

There is an associated slideshow linked in the video description.

chriscowley
  • This helped me a lot. With this video I managed to get an overview of what needed to be done with the GRE tunneling. I also used the [OpenStack (RH) GRE manual description](http://openstack.redhat.com/Using_GRE_Tenant_Networks). I added the needed configuration changes, but I am still having trouble connecting to the internet. Do you have any input on how this should be done? The problem is that I have manually reconfigured the setup to support GRE, and with the current setup the ext net is not working. – larhauga Mar 14 '14 at 15:32

As previously updated, I managed to get the network up and running by using GRE tunneling instead of Nova networking. Nova networking seems to be an OK solution if you have a spare physical network interface, but when you don't, it does not work that well.

The GRE setup was done with the following.

- Changed /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini:

[database]
connection = mysql://neutron:password@127.0.0.1:3306/ovs_neutron

[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = "internal ip"
enable_tunneling = True

- Restarted the services on the controller:
cd /etc/init.d/; for i in $( ls neutron-* ); do sudo service $i restart; done
service openvswitch restart

This was of course copied onto each compute node (with local_ip set to that node's internal IP). The important part when using GRE tunnels is that when you create a new network, you need to specify the segmentation ID. If you try to create the network via Horizon it will not work.

keystone tenant-list
admin=$(keystone tenant-list | grep admin | awk '{ print $2 }')
neutron net-create --tenant-id $admin network --provider:network_type gre --provider:segmentation_id 3
neutron subnet-create --tenant-id $admin network 192.168.0.0/24 --gateway 192.168.0.1

You can also add an external network with the following commands:

neutron net-create extnet --router:external=True
neutron net-list
neutron subnet-create extnet --allocation-pool start=10.0.0.10,end=10.0.0.100 --gateway=10.0.0.1 --enable_dhcp=False 10.0.0.0/24

Then we can create a new router, attach the internal subnet to it, and set the external network as its gateway. This is of course just one of many solutions.

intsubnet=$(neutron subnet-list | grep 192.168.0.0/24 | awk '{ print $2 }')
extnet=$(neutron net-list | grep ext | awk '{ print $2 }')

neutron router-create ext-to-int --tenant-id $admin
router=$(neutron router-list | grep ext-to-int | awk '{ print $2 }')

neutron router-interface-add $router $intsubnet
neutron router-gateway-set $router $extnet

In the beginning I had very low throughput from the instances. This was solved by distributing a new MTU (1454) with DHCP: create a dnsmasq configuration file under /etc/neutron/, add dhcp-option-force=26,1454 to it, and point dnsmasq_config_file in /etc/neutron/dhcp_agent.ini at that file.
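A minimal sketch of what that looks like (the file name dnsmasq-neutron.conf is just what I used; any path works as long as dhcp_agent.ini points to it):

# /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1454

# /etc/neutron/dhcp_agent.ini
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# then restart the DHCP agent so instances pick up the new MTU on lease renewal
service neutron-dhcp-agent restart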

This worked for me and was all that was needed.

larhauga