
There does not seem to be a guide that follows through on how to deploy a bare-metal Kubernetes cluster using Magnum. I got to the point of having the bare-metal servers power on and initiate the PXE request; however, the dnsmasq server does not respond to the BOOTP requests. What are the required steps for this to work?

Update:
Not sure who closed this question. If something is off-topic, please let me know.

Magnum was able to initiate the PXE requests after setting the fixed network to the bare-metal provisioning network. However, the external-network flag is also required, and there is only one VLAN in this setup (which has been set up as the bm-provision network). I attempted to create a public network like this (without any underlying physical network devices):

openstack network create public --provider-network-type vxlan \
                                  --external \
                                  --project service

openstack subnet create public-subnet --network public \
                                  --subnet-range 172.16.10.0/24 \
                                  --gateway 172.16.10.1 \
                                  --ip-version 4 

openstack coe cluster template create bmt \
--image fa27 \
--keypair mykey \
--external-network public \
--fixed-network bm-provision \
--fixed-subnet bm-provision-subnet \
--master-flavor bm.dev \
--flavor bm.dev \
--network-driver calico \
--coe kubernetes

openstack coe cluster create bm \
                        --cluster-template bmt \
                        --master-count 1 \
                        --node-count 1 \
                        --keypair mykey 

This provisions the bare metal with the OS over PXE via Ironic; however, it runs into this:

{
  "default-master": "Resource CREATE failed: NotFound: resources.kube_masters.resources[0].resources.kube_master_floating: External network fa5174bb-d01d-48ca-a564-bbff283b1141 is not reachable from subnet 3e68266a-e28b-4d2c-8cd6-4042ac5a38ac.  Therefore, cannot associate Port 4950a517-31c6-4ed2-b7f8-03c3286063b3 with a Floating IP.\nNeutron server returns request_ids: ['req-bc587047-bf37-49ba-a46c-4d299f85812b']",
  "default-worker": "Resource CREATE failed: NotFound: resources.kube_masters.resources[0].resources.kube_master_floating: External network fa5174bb-d01d-48ca-a564-bbff283b1141 is not reachable from subnet 3e68266a-e28b-4d2c-8cd6-4042ac5a38ac.  Therefore, cannot associate Port 4950a517-31c6-4ed2-b7f8-03c3286063b3 with a Floating IP.\nNeutron server returns request_ids: ['req-bc587047-bf37-49ba-a46c-4d299f85812b']"
}


For this experiment I don't need the external network; bm-provision has a gateway pointing to a NAT, so there is already access to the internet. I am trying to achieve this with one VLAN. Will this be possible?
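
For context, the topology Heat expects is a fixed subnet routed to an external network, which is what the "not reachable from subnet" error is complaining about. A minimal sketch of that wiring with the names above (bm-router is an arbitrary name introduced here):

# route the fixed subnet to the external network so floating IPs can be associated
openstack router create bm-router
openstack router set bm-router --external-gateway public
openstack router add subnet bm-router bm-provision-subnet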

If you need any additional information, please let me know.

Answer
I cannot post a separate answer since this question is closed, but here is what was essentially executed:

# add some more interfaces facing into the same VLAN
# note: macvlan was also attempted, but BOOTP requests did not go through for some reason
ip link add kolla_i type veth peer name kolla_b
for i in `seq 1 10`; do ip link add p${i}_i type veth peer name p${i}_b; done
ip link add eno2_br type bridge
ip link set eno2_br up
ip link set eno2 master eno2_br
ip link set kolla_b master eno2_br
ip link set kolla_b up
ip link set kolla_i up 
ip a add 10.0.0.4/16 dev kolla_i
for i in `seq 1 10`; do ip link set p${i}_b master eno2_br; done
for i in `seq 1 10`; do ip link set p${i}_b up; done
for i in `seq 1 10`; do ip link set p${i}_i up; done
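
As a sanity check (not part of the original steps), bridge membership and link state can be verified with:

# list the ports enslaved to the bridge and their state
bridge link show
ip -br link show master eno2_br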

In the globals.yml file (for kolla-ansible provisioning):

kolla_internal_vip_address: "10.0.0.4"
network_interface: "kolla_i"
neutron_external_interface: "p1_i,p2_i,p3_i,p4_i,p5_i,p6_i,p7_i,p8_i,p9_i,p10_i"
# this option does not exist so just add it into globals.yml
neutron_bridge_name: "br-ex1,br-ex2,br-ex3,br-ex4,br-ex5,br-ex6,br-ex7,br-ex8,br-ex9,br-ex10"
ironic_dnsmasq_interface: "p1_i"
ironic_dnsmasq_dhcp_range: "10.0.2.1,10.0.2.5"
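
With globals.yml in place, the deployment itself is the usual kolla-ansible run; a sketch assuming the standard multinode inventory path:

# prepare the hosts, then deploy the containers
kolla-ansible -i ./multinode bootstrap-servers
kolla-ansible -i ./multinode deploy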

This basically gives you ten physical networks to use within the same VLAN. Here is how the OVS agent config file looked after installation:

docker exec -it --user root neutron_openvswitch_agent bash -c "cat /etc/neutron/plugins/ml2/openvswitch_agent.ini" 
[agent]
tunnel_types = vxlan
l2_population = true
arp_responder = true

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
bridge_mappings = physnet1:br-ex1,physnet2:br-ex2,physnet3:br-ex3,physnet4:br-ex4,physnet5:br-ex5,physnet6:br-ex6,physnet7:br-ex7,physnet8:br-ex8,physnet9:br-ex9,physnet10:br-ex10
datapath_type = system
ovsdb_connection = tcp:127.0.0.1:6640
local_ip = 10.0.0.4
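
To confirm the bridge mappings actually landed in OVS, the bridges can be listed from inside the openvswitch container (container name as created by kolla-ansible):

docker exec openvswitch_vswitchd ovs-vsctl list-br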

The next step was to create the networks:

openstack network create \
--share \
--provider-network-type flat \
--provider-physical-network physnet1 \
--external \
provision

openstack subnet create \
--network provision \
--allocation-pool start=10.0.2.6,end=10.0.2.230 \
--gateway 10.0.0.10 \
--subnet-range 10.0.0.0/16 \
provision-subnet 


openstack network create \
--share \
--provider-network-type flat \
--provider-physical-network physnet2 \
--external \
public

openstack subnet create \
--network public \
--allocation-pool start=10.1.0.1,end=10.1.0.10 \
--allocation-pool start=10.1.0.12,end=10.1.0.250 \
--gateway 10.1.0.11 \
--subnet-range 10.1.0.0/16 \
public-subnet 
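
A quick check that both networks and their subnets came up as intended:

openstack network list
openstack subnet show provision-subnet
openstack subnet show public-subnet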

Create the router:

openstack router create provision-public
openstack router set provision-public --external-gateway public
openstack router add subnet provision-public provision-subnet
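
The router wiring can then be verified with:

openstack router show provision-public
openstack port list --router provision-public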

Register the bare-metal nodes:

openstack flavor create --ram 1048576 --disk 100 --vcpus 64 bm.dev
openstack flavor set --property baremetal=true bm.dev
openstack flavor set --property resources:CUSTOM_BAREMETAL_DEV=1 bm.dev
openstack flavor set --property resources:VCPU=0 bm.dev
openstack flavor set --property resources:MEMORY_MB=0 bm.dev
openstack flavor set --property resources:DISK_GB=0 bm.dev
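
The resources:* overrides matter: they zero out the standard VCPU/RAM/disk claims so the scheduler matches solely on the custom resource class. A quick check that the properties took effect:

openstack flavor show bm.dev -c properties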


openstack baremetal node create --name dev02 \
--driver ipmi \
--driver-info ipmi_username=<user> \
--driver-info ipmi_password=<pass> \
--driver-info ipmi_address=<ipmi_addr> \
--driver-info deploy_kernel=http://10.0.0.4:8089/ironic-agent.kernel \
--driver-info deploy_ramdisk=http://10.0.0.4:8089/ironic-agent.initramfs \
--driver-info cleaning_network=provision \
--driver-info provisioning_network=provision \
--deploy-interface=direct \
--network-interface=flat \
--driver-info force_persistent_boot_device=True \
--property capabilities=boot_mode:uefi \
--property cpu_arch=x86_64 \
--property local_gb=1000 \
--resource-class baremetal.dev

openstack baremetal port create <mac_addr> --node <id>
openstack baremetal node manage dev02
openstack baremetal node provide dev02

openstack baremetal node create --name dev03 \
--driver ipmi \
--driver-info ipmi_username=<user> \
--driver-info ipmi_password=<pass> \
--driver-info ipmi_address=<ipmi_addr> \
--driver-info deploy_kernel=http://10.0.0.4:8089/ironic-agent.kernel \
--driver-info deploy_ramdisk=http://10.0.0.4:8089/ironic-agent.initramfs \
--driver-info cleaning_network=provision \
--driver-info provisioning_network=provision \
--deploy-interface=direct \
--network-interface=flat \
--driver-info force_persistent_boot_device=True \
--property capabilities=boot_mode:uefi \
--property cpu_arch=x86_64 \
--property local_gb=1000 \
--resource-class baremetal.dev

openstack baremetal port create <mac_addr> --node <id>
openstack baremetal node manage dev03 
openstack baremetal node provide dev03

docker exec --user root  nova_conductor bash -c "nova-manage cell_v2 discover_hosts --by-service"
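
Before creating the cluster, it is worth confirming that the nodes are available in Ironic and visible to Nova (the last command needs the osc-placement plugin):

openstack baremetal node list
openstack hypervisor list
openstack resource provider list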

Create the template and deploy:

openstack coe cluster template create bmt \
--image bm \
--keypair mykey \
--external-network public \
--fixed-network provision \
--fixed-subnet provision-subnet \
--master-flavor bm.dev \
--flavor bm.dev \
--network-driver calico \
--coe kubernetes

openstack coe cluster create bm \
                        --cluster-template bmt \
                        --master-count 1 \
                        --node-count 1 \
                        --keypair mykey
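
Cluster progress can then be followed through Magnum and the underlying Heat stack; <stack_id> comes from the stack list output:

openstack coe cluster show bm
openstack stack list
openstack stack resource list <stack_id> -n 2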

Note: this is not for production, since all networks will be in the same broadcast domain, but it is suitable for experiments.

  • I don't see how this is off-topic. It is true that the question lacks a little bit of detail, but it does concern the configuration of an OpenStack cloud. My tentative answer would be "don't set up the Kubernetes servers manually, but install an OpenStack cloud with Magnum and Ironic, then register the baremetal servers in Ironic and deploy a Kubernetes cluster using Magnum". – berndbausch Dec 19 '20 at 11:00
  • It does not seem like OpenStack supports provisioning directly through the Ironic interface in the Ussuri and Victoria releases. However, they did move Ironic provisioning into the Nova API, so now a Fedora Atomic VM can also do bare metal. But what you suggest is what was performed here. – John Karasev Dec 21 '20 at 18:42
  • It should say "needs details or clarity" rather than "off topic" but that's a limitation of the software. I voted to reopen as the necessary details now seem to be present. – Michael Hampton Dec 21 '20 at 21:30
  • My comment was not really thought through, but this is why it was just a comment, not an answer. It seems Magnum wants to connect servers to a tenant network that is routed to an external network. While Ironic also features multitenancy (https://docs.openstack.org/ironic/latest/admin/multitenancy.html), I don't think this is the typical setup of an external network to which tenant networks are connected via routers. Right now I doubt that you can provision Container servers on baremetal. – berndbausch Dec 22 '20 at 02:41
