I have a fully working system running OpenStack Rocky that I built with OpenStack-Ansible on Ubuntu 18.04. I am using it as a proof of concept for my datacenter, which has a large Hyper-V investment.
I am using the ML2 Linuxbridge agent with my KVM hosts, and VLAN, VXLAN, and flat networking all work great.
While adding some Hyper-V nodes into the mix, most things work fine: I can provision new instances from images, Cinder volumes attach via iSCSI, and the instances boot successfully. The issue is that the instance's vNIC remains disconnected from the Hyper-V virtual switch, and no VLAN gets tagged when one should be. The port status also shows as "down" in the Horizon web UI and from the command line.
On my Neutron server containers, I have installed the "networking-hyperv" driver. My /etc/neutron/plugins/ml2/ml2_conf.ini file contains the following:
[ml2]
extension_drivers = port_security,qos
mechanism_drivers = linuxbridge,hyperv
tenant_network_types = vxlan,flat,vlan
type_drivers = flat,vlan,vxlan,local
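
For completeness, the agent side on the Hyper-V nodes is the stock Cloudbase install. The relevant part of its neutron_hyperv_agent.conf looks roughly like this (the host, transport URL, and vSwitch mapping below are placeholders, not my actual values):

[DEFAULT]
# Placeholder; this should match the hostname Nova registers the compute node under
host = HOST-H
transport_url = rabbit://user:password@controller:5672/

[AGENT]
# Maps a Neutron physical network name to a Hyper-V vSwitch (placeholder mapping)
physical_network_vswitch_mappings = *:external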
Here are my Neutron server container logs:
I think the key line is: Device fbb9010c-3326-4689-b46c-c3393e43f23b has no active binding in host None
2019-04-14 02:10:59.141 805 DEBUG neutron.db.db_base_plugin_common [req-c40d2349-2ade-4367-a017-181e6644c1fb 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] Allocated IP 172.31.60.62 (7559f5d9-bf1e-431a-b947-af8cc85bcf91/0c59baf1-3880-4b6c-8198-037923ec8cfb/fbb9010c-3326-4689-b46c-c3393e43f23b) _store_ip_allocation /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/db/db_base_plugin_common.py:121
2019-04-14 02:10:59.151 805 DEBUG neutron.db.db_base_plugin_common [req-c40d2349-2ade-4367-a017-181e6644c1fb 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] Allocated IP 2001:470:3ab1:ff3c::11 (7559f5d9-bf1e-431a-b947-af8cc85bcf91/d6704a54-bdd2-4ced-a1c6-ea425e87f772/fbb9010c-3326-4689-b46c-c3393e43f23b) _store_ip_allocation /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/db/db_base_plugin_common.py:121
2019-04-14 02:10:59.826 805 DEBUG neutron.db.provisioning_blocks [req-c40d2349-2ade-4367-a017-181e6644c1fb 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] Transition to ACTIVE for port object fbb9010c-3326-4689-b46c-c3393e43f23b will not be triggered until provisioned by entity DHCP. add_provisioning_component /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/db/provisioning_blocks.py:73
2019-04-14 02:10:59.964 805 DEBUG neutron.api.rpc.handlers.resources_rpc [req-c40d2349-2ade-4367-a017-181e6644c1fb 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - - -] Pushing event updated for resources: {'Port': ['ID=fbb9010c-3326-4689-b46c-c3393e43f23b,revision_number=2']} push /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/api/rpc/handlers/resources_rpc.py:241
2019-04-14 02:11:02.847 1024 INFO neutron.wsgi [req-f5b3d393-7550-477d-9dac-118149b09680 343681e3a838455796d46e4449625438 1868e5b19c6443c7a0ec010ece16447d - default default] 192.168.41.6,192.168.41.58 "PUT /v2.0/ports/fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200 len: 1169 time: 2.3884900
2019-04-14 02:11:03.366 1024 INFO neutron.notifiers.nova [-] Nova event response: {u'status': u'completed', u'tag': u'fbb9010c-3326-4689-b46c-c3393e43f23b', u'name': u'network-changed', u'server_uuid': u'4286e04d-84f7-4e1d-9ad6-d150debafca1', u'code': 200}
2019-04-14 02:11:03.438 788 INFO neutron.wsgi [req-e491113e-0eec-4688-bbb4-835fa2473c0f 343681e3a838455796d46e4449625438 1868e5b19c6443c7a0ec010ece16447d - default default] 192.168.41.6,192.168.41.58 "GET /v2.0/floatingips?fixed_ip_address=2001%3A470%3A3ab1%3Aff3c%3A%3A11&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200 len: 212 time: 0.0544460
2019-04-14 02:11:04.616 1024 INFO neutron.wsgi [req-c3b90556-cd97-40d8-90d5-ac7fbf19247b 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] 192.168.41.151,192.168.41.58 "GET /v2.0/floatingips?tenant_id=fde3db651e3a424a95a34bef449949a3&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200 len: 212 time: 0.0501890
2019-04-14 02:11:21.842 1024 INFO neutron.wsgi [req-b7271e74-bffe-40fc-b7c5-9f50c2496626 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] 192.168.41.151,192.168.41.58 "GET /v2.0/floatingips?tenant_id=fde3db651e3a424a95a34bef449949a3&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200 len: 212 time: 0.0771079
2019-04-14 02:11:45.530 790 DEBUG neutron.plugins.ml2.rpc [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Device fbb9010c-3326-4689-b46c-c3393e43f23b details requested by agent hyperv_HOST-H with host None get_device_details /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:79
2019-04-14 02:11:45.731 790 DEBUG neutron.plugins.ml2.db [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] For port fbb9010c-3326-4689-b46c-c3393e43f23b, host HOST-H, got binding levels [<neutron.plugins.ml2.models.PortBindingLevel[object at 7f505617b910] {port_id=u'fbb9010c-3326-4689-b46c-c3393e43f23b', host=u'HOST-H', level=0, driver=u'hyperv', segment_id=u'598c754b-1b2a-4865-9211-ef3a1e5ed95c'}>] get_binding_levels /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/plugins/ml2/db.py:77
2019-04-14 02:11:45.753 790 DEBUG neutron.plugins.ml2.rpc [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Device fbb9010c-3326-4689-b46c-c3393e43f23b has no active binding in host None _get_device_details /openstack/venvs/neutron-18.1.4/lib/python2.7/site-packages/neutron/plugins/ml2/rpc.py:133
2019-04-14 02:11:49.538 1024 INFO neutron.wsgi [req-4213915d-3fee-40ff-8921-439c9ada3a30 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] 192.168.41.151,192.168.41.58 "GET /v2.0/floatingips?tenant_id=fde3db651e3a424a95a34bef449949a3&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200 len: 212 time: 0.0534980
2019-04-14 02:12:07.359 788 INFO neutron.wsgi [req-6d6e94e6-9100-4923-894a-341a9b4be439 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] 192.168.41.151,192.168.41.58 "GET /v2.0/floatingips?tenant_id=fde3db651e3a424a95a34bef449949a3&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200 len: 212 time: 0.0527611
2019-04-14 02:12:29.106 789 INFO neutron.wsgi [req-507d6cf9-a93e-4988-948c-e409dc226de7 202a012dda274d0a8e974e22690e340f fde3db651e3a424a95a34bef449949a3 - default default] 192.168.41.151,192.168.41.58 "GET /v2.0/floatingips?tenant_id=fde3db651e3a424a95a34bef449949a3&port_id=fbb9010c-3326-4689-b46c-c3393e43f23b HTTP/1.1" status: 200 len: 212 time: 0.0640750
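
To double-check the binding from the API side, this is the kind of query I can run (command sketch; the port ID is the one from the logs above):

openstack port show fbb9010c-3326-4689-b46c-c3393e43f23b -c binding_host_id -c binding_vif_type -c status

The binding levels in the log show the port bound on HOST-H by the hyperv driver, yet the agent's get_device_details request arrives with "host None", so ML2 finds no active binding for that host.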
Here is my Neutron Hyper-V agent log:
I believe the key line here is: No port fbb9010c-3326-4689-b46c-c3393e43f23b defined on agent
2019-04-13 22:11:02.830 4220 DEBUG networking_hyperv.neutron.agent.layer2 [req-f5b3d393-7550-477d-9dac-118149b09680 343681e3a838455796d46e4449625438 1868e5b19c6443c7a0ec010ece16447d - - -] port_update received: fbb9010c-3326-4689-b46c-c3393e43f23b port_update C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:436
2019-04-13 22:11:02.846 4220 DEBUG networking_hyperv.neutron.agent.layer2 [req-f5b3d393-7550-477d-9dac-118149b09680 343681e3a838455796d46e4449625438 1868e5b19c6443c7a0ec010ece16447d - - -] No port fbb9010c-3326-4689-b46c-c3393e43f23b defined on agent. port_update C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:449
2019-04-13 22:11:44.890 4220 INFO networking_hyperv.neutron.agent.layer2 [-] Hyper-V VM vNIC added: fbb9010c-3326-4689-b46c-c3393e43f23b
2019-04-13 22:11:45.530 4220 DEBUG networking_hyperv.neutron.agent.layer2 [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Agent loop has new devices! _work C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:427
2019-04-13 22:11:45.765 4220 INFO networking_hyperv.neutron.agent.layer2 [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Adding port fbb9010c-3326-4689-b46c-c3393e43f23b
2019-04-13 22:11:45.765 4220 DEBUG networking_hyperv.neutron.agent.layer2 [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Missing port_id from device details: fbb9010c-3326-4689-b46c-c3393e43f23b. Details: {'device': 'fbb9010c-3326-4689-b46c-c3393e43f23b', 'no_active_binding': True} _treat_devices_added C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:374
2019-04-13 22:11:45.765 4220 DEBUG networking_hyperv.neutron.agent.layer2 [req-feb90349-b6da-4c5f-9d6b-027bc4b6e36b - - - - -] Remove the port from added ports set, so it doesn't get reprocessed. _treat_devices_added C:\Program Files\Cloudbase Solutions\OpenStack\Nova\Python\lib\site-packages\networking_hyperv\neutron\agent\layer2.py:376
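
Since the server-side RPC shows the agent calling in with host None, I also want to verify how the Hyper-V agent registered itself (command sketch; HOST-H is the hostname from the logs):

openstack network agent list --host HOST-H

If the agent registered under a different host value than the one in the port's binding, that mismatch would explain why the agent gets back only {'no_active_binding': True} instead of full port details.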
If I happen to "catch" the VM in the Hyper-V console while it is being provisioned, I can attach the vNIC to the Hyper-V switch myself, check the VLAN box, and tag the VLAN. The instance then gets an IP from the DHCP agent, pulls its hostname and SSH key from the metadata agent, and everything else works, although OpenStack still reports the port status as "down".
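
For reference, that manual workaround can also be done from PowerShell on the Hyper-V host (sketch; the VM name, switch name, and VLAN ID are placeholders for whatever Nova created):

# Reconnect the instance's vNIC to the vSwitch (placeholder names)
Connect-VMNetworkAdapter -VMName "instance-00000004" -SwitchName "external"
# Tag the access VLAN the Neutron port expects (placeholder VLAN ID)
Set-VMNetworkAdapterVlan -VMName "instance-00000004" -Access -VlanId 60

Obviously I want the agent to wire this up itself rather than doing it by hand.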
There is not a lot of information out there on the specific nuances of Nova with Hyper-V. If I need to look into the Open vSwitch agent instead, I am not opposed to going that direction. I also realize I may need to reach out to Cloudbase directly for better support.