.. incl-config-controller-0-openstack-specific-aio-simplex-start:

.. important::

   **This step is required only if the StarlingX OpenStack application
   (stx-openstack) will be installed.**

#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
   support of installing the stx-openstack manifest and helm-charts later.

   ::

      system host-label-assign controller-0 openstack-control-plane=enabled
      system host-label-assign controller-0 openstack-compute-node=enabled
      system host-label-assign controller-0 openvswitch=enabled
      system host-label-assign controller-0 sriov=enabled
#. **For OpenStack only:** Configure the system setting for the vSwitch.

   .. only:: starlingx

      StarlingX has |OVS| (kernel-based) vSwitch configured as default:

      * Runs in a container; defined within the helm charts of the
        stx-openstack manifest.

      * Shares the core(s) assigned to the platform.

      If you require better performance, |OVS-DPDK| (|OVS| with the Data
      Plane Development Kit, which is supported only on bare metal hardware)
      should be used:

      * Runs directly on the host (it is not containerized). Requires that
        at least 1 core be assigned/dedicated to the vSwitch function.

      To deploy the default containerized |OVS|:

      ::

         system modify --vswitch_type none

      This does not run any vSwitch directly on the host; instead, it uses
      the containerized |OVS| defined in the helm charts of the
      stx-openstack manifest.

      To deploy |OVS-DPDK|, run the following command:

      .. parsed-literal::

         system modify --vswitch_type |ovs-dpdk|

      The default recommendation for an |AIO|-controller is to use a single
      core for the |OVS-DPDK| vSwitch:

      .. code-block:: bash

         # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
         system host-cpu-modify -f vswitch -p0 1 controller-0
      When using |OVS-DPDK|, configure 1x 1G huge page for vSwitch memory on
      each |NUMA| node where the vSwitch is running on this host, with the
      following command:

      .. code-block:: bash

         # assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
         system host-memory-modify -f vswitch -1G 1 controller-0 0

      |VMs| created in an |OVS-DPDK| environment must be configured to use
      huge pages to enable networking, and must use a flavor with the
      property: hw:mem_page_size=large
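
      For example, the property can be set on a flavor with the OpenStack
      client (the flavor name ``m1.small`` below is only illustrative):

      .. code-block:: bash

         # mark a flavor so VMs created from it are backed by huge pages
         openstack flavor set --property hw:mem_page_size=large m1.small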
      Configure the huge pages for |VMs| in an |OVS-DPDK| environment with
      the following command:

      ::

         system host-memory-modify -1G <1G hugepages number> <hostname or id> <processor>

      For example:

      ::

         system host-memory-modify controller-0 0 -1G 10
         system host-memory-modify controller-0 1 -1G 10
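
      As a sanity check on the example above (10x 1G pages on each of two
      |NUMA| nodes), a quick sketch of the total memory set aside for |VM|
      huge pages; the variable names are purely illustrative:

      .. code-block:: bash

         # illustrative arithmetic only: 10 pages x 1 GiB x 2 NUMA nodes
         PAGES_PER_NODE=10
         PAGE_SIZE_GIB=1
         NUMA_NODES=2
         echo "$(( PAGES_PER_NODE * PAGE_SIZE_GIB * NUMA_NODES )) GiB reserved for VM huge pages"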
      .. note::

         After controller-0 is unlocked, changing vswitch_type requires
         locking and unlocking all compute-labeled worker nodes (and/or
         AIO controllers) to apply the change.
#. **For OpenStack only:** Set up a disk partition for the nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   ::

      export NODE=controller-0

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${NODE} nova-local
      system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2

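   The ``grep -ow``/``awk`` pipeline above pulls the new partition's UUID
   out of the table printed by ``system host-disk-partition-add``. A
   self-contained sketch of that extraction against a mocked single table
   row (the UUID value here is made up):

   .. code-block:: bash

      # mocked fragment of the partition-add output table (UUID is invented)
      NOVA_PARTITION='| uuid | 0d9f8d4e-5a3b-4c21-9e7a-1f2b3c4d5e6f |'
      # same extraction pipeline as in the script above
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      echo ${NOVA_PARTITION_UUID}
      # prints: 0d9f8d4e-5a3b-4c21-9e7a-1f2b3c4d5e6f
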
.. incl-config-controller-0-openstack-specific-aio-simplex-end: