Merge "Merge Virtual and Bare Metal install docs"

Zuul 2024-01-17 16:12:46 +00:00 committed by Gerrit Code Review
commit b517a5fdfb
27 changed files with 5319 additions and 2503 deletions


@ -1,6 +1,3 @@
.. Greg updates required for -High Security Vulnerability Document Updates
.. _aio_simplex_install_kubernetes_r7:
=================================================
@ -32,33 +29,230 @@ Overview
Minimum hardware requirements
-----------------------------
.. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
:start-after: begin-min-hw-reqs-sx
:end-before: end-min-hw-reqs-sx
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
:start-after: begin-min-hw-reqs-sx
:end-before: end-min-hw-reqs-sx
.. group-tab:: Virtual
The following sections describe system requirements and host setup for
a workstation hosting virtual machine(s) where StarlingX will be
deployed; i.e., a |VM| for each StarlingX node (controller,
AIO-controller, worker or storage node).
.. rubric:: **Hardware requirements**
The host system should have at least:
* **Processor:** x86_64 is the only supported architecture; hardware
virtualization extensions must be enabled in the BIOS
* **Cores:** 8
* **Memory:** 32GB RAM
* **Hard Disk:** 500GB HDD
* **Network:** One network adapter with active Internet connection
.. rubric:: **Software requirements**
The host system should have at least:
* A workstation computer with Ubuntu 16.04 LTS 64-bit
All other required packages will be installed by scripts in the
StarlingX tools repository.
.. rubric:: **Host setup**
Set up the host with the following steps:
#. Update OS:
::
apt-get update
#. Clone the StarlingX tools repository:
::
apt-get install -y git
cd $HOME
git clone https://opendev.org/starlingx/virtual-deployment.git
#. Install required packages:
::
cd $HOME/virtual-deployment/libvirt
bash install_packages.sh
apt install -y apparmor-profiles
apt-get install -y ufw
ufw disable
ufw status
.. note::
On Ubuntu 16.04, if the apparmor-profiles package was installed as
shown in the example above, you must reboot the server to complete
the installation of the AppArmor profiles.
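If you want to confirm that AppArmor is active after the reboot, the
standard AppArmor status tool can be used; this assumes the ``apparmor``
utilities are available on your workstation:

::

   sudo aa-status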
.. only:: partner
.. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
:start-after: begin-min-hw-reqs-sx
:end-before: end-min-hw-reqs-sx
--------------------------
Installation Prerequisites
--------------------------
.. include:: /shared/_includes/installation-prereqs.rest
:start-after: begin-install-prereqs
:end-before: end-install-prereqs
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/installation-prereqs.rest
:start-after: begin-install-prereqs
:end-before: end-install-prereqs
.. group-tab:: Virtual
Several pre-requisites must be completed prior to starting the |prod|
installation.
Before attempting to install |prod|, ensure that you have the
|prod| host installer ISO image file.
Get the latest |prod| ISO from the `StarlingX mirror
<https://mirror.starlingx.cengn.ca/mirror/starlingx/release/latest_release/debian/monolithic/outputs/iso/>`__.
Alternately, you can get an older release ISO from `here <https://mirror.starlingx.cengn.ca/mirror/starlingx/release/>`__.
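For example, a minimal sketch of downloading the ISO to the file name used
later in this guide; the exact file name in the mirror's ISO directory may
differ, so check the directory listing first:

::

   # Replace <iso-filename> with the actual file name from the mirror listing
   wget https://mirror.starlingx.cengn.ca/mirror/starlingx/release/latest_release/debian/monolithic/outputs/iso/<iso-filename> -O bootimage.iso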
.. only:: partner
.. include:: /shared/_includes/installation-prereqs.rest
:start-after: begin-install-prereqs
:end-before: end-install-prereqs
--------------------------------
Prepare Servers for Installation
--------------------------------
.. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
:start-after: start-prepare-servers-common
:end-before: end-prepare-servers-common
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
:start-after: start-prepare-servers-common
:end-before: end-prepare-servers-common
.. group-tab:: Virtual
.. note::
The following commands for host setup, virtual environment setup,
and host power-on use KVM / virsh as the |VM| management
technology. For an alternative virtualization environment, see:
:ref:`Install StarlingX in VirtualBox <install_virtualbox>`.
#. Prepare virtual environment.
Set up the virtual platform networks for virtual deployment:
::
bash setup_network.sh
#. Prepare virtual servers.
Create the XML definitions for the virtual servers required by this
configuration option. This will create the XML virtual server
definition for:
* simplex-controller-0
The following command will start/virtually power on:
* The 'simplex-controller-0' virtual server
* The X-based graphical virt-manager application
::
bash setup_configuration.sh -c simplex -i ./bootimage.iso
If no X-server is present, errors will occur and the X-based GUI
for the virt-manager application will not start. The virt-manager
GUI is not strictly required; you can safely ignore these errors
and continue.
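As a quick sanity check after running the script, you can confirm from
the host that the virtual server was defined and powered on; the command
below is standard libvirt tooling and the domain name matches the one
created above:

::

   virsh list --all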
.. only:: partner
.. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
:start-after: start-prepare-servers-common
:end-before: end-prepare-servers-common
--------------------------------
Install Software on Controller-0
--------------------------------
.. include:: /shared/_includes/inc-install-software-on-controller.rest
:start-after: incl-install-software-controller-0-aio-start
:end-before: incl-install-software-controller-0-aio-end
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/inc-install-software-on-controller.rest
:start-after: incl-install-software-controller-0-aio-start
:end-before: incl-install-software-controller-0-aio-end
.. group-tab:: Virtual
In the last step of :ref:`aio_simplex_environ`, the controller-0
virtual server 'simplex-controller-0' was started by the
:command:`setup_configuration.sh` command.
On the host, attach to the console of virtual controller-0 and select
the appropriate installer menu options to start the non-interactive
install of StarlingX software on controller-0.
.. note::
When entering the console, it is very easy to miss the first
installer menu selection. Use ESC to navigate to previous menus to
ensure you are at the first installer menu.
::
virsh console simplex-controller-0
Make the following menu selections in the installer:
#. First menu: Select 'All-in-one Controller Configuration'
#. Second menu: Select 'Serial Console'
Wait for the non-interactive install of software to complete and for
the server to reboot. This can take 5-10 minutes, depending on the
performance of the host machine.
.. only:: partner
.. include:: /shared/_includes/inc-install-software-on-controller.rest
:start-after: incl-install-software-controller-0-aio-start
:end-before: incl-install-software-controller-0-aio-end
--------------------------------
Bootstrap system on controller-0
@ -79,22 +273,25 @@ Bootstrap system on controller-0
#. Verify and/or configure IP connectivity.
External connectivity is required to run the Ansible bootstrap playbook. The
StarlingX boot image will |DHCP| out all interfaces so the server may have
obtained an IP address and have external IP connectivity if a |DHCP| server
is present in your environment. Verify this using the :command:`ip addr` and
:command:`ping 8.8.8.8` commands.
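For example, a minimal connectivity check looks like the following; the
target 8.8.8.8 is only an illustration and any reachable external address
can be used:

::

   ip addr
   ping -c 4 8.8.8.8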
.. only:: starlingx
Otherwise, manually configure an IP address and default IP route. Use the
PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your
deployment environment.
.. tabs::
::
.. group-tab:: Bare Metal
sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
sudo ip link set up dev <PORT>
sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
ping 8.8.8.8
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-aio-sx-install-verify-ip-connectivity
:end-before: end-aio-sx-install-verify-ip-connectivity
.. group-tab:: Virtual
Not applicable.
.. only:: partner
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-aio-sx-install-verify-ip-connectivity
:end-before: end-aio-sx-install-verify-ip-connectivity
#. Specify user configuration overrides for the Ansible bootstrap playbook.
@ -115,8 +312,9 @@ Bootstrap system on controller-0
configuration override files for hosts. For example:
``$HOME/<hostname>.yml``.
.. only:: starlingx
.. include:: /shared/_includes/ansible_install_time_only.txt
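For reference, the following is a minimal sketch of what a
``$HOME/localhost.yml`` (or ``$HOME/<hostname>.yml``) override file for an
AIO-SX deployment might contain. The parameter names follow the standard
StarlingX bootstrap overrides and all values are placeholders; confirm the
supported parameters against your release:

.. code-block:: yaml

   system_mode: simplex
   dns_servers:
     - 8.8.8.8
     - 8.8.4.4
   external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
   external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
   external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>
   admin_username: admin
   admin_password: <admin-password>
   ansible_become_pass: <sysadmin-password>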
Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods:
@ -152,7 +350,7 @@ Bootstrap system on controller-0
.. only:: starlingx
In either of the above options, the bootstrap playbook's default
values will pull all container images required for the |prod-p| from
Docker Hub.
@ -220,9 +418,9 @@ Bootstrap system on controller-0
- 1.2.3.4
Refer to :ref:`Ansible Bootstrap Configurations
<ansible_bootstrap_configs_r7>` for information on additional Ansible
bootstrap configurations for advanced Ansible bootstrap scenarios.
#. Run the Ansible bootstrap playbook:
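As a sketch, the playbook is typically invoked from controller-0 as shown
below; the playbook path follows the standard StarlingX layout, so confirm
it against your release's instructions:

::

   ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml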
@ -248,27 +446,71 @@ The newly installed controller needs to be configured.
source /etc/platform/openrc
#. Configure the |OAM| interface of controller-0 and specify the attached
network as "oam". The following example configures the OAM interface on a
physical untagged ethernet port, use |OAM| port name that is applicable to
your deployment environment, for example eth0:
network as "oam".
.. only:: starlingx
::
.. tabs::
~(keystone_admin)$ OAM_IF=<OAM-PORT>
~(keystone_admin)$ system host-if-modify controller-0 $OAM_IF -c platform
~(keystone_admin)$ system interface-network-assign controller-0 $OAM_IF oam
.. group-tab:: Bare Metal
To configure a vlan or aggregated ethernet interface, see :ref:`Node
Interfaces <node-interfaces-index>`.
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-oam-interface-sx
:end-before: end-config-controller-0-oam-interface-sx
.. group-tab:: Virtual
::
OAM_IF=enp7s1
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
.. only:: partner
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-oam-interface-sx
:end-before: end-config-controller-0-oam-interface-sx
#. Configure |NTP| servers for network time synchronization:
::
.. only:: starlingx
~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
.. tabs::
To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
<ptp-server-config-index>`.
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-ntp-interface-sx
:end-before: end-config-controller-0-ntp-interface-sx
.. code-block:: none
~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
To configure |PTP| instead of |NTP|, see :ref:`PTP Server
Configuration <ptp-server-config-index>`.
.. end-config-controller-0-ntp-interface-sx
.. group-tab:: Virtual
.. note::
In a virtual environment, this can sometimes cause Ceph clock
skew alarms. Also, the virtual instance's clock is synchronized
with the host clock, so it is not absolutely required to
configure |NTP| in this step.
.. code-block:: none
~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
.. only:: partner
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-ntp-interface-sx
:end-before: end-config-controller-0-ntp-interface-sx
.. only:: openstack
@ -310,15 +552,32 @@ The newly installed controller needs to be configured.
:end-before: ref1-end
#. **For OpenStack only:** Due to the additional OpenStack services running
on the |AIO| controller platform cores, additional platform cores may be
required; a minimum of 4 platform cores is required, and 6 platform cores
are recommended.
Increase the number of platform cores with the following commands:
.. only:: starlingx
.. code-block:: bash
.. tabs::
# Assign 6 cores on processor/numa-node 0 on controller-0 to platform
~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-0
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-add-cores-sx
:end-before: end-config-controller-0-OS-add-cores-sx
.. group-tab:: Virtual
The |VMs| being used for hosts only have 4 cores; 2 for platform
and 2 for |VMs|. There are no additional cores available for
platform in this scenario.
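In either environment, you can review how cores are currently assigned
before and after any change; the command below is the standard platform
CLI and its output varies by system:

::

   ~(keystone_admin)$ system host-cpu-list controller-0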
.. only:: partner
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-add-cores-sx
:end-before: end-config-controller-0-OS-add-cores-sx
#. Due to the additional OpenStack services' containers running on the
controller host, the size of the Docker filesystem needs to be
@ -354,124 +613,87 @@ The newly installed controller needs to be configured.
.. only:: starlingx
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
.. tabs::
* Runs in a container; defined within the helm charts of |prefix|-openstack
manifest.
* Shares the core(s) assigned to the platform.
.. group-tab:: Bare Metal
If you require better performance, |OVS-DPDK| (|OVS| with the Data
Plane Development Kit, which is supported only on bare metal hardware)
should be used:
StarlingX has |OVS| (kernel-based) vSwitch configured as default:
* Runs in a container; defined within the helm charts of
|prefix|-openstack manifest.
* Runs directly on the host (it is not containerized).
Requires that at least 1 core be assigned/dedicated to the vSwitch
function.
* Shares the core(s) assigned to the platform.
If you require better performance, |OVS-DPDK| (|OVS| with the
Data Plane Development Kit, which is supported only on bare
metal hardware) should be used:
* Runs directly on the host (it is not containerized). Requires
that at least 1 core be assigned/dedicated to the vSwitch
function.
To deploy the default containerized |OVS|:
::
~(keystone_admin)$ system modify --vswitch_type none
This does not run any vSwitch directly on the host; instead, it
uses the containerized |OVS| defined in the helm charts of the
|prefix|-openstack manifest.
To deploy the default containerized |OVS|:
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-vswitch-sx
:end-before: end-config-controller-0-OS-vswitch-sx
::
.. group-tab:: Virtual
~(keystone_admin)$ system modify --vswitch_type none
The default vSwitch is the containerized |OVS| that is packaged
with the ``stx-openstack`` manifest/helm-charts. |prod| provides
the option to use OVS-DPDK on the host; however, OVS-DPDK is not
supported in the virtual environment, so only |OVS| is supported.
Therefore, simply use the default |OVS| vSwitch here.
This does not run any vSwitch directly on the host; instead, it uses
the containerized |OVS| defined in the helm charts of the
|prefix|-openstack manifest.
.. only:: partner
To deploy |OVS-DPDK|, run the following command:
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-vswitch-sx
:end-before: end-config-controller-0-OS-vswitch-sx
.. parsed-literal::
~(keystone_admin)$ system modify --vswitch_type |ovs-dpdk|
The default recommendation for an |AIO|-controller is to use a single
core for the |OVS-DPDK| vSwitch.
.. code-block:: bash
# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-0
When using |OVS-DPDK|, configure huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G
huge page (-1G 1) for vSwitch memory on each |NUMA| node.
However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.
.. code-block::
# Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 0
# Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 1
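If your application |VMs| require 2M huge pages, as described above, the
equivalent vSwitch memory assignment is sketched below; adjust the page
count to your needs:

.. code-block::

   # Assign 500x 2M huge pages on processor/numa-node 0 on controller-0 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -2M 500 controller-0 0
   # Assign 500x 2M huge pages on processor/numa-node 1 on controller-0 to vswitch
   ~(keystone_admin)$ system host-memory-modify -f vswitch -2M 500 controller-0 1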
.. important::
|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
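For example, once the OpenStack application is running, a flavor can be
given this property with the standard OpenStack client; the flavor name
below is a placeholder and the command is run against the containerized
OpenStack cloud, not the platform CLI:

::

   openstack flavor set <flavor-name> --property hw:mem_page_size=large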
To configure huge pages for |VMs| in an |OVS-DPDK| environment on
this host, use the following commands. This example assumes that a
1G huge page size is being used on this host:
.. code-block:: bash
# assign 10x 1G huge pages on processor/numa-node 0 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 0
# assign 10x 1G huge pages on processor/numa-node 1 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 1
.. note::
After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change.
#. **For OpenStack only:** Add an instances filesystem or set up a disk
based nova-local volume group, which is needed for |prefix|-openstack
nova ephemeral disks.
.. note::
.. only:: starlingx
Both cannot exist at the same time.
.. tabs::
Add an 'instances' filesystem
.. group-tab:: Bare Metal
.. code-block:: bash
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-add-fs-sx
:end-before: end-config-controller-0-OS-add-fs-sx
export NODE=controller-0
.. group-tab:: Virtual
# Create instances filesystem
system host-fs-add ${NODE} instances=<size>
Set up an "instances" filesystem, which is needed for
stx-openstack nova ephemeral disks.
OR add a 'nova-local' volume group
.. code-block:: bash
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-0
~(keystone_admin)$ system host-fs-add ${NODE} instances=34
export NODE=controller-0
# Create nova-local local volume group
system host-lvg-add ${NODE} nova-local
.. only:: partner
# Get UUID of an unused DISK to be added to the nova-local volume
# group. CEPH OSD Disks can NOT be used
# List hosts disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
# Add the unused disk to the nova-local volume group
system host-pv-add ${NODE} nova-local <DISK_UUID>
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-add-fs-sx
:end-before: end-config-controller-0-OS-add-fs-sx
#. **For OpenStack only:** Configure data interfaces for controller-0.
Data class interfaces are vSwitch interfaces used by vSwitch to provide
|VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.
.. important::
@ -481,35 +703,53 @@ The newly installed controller needs to be configured.
* Configure the data interfaces for controller-0.
.. code-block:: bash
.. only:: starlingx
~(keystone_admin)$ NODE=controller-0
.. tabs::
# List inventoried hosts ports and identify ports to be used as data interfaces,
# based on displayed linux port name, pci address and device type.
~(keystone_admin)$ system host-port-list ${NODE}
.. group-tab:: Bare Metal
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
~(keystone_admin)$ system host-if-list -a ${NODE}
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-data-interface-sx
:end-before: end-config-controller-0-OS-data-interface-sx
# Modify configuration for these interfaces
# Configuring them as data class interfaces, MTU of 1500 and named data#
~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
.. group-tab:: Virtual
# Create Data Networks that vswitch 'data' interfaces will be connected to
~(keystone_admin)$ DATANET0='datanet0'
~(keystone_admin)$ DATANET1='datanet1'
~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan
.. code-block:: bash
# Assign Data Networks to Data Interfaces
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
~(keystone_admin)$ DATA0IF=eth1000
~(keystone_admin)$ DATA1IF=eth1001
~(keystone_admin)$ export NODE=controller-0
~(keystone_admin)$ PHYSNET0='physnet0'
~(keystone_admin)$ PHYSNET1='physnet1'
~(keystone_admin)$ SPL=/tmp/tmp-system-port-list
~(keystone_admin)$ SPIL=/tmp/tmp-system-host-if-list
~(keystone_admin)$ system host-port-list ${NODE} --nowrap > ${SPL}
~(keystone_admin)$ system host-if-list -a ${NODE} --nowrap > ${SPIL}
~(keystone_admin)$ DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
~(keystone_admin)$ DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
~(keystone_admin)$ DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
~(keystone_admin)$ DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
~(keystone_admin)$ DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
~(keystone_admin)$ DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
~(keystone_admin)$ DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
~(keystone_admin)$ DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
~(keystone_admin)$ system datanetwork-add ${PHYSNET0} vlan
~(keystone_admin)$ system datanetwork-add ${PHYSNET1} vlan
~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
.. only:: partner
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-data-interface-sx
:end-before: end-config-controller-0-OS-data-interface-sx
*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************
@ -526,58 +766,64 @@ Optionally Configure PCI-SRIOV Interfaces
have the same Data Networks assigned to them as vswitch data interfaces.
#. Configure the |PCI|-SRIOV interfaces for controller-0.
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-0
# List inventoried hosts ports and identify ports to be used as pci-sriov interfaces,
# based on displayed linux port name, pci address and device type.
~(keystone_admin)$ system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
~(keystone_admin)$ system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as pci-sriov class interfaces, MTU of 1500 and named sriov#
~(keystone_admin)$ system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
~(keystone_admin)$ system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>
# If not already created, create Data Networks that the 'pci-sriov' interfaces will
# be connected to
~(keystone_admin)$ DATANET0='datanet0'
~(keystone_admin)$ DATANET1='datanet1'
~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
#. **For Kubernetes Only:** To enable using |SRIOV| network attachments for
the above interfaces in Kubernetes hosted application containers:
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-k8s-sriov-sx
:end-before: end-config-controller-0-OS-k8s-sriov-sx
.. group-tab:: Virtual
Configure the Kubernetes |SRIOV| device plugin.
.. code-block:: none
~(keystone_admin)$ system host-label-assign controller-0 sriovdp=enabled
* If you are planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
.. code-block:: bash
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application controller-0 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application controller-0 1 -1G 10
.. only:: partner
.. include:: /shared/_includes/aio_simplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-k8s-sriov-sx
:end-before: end-config-controller-0-OS-k8s-sriov-sx
***************************************************************
@ -687,7 +933,7 @@ machine.
::
~(keystone_admin)$ system host-disk-wipe -s --confirm controller-0 /dev/sdb
``values.yaml`` for rook-ceph-apps.


@ -1,3 +1,5 @@
.. _aio_duplex_environ:
============================
Prepare Host and Environment
============================
@ -13,7 +15,7 @@ for a StarlingX |this-ver| virtual All-in-one Duplex deployment configuration.
Physical host requirements and setup
------------------------------------
.. include:: physical_host_req.txt
.. include:: /shared/_includes/physical_host_req.txt
---------------------------------------
Prepare virtual environment and servers


@ -1,3 +1,5 @@
.. _aio_simplex_environ:
============================
Prepare Host and Environment
============================
@ -13,7 +15,7 @@ for a StarlingX |this-ver| virtual All-in-one Simplex deployment configuration.
Physical host requirements and setup
------------------------------------
.. include:: physical_host_req.txt
.. include:: /shared/_includes/physical_host_req.txt
---------------------------------------
Prepare virtual environment and servers


@ -19,7 +19,7 @@ This section describes how to prepare the physical host and virtual
environment for an automated StarlingX |this-ver| virtual deployment in
VirtualBox.
.. include:: automated_setup.txt
.. include:: /shared/_includes/automated_setup.txt
---------------------------
Installation Configurations


@ -14,7 +14,7 @@ configuration.
Physical host requirements and setup
------------------------------------
.. include:: physical_host_req.txt
.. include:: /shared/_includes/physical_host_req.txt
---------------------------------------
Prepare virtual environment and servers


@ -14,7 +14,7 @@ configuration.
Physical host requirements and setup
------------------------------------
.. include:: physical_host_req.txt
.. include:: /shared/_includes/physical_host_req.txt
---------------------------------------
Prepare virtual environment and servers


@ -1,365 +1,370 @@
.. _install_virtualbox:
===============================
Install StarlingX in VirtualBox
===============================
This guide describes how to run StarlingX in a set of VirtualBox :abbr:`VMs
(Virtual Machines)`, which is an alternative to the default StarlingX
instructions using libvirt.
.. contents::
:local:
:depth: 1
-------------
Prerequisites
-------------
* A Windows or Linux computer for running VirtualBox.
* VirtualBox is installed on your computer. The latest verified version is
5.2.22. Download from: http://www.virtualbox.org/wiki/Downloads
* VirtualBox Extension Pack is installed.
To boot worker nodes via the controller, you must install the
VirtualBox Extension Pack to add support for PXE boot of Intel cards. Download
the extension pack from: https://www.virtualbox.org/wiki/Downloads
.. note::
A set of scripts for deploying VirtualBox VMs can be found in the
`STX tools repository
<https://opendev.org/starlingx/tools/src/branch/master/deployment/virtualbox>`_,
however, the scripts may not be updated to the latest StarlingX
recommendations.
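For example, a minimal sketch of fetching those scripts; the repository
URL is the one referenced above and the path may change over time:

::

   git clone https://opendev.org/starlingx/tools.git
   cd tools/deployment/virtualbox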
---------------------------------------------------
Create VMs for controller, worker and storage hosts
---------------------------------------------------
For each StarlingX host, configure a VirtualBox VM with the following settings.
.. note::
The different settings for controller, worker, and storage nodes are
embedded in the particular sections below.
***************************
OS type and memory settings
***************************
* Type: Linux
* Version: Other Linux (64-bit)
* Memory size:
* Controller node: 16384 MB
* Worker node: 8192 MB
* Storage node: 4096 MB
* All-in-one node: 20480 MB
****************
Disk(s) settings
****************
Use the default disk controller and default disk format (for example IDE/vdi)
for VirtualBox VMs.
* Minimum disk size requirements:
* Controller nodes (minimum of 2 disks required):
* Disk 1: 240GB disk
* Disk 2: 10GB disk (Note: Use 30GB if you are planning to work on the
analytics.)
* Worker nodes: 80GB root disk (Note: Use 100GB if you are installing
StarlingX AIO node.)
* When the node is configured for local storage, this will provide ~12GB of
local storage space for disk allocation to VM instances.
* Additional disks can be added to the node to extend the local storage
but are not required.
* Storage nodes (minimum of 2 disks required):
* 80GB disk for rootfs.
* 10GB disk (or larger) for each OSD. The size depends on how many VMs you
plan to run.
* In the Storage tree, select the empty CD-ROM drive, click ``+`` to choose a
CD/DVD ISO, and browse to the ISO location. Use this ISO only for the first
controller node. The second controller node and worker nodes will network
boot from the first controller node.
***************
System settings
***************
* System->Motherboard:
* Boot Order: Enable the Network option. Order should be: Floppy, CD/DVD,
Hard Disk, Network.
* System->Processors:
* Controller node: 4 CPU
* Worker node: 3 CPU
.. note::
This will allow only a single instance to be launched. More processors
are required to launch more instances. If more than 4 CPUs are
allocated, you must limit vswitch to a single CPU before unlocking your
worker node, otherwise your worker node will **reboot in a loop**
(vswitch will fail to start, in-test will detect that a critical service
failed to start and reboot the node). Use the following command to limit
vswitch:
::
system host-cpu-modify worker-0 -f vswitch -p0 1
* Storage node: 1 CPU
****************
Network settings
****************
The OAM network has the following options:
* Host Only Network - **Strongly Recommended.** This option
requires the router VM to forward packets from the controllers to the external
network. Follow the instructions at :doc:`Install VM as a router <config_virtualbox_netwk>`
to set it up. Create one network adapter for external OAM. The IP addresses
in the example below match the default configuration.
* VirtualBox: File -> Preferences -> Network -> Host-only Networks. Click
``+`` to add Ethernet Adapter.
* Windows: This creates a ``VirtualBox Host-only Adapter`` and prompts
with the Admin dialog box. Click ``Accept`` to create an interface.
* Linux: This creates a ``vboxnet<x>`` per interface.
* External OAM: IPv4 Address: 10.10.10.254, IPv4 Network Mask: 255.255.255.0,
DHCP Server: unchecked.
* NAT Network - This option provides external network access to the controller
VMs. Follow the instructions at :doc:`Add NAT Network in VirtualBox <config_virtualbox_netwk>`.
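As an alternative to the GUI steps for the Host-Only network above, the
same interface can be created from the command line on Linux; the
``vboxnet0`` name is an example and the reported name may differ:

::

   # Create a host-only interface (VirtualBox reports the name, e.g. vboxnet0)
   VBoxManage hostonlyif create
   # Assign the external OAM gateway address used in this guide
   VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.10.10.254 --netmask 255.255.255.0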
Adapter settings for the different node types are as follows:
* Controller nodes:
* Adapter 1 setting depends on your choice for the OAM network above. It can
be either of the following:
* Adapter 1: Host-Only Adapter (VirtualBox Host-Only Ethernet Adapter 1);
Advanced: Intel PRO/1000MT Desktop, Promiscuous Mode: Deny
* Adapter 1: NAT Network; Name: NatNetwork
* Adapter 2: Internal Network, Name: intnet-management; Intel PRO/1000MT
Desktop, Advanced: Promiscuous Mode: Allow All
* Worker nodes:
* Adapter 1:
Internal Network, Name: intnet-unused; Advanced: Intel
PRO/1000MT Desktop, Promiscuous Mode: Allow All
* Adapter 2: Internal Network, Name: intnet-management; Advanced: Intel
PRO/1000MT Desktop, Promiscuous Mode: Allow All
* Adapter 3: Internal Network, Name: intnet-data1; Advanced:
Paravirtualized Network (virtio-net), Promiscuous Mode: Allow All
* Windows: If you have a separate Ubuntu VM for Linux work, then add
another interface to your Ubuntu VM and add it to the same
intnet-data1 internal network.
* Linux: If you want to access the VM instances directly, create a new
``Host-only`` network called ``vboxnet<x>`` similar to the external OAM
one above. Ensure DHCP Server is unchecked, and that the IP address is
on a network unrelated to the rest of the addresses we're configuring.
(The default will often be fine.) Now attach adapter-3 to the new
Host-only network.
* Adapter 4: Internal Network, Name: intnet-data2; Advanced: Paravirtualized
Network (virtio-net), Promiscuous Mode: Allow All
Additional adapters can be added via command line, for :abbr:`LAG (Link
Aggregation Group)` purposes. For example:
::
"\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm worker-0 --nic5 intnet --nictype5 virtio --intnet5 intnet-data1 --nicpromisc5 allow-all
"\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm worker-0 --nic6 intnet --nictype6 virtio --intnet6 intnet-data2 --nicpromisc6 allow-all
"\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm worker-0 --nic7 intnet --nictype7 82540EM --intnet7 intnet-infra --nicpromisc7 allow-all
* Storage nodes:
* Adapter 1: Internal Network, Name: intnet-unused; Advanced: Intel
PRO/1000MT Desktop, Promiscuous Mode: Allow All
* Adapter 2: Internal Network, Name: intnet-management; Advanced:
Intel PRO/1000MT Desktop, Promiscuous Mode: Allow All
* Set the boot priority for interface 2 (eth1) on ALL VMs (controller, worker
and storage):
::
# First list the VMs
bwensley@yow-bwensley-lx:~$ VBoxManage list vms
"YOW-BWENSLEY-VM" {f6d4df83-bee5-4471-9497-5a229ead8750}
"controller-0" {3db3a342-780f-41d5-a012-dbe6d3591bf1}
"controller-1" {ad89a706-61c6-4c27-8c78-9729ade01460}
"worker-0" {41e80183-2497-4e31-bffd-2d8ec5bcb397}
"worker-1" {68382c1d-9b67-4f3b-b0d5-ebedbe656246}
"storage-0" {7eddce9e-b814-4c40-94ce-2cde1fd2d168}
# Then set the priority for interface 2. Do this for ALL VMs.
# Command syntax: VBoxManage modifyvm <uuid> --nicbootprio2 1
bwensley@yow-bwensley-lx:~$ VBoxManage modifyvm 3db3a342-780f-41d5-a012-dbe6d3591bf1 --nicbootprio2 1
# OR do them all with a foreach loop in linux
bwensley@yow-bwensley-lx:~$ for f in $(VBoxManage list vms | cut -f 1 -d " " | sed 's/"//g'); do echo $f; VBoxManage modifyvm $f --nicbootprio2 1; done
# NOTE: In windows, you need to specify the full path to the VBoxManage executable - for example:
"\Program Files\Oracle\VirtualBox\VBoxManage.exe"
* Alternative method for debugging:
* Turn on VM and press F12 for the boot menu.
* Press ``L`` for LAN boot.
* Press CTRL+B for the iPXE CLI (this has a short timeout).
* The autoboot command opens a link with each interface sequentially
and tests for netboot.
********************
Serial port settings
********************
To use serial ports, you must select Serial Console during initial boot using
one of the following methods:
* Windows: Select ``Enable Serial Port``, port mode to ``Host Pipe``. Select
``Create Pipe`` (or deselect ``Connect to existing pipe/socket``). Enter
a Port/File Path in the form ``\\.\pipe\controller-0`` or
``\\.\pipe\worker-1``. Later, you can use this in PuTTY to connect to the
console. Choose speed of 9600 or 38400.
* Linux: Select ``Enable Serial Port`` and set the port mode to ``Host Pipe``.
Select ``Create Pipe`` (or deselect ``Connect to existing pipe/socket``).
Enter a Port/File Path in the form ``/tmp/controller_serial``. Later, you can
use this with ``socat`` as shown in this example:
::
socat UNIX-CONNECT:/tmp/controller_serial stdio,raw,echo=0,icanon=0
***********
Other notes
***********
If you're using a Dell PowerEdge R720 system, it's important to execute the
command below to avoid any kernel panic issues:
::
VBoxManage setextradata <vm-name> VBoxInternal/CPUM/EnableHVP 1
----------------------------------------
Start controller VM and allow it to boot
----------------------------------------
Console usage:
#. To use a serial console: Select ``Serial Controller Node Install``, then
follow the instructions above in the ``Serial Port`` section to connect to
it.
#. To use a graphical console: Select ``Graphics Text Controller Node
Install`` and continue using the Virtual Box console.
For details on how to specify installation parameters such as rootfs device
and console port, see :ref:`config_install_parms_r7`.
.. Link to root of install guides here *must* be external
Follow the `StarlingX Installation and Deployment Guides
<https://docs.starlingx.io/deploy_install_guides/index-install-e083ca818006.html>`_
to continue.
* Ensure that boot priority on all |VMs| is changed using the commands in the
"Set the boot priority" step above.
* In an AIO-DX and standard configuration, additional
hosts must be booted using controller-0 (rather than ``bootimage.iso`` file).
* On Virtual Box, click F12 immediately when the |VM| starts to select a
different boot option. Select the ``lan`` option to force a network boot.
.. _config_install_parms_r7:
------------------------------------
Configurable installation parameters
------------------------------------
StarlingX allows you to specify certain configuration parameters during
installation:
* Boot device: This is the device that is to be used for the boot partition. In
most cases, this must be ``sda``, which is the default, unless the BIOS
supports using a different disk for the boot partition. This is specified with
the ``boot_device`` option.
* Rootfs device: The root filesystem is now a logical volume ``cgts-vg/root-lv``.
This value should be the same as the boot_device.
* Install output: Text mode vs graphical. The default is ``text``. This is
specified with the ``install_output`` option.
* Console: This is the console specification, allowing the user to specify the
console port and/or baud. The default value is ``ttyS0,115200``. This is
specified with the ``console`` option.
*********************************
Install controller-0 from ISO/USB
*********************************
The initial boot menu for controller-0 is built-in, so modification of the
installation parameters requires direct modification of the boot command line.
This is done by scrolling to the boot option you want (for example, Serial
Controller Node Install vs Graphics Controller Node Install), and hitting the
tab key to allow command line modification. The example below shows how to
modify the ``rootfs_device`` specification.
.. figure:: /deploy_install_guides/release/figures/install_virtualbox_configparms.png
:scale: 100%
:alt: Install controller-0
************************************
Install nodes from active controller
************************************
The installation parameters are part of the system inventory host details for
each node, and can be specified when the host is added or updated. These
parameters can be set as part of a host-add or host-bulk-add, host-update, or
via the GUI when editing a host.
For example, if you prefer to see the graphical installation, you can enter the
following command when setting the personality of a newly discovered host:
::
system host-update 2 personality=controller install_output=graphical console=
If you don't set up a serial console, but prefer the text installation, you
can clear out the default console setting with the command:
::
system host-update 2 personality=controller install_output=text console=
If you'd prefer to install to the second disk on your node, use the command:
::
system host-update 3 personality=compute hostname=compute-0 rootfs_device=sdb
Alternatively, these values can be set from the GUI via the ``Edit Host``
option.
.. figure:: /deploy_install_guides/release/figures/install_virtualbox_guiscreen.png
:scale: 100%
:alt: Install controller-0


@ -14,7 +14,7 @@ configuration.
Physical host requirements and setup
------------------------------------
.. include:: physical_host_req.txt
.. include:: /shared/_includes/physical_host_req.txt
---------------------------------------
Prepare virtual environment and servers


@ -1,4 +1,5 @@
.. Greg updates required for -High Security Vulnerability Document Updates
.. pja1558616715987
@ -295,9 +296,9 @@ subcloud, the subcloud installation process has two phases:
~(keystone_admin)]$ system certificate-install -m docker_registry path_to_cert
.. pre-include:: /_includes/installing-a-subcloud-without-redfish-platform-management-service.rest
:start-after: begin-prepare-files-to-copy-deployment-config
:end-before: end-prepare-files-to-copy-deployment-config
#. At the Central Cloud / system controller, monitor the progress of the
subcloud bootstrapping and deployment by using the deploy status field of
@ -433,4 +434,4 @@ subcloud, the subcloud installation process has two phases:
interface: mgmt0
metric: 1
prefix: 64
subnet: <Central Cloud mgmt subnet>
View File
@ -15,8 +15,43 @@ This section describes the steps to extend capacity with worker nodes on a
Install software on worker nodes
--------------------------------
#. Power on the worker node servers and force them to network boot with the
appropriate BIOS boot options for your particular server.
#. Power on the worker node servers.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-install-sw-on-workers-power-on-dx
:end-before: end-install-sw-on-workers-power-on-dx
.. group-tab:: Virtual
#. On the host, power on the worker-0 and worker-1 virtual servers.
They will automatically attempt to network boot over the
management network:
.. code-block:: none
$ virsh start duplex-worker-0
$ virsh start duplex-worker-1
#. Attach to the consoles of worker-0 and worker-1.
.. code-block:: none
$ virsh console duplex-worker-0
$ virsh console duplex-worker-1
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-install-sw-on-workers-power-on-dx
:end-before: end-install-sw-on-workers-power-on-dx
#. As the worker nodes boot, a message appears on their console instructing
you to configure the personality of the node.
@ -26,7 +61,7 @@ Install software on worker nodes
::
system host-list
~(keystone_admin)$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
@ -40,8 +75,8 @@ Install software on worker nodes
.. code-block:: bash
system host-update 3 personality=worker hostname=worker-0
system host-update 4 personality=worker hostname=worker-1
~(keystone_admin)$ system host-update 3 personality=worker hostname=worker-0
~(keystone_admin)$ system host-update 4 personality=worker hostname=worker-1
This initiates the install of software on worker nodes.
This can take 5-10 minutes, depending on the performance of the host machine.
@ -59,7 +94,7 @@ Install software on worker nodes
::
system host-list
~(keystone_admin)$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
@ -97,79 +132,53 @@ Configure worker nodes
**These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.**
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the |prefix|-openstack manifest and helm-charts later.
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes
in support of installing the |prefix|-openstack manifest and helm-charts
later.
.. parsed-literal::
.. only:: starlingx
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE |vswitch-label|
system host-label-assign $NODE sriov=enabled
done
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-os-specific-host-config-labels-dx
:end-before: end-os-specific-host-config-labels-dx
.. group-tab:: Virtual
No additional steps are required.
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-os-specific-host-config-labels-dx
:end-before: end-os-specific-host-config-labels-dx
#. **For OpenStack only:** Configure the host settings for the vSwitch.
If using |OVS-DPDK| vswitch, run the following commands:
.. only:: starlingx
Default recommendation for worker node is to use two cores on numa-node 0
for |OVS-DPDK| vSwitch; physical |NICs| are typically on first numa-node.
This should have been automatically configured, if not run the following
command.
.. tabs::
.. code-block:: bash
.. group-tab:: Bare Metal
for NODE in worker-0 worker-1; do
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-os-specific-host-config-vswitch-dx
:end-before: end-os-specific-host-config-vswitch-dx
# assign 2 cores on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 2 $NODE
.. group-tab:: Virtual
done
No additional configuration is required for the OVS vswitch in
virtual environment.
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended to configure 1x 1G huge
page (-1G 1) for vSwitch memory on each |NUMA| node on the host.
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-os-specific-host-config-vswitch-dx
:end-before: end-os-specific-host-config-vswitch-dx
However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application VMs require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 0
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 1
done
.. important::
|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
this host, assuming 1G huge page size is being used on this host, with
the following commands:
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 0
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 1
done
#. **For OpenStack only:** Setup disk partition for nova-local volume group,
needed for |prefix|-openstack nova ephemeral disks.
@ -177,16 +186,16 @@ Configure worker nodes
.. code-block:: bash
for NODE in worker-0 worker-1; do
system host-lvg-add ${NODE} nova-local
~(keystone_admin)$ system host-lvg-add ${NODE} nova-local
# Get UUID of DISK to create PARTITION to be added to nova-local local volume group
# CEPH OSD Disks can NOT be used
# For best performance, do NOT use system/root disk, use a separate physical disk.
# List hosts disks and take note of UUID of disk to be used
system host-disk-list ${NODE}
~(keystone_admin)$ system host-disk-list ${NODE}
# ( if using ROOT DISK, select disk with device_path of
# system host-show ${NODE} | fgrep rootfs )
# 'system host-show ${NODE} | fgrep rootfs' )
# Create new PARTITION on selected disk, and take note of new partitions uuid in response
# The size of the PARTITION needs to be large enough to hold the aggregate size of
@ -197,10 +206,10 @@ Configure worker nodes
# Additional PARTITION(s) from additional disks can be added later if required.
PARTITION_SIZE=30
system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
~(keystone_admin)$ system host-disk-partition-add -t lvm_phys_vol ${NODE} <disk-uuid> ${PARTITION_SIZE}
# Add new partition to nova-local local volume group
system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
~(keystone_admin)$ system host-pv-add ${NODE} nova-local <NEW_PARTITION_UUID>
sleep 2
done
@ -211,40 +220,64 @@ Configure worker nodes
.. important::
A compute-labeled worker host **MUST** have at least one Data class interface.
A compute-labeled worker host **MUST** have at least one Data class
interface.
* Configure the data interfaces for worker nodes.
.. code-block:: bash
.. only:: starlingx
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
.. tabs::
# List inventoried hosts ports and identify ports to be used as data interfaces,
# based on displayed linux port name, pci address and device type.
system host-port-list ${NODE}
.. group-tab:: Bare Metal
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
system host-if-list -a ${NODE}
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-os-specific-host-config-data-dx
:end-before: end-os-specific-host-config-data-dx
# Modify configuration for these interfaces
# Configuring them as data class interfaces, MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
.. group-tab:: Virtual
.. code-block:: none
# Execute the following lines with
~(keystone_admin)$ export NODE=worker-0
# and then repeat with
~(keystone_admin)$ export NODE=worker-1
~(keystone_admin)$ DATA0IF=eth1000
~(keystone_admin)$ DATA1IF=eth1001
~(keystone_admin)$ PHYSNET0='physnet0'
~(keystone_admin)$ PHYSNET1='physnet1'
~(keystone_admin)$ SPL=/tmp/tmp-system-port-list
~(keystone_admin)$ SPIL=/tmp/tmp-system-host-if-list
~(keystone_admin)$ system host-port-list ${NODE} --nowrap > ${SPL}
~(keystone_admin)$ system host-if-list -a ${NODE} --nowrap > ${SPIL}
~(keystone_admin)$ DATA0PCIADDR=$(cat $SPL | grep $DATA0IF | awk '{print $8}')
~(keystone_admin)$ DATA1PCIADDR=$(cat $SPL | grep $DATA1IF | awk '{print $8}')
~(keystone_admin)$ DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
~(keystone_admin)$ DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
~(keystone_admin)$ DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
~(keystone_admin)$ DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
~(keystone_admin)$ DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
~(keystone_admin)$ DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
~(keystone_admin)$ system datanetwork-add ${PHYSNET0} vlan
~(keystone_admin)$ system datanetwork-add ${PHYSNET1} vlan
~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-os-specific-host-config-data-dx
:end-before: end-os-specific-host-config-data-dx
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
*****************************************
Optionally Configure PCI-SRIOV Interfaces
@ -267,62 +300,65 @@ Optionally Configure PCI-SRIOV Interfaces
.. code-block:: bash
# Execute the following lines with
export NODE=worker-0
~(keystone_admin)$ export NODE=worker-0
# and then repeat with
export NODE=worker-1
~(keystone_admin)$ export NODE=worker-1
# List inventoried hosts ports and identify ports to be used as pci-sriov interfaces,
# based on displayed linux port name, pci address and device type.
system host-port-list ${NODE}
~(keystone_admin)$ system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
system host-if-list -a ${NODE}
~(keystone_admin)$ system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as pci-sriov class interfaces, MTU of 1500 and named sriov#
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>
~(keystone_admin)$ system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
~(keystone_admin)$ system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>
# If not already created, create Data Networks that the 'pci-sriov'
# interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
~(keystone_admin)$ DATANET0='datanet0'
~(keystone_admin)$ DATANET1='datanet1'
~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
* **For Kubernetes only** To enable using |SRIOV| network attachments for
the above interfaces in Kubernetes hosted application containers:
* Configure the Kubernetes |SRIOV| device plugin.
.. only:: starlingx
.. code-block:: bash
.. tabs::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
.. group-tab:: Bare Metal
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-os-specific-host-config-sriov-dx
:end-before: end-os-specific-host-config-sriov-dx
.. code-block:: bash
.. group-tab:: Virtual
for NODE in worker-0 worker-1; do
Configure the Kubernetes |SRIOV| device plugin.
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
.. code-block:: bash
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
done
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-os-specific-host-config-sriov-dx
:end-before: end-os-specific-host-config-sriov-dx
-------------------
@ -342,3 +378,4 @@ service. This can take 5-10 minutes, depending on the performance of the host
machine.
.. end-aio-duplex-extend
View File
@ -0,0 +1,555 @@
.. begin-aio-dx-install-verify-ip-connectivity
External connectivity is required to run the Ansible bootstrap
playbook. The StarlingX boot image sends |DHCP| requests on all
interfaces, so the server may already have an IP address and external
IP connectivity if a |DHCP| server is present in your environment.
Verify this using the :command:`ip addr` and :command:`ping 8.8.8.8`
commands.
Otherwise, manually configure an IP address and default IP route.
Use the ``PORT``, ``IP-ADDRESS``/``SUBNET-LENGTH`` and
``GATEWAY-IP-ADDRESS`` applicable to your deployment environment.
.. code-block:: bash
sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
sudo ip link set up dev <PORT>
sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
ping 8.8.8.8
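For example, with a hypothetical port enp0s3, address 10.10.10.3/24 and
gateway 10.10.10.1 (placeholder values only):

.. code-block:: bash

sudo ip address add 10.10.10.3/24 dev enp0s3
sudo ip link set up dev enp0s3
sudo ip route add default via 10.10.10.1 dev enp0s3
ping 8.8.8.8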
.. end-aio-dx-install-verify-ip-connectivity
.. begin-config-controller-0-oam-interface-dx
The following example configures the |OAM| interface on a physical
untagged ethernet port. Use the |OAM| port name that is applicable to
your deployment environment, for example eth0:
.. code-block:: none
~(keystone_admin)$ OAM_IF=<OAM-PORT>
~(keystone_admin)$ system host-if-modify controller-0 $OAM_IF -c platform
~(keystone_admin)$ system interface-network-assign controller-0 $OAM_IF oam
.. end-config-controller-0-oam-interface-dx
.. begin-config-controller-0-ntp-interface-dx
.. code-block:: none
~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
To configure |PTP| instead of |NTP|, see :ref:`PTP Server
Configuration <ptp-server-config-index>`.
.. end-config-controller-0-ntp-interface-dx
.. begin-config-controller-0-OS-k8s-sriov-dx
* Configure the Kubernetes |SRIOV| device plugin.
.. code-block:: none
~(keystone_admin)$ system host-label-assign controller-0 sriovdp=enabled
* If you are planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required on
both |NUMA| nodes.
.. code-block:: bash
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application controller-0 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application controller-0 1 -1G 10
.. end-config-controller-0-OS-k8s-sriov-dx
.. begin-power-on-controller-1-server-dx
Power on the controller-1 server and force it to network boot with
the appropriate BIOS boot options for your particular server.
.. end-power-on-controller-1-server-dx
.. begin-config-controller-1-server-oam-dx
The following example configures the |OAM| interface on a physical untagged
ethernet port. Use the |OAM| port name that is applicable to your
deployment environment, for example eth0:
.. code-block:: none
~(keystone_admin)$ OAM_IF=<OAM-PORT>
~(keystone_admin)$ system host-if-modify controller-1 $OAM_IF -c platform
~(keystone_admin)$ system interface-network-assign controller-1 $OAM_IF oam
.. end-config-controller-1-server-oam-dx
.. begin-config-k8s-sriov-controller-1-dx
* Configure the Kubernetes |SRIOV| device plugin.
.. code-block:: bash
~(keystone_admin)$ system host-label-assign controller-1 sriovdp=enabled
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages required
on both |NUMA| nodes.
.. code-block:: bash
# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
~(keystone_admin)$ system host-memory-modify -f application controller-1 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
~(keystone_admin)$ system host-memory-modify -f application controller-1 1 -1G 10
.. end-config-k8s-sriov-controller-1-dx
.. begin-install-sw-on-workers-power-on-dx
Power on the worker node servers and force them to network boot with the
appropriate BIOS boot options for your particular server.
.. end-install-sw-on-workers-power-on-dx
.. begin-os-specific-host-config-sriov-dx
* Configure the Kubernetes |SRIOV| device plugin.
.. code-block:: bash
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
* If planning on running |DPDK| in Kubernetes hosted application containers on
this host, configure the number of 1G Huge pages required on both |NUMA|
nodes.
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
done
.. end-os-specific-host-config-sriov-dx
.. begin-config-controller-0-OS-add-cores-dx
A minimum of 4 platform cores are required; 6 platform cores are
recommended.

Increase the number of platform cores with the following
command. This example assigns 6 cores on processor/numa-node 0
on controller-0 to platform.
.. code-block:: bash
~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-0
.. end-config-controller-0-OS-add-cores-dx
.. begin-config-controller-0-OS-vswitch-dx
To deploy |OVS-DPDK|, run the following command:
.. parsed-literal::
~(keystone_admin)$ system modify --vswitch_type |ovs-dpdk|
Default recommendation for an |AIO|-controller is to use a single core
for |OVS-DPDK| vSwitch.
.. code-block:: bash
# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-0
Once vswitch_type is set to |OVS-DPDK|, any subsequently created
nodes will default to automatically assigning 1 vSwitch core for
|AIO| controllers and 2 vSwitch cores (both on numa-node 0;
physical |NICs| are typically on the first numa-node) for
compute-labeled worker nodes.
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
node on the host.
However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application |VMs| require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.
.. code-block:: bash
# Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 0
# Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 1
.. important::
|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
``hw:mem_page_size=large``
Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
this host. The following commands are an example that assumes that 1G
huge page size is being used on this host:
.. code-block:: bash
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 0
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 1
.. note::
After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change.
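For example, assuming the vswitch type has just been changed, the
standard lock/unlock cycle to apply it would be:

.. code-block:: none

~(keystone_admin)$ system host-lock controller-0
~(keystone_admin)$ system host-unlock controller-0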
.. end-config-controller-0-OS-vswitch-dx
.. begin-config-controller-0-OS-add-fs-dx
.. note::
An 'instances' filesystem and a 'nova-local' volume group cannot exist at the same time.
Add an 'instances' filesystem:
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-0
# Create instances filesystem
~(keystone_admin)$ system host-fs-add ${NODE} instances=<size>
Or add a 'nova-local' volume group:
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-0
# Create nova-local local volume group
~(keystone_admin)$ system host-lvg-add ${NODE} nova-local
# Get UUID of an unused DISK to be added to the nova-local volume
# group. CEPH OSD Disks can NOT be used
# List hosts disks and take note of UUID of disk to be used
~(keystone_admin)$ system host-disk-list ${NODE}
# Add the unused disk to the nova-local volume group
~(keystone_admin)$ system host-pv-add ${NODE} nova-local <DISK_UUID>
.. end-config-controller-0-OS-add-fs-dx
.. begin-config-controller-0-OS-data-interface-dx
.. code-block:: bash
~(keystone_admin)$ NODE=controller-0
# List inventoried hosts ports and identify ports to be used as data interfaces,
# based on displayed linux port name, pci address and device type.
~(keystone_admin)$ system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
~(keystone_admin)$ system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as data class interfaces, MTU of 1500 and named data#
~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# If not already created, create Data Networks that vswitch 'data' interfaces will be connected to
~(keystone_admin)$ DATANET0='datanet0'
~(keystone_admin)$ DATANET1='datanet1'
~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
.. end-config-controller-0-OS-data-interface-dx
.. begin-increase-cores-controller-1-dx
Increase the number of platform cores with the following commands:
.. code-block::
# assign 6 cores on processor/numa-node 0 on controller-1 to platform
~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-1
.. end-increase-cores-controller-1-dx
.. begin-config-vswitch-controller-1-dx
If using |OVS-DPDK| vswitch, run the following commands:
Default recommendation for an |AIO|-controller is to use a single core
for |OVS-DPDK| vSwitch. This should have been configured
automatically; if not, run the following command.
.. code-block:: bash
# assign 1 core on processor/numa-node 0 on controller-1 to vswitch
~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-1
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch memory on
each |NUMA| node on the host. It is recommended
to configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
node on the host.
However, due to a limitation with Kubernetes, only a single huge page
size is supported on any one host. If your application VMs require 2M
huge pages, then configure 500x 2M huge pages (-2M 500) for vSwitch
memory on each |NUMA| node on the host.
.. code-block:: bash
# assign 1x 1G huge page on processor/numa-node 0 on controller-1 to vswitch
~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-1 0
# assign 1x 1G huge page on processor/numa-node 1 on controller-1 to vswitch
~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-1 1
.. important::
|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
``hw:mem_page_size=large``.
Configure the huge pages for |VMs| in an |OVS-DPDK| environment on
this host, assuming 1G huge page size is being used on this host, with
the following commands:
.. code-block:: bash
# assign 10x 1G huge page on processor/numa-node 0 on controller-1 to applications
~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-1 0
# assign 10x 1G huge page on processor/numa-node 1 on controller-1 to applications
~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-1 1
.. end-config-vswitch-controller-1-dx
.. begin-config-fs-controller-1-dx
.. note::
An 'instances' filesystem and a 'nova-local' volume group cannot exist at the same time.
* Add an 'instances' filesystem:
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-1
# Create instances filesystem
~(keystone_admin)$ system host-fs-add ${NODE} instances=<size>
**Or**
* Add a 'nova-local' volume group:
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-1
# Create nova-local local volume group
~(keystone_admin)$ system host-lvg-add ${NODE} nova-local
# Get UUID of an unused DISK to be added to the nova-local volume
# group. CEPH OSD Disks can NOT be used
# List hosts disks and take note of UUID of disk to be used
~(keystone_admin)$ system host-disk-list ${NODE}
# Add the unused disk to the nova-local volume group
~(keystone_admin)$ system host-pv-add ${NODE} nova-local <DISK_UUID>
.. end-config-fs-controller-1-dx
.. begin-config-data-interfaces-controller-1-dx
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-1
# List inventoried host's ports and identify ports to be used as 'data' interfaces,
# based on displayed linux port name, pci address and device type.
~(keystone_admin)$ system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
~(keystone_admin)$ system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as 'data' class interfaces, MTU of 1500 and named data#
~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
~(keystone_admin)$ DATANET0='datanet0'
~(keystone_admin)$ DATANET1='datanet1'
# Assign Data Networks to Data Interfaces
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
.. end-config-data-interfaces-controller-1-dx
.. begin-os-specific-host-config-data-dx
.. code-block:: bash
# Execute the following lines with
~(keystone_admin)$ export NODE=worker-0
# and then repeat with
~(keystone_admin)$ export NODE=worker-1
# List inventoried hosts ports and identify ports to be used as `data` interfaces,
# based on displayed linux port name, pci address and device type.
~(keystone_admin)$ system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
~(keystone_admin)$ system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as data class interfaces, MTU of 1500 and named data#
~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
~(keystone_admin)$ DATANET0='datanet0'
~(keystone_admin)$ DATANET1='datanet1'
~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
.. end-os-specific-host-config-data-dx
.. begin-os-specific-host-config-labels-dx
.. parsed-literal::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE |vswitch-label|
system host-label-assign $NODE sriov=enabled
done
.. end-os-specific-host-config-labels-dx
.. begin-os-specific-host-config-vswitch-dx
If using |OVS-DPDK| vswitch, run the following commands:
Default recommendation for worker node is to use two cores on
numa-node 0 for |OVS-DPDK| vSwitch; physical |NICs| are
typically on the first numa-node. This should have been
configured automatically; if not, run the following command.
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 2 cores on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 2 $NODE
done
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch
memory on each |NUMA| node on the host. It is recommended to
configure 1x 1G huge page (-1G 1) for vSwitch memory on each
|NUMA| node on the host.
However, due to a limitation with Kubernetes, only a single huge
page size is supported on any one host. If your application VMs
require 2M huge pages, then configure 500x 2M huge pages (-2M
500) for vSwitch memory on each |NUMA| node on the host.
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 0
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 1
done
.. important::
|VMs| created in an |OVS-DPDK| environment must be configured
to use huge pages to enable networking and must use a flavor
with property: ``hw:mem_page_size=large``.
Configure the huge pages for |VMs| in an |OVS-DPDK|
environment on this host, assuming 1G huge page size is being
used on this host, with the following commands:
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 0
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 1
done
.. end-os-specific-host-config-vswitch-dx
View File
@ -0,0 +1,225 @@
.. begin-aio-sx-install-verify-ip-connectivity
External connectivity is required to run the Ansible bootstrap
playbook. The StarlingX boot image sends |DHCP| requests on all
interfaces, so the server may already have an IP address and external
IP connectivity if a |DHCP| server is present in your environment.
Verify this using the :command:`ip addr` and :command:`ping
8.8.8.8` commands.
Otherwise, manually configure an IP address and default IP route.
Use the PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS
applicable to your deployment environment.
::
sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
sudo ip link set up dev <PORT>
sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
ping 8.8.8.8
.. end-aio-sx-install-verify-ip-connectivity
.. begin-config-controller-0-oam-interface-sx
The following example configures the |OAM| interface on a physical
untagged ethernet port. Use the |OAM| port name that is applicable to
your deployment environment, for example eth0:
::
~(keystone_admin)$ OAM_IF=<OAM-PORT>
~(keystone_admin)$ system host-if-modify controller-0 $OAM_IF -c platform
~(keystone_admin)$ system interface-network-assign controller-0 $OAM_IF oam
To configure a vlan or aggregated ethernet interface, see
:ref:`Node Interfaces <node-interfaces-index>`.
.. end-config-controller-0-oam-interface-sx
.. begin-config-controller-0-ntp-interface-sx
.. code-block:: none
~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
To configure |PTP| instead of |NTP|, see :ref:`PTP Server
Configuration <ptp-server-config-index>`.
.. end-config-controller-0-ntp-interface-sx
.. begin-config-controller-0-OS-k8s-sriov-sx
#. Configure the Kubernetes |SRIOV| device plugin.
::
~(keystone_admin)$ system host-label-assign controller-0 sriovdp=enabled
#. |optional| If you are planning on running |DPDK| in
Kubernetes hosted application containers on this host,
configure the number of 1G Huge pages required on both |NUMA|
nodes.
.. code-block:: bash
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application controller-0 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application controller-0 1 -1G 10
.. end-config-controller-0-OS-k8s-sriov-sx
.. begin-config-controller-0-OS-add-cores-sx
A minimum of 4 platform cores are required; 6 platform cores are
recommended.

Increase the number of platform cores with the following
command. This example assigns 6 cores on processor/numa-node 0
on controller-0 to platform.
.. code-block:: bash
~(keystone_admin)$ system host-cpu-modify -f platform -p0 6 controller-0
.. end-config-controller-0-OS-add-cores-sx
.. begin-config-controller-0-OS-vswitch-sx
To deploy |OVS-DPDK|, run the following command:
.. parsed-literal::
~(keystone_admin)$ system modify --vswitch_type |ovs-dpdk|
Default recommendation for an |AIO|-controller is to use a
single core for |OVS-DPDK| vSwitch.
.. code-block:: bash
# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
~(keystone_admin)$ system host-cpu-modify -f vswitch -p0 1 controller-0
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch
memory on each |NUMA| node on the host. It is recommended to
configure 1x 1G huge page (-1G 1) for vSwitch memory on each
|NUMA| node on the host.
However, due to a limitation with Kubernetes, only a single huge
page size is supported on any one host. If your application
|VMs| require 2M huge pages, then configure 500x 2M huge pages
(-2M 500) for vSwitch memory on each |NUMA| node on the host.
.. code-block::
# Assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 0
# Assign 1x 1G huge page on processor/numa-node 1 on controller-0 to vswitch
~(keystone_admin)$ system host-memory-modify -f vswitch -1G 1 controller-0 1
.. important::
|VMs| created in an |OVS-DPDK| environment must be configured
to use huge pages to enable networking and must use a flavor
with property: hw:mem_page_size=large
Configure the huge pages for |VMs| in an |OVS-DPDK|
environment on this host. The following commands are an
example that assumes that 1G huge page size is being used on
this host:
.. code-block:: bash
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 0
# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
~(keystone_admin)$ system host-memory-modify -f application -1G 10 controller-0 1
.. note::
After controller-0 is unlocked, changing vswitch_type
requires locking and unlocking controller-0 to apply the
change.
.. end-config-controller-0-OS-vswitch-sx
.. begin-config-controller-0-OS-add-fs-sx
.. note::
An 'instances' filesystem and a 'nova-local' volume group cannot exist at the same time.
Add an 'instances' filesystem:
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-0
# Create instances filesystem
~(keystone_admin)$ system host-fs-add ${NODE} instances=<size>
Or add a 'nova-local' volume group:
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-0
# Create nova-local local volume group
~(keystone_admin)$ system host-lvg-add ${NODE} nova-local
# Get UUID of an unused DISK to be added to the nova-local volume
# group. CEPH OSD Disks can NOT be used
# List hosts disks and take note of UUID of disk to be used
~(keystone_admin)$ system host-disk-list ${NODE}
# Add the unused disk to the nova-local volume group
~(keystone_admin)$ system host-pv-add ${NODE} nova-local <DISK_UUID>
.. end-config-controller-0-OS-add-fs-sx
.. begin-config-controller-0-OS-data-interface-sx
.. code-block:: bash
~(keystone_admin)$ NODE=controller-0
# List inventoried hosts ports and identify ports to be used as data interfaces,
# based on displayed linux port name, pci address and device type.
~(keystone_admin)$ system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
~(keystone_admin)$ system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as data class interfaces, MTU of 1500 and named data#
~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
~(keystone_admin)$ DATANET0='datanet0'
~(keystone_admin)$ DATANET1='datanet1'
~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
.. end-config-controller-0-OS-data-interface-sx
View File
@ -0,0 +1,477 @@
.. start-install-sw-on-controller-0-and-workers-standard-with-storage
#. Power on the controller-1 server and force it to network boot with the
appropriate BIOS boot options for your particular server.
#. As controller-1 boots, a message appears on its console instructing you to
configure the personality of the node.
#. On the console of controller-0, list hosts to see newly discovered
controller-1 host (hostname=None):
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
#. Using the host id, set the personality of this host to 'controller':
::
system host-update 2 personality=controller
This initiates the install of software on controller-1. This can take
5-10 minutes, depending on the performance of the host machine.
#. While waiting for the previous step to complete, power on the worker
nodes. Set the personality to 'worker' and assign a unique hostname
for each.
For example, power on worker-0 and wait for the new host
(hostname=None) to be discovered by checking ``system host-list``:
::
system host-update 3 personality=worker hostname=worker-0
Repeat for worker-1. Power on worker-1 and wait for the new host
(hostname=None) to be discovered by checking 'system host-list':
::
system host-update 4 personality=worker hostname=worker-1
.. only:: starlingx
.. Note::
A node with Edgeworker personality is also available. See
:ref:`deploy-edgeworker-nodes` for details.
#. Wait for the software installation on controller-1, worker-0, and
worker-1 to complete, for all servers to reboot, and for all to show
as locked/disabled/online in 'system host-list'.
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
| 3 | worker-0 | worker | locked | disabled | online |
| 4 | worker-1 | worker | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
.. end-install-sw-on-controller-0-and-workers-standard-with-storage
.. start-config-worker-nodes-std-with-storage-bare-metal
.. start-config-worker-nodes-std-with-storage-bm-and-virt
#. Add the third Ceph monitor to a worker node:
(The first two Ceph monitors are automatically assigned to
controller-0 and controller-1.)
::
system ceph-mon-add worker-0
#. Wait for the worker node monitor to complete configuration:
::
system ceph-mon-list
+--------------------------------------+-------+--------------+------------+------+
| uuid | ceph_ | hostname | state | task |
| | mon_g | | | |
| | ib | | | |
+--------------------------------------+-------+--------------+------------+------+
| 64176b6c-e284-4485-bb2a-115dee215279 | 20 | controller-1 | configured | None |
| a9ca151b-7f2c-4551-8167-035d49e2df8c | 20 | controller-0 | configured | None |
| f76bc385-190c-4d9a-aa0f-107346a9907b | 20 | worker-0 | configured | None |
+--------------------------------------+-------+--------------+------------+------+
#. Assign the cluster-host network to the MGMT interface for the worker
nodes:
(Note that the MGMT interfaces are partially set up automatically by
the network install procedure.)
.. code-block:: bash
for NODE in worker-0 worker-1; do
system interface-network-assign $NODE mgmt0 cluster-host
done
.. end-config-worker-nodes-std-with-storage-bm-and-virt
.. only:: openstack
*************************************
OpenStack-specific host configuration
*************************************
.. important::
These steps are required only if the |prod-os| application
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to the worker
nodes in support of installing the |prefix|-openstack manifest and
helm-charts later.
.. parsed-literal::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE |vswitch-label|
done
.. note::
If you have a |NIC| that supports |SRIOV|, then you can enable
it by using the following:
.. code-block:: none
system host-label-assign controller-0 sriov=enabled
#. **For OpenStack only:** Configure the host settings for the
vSwitch.
If using |OVS-DPDK| vswitch, run the following commands:
Default recommendation for worker node is to use two cores on
numa-node 0 for |OVS-DPDK| vSwitch; physical NICs are typically on
the first numa-node. This should have been configured automatically;
if not, run the following command.
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 2 cores on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 2 $NODE
done
When using |OVS-DPDK|, configure 1G of huge pages for vSwitch
memory on each |NUMA| node on the host. It is recommended to
configure 1x 1G huge page (-1G 1) for vSwitch memory on each |NUMA|
node on the host.
However, due to a limitation with Kubernetes, only a single huge
page size is supported on any one host. If your application |VMs|
require 2M huge pages, then configure 500x 2M huge pages (-2M 500)
for vSwitch memory on each |NUMA| node on the host.
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 0
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 1
done
.. important::
|VMs| created in an |OVS-DPDK| environment must be configured to
use huge pages to enable networking and must use a flavor with
the property ``hw:mem_page_size=large``
Configure the huge pages for |VMs| in an |OVS-DPDK| environment
on this host, the following commands are an example that assumes
that 1G huge page size is being used on this host:
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 0
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 1
done
#. **For OpenStack only:** Add an 'instances' filesystem OR set up a
disk-based 'nova-local' volume group, which is needed for
|prefix|-openstack nova ephemeral disks.
.. note::
Both cannot exist at the same time.
Add an 'instances' filesystem:
.. code-block:: bash
# Create instances filesystem
for NODE in worker-0 worker-1; do
system host-fs-add ${NODE} instances=<size>
done
OR add a 'nova-local' volume group
.. code-block:: bash
for NODE in worker-0 worker-1; do
# Create nova-local local volume group
system host-lvg-add ${NODE} nova-local
# Get UUID of an unused DISK to be added to the nova-local volume
# group. CEPH OSD Disks can NOT be used. Assume /dev/sdb is unused
# on all workers
DISK_UUID=$(system host-disk-list ${NODE} | awk '/sdb/{print $2}')
# Add the unused disk to the nova-local volume group
system host-pv-add ${NODE} nova-local ${DISK_UUID}
done
#. **For OpenStack only:** Configure data interfaces for worker nodes.
Data class interfaces are vswitch interfaces used by vswitch to
provide |VM| virtio vNIC connectivity to OpenStack Neutron Tenant
Networks on the underlying assigned Data Network.
.. important::
A compute-labeled worker host **MUST** have at least one Data
class interface.
* Configure the data interfaces for worker nodes.
.. code-block:: bash
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List inventoried hosts ports and identify ports to be used as data interfaces,
# based on displayed linux port name, pci address and device type.
system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as data class interfaces, MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
.. end-config-worker-nodes-std-with-storage-bare-metal
.. start-config-pci-sriov-interfaces-standard-storage
#. **Optionally**, configure pci-sriov interfaces for worker nodes.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: openstack
This step is **optional** for OpenStack. Do this step if using
|SRIOV| vNICs in hosted application |VMs|. Note that pci-sriov
interfaces can have the same Data Networks assigned to them as
vswitch data interfaces.
* Configure the pci-sriov interfaces for worker nodes.
.. code-block:: bash
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List inventoried hosts ports and identify ports to be used as pci-sriov interfaces,
# based on displayed linux port name, pci address and device type.
system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as pci-sriov class interfaces, MTU of 1500 and named sriov#
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>
# If not already created, create Data Networks that the 'pci-sriov'
# interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
* **For Kubernetes only:** To enable using |SRIOV| network attachments
for the above interfaces in Kubernetes hosted application
containers:
* Configure the Kubernetes |SRIOV| device plugin.
.. code-block:: bash
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages
required on both |NUMA| nodes.
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
done
.. end-config-pci-sriov-interfaces-standard-storage
.. start-add-ceph-osds-to-controllers-std-storage
#. Add |OSDs| to controller-0. The following example adds |OSDs| to the
`sdb` disk:
.. code-block:: bash
HOST=controller-0
# List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
# By default, /dev/sda is being used as system disk and can not be used for OSD.
system host-disk-list ${HOST}
# Add disk as an OSD storage
system host-stor-add ${HOST} osd <disk-uuid>
# List OSD storage devices and wait for configuration of newly added OSD to complete.
system host-stor-list ${HOST}
#. Add |OSDs| to controller-1. The following example adds |OSDs| to the
`sdb` disk:
.. code-block:: bash
HOST=controller-1
# List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
# By default, /dev/sda is being used as system disk and can not be used for OSD.
system host-disk-list ${HOST}
# Add disk as an OSD storage
system host-stor-add ${HOST} osd <disk-uuid>
# List OSD storage devices and wait for configuration of newly added OSD to complete.
system host-stor-list ${HOST}
.. end-add-ceph-osds-to-controllers-std-storage
.. begin-openstack-specific-host-configs-bare-metal
#. **For OpenStack only:** Assign OpenStack host labels to
controller-0 in support of installing the |prefix|-openstack
manifest and helm-charts later.
::
system host-label-assign controller-0 openstack-control-plane=enabled
#. **For OpenStack only:** Configure the system setting for the
vSwitch.
.. only:: starlingx
StarlingX has |OVS| (kernel-based) vSwitch configured as
default:
* Runs in a container; defined within the helm charts of
the |prefix|-openstack manifest.
* Shares the core(s) assigned to the platform.
If you require better performance, |OVS-DPDK| (|OVS| with the
Data Plane Development Kit, which is supported only on bare
metal hardware) should be used:
* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the
vSwitch function.
To deploy the default containerized |OVS|:
::
system modify --vswitch_type none
This does not run any vSwitch directly on the host; instead,
it uses the containerized |OVS| defined in the helm charts of
the |prefix|-openstack manifest.
To deploy |OVS-DPDK|, run the following command:
.. parsed-literal::
system modify --vswitch_type |ovs-dpdk|
Once vswitch_type is set to |OVS-DPDK|, any subsequent
|AIO|-controller or worker nodes created will default to
automatically assigning 1 vSwitch core for |AIO| controllers and
2 vSwitch cores (both on numa-node 0; physical |NICs| are
typically on first numa-node) for compute-labeled worker nodes.
.. note::
After controller-0 is unlocked, changing vswitch_type requires
locking and unlocking controller-0 to apply the change.
.. end-openstack-specific-host-configs-bare-metal
View File
@ -0,0 +1,420 @@
.. begin-install-sw-cont-1-stor-and-wkr-nodes
#. Power on the controller-1 server and force it to network boot with
the appropriate BIOS boot options for your particular server.
#. As controller-1 boots, a message appears on its console instructing
you to configure the personality of the node.
#. On the console of controller-0, list hosts to see newly discovered
controller-1 host (hostname=None):
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
#. Using the host id, set the personality of this host to 'controller':
::
system host-update 2 personality=controller
This initiates the install of software on controller-1. This can
take 5-10 minutes, depending on the performance of the host
machine.
#. While waiting for the previous step to complete, power on the
storage-0 and storage-1 servers. Set the personality to 'storage'
and assign a unique hostname for each.
For example, power on storage-0 and wait for the new host
(hostname=None) to be discovered by checking 'system host-list':
::
system host-update 3 personality=storage
Repeat for storage-1. Power on storage-1 and wait for the new host
(hostname=None) to be discovered by checking 'system host-list':
::
system host-update 4 personality=storage
This initiates the software installation on storage-0 and
storage-1. This can take 5-10 minutes, depending on the performance
of the host machine.
#. While waiting for the previous step to complete, power on the
worker nodes. Set the personality to 'worker' and assign a unique
hostname for each.
For example, power on worker-0 and wait for the new host
(hostname=None) to be discovered by checking 'system host-list':
::
system host-update 5 personality=worker hostname=worker-0
Repeat for worker-1. Power on worker-1 and wait for the new host
(hostname=None) to be discovered by checking 'system host-list':
::
system host-update 6 personality=worker hostname=worker-1
This initiates the install of software on worker-0 and worker-1.
.. only:: starlingx
.. Note::
A node with Edgeworker personality is also available. See
:ref:`deploy-edgeworker-nodes` for details.
#. Wait for the software installation on controller-1, storage-0,
storage-1, worker-0, and worker-1 to complete, for all servers to
reboot, and for all to show as locked/disabled/online in 'system
host-list'.
::
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
| 3 | storage-0 | storage | locked | disabled | online |
| 4 | storage-1 | storage | locked | disabled | online |
| 5 | worker-0 | worker | locked | disabled | online |
| 6 | worker-1 | worker | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
.. end-install-sw-cont-1-stor-and-wkr-nodes
.. begin-dedicated-config-storage-nodes
#. Assign the cluster-host network to the MGMT interface for the storage nodes:
(Note that the MGMT interfaces are partially set up automatically by the
network install procedure.)
.. code-block:: bash
for NODE in storage-0 storage-1; do
system interface-network-assign $NODE mgmt0 cluster-host
done
#. Add |OSDs| to storage-0.
.. code-block:: bash
HOST=storage-0
# List hosts disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
# By default, /dev/sda is being used as system disk and can not be used for OSD.
system host-disk-list ${HOST}
# Add disk as an OSD storage
system host-stor-add ${HOST} osd <disk-uuid>
# List OSD storage devices and wait for configuration of newly added OSD to complete.
system host-stor-list ${HOST}
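As an illustration only, the disk UUID can also be captured directly from
the disk listing; this sketch assumes /dev/sdb is the unused disk chosen
for the |OSD|:
.. code-block:: bash

   # capture the UUID of /dev/sdb and add it as an OSD (assumes sdb is unused)
   DISK_UUID=$(system host-disk-list ${HOST} | awk '/sdb/{print $2}')
   system host-stor-add ${HOST} osd ${DISK_UUID}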
#. Add |OSDs| to storage-1.
.. code-block:: bash
HOST=storage-1
# List hosts disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
# By default, /dev/sda is being used as system disk and can not be used for OSD.
system host-disk-list ${HOST}
# Add disk as an OSD storage
system host-stor-add ${HOST} osd <disk-uuid>
# List OSD storage devices and wait for configuration of newly added OSD to complete.
system host-stor-list ${HOST}
.. end-dedicated-config-storage-nodes
.. begin-dedicated-stor-config-workers
#. The MGMT interfaces are partially set up by the network install procedure,
which configures the port used for network install as the MGMT port and
assigns it the attached network of "mgmt".
Complete the MGMT interface configuration of the worker nodes by specifying
the attached network of "cluster-host".
.. code-block:: bash
for NODE in worker-0 worker-1; do
system interface-network-assign $NODE mgmt0 cluster-host
done
.. only:: openstack
*************************************
OpenStack-specific host configuration
*************************************
.. important::
These steps are required only if the |prod-os| application
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to the worker nodes in
support of installing the |prefix|-openstack manifest and helm-charts later.
.. parsed-literal::
for NODE in worker-0 worker-1; do
system host-label-assign $NODE openstack-compute-node=enabled
kubectl taint nodes $NODE openstack-compute-node:NoSchedule
system host-label-assign $NODE |vswitch-label|
system host-label-assign $NODE sriov=enabled
done
#. **For OpenStack only:** Configure the host settings for the vSwitch.
If using |OVS-DPDK| vSwitch, run the following commands:
The default recommendation for a worker node is to use two cores on
numa-node 0 for the |OVS-DPDK| vSwitch; physical |NICs| are typically on
the first numa-node. This should have been configured automatically; if
not, run the following command.
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 2 cores on processor/numa-node 0 on worker-node to vswitch
system host-cpu-modify -f vswitch -p0 2 $NODE
done
When using |OVS-DPDK|, configure huge pages for vSwitch memory on each
|NUMA| node on the host. It is recommended to configure 1x 1G huge page
(-1G 1) for vSwitch memory on each |NUMA| node. However, due to a
limitation with Kubernetes, only a single huge page size is supported on
any one host. If your application |VMs| require 2M huge pages, then
configure 500x 2M huge pages (-2M 500) for vSwitch memory on each |NUMA|
node on the host.
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 1x 1G huge page on processor/numa-node 0 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 0
# assign 1x 1G huge page on processor/numa-node 1 on worker-node to vswitch
system host-memory-modify -f vswitch -1G 1 $NODE 1
done
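If, as described above, the application |VMs| on these hosts require 2M
huge pages instead, a sketch of the equivalent 2M assignment (-2M 500) is:
.. code-block:: bash

   for NODE in worker-0 worker-1; do
      # assign 500x 2M huge pages on each processor/numa-node to vswitch
      system host-memory-modify -f vswitch -2M 500 $NODE 0
      system host-memory-modify -f vswitch -2M 500 $NODE 1
   done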
.. important::
|VMs| created in an |OVS-DPDK| environment must be configured to use
huge pages to enable networking and must use a flavor with property:
hw:mem_page_size=large
To configure huge pages for |VMs| in an |OVS-DPDK| environment on this
host, use commands such as the following example, which assumes that a
1G huge page size is being used on this host:
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 0
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application -1G 10 $NODE 1
done
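As a hypothetical illustration of the flavor property mentioned above (the
flavor name, RAM, vCPU and disk sizes are placeholders, not part of this
procedure):
.. code-block:: bash

   # create a flavor whose VMs request huge pages, as required for OVS-DPDK networking
   openstack flavor create --ram 4096 --vcpus 2 --disk 20 dpdk-flavor
   openstack flavor set dpdk-flavor --property hw:mem_page_size=large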
#. **For OpenStack only:** Add an 'instances' filesystem OR set up a
disk-based nova-local volume group, which is needed for |prefix|-openstack
nova ephemeral disks. Note: both cannot exist at the same time.
Add an 'instances' filesystem:
.. code-block:: bash
# Create instances filesystem
for NODE in worker-0 worker-1; do
system host-fs-add ${NODE} instances=<size>
done
OR add a 'nova-local' volume group
.. code-block:: bash
for NODE in worker-0 worker-1; do
# Create nova-local local volume group
system host-lvg-add ${NODE} nova-local
# Get UUID of an unused DISK to be added to the nova-local volume
# group. CEPH OSD Disks can NOT be used. Assume /dev/sdb is unused
# on all workers
DISK_UUID=$(system host-disk-list ${NODE} | awk '/sdb/{print $2}')
# Add the unused disk to the nova-local volume group
system host-pv-add ${NODE} nova-local ${DISK_UUID}
done
#. **For OpenStack only:** Configure data interfaces for worker nodes.
Data class interfaces are vswitch interfaces used by vswitch to provide
|VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.
.. important::
A compute-labeled worker host **MUST** have at least one Data class
interface.
* Configure the data interfaces for worker nodes.
.. code-block:: bash
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List inventoried hosts ports and identify ports to be used as data interfaces,
# based on displayed linux port name, pci address and device type.
system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as data class interfaces, MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks that vswitch 'data' interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${DATANET1}
.. end-dedicated-stor-config-workers
.. begin-dedicated-conf-pci-sriov-interfaces
**Optionally**, configure pci-sriov interfaces for worker nodes.
This step is **optional** for Kubernetes. Do this step if using
|SRIOV| network attachments in hosted application containers.
.. only:: openstack
This step is **optional** for OpenStack. Do this step if using
|SRIOV| vNICs in hosted application |VMs|. Note that pci-sriov
interfaces can have the same Data Networks assigned to them as
vswitch data interfaces.
* Configure the pci-sriov interfaces for worker nodes.
.. code-block:: bash
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List inventoried hosts ports and identify ports to be used as pci-sriov interfaces,
# based on displayed linux port name, pci address and device type.
system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as pci-sriov class interfaces, MTU of 1500 and named sriov#
system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} <sriov0-if-uuid> -N <num_vfs>
system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} <sriov1-if-uuid> -N <num_vfs>
# If not created already, create Data Networks that the 'pci-sriov'
# interfaces will be connected to
DATANET0='datanet0'
DATANET1='datanet1'
system datanetwork-add ${DATANET0} vlan
system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
system interface-datanetwork-assign ${NODE} <sriov0-if-uuid> ${DATANET0}
system interface-datanetwork-assign ${NODE} <sriov1-if-uuid> ${DATANET1}
* **For Kubernetes only:** To enable using |SRIOV| network attachments
for the above interfaces in Kubernetes hosted application
containers:
* Configure the Kubernetes |SRIOV| device plugin.
.. code-block:: bash
for NODE in worker-0 worker-1; do
system host-label-assign $NODE sriovdp=enabled
done
* If planning on running |DPDK| in Kubernetes hosted application
containers on this host, configure the number of 1G Huge pages
required on both |NUMA| nodes.
.. code-block:: bash
for NODE in worker-0 worker-1; do
# assign 10x 1G huge page on processor/numa-node 0 on worker-node to applications
system host-memory-modify -f application $NODE 0 -1G 10
# assign 10x 1G huge page on processor/numa-node 1 on worker-node to applications
system host-memory-modify -f application $NODE 1 -1G 10
done
.. end-dedicated-conf-pci-sriov-interfaces
.. begin-dedicated-unlock-workers
Unlock worker nodes in order to bring them into service:
.. code-block:: bash
for NODE in worker-0 worker-1; do
system host-unlock $NODE
done
The worker nodes will reboot in order to apply configuration changes
and come into service. This can take 5-10 minutes, depending on the
performance of the host machine.
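Progress can be followed from controller-0; as a sketch, re-run the host
listing until both workers report unlocked/enabled/available:
.. code-block:: bash

   watch -n 30 system host-list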
.. end-dedicated-unlock-workers


@ -1,8 +1,5 @@
.. incl-install-software-controller-0-aio-start
Installing software on controller-0 is the second step in the |prod|
installation procedure.
.. note::
The disks and disk partitions need to be wiped before the install. Installing
@ -33,6 +30,8 @@ installation procedure.
#. Attach to a console, ensure the host boots from the USB, and wait for the
|prod| Installer Menus.
.. begin-install-software-controller-0-aio-virtual
#. Wait for the Install menus, and when prompted, make the following menu
selections in the installer:
@ -73,6 +72,8 @@ installation procedure.
When using the low latency kernel, you must use the serial console
instead of the graphics console, as the graphics console causes RT
performance issues.
.. end-install-software-controller-0-aio-virtual
.. include:: /_includes/install-patch-ctl-0.rest
.. incl-install-software-controller-0-aio-end


@ -0,0 +1,108 @@
.. incl-bootstrap-controller-0-virt-controller-storage-start:
On virtual controller-0:
#. Log in using the username / password of "sysadmin" / "sysadmin".
When logging in for the first time, you will be forced to change
the password.
::
Login: sysadmin
Password:
Changing password for sysadmin.
(current) UNIX Password: sysadmin
New Password:
(repeat) New Password:
#. External connectivity is required to run the Ansible bootstrap
playbook:
::
export CONTROLLER0_OAM_CIDR=10.10.10.3/24
export DEFAULT_OAM_GATEWAY=10.10.10.1
sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
sudo ip link set up dev enp7s1
sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
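As a quick, optional sanity check, external connectivity can be verified
before proceeding:
::

   ping -c 3 8.8.8.8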
#. Specify user configuration overrides for the Ansible bootstrap
playbook.
Ansible is used to bootstrap StarlingX on controller-0. Key files
for Ansible configuration are:
``/etc/ansible/hosts``
The default Ansible inventory file. Contains a single host:
localhost.
``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
The Ansible bootstrap playbook.
``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
The default configuration values for the bootstrap playbook.
``sysadmin home directory ($HOME)``
The default location where Ansible looks for and imports user
configuration override files for hosts. For example:
``$HOME/<hostname>.yml``.
.. include:: /shared/_includes/ansible_install_time_only.txt
Specify the user configuration override file for the Ansible
bootstrap playbook using one of the following methods:
* Copy the ``default.yml`` file listed above to
``$HOME/localhost.yml`` and edit the configurable values as
desired (use the commented instructions in the file).
or
* Create the minimal user configuration override file as shown in
the example below:
::
cd ~
cat <<EOF > localhost.yml
system_mode: duplex
dns_servers:
- 8.8.8.8
- 8.8.4.4
external_oam_subnet: 10.10.10.0/24
external_oam_gateway_address: 10.10.10.1
external_oam_floating_address: 10.10.10.2
external_oam_node_0_address: 10.10.10.3
external_oam_node_1_address: 10.10.10.4
admin_username: admin
admin_password: <admin-password>
ansible_become_pass: <sysadmin-password>
# Add these lines to configure Docker to use a proxy server
# docker_http_proxy: http://my.proxy.com:1080
# docker_https_proxy: https://my.proxy.com:1443
# docker_no_proxy:
# - 1.2.3.4
EOF
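Alternatively, for the first method listed above (editing a full copy of
the defaults), a minimal sketch is to start from the packaged file:
::

   cp /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml $HOME/localhost.yml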
Refer to :ref:`Ansible Bootstrap Configurations
<ansible_bootstrap_configs_r7>` for information on additional
Ansible bootstrap configurations for advanced Ansible bootstrap
scenarios, such as Docker proxies when deploying behind a firewall,
etc. Refer to :ref:`Docker Proxy Configuration
<docker_proxy_config>` for details about Docker proxy settings.
#. Run the Ansible bootstrap playbook:
::
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
Wait for Ansible bootstrap playbook to complete. This can take 5-10
minutes, depending on the performance of the host machine.
.. incl-bootstrap-controller-0-virt-controller-storage-end:


@ -0,0 +1,170 @@
.. incl-bootstrap-sys-controller-0-standard-start
#. Log in using the username / password of "sysadmin" / "sysadmin".
When logging in for the first time, you will be forced to change the
password.
::
Login: sysadmin
Password:
Changing password for sysadmin.
(current) UNIX Password: sysadmin
New Password:
(repeat) New Password:
#. Verify and/or configure IP connectivity.
External connectivity is required to run the Ansible bootstrap
playbook. The StarlingX boot image will |DHCP| out all interfaces so
the server may have obtained an IP address and have external IP
connectivity if a |DHCP| server is present in your environment. Verify
this using the :command:`ip addr` and :command:`ping 8.8.8.8`
commands.
Otherwise, manually configure an IP address and default IP route. Use
the PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable
to your deployment environment.
.. code-block:: bash
sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
sudo ip link set up dev <PORT>
sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
ping 8.8.8.8
#. Specify user configuration overrides for the Ansible bootstrap
playbook.
Ansible is used to bootstrap StarlingX on controller-0. Key files for
Ansible configuration are:
``/etc/ansible/hosts``
The default Ansible inventory file. Contains a single host:
localhost.
``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
The Ansible bootstrap playbook.
``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
The default configuration values for the bootstrap playbook.
``sysadmin home directory ($HOME)``
The default location where Ansible looks for and imports user
configuration override files for hosts. For example:
``$HOME/<hostname>.yml``.
.. only:: starlingx
.. include:: /shared/_includes/ansible_install_time_only.txt
Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods:
.. note::
This Ansible Overrides file for the Bootstrap Playbook
($HOME/localhost.yml) contains security-sensitive information;
use the :command:`ansible-vault create $HOME/localhost.yml` command to
create it. You will be prompted for a password to protect/encrypt
the file. Use the :command:`ansible-vault edit $HOME/localhost.yml`
command if the file needs to be edited after it is created.
#. Use a copy of the default.yml file listed above to provide your
overrides.
The ``default.yml`` file lists all available parameters for
bootstrap configuration with a brief description for each parameter
in the file comments.
To use this method, run the :command:`ansible-vault create
$HOME/localhost.yml` command and copy the contents of the
``default.yml`` file into the ansible-vault editor, and edit the
configurable values as required.
#. Create a minimal user configuration override file.
To use this method, create your override file with the
:command:`ansible-vault create $HOME/localhost.yml` command and
provide the minimum required parameters for the deployment
configuration as shown in the example below. Use the OAM IP subnet
and IP addressing applicable to your deployment environment.
.. include:: /_includes/min-bootstrap-overrides-non-simplex.rest
.. only:: starlingx
In either of the above options, the bootstrap playbook's default
values will pull all container images required for the |prod-p|
from Docker Hub.
If you have set up a private Docker registry to use for
bootstrapping, then you will need to add the following lines in
$HOME/localhost.yml:
.. only:: partner
.. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
:start-after: docker-reg-begin
:end-before: docker-reg-end
.. code-block:: yaml
docker_registries:
quay.io:
url: myprivateregistry.abc.com:9001/quay.io
docker.elastic.co:
url: myprivateregistry.abc.com:9001/docker.elastic.co
gcr.io:
url: myprivateregistry.abc.com:9001/gcr.io
ghcr.io:
url: myprivateregistry.abc.com:9001/ghcr.io
k8s.gcr.io:
url: myprivateregistry.abc.com:9001/k8s.gcr.io
docker.io:
url: myprivateregistry.abc.com:9001/docker.io
registry.k8s.io:
url: myprivateregistry.abc.com:9001/registry.k8s.io
icr.io:
url: myprivateregistry.abc.com:9001/icr.io
defaults:
type: docker
username: <your_myprivateregistry.abc.com_username>
password: <your_myprivateregistry.abc.com_password>
# Add the CA Certificate that signed myprivateregistry.abc.com's
# certificate as a Trusted CA
ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem
See :ref:`Use a Private Docker Registry <use-private-docker-registry-r7>`
for more information.
.. only:: starlingx
If a firewall is blocking access to Docker Hub or your private
registry from your StarlingX deployment, you will need to add
the following lines in $HOME/localhost.yml (see :ref:`Docker
Proxy Configuration <docker_proxy_config>` for more details
about Docker proxy settings):
.. only:: partner
.. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
:start-after: firewall-begin
:end-before: firewall-end
.. code-block:: bash
# Add these lines to configure Docker to use a proxy server
docker_http_proxy: http://my.proxy.com:1080
docker_https_proxy: https://my.proxy.com:1443
docker_no_proxy:
- 1.2.3.4
Refer to :ref:`Ansible Bootstrap Configurations
<ansible_bootstrap_configs_r7>` for information on additional
Ansible bootstrap configurations for advanced Ansible bootstrap
scenarios.
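When you later run the bootstrap playbook, note that an ansible-vault
encrypted overrides file will prompt for the vault password; a sketch,
assuming the default playbook path listed above:
.. code-block:: bash

   ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml --ask-vault-pass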
.. incl-bootstrap-sys-controller-0-standard-end


@ -0,0 +1,75 @@
.. incl-config-controller-0-storage-start
#. Acquire admin credentials:
::
source /etc/platform/openrc
#. Configure the |OAM| interface of controller-0 and specify the
attached network as "oam".
The following example configures the |OAM| interface on a physical untagged
ethernet port; use the |OAM| port name that is applicable to your deployment
environment, for example eth0:
.. code-block:: bash
OAM_IF=<OAM-PORT>
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
To configure a vlan or aggregated ethernet interface, see :ref:`Node
Interfaces <node-interfaces-index>`.
#. Configure the MGMT interface of controller-0 and specify the attached
networks of both "mgmt" and "cluster-host".
The following example configures the MGMT interface on a physical untagged
ethernet port; use the MGMT port name that is applicable to your deployment
environment, for example eth1:
.. code-block:: bash
MGMT_IF=<MGMT-PORT>
# De-provision loopback interface and
# remove mgmt and cluster-host networks from loopback interface
system host-if-modify controller-0 lo -c none
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
system interface-network-remove ${UUID}
done
# Configure management interface and assign mgmt and cluster-host networks to it
system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host
To configure a vlan or aggregated ethernet interface, see :ref:`Node
Interfaces <node-interfaces-index>`.
#. Configure |NTP| servers for network time synchronization:
::
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
To configure |PTP| instead of |NTP|, see :ref:`PTP Server Configuration
<ptp-server-config-index>`.
#. If required, configure Ceph storage backend:
A persistent storage backend is required if your application requires |PVCs|.
.. only:: openstack
.. important::
The StarlingX OpenStack application **requires** |PVCs|.
::
system storage-backend-add ceph --confirm
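Optionally, the backend state can be checked afterwards; a sketch:
::

   system storage-backend-list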
.. incl-config-controller-0-storage-end:


@ -0,0 +1,68 @@
.. incl-config-controller-0-virt-controller-storage-start:
On virtual controller-0:
#. Acquire admin credentials:
::
source /etc/platform/openrc
#. Configure the |OAM| and MGMT interfaces of controller-0 and specify
the attached networks:
::
OAM_IF=enp7s1
MGMT_IF=enp7s2
system host-if-modify controller-0 lo -c none
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
system interface-network-remove ${UUID}
done
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host
#. Configure NTP servers for network time synchronization:
.. note::
In a virtual environment, this can sometimes cause Ceph clock
skew alarms. Also, the virtual instance clock is synchronized
with the host clock, so it is not absolutely required to
configure NTP here.
::
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
#. Configure Ceph storage backend:
.. important::
This step is required only if your application requires
persistent storage.
If you want to install the StarlingX OpenStack application
(|prefix|-openstack), this step is mandatory.
::
system storage-backend-add ceph --confirmed
#. If required, and not already done as part of bootstrap, configure
Docker to use a proxy server.
#. List Docker proxy parameters:
::
system service-parameter-list platform docker
#. Refer to :ref:`docker_proxy_config` for
details about Docker proxy settings.
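As a sketch only (the exact parameter names should be confirmed against
:ref:`docker_proxy_config`), proxy parameters are typically added and
applied with the service-parameter commands, for example:
::

   system service-parameter-add platform docker http_proxy=http://my.proxy.com:1080
   system service-parameter-apply platform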
.. incl-config-controller-0-virt-controller-storage-end:


@ -0,0 +1,28 @@
.. incl-config-controller-1-virt-controller-storage-start:
Configure the |OAM| and MGMT interfaces of virtual controller-1 and
specify the attached networks. Note that the MGMT interface is partially
set up by the network install procedure.
::
OAM_IF=enp7s1
system host-if-modify controller-1 $OAM_IF -c platform
system interface-network-assign controller-1 $OAM_IF oam
system interface-network-assign controller-1 mgmt0 cluster-host
.. rubric:: OpenStack-specific host configuration
.. important::
This step is required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.
**For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the |prefix|-openstack manifest/helm-charts later:
::
system host-label-assign controller-1 openstack-control-plane=enabled
.. incl-config-controller-1-virt-controller-storage-end:


@ -0,0 +1,50 @@
.. incl-config-controller-1-start:
#. Configure the |OAM| interface of controller-1 and specify the
attached network of "oam".
The following example configures the |OAM| interface on a physical
untagged ethernet port; use the |OAM| port name that is applicable
to your deployment environment, for example eth0:
.. code-block:: bash
OAM_IF=<OAM-PORT>
system host-if-modify controller-1 $OAM_IF -c platform
system interface-network-assign controller-1 $OAM_IF oam
To configure a vlan or aggregated ethernet interface, see
:ref:`Node Interfaces <node-interfaces-index>`.
#. The MGMT interface is partially set up by the network install
procedure, which configures the port used for network install as the MGMT
port and assigns it the attached network of "mgmt".
Complete the MGMT interface configuration of controller-1 by
specifying the attached network of "cluster-host".
::
system interface-network-assign controller-1 mgmt0 cluster-host
.. only:: openstack
*************************************
OpenStack-specific host configuration
*************************************
.. important::
This step is required only if the |prod-os| application
(|prefix|-openstack) will be installed.
**For OpenStack only:** Assign OpenStack host labels to controller-1
in support of installing the |prefix|-openstack manifest and
helm-charts later.
::
system host-label-assign controller-1 openstack-control-plane=enabled
.. incl-config-controller-1-end:


@ -12,8 +12,6 @@ installation.
Before attempting to install |prod|, ensure that you have the following:
.. _installation-pre-requisites-ul-uzl-rny-q3b:
- The |prod-long| host installer ISO image file.


@ -1,9 +1,8 @@
The following sections describe system requirements and host setup for a
workstation hosting virtual machine(s) where StarlingX will be deployed.
.. rubric:: Hardware requirements
The host system should have at least:
@ -18,9 +17,8 @@ The host system should have at least:
* **Network:** One network adapter with active Internet connection
.. rubric:: Software requirements
The host system should have at least:
@ -28,9 +26,8 @@ The host system should have at least:
All other required packages will be installed by scripts in the StarlingX tools repository.
.. rubric:: Host setup
Set up the host with the following steps:
@ -68,5 +65,7 @@ Set up the host with the following steps:
#. Get the latest StarlingX ISO from the
`StarlingX mirror
<https://mirror.starlingx.windriver.com/mirror/starlingx/release/latest_release/debian/monolithic/outputs/iso/>`_.
Alternately, you can get an older release ISO from `here
<https://mirror.starlingx.windriver.com/mirror/starlingx/release/>`_.