.. _aio_duplex_install_kubernetes_r7:
================================================
Install Kubernetes Platform on All-in-one Duplex
================================================
.. only:: partner
.. include:: /_includes/install-kubernetes-null-labels.rest
.. contents:: |minitoc|
:local:
:depth: 1
--------
Overview
--------
.. _aiodx-installation-prereqs:
.. include:: /shared/_includes/desc_aio_duplex.txt
.. _installation-prereqs-dx:
-----------------------------
Minimum hardware requirements
-----------------------------
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
:start-after: begin-min-hw-reqs-dx
:end-before: end-min-hw-reqs-dx
.. group-tab:: Virtual
The following sections describe system requirements and host setup for
a workstation hosting virtual machine(s) where StarlingX will be
deployed; i.e., a |VM| for each StarlingX node (controller,
AIO-controller, worker or storage node).
.. rubric:: **Hardware requirements**
The host system should have at least:
* **Processor:** x86_64 is the only supported architecture, with hardware
virtualization extensions enabled in the BIOS
* **Cores:** 8
* **Memory:** 32GB RAM
* **Hard Disk:** 500GB HDD
* **Network:** One network adapter with active Internet connection
.. rubric:: **Software requirements**
The host system should have at least:
* A workstation computer with Ubuntu 16.04 LTS 64-bit
All other required packages will be installed by scripts in the
StarlingX tools repository.
.. rubric:: **Host setup**
Set up the host with the following steps:
#. Update OS:
::
apt-get update
#. Clone the StarlingX tools repository:
::
apt-get install -y git
cd $HOME
git clone https://opendev.org/starlingx/virtual-deployment.git
#. Install required packages:
::
cd $HOME/virtual-deployment/libvirt
bash install_packages.sh
apt install -y apparmor-profiles
apt-get install -y ufw
ufw disable
ufw status
.. note::
On Ubuntu 16.04, if the apparmor-profiles package was installed as
shown in the example above, you must reboot the server to fully
install the apparmor profile modules.
.. only:: partner
.. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
:start-after: begin-min-hw-reqs-dx
:end-before: end-min-hw-reqs-dx
--------------------------
Installation Prerequisites
--------------------------
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/installation-prereqs.rest
:start-after: begin-install-prereqs
:end-before: end-install-prereqs
.. group-tab:: Virtual
Several pre-requisites must be completed prior to starting the |prod|
installation.
Before attempting to install |prod|, ensure that you have the
|prod| host installer ISO image file.
Get the latest |prod| ISO from the `StarlingX mirror
`__.
Alternatively, you can get an older release ISO from `here `__.
.. only:: partner
.. include:: /shared/_includes/installation-prereqs.rest
:start-after: begin-install-prereqs
:end-before: end-install-prereqs
--------------------------------
Prepare Servers for Installation
--------------------------------
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
:start-after: start-prepare-servers-common
:end-before: end-prepare-servers-common
.. group-tab:: Virtual
.. note::
The following commands for host, virtual environment setup, and
host power-on use KVM/virsh for virtual machine and |VM|
management technology. For an alternative virtualization
environment, see: :ref:`install_virtualbox`.
#. Prepare virtual environment.
Set up the virtual platform networks for virtual deployment:
.. code-block:: none
bash setup_network.sh
#. Prepare virtual servers.
Create the XML definitions for the virtual servers required by
this configuration option. This will create the XML virtual
server definition for:
* duplex-controller-0
* duplex-controller-1
The following command will start/virtually power on:
* The 'duplex-controller-0' virtual server
* The X-based graphical virt-manager application
::
bash setup_configuration.sh -c duplex -i ./bootimage.iso
If there is no X-server present, errors will occur and the
X-based GUI for the virt-manager application will not start. The
virt-manager GUI is not strictly required, so you can safely
ignore these errors and continue.
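To confirm that the virtual servers were defined and that
'duplex-controller-0' is running, you can optionally list the libvirt
domains on the host; this is only a quick sanity check and the output
will vary:
::
virsh list --all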
.. only:: partner
.. include:: /shared/_includes/prepare-servers-for-installation-91baad307173.rest
:start-after: start-prepare-servers-common
:end-before: end-prepare-servers-common
--------------------------------
Install Software on Controller-0
--------------------------------
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/inc-install-software-on-controller.rest
:start-after: incl-install-software-controller-0-aio-start
:end-before: incl-install-software-controller-0-aio-end
.. group-tab:: Virtual
In the last step of :ref:`aio_duplex_environ`, the controller-0
virtual server 'duplex-controller-0' was started by the
:command:`setup_configuration.sh` command.
On the host, attach to the console of virtual controller-0 and select the
appropriate installer menu options to start the non-interactive install of
StarlingX software on controller-0.
.. note::
When entering the console, it is very easy to miss the first
installer menu selection. Use ESC to navigate to previous menus, to
ensure you are at the first installer menu.
::
virsh console duplex-controller-0
Make the following menu selections in the installer:
#. First menu: Select 'All-in-one Controller Configuration'
#. Second menu: Select 'Serial Console'
Wait for the non-interactive install of software to complete and for
the server to reboot. This can take 5-10 minutes, depending on the
performance of the host machine.
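Once the install completes and the server reboots, you can re-attach to the
console of virtual controller-0 (using the same command as above) to continue
with the bootstrap steps in the next section:
::
virsh console duplex-controller-0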
.. only:: partner
.. include:: /shared/_includes/inc-install-software-on-controller.rest
:start-after: incl-install-software-controller-0-aio-start
:end-before: incl-install-software-controller-0-aio-end
--------------------------------
Bootstrap system on controller-0
--------------------------------
#. Log in using the username / password of "sysadmin" / "sysadmin".
When logging in for the first time, you will be forced to change the
password.
::
Login: sysadmin
Password:
Changing password for sysadmin.
(current) UNIX Password: sysadmin
New Password:
(repeat) New Password:
#. Verify and/or configure IP connectivity.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-aio-dx-install-verify-ip-connectivity
:end-before: end-aio-dx-install-verify-ip-connectivity
.. group-tab:: Virtual
External connectivity is required to run the Ansible bootstrap
playbook.
.. code-block:: bash
export CONTROLLER0_OAM_CIDR=10.10.10.3/24
export DEFAULT_OAM_GATEWAY=10.10.10.1
sudo ip address add $CONTROLLER0_OAM_CIDR dev enp7s1
sudo ip link set up dev enp7s1
sudo ip route add default via $DEFAULT_OAM_GATEWAY dev enp7s1
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-aio-dx-install-verify-ip-connectivity
:end-before: end-aio-dx-install-verify-ip-connectivity
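Optionally, verify external connectivity before proceeding. This is only a
minimal sketch of a check; substitute an address that should be reachable
from your |OAM| network:
::
ping -c 3 8.8.8.8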
#. Specify user configuration overrides for the Ansible bootstrap playbook.
Ansible is used to bootstrap |prod| on controller-0. Key files for
Ansible configuration are:
``/etc/ansible/hosts``
The default Ansible inventory file. Contains a single host: localhost.
``/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml``
The Ansible bootstrap playbook.
``/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml``
The default configuration values for the bootstrap playbook.
``sysadmin home directory ($HOME)``
The default location where Ansible looks for and imports user
configuration override files for hosts. For example:
``$HOME/.yml``.
.. only:: starlingx
.. include:: /shared/_includes/ansible_install_time_only.txt
Specify the user configuration override file for the Ansible bootstrap
playbook using one of the following methods:
.. note::
This Ansible overrides file for the bootstrap playbook ($HOME/localhost.yml)
contains security-sensitive information; use the
:command:`ansible-vault create $HOME/localhost.yml` command to create it.
You will be prompted for a password to protect/encrypt the file.
Use the :command:`ansible-vault edit $HOME/localhost.yml` command if the
file needs to be edited after it is created.
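For reference, the ansible-vault workflow described in the note above can be
summarized as follows; you are prompted for the vault password in each case:
::
ansible-vault create $HOME/localhost.yml
ansible-vault edit $HOME/localhost.yml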
#. Use a copy of the default.yml file listed above to provide your overrides.
The default.yml file lists all available parameters for bootstrap
configuration with a brief description for each parameter in the file
comments.
To use this method, run the :command:`ansible-vault create $HOME/localhost.yml`
command and copy the contents of the ``default.yml`` file into the
ansible-vault editor, and edit the configurable values as required.
#. Create a minimal user configuration override file.
To use this method, create your override file with
the :command:`ansible-vault create $HOME/localhost.yml`
command and provide the minimum required parameters for the deployment
configuration as shown in the example below. Use the OAM IP subnet and IP
addressing applicable to your deployment environment.
.. include:: /shared/_includes/quotation-marks-in-keystone-password.rest
.. include:: /_includes/min-bootstrap-overrides-non-simplex.rest
.. only:: starlingx
In either of the above options, the bootstrap playbook’s default values
will pull all container images required for the |prod-p| from Docker Hub.
If you have set up a private Docker registry to use for bootstrapping,
then you will need to add the following lines to $HOME/localhost.yml:
.. only:: partner
.. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
:start-after: docker-reg-begin
:end-before: docker-reg-end
.. code-block:: yaml
docker_registries:
quay.io:
url: myprivateregistry.abc.com:9001/quay.io
docker.elastic.co:
url: myprivateregistry.abc.com:9001/docker.elastic.co
gcr.io:
url: myprivateregistry.abc.com:9001/gcr.io
ghcr.io:
url: myprivateregistry.abc.com:9001/ghcr.io
k8s.gcr.io:
url: myprivateregistry.abc.com:9001/k8s.gcr.io
docker.io:
url: myprivateregistry.abc.com:9001/docker.io
registry.k8s.io:
url: myprivateregistry.abc.com:9001/registry.k8s.io
icr.io:
url: myprivateregistry.abc.com:9001/icr.io
defaults:
type: docker
username:
password:
# Add the CA Certificate that signed myprivateregistry.abc.com’s
# certificate as a Trusted CA
ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem
See :ref:`Use a Private Docker Registry `
for more information.
.. only:: starlingx
If a firewall is blocking access to Docker Hub or your private
registry from your StarlingX deployment, you will need to add the
following lines to $HOME/localhost.yml (see :ref:`Docker Proxy
Configuration ` for more details about Docker
proxy settings):
.. only:: partner
.. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
:start-after: firewall-begin
:end-before: firewall-end
.. code-block:: bash
# Add these lines to configure Docker to use a proxy server
docker_http_proxy: http://my.proxy.com:1080
docker_https_proxy: https://my.proxy.com:1443
docker_no_proxy:
- 1.2.3.4
Refer to :ref:`Ansible Bootstrap Configurations `
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios.
#. Run the Ansible bootstrap playbook:
.. include:: /shared/_includes/ntp-update-note.rest
::
ansible-playbook --ask-vault-pass /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
Wait for Ansible bootstrap playbook to complete. This can take 5-10 minutes,
depending on the performance of the host machine.
----------------------
Configure controller-0
----------------------
#. Acquire admin credentials:
::
source /etc/platform/openrc
#. Configure the |OAM| interface of controller-0 and specify the
attached network as "oam".
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-oam-interface-dx
:end-before: end-config-controller-0-oam-interface-dx
.. group-tab:: Virtual
.. code-block:: none
~(keystone_admin)$ OAM_IF=enp7s1
~(keystone_admin)$ system host-if-modify controller-0 $OAM_IF -c platform
~(keystone_admin)$ system interface-network-assign controller-0 $OAM_IF oam
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-oam-interface-dx
:end-before: end-config-controller-0-oam-interface-dx
To configure a |VLAN| or aggregated ethernet interface, see :ref:`Node
Interfaces `.
#. Configure the MGMT interface of controller-0 and specify the attached
networks of both "mgmt" and "cluster-host".
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. begin-config-controller-0-mgmt-interface-dx
The following example configures the MGMT interface on a physical
untagged ethernet port. Use the MGMT port name that is applicable
to your deployment environment, for example ``eth1``:
.. code-block:: none
~(keystone_admin)$ MGMT_IF=
~(keystone_admin)$ system host-if-modify controller-0 lo -c none
~(keystone_admin)$ IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
~(keystone_admin)$ for UUID in $IFNET_UUIDS; do
    system interface-network-remove ${UUID}
done
~(keystone_admin)$ system host-if-modify controller-0 $MGMT_IF -c platform
~(keystone_admin)$ system interface-network-assign controller-0 $MGMT_IF mgmt
~(keystone_admin)$ system interface-network-assign controller-0 $MGMT_IF cluster-host
.. end-config-controller-0-mgmt-interface-dx
.. group-tab:: Virtual
.. code-block:: none
~(keystone_admin)$ MGMT_IF=enp7s2
~(keystone_admin)$ system host-if-modify controller-0 lo -c none
~(keystone_admin)$ IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
~(keystone_admin)$ for UUID in $IFNET_UUIDS; do
    system interface-network-remove ${UUID}
done
~(keystone_admin)$ system host-if-modify controller-0 $MGMT_IF -c platform
~(keystone_admin)$ system interface-network-assign controller-0 $MGMT_IF mgmt
~(keystone_admin)$ system interface-network-assign controller-0 $MGMT_IF cluster-host
.. only:: partner
.. include:: aio_duplex_install_kubernetes.rst
:start-after: begin-config-controller-0-mgmt-interface
:end-before: end-config-controller-0-mgmt-interface
To configure a |VLAN| or aggregated ethernet interface, see :ref:`Node
Interfaces `.
#. Configure |NTP| servers for network time synchronization:
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-ntp-interface-dx
:end-before: end-config-controller-0-ntp-interface-dx
.. group-tab:: Virtual
.. note::
In a virtual environment, this can sometimes cause Ceph clock
skew alarms. Also, the virtual instances clock is synchronized
with the host clock, so it is not absolutely required to
configure |NTP| in this step.
.. code-block:: none
~(keystone_admin)$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-ntp-interface-dx
:end-before: end-config-controller-0-ntp-interface-dx
.. only:: openstack
*************************************
OpenStack-specific host configuration
*************************************
.. important::
These steps are required only if the StarlingX OpenStack application
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to controller-0 in
support of installing the |prefix|-openstack manifest and helm-charts later.
.. only:: starlingx
.. parsed-literal::
system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 |vswitch-label|
.. note::
If you have a |NIC| that supports |SRIOV|, then you can enable it by
using the following:
.. code-block:: none
system host-label-assign controller-0 sriov=enabled
.. only:: partner
.. include:: /_includes/aio_duplex_install_kubernetes.rest
:start-after: ref1-begin
:end-before: ref1-end
#. **For OpenStack only:** Due to the additional OpenStack services running
on the |AIO| controller platform cores, additional platform cores may be
required.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-add-cores-dx
:end-before: end-config-controller-0-OS-add-cores-dx
.. group-tab:: Virtual
The |VMs| being used for hosts only have 4 cores; 2 for platform
and 2 for |VMs|. There are no additional cores available for
platform in this scenario.
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-add-cores-dx
:end-before: end-config-controller-0-OS-add-cores-dx
#. Due to the additional OpenStack services' containers running on the
controller host, the size of the Docker filesystem needs to be
increased from the default size of 30G to 60G.
.. code-block:: bash
# check existing size of docker fs
system host-fs-list controller-0
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
system host-lvg-list controller-0
# if existing docker fs size + cgts-vg available space is less than
# 80G, you will need to add a new disk to cgts-vg.
# Get device path of BOOT DISK
system host-show controller-0 | fgrep rootfs
# Get UUID of ROOT DISK by listing disks
system host-disk-list controller-0
# Add new disk to 'cgts-vg' local volume group, specifying the UUID or
# device path of the new disk from the disk listing above
system host-pv-add controller-0 cgts-vg <new_disk_uuid>
sleep 10 # wait for disk to be added
# Confirm the available space and increased number of physical
# volumes added to the cgts-vg volume group
system host-lvg-list controller-0
# Increase docker filesystem to 60G
system host-fs-modify controller-0 docker=60
#. **For OpenStack only:** Configure the system setting for the vSwitch.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
|prod| has |OVS| (kernel-based) vSwitch configured as default,
which:
* runs in a container, defined within the helm charts of the
|prefix|-openstack manifest.
* shares the core(s) assigned to the platform.
If you require better performance, |OVS-DPDK| (|OVS| with the Data
Plane Development Kit, which is supported only on bare metal hardware)
should be used:
* Runs directly on the host (it is not containerized).
* Requires that at least 1 core be assigned/dedicated to the vSwitch
function.
To deploy the default containerized |OVS|:
.. code-block:: none
~(keystone_admin)$ system modify --vswitch_type none
This does not run any vSwitch directly on the host; instead, it uses
the containerized |OVS| defined in the helm charts of the
|prefix|-openstack manifest.
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-vswitch-dx
:end-before: end-config-controller-0-OS-vswitch-dx
.. group-tab:: Virtual
The default vSwitch is the containerized |OVS| that is packaged
with the ``stx-openstack`` manifest/helm-charts. |prod| provides
the option to use |OVS-DPDK| on the host; however, |OVS-DPDK| is
not supported in the virtual environment, so only |OVS| is
supported. Therefore, use the default |OVS| vSwitch here.
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-vswitch-dx
:end-before: end-config-controller-0-OS-vswitch-dx
#. **For OpenStack only:** Add an instances filesystem OR set up a
disk-based nova-local volume group, which is needed for |prefix|-openstack
nova ephemeral disks.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-add-fs-dx
:end-before: end-config-controller-0-OS-add-fs-dx
.. group-tab:: Virtual
Set up an "instances" filesystem, which is needed for
stx-openstack nova ephemeral disks.
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-0
~(keystone_admin)$ system host-fs-add ${NODE} instances=34
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-add-fs-dx
:end-before: end-config-controller-0-OS-add-fs-dx
#. **For OpenStack only:** Configure data interfaces for controller-0.
Data class interfaces are vswitch interfaces used by vswitch to provide
|VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.
.. important::
A compute-labeled All-in-one controller host **MUST** have at least
one Data class interface.
* Configure the data interfaces for controller-0.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-data-interface-dx
:end-before: end-config-controller-0-OS-data-interface-dx
.. group-tab:: Virtual
.. code-block:: bash
~(keystone_admin)$ DATA0IF=eth1000
~(keystone_admin)$ DATA1IF=eth1001
~(keystone_admin)$ export NODE=controller-0
~(keystone_admin)$ PHYSNET0='physnet0'
~(keystone_admin)$ PHYSNET1='physnet1'
~(keystone_admin)$ SPL=/tmp/tmp-system-port-list
~(keystone_admin)$ SPIL=/tmp/tmp-system-host-if-list
~(keystone_admin)$ system host-port-list ${NODE} --nowrap > ${SPL}
~(keystone_admin)$ system host-if-list -a ${NODE} --nowrap > ${SPIL}
~(keystone_admin)$ DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
~(keystone_admin)$ DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
~(keystone_admin)$ DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
~(keystone_admin)$ DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
~(keystone_admin)$ DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
~(keystone_admin)$ DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
~(keystone_admin)$ DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
~(keystone_admin)$ DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
~(keystone_admin)$ system datanetwork-add ${PHYSNET0} vlan
~(keystone_admin)$ system datanetwork-add ${PHYSNET1} vlan
~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-data-interface-dx
:end-before: end-config-controller-0-OS-data-interface-dx
*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************
#. **Optionally**, configure |PCI|-|SRIOV| interfaces for controller-0.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: openstack
This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that |PCI|-|SRIOV| interfaces can
have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the pci-sriov interfaces for controller-0.
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-0
# List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
# based on displayed linux port name, pci address and device type.
~(keystone_admin)$ system host-port-list ${NODE}
# List host’s auto-configured ‘ethernet’ interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
~(keystone_admin)$ system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as ‘pci-sriov’ class interfaces, MTU of 1500 and named sriov#
~(keystone_admin)$ system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} -N
~(keystone_admin)$ system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} -N
# If not already created, create Data Networks that the 'pci-sriov'
# interfaces will be connected to
~(keystone_admin)$ DATANET0='datanet0'
~(keystone_admin)$ DATANET1='datanet1'
~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATANET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATANET1}
* **For Kubernetes Only:** To enable using |SRIOV| network attachments for
the above interfaces in Kubernetes hosted application containers:
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-k8s-sriov-dx
:end-before: end-config-controller-0-OS-k8s-sriov-dx
.. group-tab:: Virtual
Configure the Kubernetes |SRIOV| device plugin.
.. code-block:: none
~(keystone_admin)$ system host-label-assign controller-0 sriovdp=enabled
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-0-OS-k8s-sriov-dx
:end-before: end-config-controller-0-OS-k8s-sriov-dx
***************************************************************
If required, initialize a Ceph-based Persistent Storage Backend
***************************************************************
A persistent storage backend is required if your application requires |PVCs|.
.. only:: openstack
.. important::
The StarlingX OpenStack application **requires** |PVCs|.
.. only:: starlingx
There are two options for persistent storage backend: the host-based Ceph
solution and the Rook container-based Ceph solution.
For host-based Ceph:
#. Initialize by adding the ceph storage backend:
::
~(keystone_admin)$ system storage-backend-add ceph --confirmed
#. Add an |OSD| on controller-0 for host-based Ceph:
.. code-block:: bash
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
# By default, /dev/sda is being used as system disk and can not be used for OSD.
~(keystone_admin)$ system host-disk-list controller-0
# Add disk as an OSD storage
~(keystone_admin)$ system host-stor-add controller-0 osd
# List OSD storage devices
~(keystone_admin)$ system host-stor-list controller-0
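Optionally, confirm that the Ceph storage backend was added and check its
state; the backend may take a few minutes to reach the configured state:
::
~(keystone_admin)$ system storage-backend-list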
.. only:: starlingx
For Rook container-based Ceph:
#. Initialize by adding the ceph-rook storage backend:
::
~(keystone_admin)$ system storage-backend-add ceph-rook --confirmed
#. Assign Rook host labels to controller-0 in support of installing the
rook-ceph-apps manifest/helm-charts later:
::
~(keystone_admin)$ system host-label-assign controller-0 ceph-mon-placement=enabled
~(keystone_admin)$ system host-label-assign controller-0 ceph-mgr-placement=enabled
-------------------
Unlock controller-0
-------------------
.. include:: aio_simplex_install_kubernetes.rst
:start-after: incl-unlock-controller-0-aio-simplex-start:
:end-before: incl-unlock-controller-0-aio-simplex-end:
.. only:: openstack
* **For OpenStack only:** Due to the additional OpenStack services’
containers running on the controller host, the size of the Docker
filesystem needs to be increased from the default size of 30G to 60G.
.. code-block:: bash
# check existing size of docker fs
~(keystone_admin)$ system host-fs-list controller-0
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
~(keystone_admin)$ system host-lvg-list controller-0
# if existing docker fs size + cgts-vg available space is less than
# 80G, you will need to add a new disk to cgts-vg.
# Get device path of BOOT DISK
~(keystone_admin)$ system host-show controller-0 | fgrep rootfs
# Get UUID of ROOT DISK by listing disks
~(keystone_admin)$ system host-disk-list controller-0
# Add new disk to 'cgts-vg' local volume group, specifying the UUID or
# device path of the new disk from the disk listing above
~(keystone_admin)$ system host-pv-add controller-0 cgts-vg <new_disk_uuid>
~(keystone_admin)$ sleep 10 # wait for disk to be added
# Confirm the available space and increased number of physical
# volumes added to the cgts-vg volume group
~(keystone_admin)$ system host-lvg-list controller-0
# Increase docker filesystem to 60G
~(keystone_admin)$ system host-fs-modify controller-0 docker=60
-------------------------------------
Install software on controller-1 node
-------------------------------------
#. Power on the controller-1 server.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-power-on-controller-1-server-dx
:end-before: end-power-on-controller-1-server-dx
.. group-tab:: Virtual
#. On the host, power on the controller-1 virtual server,
``duplex-controller-1``. It will automatically attempt to
network boot over the management network:
.. code-block:: none
$ virsh start duplex-controller-1
#. Attach to the console of virtual controller-1:
.. code-block:: none
$ virsh console duplex-controller-1
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-power-on-controller-1-server-dx
:end-before: end-power-on-controller-1-server-dx
As controller-1 boots, a message appears on its console instructing you to
configure the personality of the node.
#. On the console of controller-0, list hosts to see the newly discovered
controller-1 host (hostname=None):
::
~(keystone_admin)$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
#. Using the host id, set the personality of this host to 'controller':
::
~(keystone_admin)$ system host-update 2 personality=controller
#. Wait for the software installation on controller-1 to complete, for
controller-1 to reboot, and for controller-1 to show as
locked/disabled/online in 'system host-list'.
This can take 5-10 minutes, depending on the performance of the host machine.
::
~(keystone_admin)$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
----------------------
Configure controller-1
----------------------
#. Configure the |OAM| interface of controller-1 and specify the attached
network of "oam".
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-1-server-oam-dx
:end-before: end-config-controller-1-server-oam-dx
.. group-tab:: Virtual
.. code-block:: none
~(keystone_admin)$ OAM_IF=enp7s1
~(keystone_admin)$ system host-if-modify controller-1 $OAM_IF -c platform
~(keystone_admin)$ system interface-network-assign controller-1 $OAM_IF oam
To configure a |VLAN| or aggregated ethernet interface, see :ref:`Node
Interfaces `.
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-controller-1-server-oam-dx
:end-before: end-config-controller-1-server-oam-dx
#. The MGMT interface is partially set up by the network install procedure,
which configures the port used for network install as the MGMT port and
specifies the attached network of "mgmt".
Complete the MGMT interface configuration of controller-1 by specifying the
attached network of "cluster-host".
::
~(keystone_admin)$ system interface-network-assign controller-1 mgmt0 cluster-host
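Optionally, confirm the network assignments on controller-1 using the same
interface listing command used earlier when reconfiguring controller-0:
::
~(keystone_admin)$ system interface-network-list controller-1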
.. only:: openstack
*************************************
OpenStack-specific host configuration
*************************************
.. important::
These steps are required only if the |prod-os| application
(|prefix|-openstack) will be installed.
#. **For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the |prefix|-openstack manifest and helm-charts later.
.. only:: starlingx
.. parsed-literal::
~(keystone_admin)$ system host-label-assign controller-1 openstack-control-plane=enabled
~(keystone_admin)$ system host-label-assign controller-1 openstack-compute-node=enabled
~(keystone_admin)$ system host-label-assign controller-1 |vswitch-label|
.. note::
If you have a |NIC| that supports |SRIOV|, then you can enable it by
using the following:
.. code-block:: none
~(keystone_admin)$ system host-label-assign controller-1 sriov=enabled
.. only:: partner
.. include:: /_includes/aio_duplex_install_kubernetes.rest
:start-after: ref2-begin
:end-before: ref2-end
#. **For OpenStack only:** Due to the additional OpenStack services running
on the |AIO| controller platform cores, additional cores may be required.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-increase-cores-controller-1-dx
:end-before: end-increase-cores-controller-1-dx
.. group-tab:: Virtual
The |VMs| being used for hosts only have 4 cores; 2 for platform
and 2 for |VMs|. There are no additional cores available for
platform in this scenario.
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-increase-cores-controller-1-dx
:end-before: end-increase-cores-controller-1-dx
#. Due to the additional OpenStack services' containers running on the
controller host, the size of the Docker filesystem needs to be
increased from the default size of 30G to 60G.
.. code-block:: bash
# check existing size of docker fs
~(keystone_admin)$ system host-fs-list controller-1
# check available space (Avail Size (GiB)) in cgts-vg LVG where docker fs is located
~(keystone_admin)$ system host-lvg-list controller-1
# if existing docker fs size + cgts-vg available space is less than
# 80G, you will need to add a new disk to cgts-vg.
# Get device path of BOOT DISK
~(keystone_admin)$ system host-show controller-1 | fgrep rootfs
# Get UUID of ROOT DISK by listing disks
~(keystone_admin)$ system host-disk-list controller-1
# Add new disk to 'cgts-vg' local volume group, specifying the UUID or
# device path of the new disk from the disk listing above
~(keystone_admin)$ system host-pv-add controller-1 cgts-vg <new_disk_uuid>
~(keystone_admin)$ sleep 10 # wait for disk to be added
# Confirm the available space and increased number of physical
# volumes added to the cgts-vg volume group
~(keystone_admin)$ system host-lvg-list controller-1
# Increase docker filesystem to 60G
~(keystone_admin)$ system host-fs-modify controller-1 docker=60
#. **For OpenStack only:** Configure the host settings for the vSwitch.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-vswitch-controller-1-dx
:end-before: end-config-vswitch-controller-1-dx
.. group-tab:: Virtual
No additional configuration is required for the |OVS| vSwitch in
the virtual environment.
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-vswitch-controller-1-dx
:end-before: end-config-vswitch-controller-1-dx
#. **For OpenStack only:** Add an instances filesystem OR set up a
disk-based nova-local volume group, which is needed for |prefix|-openstack
nova ephemeral disks.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-fs-controller-1-dx
:end-before: end-config-fs-controller-1-dx
.. group-tab:: Virtual
Set up an 'instances' filesystem, which is needed for
stx-openstack nova ephemeral disks.
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-1
~(keystone_admin)$ system host-fs-add ${NODE} instances=34
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-fs-controller-1-dx
:end-before: end-config-fs-controller-1-dx
#. **For OpenStack only:** Configure data interfaces for controller-1.
Data class interfaces are vswitch interfaces used by vswitch to provide
|VM| virtio vNIC connectivity to OpenStack Neutron Tenant Networks on the
underlying assigned Data Network.
.. important::
A compute-labeled All-in-one controller host **MUST** have at least
one Data class interface.
* Configure the data interfaces for controller-1.
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-data-interfaces-controller-1-dx
:end-before: end-config-data-interfaces-controller-1-dx
.. group-tab:: Virtual
.. code-block:: bash
~(keystone_admin)$ DATA0IF=eth1000
~(keystone_admin)$ DATA1IF=eth1001
~(keystone_admin)$ export NODE=controller-1
~(keystone_admin)$ PHYSNET0='physnet0'
~(keystone_admin)$ PHYSNET1='physnet1'
~(keystone_admin)$ SPL=/tmp/tmp-system-port-list
~(keystone_admin)$ SPIL=/tmp/tmp-system-host-if-list
~(keystone_admin)$ system host-port-list ${NODE} --nowrap > ${SPL}
~(keystone_admin)$ system host-if-list -a ${NODE} --nowrap > ${SPIL}
~(keystone_admin)$ DATA0PCIADDR=$(cat $SPL | grep $DATA0IF |awk '{print $8}')
~(keystone_admin)$ DATA1PCIADDR=$(cat $SPL | grep $DATA1IF |awk '{print $8}')
~(keystone_admin)$ DATA0PORTUUID=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $2}')
~(keystone_admin)$ DATA1PORTUUID=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $2}')
~(keystone_admin)$ DATA0PORTNAME=$(cat $SPL | grep ${DATA0PCIADDR} | awk '{print $4}')
~(keystone_admin)$ DATA1PORTNAME=$(cat $SPL | grep ${DATA1PCIADDR} | awk '{print $4}')
~(keystone_admin)$ DATA0IFUUID=$(cat $SPIL | awk -v DATA0PORTNAME=$DATA0PORTNAME '($12 ~ DATA0PORTNAME) {print $2}')
~(keystone_admin)$ DATA1IFUUID=$(cat $SPIL | awk -v DATA1PORTNAME=$DATA1PORTNAME '($12 ~ DATA1PORTNAME) {print $2}')
~(keystone_admin)$ system datanetwork-add ${PHYSNET0} vlan
~(keystone_admin)$ system datanetwork-add ${PHYSNET1} vlan
~(keystone_admin)$ system host-if-modify -m 1500 -n data0 -c data ${NODE} ${DATA0IFUUID}
~(keystone_admin)$ system host-if-modify -m 1500 -n data1 -c data ${NODE} ${DATA1IFUUID}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA0IFUUID} ${PHYSNET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATA1IFUUID} ${PHYSNET1}
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-data-interfaces-controller-1-dx
:end-before: end-config-data-interfaces-controller-1-dx
*****************************************
Optionally Configure PCI-SRIOV Interfaces
*****************************************
#. **Optionally**, configure |PCI|-|SRIOV| interfaces for controller-1.
This step is **optional** for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.
.. only:: openstack
This step is **optional** for OpenStack. Do this step if using |SRIOV|
vNICs in hosted application VMs. Note that |PCI|-|SRIOV| interfaces can
have the same Data Networks assigned to them as vswitch data interfaces.
* Configure the |PCI|-|SRIOV| interfaces for controller-1.
.. code-block:: bash
~(keystone_admin)$ export NODE=controller-1
# List inventoried host’s ports and identify ports to be used as ‘pci-sriov’ interfaces,
# based on displayed linux port name, pci address and device type.
~(keystone_admin)$ system host-port-list ${NODE}
# List host’s auto-configured 'ethernet' interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
~(keystone_admin)$ system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as 'pci-sriov' class interfaces, MTU of 1500 and named sriov#
~(keystone_admin)$ system host-if-modify -m 1500 -n sriov0 -c pci-sriov ${NODE} -N
~(keystone_admin)$ system host-if-modify -m 1500 -n sriov1 -c pci-sriov ${NODE} -N
# If not already created, create Data Networks that the 'pci-sriov' interfaces
# will be connected to
~(keystone_admin)$ DATANET0='datanet0'
~(keystone_admin)$ DATANET1='datanet1'
~(keystone_admin)$ system datanetwork-add ${DATANET0} vlan
~(keystone_admin)$ system datanetwork-add ${DATANET1} vlan
# Assign Data Networks to PCI-SRIOV Interfaces
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATANET0}
~(keystone_admin)$ system interface-datanetwork-assign ${NODE} ${DATANET1}
* **For Kubernetes only:** To enable using |SRIOV| network attachments for
the above interfaces in Kubernetes hosted application containers:
.. only:: starlingx
.. tabs::
.. group-tab:: Bare Metal
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-k8s-sriov-controller-1-dx
:end-before: end-config-k8s-sriov-controller-1-dx
.. group-tab:: Virtual
Configure the Kubernetes SR-IOV device plugin.
.. code-block:: none
~(keystone_admin)$ system host-label-assign controller-1 sriovdp=enabled
.. only:: partner
.. include:: /shared/_includes/aio_duplex_install_kubernetes.rest
:start-after: begin-config-k8s-sriov-controller-1-dx
:end-before: end-config-k8s-sriov-controller-1-dx
***************************************************************************************
If configuring a Ceph-based Persistent Storage Backend, configure host-specific details
***************************************************************************************
For host-based Ceph:
#. Add an |OSD| on controller-1 for host-based Ceph:
.. code-block:: bash
# List host’s disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
# By default, /dev/sda is being used as system disk and can not be used for OSD.
~(keystone_admin)$ system host-disk-list controller-1
# Add disk as an OSD storage
~(keystone_admin)$ system host-stor-add controller-1 osd
# List OSD storage devices
~(keystone_admin)$ system host-stor-list controller-1
.. only:: starlingx
For Rook container-based Ceph:
#. Assign Rook host labels to controller-1 in support of installing the
rook-ceph-apps manifest/helm-charts later:
.. code-block:: bash
~(keystone_admin)$ system host-label-assign controller-1 ceph-mon-placement=enabled
~(keystone_admin)$ system host-label-assign controller-1 ceph-mgr-placement=enabled
-------------------
Unlock controller-1
-------------------
Unlock controller-1 in order to bring it into service:
.. code-block:: bash
system host-unlock controller-1
Controller-1 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
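Once controller-1 has rebooted, you can confirm that it has come into
service by listing the hosts; it should eventually show as
unlocked/enabled/available:
::
~(keystone_admin)$ system host-list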
.. only:: starlingx
-----------------------------------------------------------------------------------------------
If using Rook container-based Ceph, finish configuring the ceph-rook Persistent Storage Backend
-----------------------------------------------------------------------------------------------
For Rook container-based Ceph:
On active controller:
#. Wait for the ``rook-ceph-apps`` application to be uploaded:
::
~(keystone_admin)$ source /etc/platform/openrc
~(keystone_admin)$ system application-list
+---------------------+---------+-------------------------------+---------------+----------+-----------+
| application | version | manifest name | manifest file | status | progress |
+---------------------+---------+-------------------------------+---------------+----------+-----------+
| oidc-auth-apps | 1.0-0 | oidc-auth-manifest | manifest.yaml | uploaded | completed |
| platform-integ-apps | 1.0-8 | platform-integration-manifest | manifest.yaml | uploaded | completed |
| rook-ceph-apps | 1.0-1 | rook-ceph-manifest | manifest.yaml | uploaded | completed |
+---------------------+---------+-------------------------------+---------------+----------+-----------+
#. Configure Rook to use /dev/sdb on controller-0 and controller-1 as a ceph
|OSD|.
.. code-block:: bash
~(keystone_admin)$ system host-disk-wipe -s --confirm controller-0 /dev/sdb
~(keystone_admin)$ system host-disk-wipe -s --confirm controller-1 /dev/sdb
Then specify the |OSD| devices in a ``values.yaml`` file for rook-ceph-apps:
.. code-block:: yaml
cluster:
storage:
nodes:
- name: controller-0
devices:
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
- name: controller-1
devices:
- name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
::
~(keystone_admin)$ system helm-override-update rook-ceph-apps rook-ceph kube-system --values values.yaml
#. Apply the rook-ceph-apps application.
::
~(keystone_admin)$ system application-apply rook-ceph-apps
#. Wait for the |OSD| pods to be ready.
::
~(keystone_admin)$ kubectl get pods -n kube-system
rook-ceph-crashcollector-controller-0-f984688ff-jsr8t 1/1 Running 0 4m9s
rook-ceph-crashcollector-controller-1-7f9b6f55b6-699bb 1/1 Running 0 2m5s
rook-ceph-mgr-a-7f9d588c5b-49cbg 1/1 Running 0 3m5s
rook-ceph-mon-a-75bcbd8664-pvq99 1/1 Running 0 4m27s
rook-ceph-mon-b-86c67658b4-f4snf 1/1 Running 0 4m10s
rook-ceph-mon-c-7f48b58dfb-4nx2n 1/1 Running 0 3m30s
rook-ceph-operator-77b64588c5-bhfg7 1/1 Running 0 7m6s
rook-ceph-osd-0-6949657cf7-dkfp2 1/1 Running 0 2m6s
rook-ceph-osd-1-5d4b58cf69-kdg82 1/1 Running 0 2m4s
rook-ceph-osd-prepare-controller-0-wcvsn 0/1 Completed 0 2m27s
rook-ceph-osd-prepare-controller-1-98h76 0/1 Completed 0 2m26s
rook-ceph-tools-5778d7f6c-2h8s8 1/1 Running 0 5m55s
rook-discover-xc22t 1/1 Running 0 6m2s
rook-discover-xndld 1/1 Running 0 6m2s
storage-init-rook-ceph-provisioner-t868q 0/1 Completed 0 108s
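Optionally, you can check Ceph cluster health from the rook-ceph-tools pod
shown in the listing above; the pod name suffix will differ in your
environment:
::
~(keystone_admin)$ kubectl -n kube-system exec -it rook-ceph-tools-5778d7f6c-2h8s8 -- ceph status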
.. include:: /_includes/bootstrapping-and-deploying-starlingx.rest
.. _extend-dx-with-workers:
--------------------------------------------
Optionally Extend Capacity with Worker Nodes
--------------------------------------------
.. include:: /shared/_includes/aio_duplex_extend.rest
:start-after: start-aio-duplex-extend
:end-before: end-aio-duplex-extend
.. only:: starlingx
----------
Next steps
----------
.. include:: /_includes/kubernetes_install_next.txt
.. only:: partner
.. include:: /_includes/72hr-to-license.rest
Complete system configuration by reviewing procedures in:
- :ref:`index-security-kub-81153c1254c3`
- :ref:`index-sysconf-kub-78f0e1e9ca5a`
- :ref:`index-admintasks-kub-ebc55fefc368`