Install Kubernetes Platform on All-in-one Simplex
This section describes the steps to install the StarlingX Kubernetes platform on a StarlingX R5.0 All-in-one Simplex deployment configuration.
Create a bootable USB
Refer to Bootable USB <bootable_usb> for instructions on how to create a bootable USB with the StarlingX ISO on your system.
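If you are creating the bootable USB from a Linux workstation, a minimal sketch using dd is shown below. The ISO filename and the USB device node (/dev/sdX) are assumptions for your environment; writing to the device erases all data on it.

# Hypothetical ISO name and device node; verify the target device with 'lsblk' first
sudo dd if=starlingx-r5.0.iso of=/dev/sdX bs=4M status=progress oflag=sync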
Install software on controller-0
Bootstrap system on controller-0
Login using the username / password of "sysadmin" / "sysadmin". When logging in for the first time, you will be forced to change the password.
Login: sysadmin
Password:
Changing password for sysadmin.
(current) UNIX Password: sysadmin
New Password:
(repeat) New Password:
Verify and/or configure IP connectivity.
External connectivity is required to run the Ansible bootstrap playbook. The StarlingX boot image will DHCP out all interfaces, so if a DHCP server is present in your environment, the server may have already obtained an IP address and have external IP connectivity. Verify this using the ip addr and ping 8.8.8.8 commands.

Otherwise, manually configure an IP address and default IP route. Use the PORT, IP-ADDRESS/SUBNET-LENGTH and GATEWAY-IP-ADDRESS applicable to your deployment environment.
sudo ip address add <IP-ADDRESS>/<SUBNET-LENGTH> dev <PORT>
sudo ip link set up dev <PORT>
sudo ip route add default via <GATEWAY-IP-ADDRESS> dev <PORT>
ping 8.8.8.8
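For example, assuming a hypothetical port enp0s3 on a 10.10.10.0/24 OAM network with a gateway at 10.10.10.1, the commands above might be filled in as follows; substitute the names and addresses for your own environment:

sudo ip address add 10.10.10.3/24 dev enp0s3
sudo ip link set up dev enp0s3
sudo ip route add default via 10.10.10.1 dev enp0s3
ping 8.8.8.8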
Specify user configuration overrides for the Ansible bootstrap playbook.
Ansible is used to bootstrap StarlingX on controller-0. Key files for Ansible configuration are:
/etc/ansible/hosts - The default Ansible inventory file. Contains a single host: localhost.

/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml - The Ansible bootstrap playbook.

/usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml - The default configuration values for the bootstrap playbook.

sysadmin home directory ($HOME) - The default location where Ansible looks for and imports user configuration override files for hosts. For example: $HOME/<hostname>.yml.
Specify the user configuration override file for the Ansible bootstrap playbook using one of the following methods:
- Use a copy of the default.yml file listed above to provide your overrides. The default.yml file lists all available parameters for bootstrap configuration with a brief description for each parameter in the file comments. To use this method, copy the default.yml file listed above to $HOME/localhost.yml and edit the configurable values as desired.

- Create a minimal user configuration override file.
To use this method, create your override file at $HOME/localhost.yml and provide the minimum required parameters for the deployment configuration as shown in the example below. Use the OAM IP SUBNET and IP ADDRESSing applicable to your deployment environment.

cd ~
cat <<EOF > localhost.yml
system_mode: simplex

dns_servers:
  - 8.8.8.8
  - 8.8.4.4

external_oam_subnet: <OAM-IP-SUBNET>/<OAM-IP-SUBNET-LENGTH>
external_oam_gateway_address: <OAM-GATEWAY-IP-ADDRESS>
external_oam_floating_address: <OAM-FLOATING-IP-ADDRESS>

admin_username: admin
admin_password: <admin-password>
ansible_become_pass: <sysadmin-password>
EOF
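For illustration only, here is what the resulting localhost.yml might look like with the OAM placeholders filled in for a hypothetical 10.10.10.0/24 network; the addresses are assumptions and must be replaced with values from your own environment:

system_mode: simplex

dns_servers:
  - 8.8.8.8
  - 8.8.4.4

# Hypothetical OAM addressing for illustration
external_oam_subnet: 10.10.10.0/24
external_oam_gateway_address: 10.10.10.1
external_oam_floating_address: 10.10.10.2

admin_username: admin
admin_password: <admin-password>
ansible_become_pass: <sysadmin-password>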
In either of the above options, the bootstrap playbook's default values will pull all container images required for the StarlingX Platform from Docker Hub.
If you have setup a private Docker registry to use for bootstrapping then you will need to add the following lines in $HOME/localhost.yml:
docker_registries:
  quay.io:
    url: myprivateregistry.abc.com:9001/quay.io
  docker.elastic.co:
    url: myprivateregistry.abc.com:9001/docker.elastic.co
  gcr.io:
    url: myprivateregistry.abc.com:9001/gcr.io
  k8s.gcr.io:
    url: myprivateregistry.abc.com:9001/k8s.gcr.io
  docker.io:
    url: myprivateregistry.abc.com:9001/docker.io
  defaults:
    type: docker
    username: <your_myprivateregistry.abc.com_username>
    password: <your_myprivateregistry.abc.com_password>

# Add the CA Certificate that signed myprivateregistry.abc.com's
# certificate as a Trusted CA
ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem
See Use a Private Docker Registry <use-private-docker-registry> for more information.
If a firewall is blocking access to Docker Hub or your private registry from your StarlingX deployment, you will need to add the following lines in $HOME/localhost.yml (see Docker Proxy Configuration <docker_proxy_config> for more details about Docker proxy settings):
# Add these lines to configure Docker to use a proxy server
docker_http_proxy: http://my.proxy.com:1080
docker_https_proxy: https://my.proxy.com:1443
docker_no_proxy:
  - 1.2.3.4
Refer to Ansible Bootstrap Configurations <ansible_bootstrap_configs> for information on additional configuration options for advanced Ansible bootstrap scenarios.
Run the Ansible bootstrap playbook:
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
Wait for the Ansible bootstrap playbook to complete. This can take 5-10 minutes, depending on the performance of the host machine.
Configure controller-0
The newly installed controller needs to be configured.
Acquire admin credentials:
source /etc/platform/openrc
Configure the OAM interface of controller-0 and specify the attached network as "oam". Use the OAM port name that is applicable to your deployment environment, for example eth0:
OAM_IF=<OAM-PORT>
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
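For example, if the OAM port in your environment is eth0, the commands would be:

OAM_IF=eth0
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam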
Configure servers for network time synchronization:
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
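To confirm the NTP configuration was applied, you can display the configured servers; this sketch assumes the ntp-show subcommand available in this release:

system ntp-show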
Configure data interfaces for controller-0. Use the DATA port names, for example eth0, applicable to your deployment environment.
This step is optional for Kubernetes. Do this step if using network attachments in hosted application containers.
Important
This step is required for OpenStack.
Configure the data interfaces.
export NODE=controller-0

# List inventoried host's ports and identify ports to be used as 'data' interfaces,
# based on displayed linux port name, pci address and device type.
system host-port-list ${NODE}

# List host's auto-configured 'ethernet' interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
system host-if-list -a ${NODE}

# Modify configuration for these interfaces
# Configuring them as 'data' class interfaces, MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

# Create Data Networks
PHYSNET0='physnet0'
PHYSNET1='physnet1'
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan

# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
To enable using network attachments for the above interfaces in Kubernetes hosted application containers:
Configure the Kubernetes device plugin.
system host-label-assign controller-0 sriovdp=enabled
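You can verify that the label was applied by listing the host's labels:

system host-label-list controller-0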
If planning on running DPDK in Kubernetes hosted application containers on this host, configure the number of 1G Huge pages required on both NUMA nodes.
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application controller-0 0 -1G 10

# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
system host-memory-modify -f application controller-0 1 -1G 10
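To confirm the huge page configuration, you can list the host's memory settings per NUMA node:

system host-memory-list controller-0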
If required, initialize a Ceph-based Persistent Storage Backend
A persistent storage backend is required if your application requires PVCs (Persistent Volume Claims).
Important
The StarlingX OpenStack application (stx-openstack) requires PVCs.
There are two options for persistent storage backend: the host-based Ceph solution and the Rook container-based Ceph solution.
For host-based Ceph:
Add host-based Ceph backend:
system storage-backend-add ceph --confirmed
Add an OSD on controller-0 for host-based Ceph:
# List host's disks and identify disks you want to use for CEPH OSDs, taking note of their UUID
# By default, /dev/sda is being used as system disk and can not be used for OSD.
system host-disk-list controller-0

# Add disk as an OSD storage
system host-stor-add controller-0 osd <disk-uuid>

# List OSD storage devices
system host-stor-list controller-0
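As an illustrative sketch only (the disk name /dev/sdb is an assumption for your environment), the UUID lookup and OSD addition can be scripted in the same style as the nova-local example later in this guide:

# Hypothetical: use /dev/sdb as the OSD disk
OSD_DISK_UUID=$(system host-disk-list controller-0 --nowrap | grep /dev/sdb | awk '{print $2}')
system host-stor-add controller-0 osd ${OSD_DISK_UUID}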
For Rook container-based Ceph:
Add Rook container-based backend:
system storage-backend-add ceph-rook --confirmed
Assign Rook host labels to controller-0 in support of installing the rook-ceph-apps manifest/helm-charts later:
system host-label-assign controller-0 ceph-mon-placement=enabled
system host-label-assign controller-0 ceph-mgr-placement=enabled
OpenStack-specific host configuration
Important
This step is required only if the StarlingX OpenStack application (stx-openstack) will be installed.
For OpenStack only: Assign OpenStack host labels to controller-0 in support of installing the stx-openstack manifest and helm-charts later.
system host-label-assign controller-0 openstack-control-plane=enabled
system host-label-assign controller-0 openstack-compute-node=enabled
system host-label-assign controller-0 openvswitch=enabled
system host-label-assign controller-0 sriov=enabled
For OpenStack only: Configure the system setting for the vSwitch.
StarlingX has OVS (kernel-based) vSwitch configured as default:

- Runs in a container; defined within the helm charts of stx-openstack manifest.
- Shares the core(s) assigned to the platform.

If you require better performance, OVS-DPDK (OVS with the Data Plane Development Kit, which is supported only on bare metal hardware) should be used:

- Runs directly on the host (it is not containerized).
- Requires that at least 1 core be assigned/dedicated to the vSwitch function.
To deploy the default containerized OVS:
system modify --vswitch_type none
This does not run any vSwitch directly on the host; instead, it uses the containerized OVS defined in the helm charts of the stx-openstack manifest.
To deploy OVS-DPDK, run the following command:
system modify --vswitch_type ovs-dpdk
Default recommendation for an AIO-controller is to use a single core for OVS-DPDK vSwitch.
# assign 1 core on processor/numa-node 0 on controller-0 to vswitch
system host-cpu-modify -f vswitch -p0 1 controller-0
When using OVS-DPDK, configure 1x 1G huge page for vSwitch memory on each NUMA node where vswitch is running on this host, with the following command:
# assign 1x 1G huge page on processor/numa-node 0 on controller-0 to vswitch
system host-memory-modify -f vswitch -1G 1 controller-0 0
Important
VMs created in an OVS-DPDK environment must be configured to use huge pages to enable networking and must use a flavor with property: hw:mem_page_size=large
Configure the huge pages for VMs in an OVS-DPDK environment on this host with the commands:
# assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
system host-memory-modify -f application -1G 10 controller-0 0

# assign 10x 1G huge page on processor/numa-node 1 on controller-0 to applications
system host-memory-modify -f application -1G 10 controller-0 1
Note
After controller-0 is unlocked, changing vswitch_type requires locking and unlocking controller-0 to apply the change.
For OpenStack only: Set up disk partition for nova-local volume group, which is needed for stx-openstack nova ephemeral disks.
export NODE=controller-0

echo ">>> Getting root disk info"
ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

echo ">>>> Configuring nova-local"
NOVA_SIZE=34
NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
system host-lvg-add ${NODE} nova-local
system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
sleep 2
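To confirm that the nova-local volume group and its physical volume were created, you can list them:

system host-lvg-list ${NODE}
system host-pv-list ${NODE}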
Unlock controller-0
Unlock controller-0 to bring it into service:
system host-unlock controller-0
Controller-0 will reboot in order to apply configuration changes and come into service. This can take 5-10 minutes, depending on the performance of the host machine.
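Once the host has rebooted, you can confirm that controller-0 is unlocked, enabled, and available:

source /etc/platform/openrc
system host-list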
If using Rook container-based Ceph, finish configuring the ceph-rook Persistent Storage Backend
On controller-0:
Wait for application rook-ceph-apps to be uploaded
$ source /etc/platform/openrc
$ system application-list
+---------------------+---------+-------------------------------+---------------+----------+-----------+
| application         | version | manifest name                 | manifest file | status   | progress  |
+---------------------+---------+-------------------------------+---------------+----------+-----------+
| oidc-auth-apps      | 1.0-0   | oidc-auth-manifest            | manifest.yaml | uploaded | completed |
| platform-integ-apps | 1.0-8   | platform-integration-manifest | manifest.yaml | uploaded | completed |
| rook-ceph-apps      | 1.0-1   | rook-ceph-manifest            | manifest.yaml | uploaded | completed |
+---------------------+---------+-------------------------------+---------------+----------+-----------+
Configure Rook to use the /dev/sdb disk on controller-0 as a ceph OSD.
system host-disk-wipe -s --confirm controller-0 /dev/sdb
Create a values.yaml file for rook-ceph-apps:
cluster:
  storage:
    nodes:
    - name: controller-0
      devices:
      - name: /dev/disk/by-path/pci-0000:00:03.0-ata-2.0
system helm-override-update rook-ceph-apps rook-ceph kube-system --values values.yaml
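To confirm the overrides were stored, you can display them; this assumes the helm-override-show subcommand takes the application, chart, and namespace as shown:

system helm-override-show rook-ceph-apps rook-ceph kube-system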
Apply the rook-ceph-apps application.
system application-apply rook-ceph-apps
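The apply can take several minutes; you can monitor progress until the application status shows applied:

watch -n 5 system application-list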
Wait for the rook-ceph pods to be ready.
kubectl get pods -n kube-system
rook-ceph-crashcollector-controller-0-764c7f9c8-bh5c7   1/1   Running     0   62m
rook-ceph-mgr-a-69df96f57-9l28p                         1/1   Running     0   63m
rook-ceph-mon-a-55fff49dcf-ljfnx                        1/1   Running     0   63m
rook-ceph-operator-77b64588c5-nlsf2                     1/1   Running     0   66m
rook-ceph-osd-0-7d5785889f-4rgmb                        1/1   Running     0   62m
rook-ceph-osd-prepare-controller-0-cmwt5                0/1   Completed   0   2m14s
rook-ceph-tools-5778d7f6c-22tms                         1/1   Running     0   64m
rook-discover-kmv6c                                     1/1   Running     0   65m