Install docs refactoring updates

Additional conditionalization and corrections.
Patchset 1 review updates.
Added installation appendixes.
Adjusted tox.ini for conditionalized OpenStack blocks.
Patchset 2 review comments.
Patchset 3 review comments; applied changes more globally in edited files.

Signed-off-by: Ron Stone <ronald.stone@windriver.com>
Change-Id: Icca40bc8c8c46ca1f3f38f5e4a63f5bb466b19bc
Ron Stone 2021-05-07 16:45:37 -04:00
parent dc4315d6f5
commit 9ac2eb2a4e
21 changed files with 1376 additions and 322 deletions


@ -0,0 +1,22 @@
.. figures_begin

.. figure:: /configuration/figures/stx_proxy_overview.png
   :width: 500
   :alt: |prod| proxy usage

   Figure 1: |prod| proxy usage

.. figures_end

.. r3_begin

----------------------
Set proxy at bootstrap
----------------------

To set the Docker proxy at bootstrap time, refer to :doc:`Ansible Bootstrap
Configurations
<../deploy_install_guides/r3_release/ansible_bootstrap_configs>`.

.. r3_end


@ -0,0 +1,8 @@
.. playbook-defaults-begin
.. playbook-defaults-end

.. docker-reg-begin
.. docker-reg-end

.. firewall-begin
.. firewall-end


@ -0,0 +1,6 @@
----------------------
Set proxy at bootstrap
----------------------

To set the Docker proxy at bootstrap time, refer to
:doc:`Ansible Bootstrap Configurations <../deploy_install_guides/r3_release/ansible_bootstrap_configs>`.
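
For reference, the Docker proxy overrides in ``$HOME/localhost.yml`` take the
following form (a sketch; the proxy servers and address are illustrative):

.. code-block:: none

   docker_http_proxy: http://my.proxy.com:1080
   docker_https_proxy: https://my.proxy.com:1443
   docker_no_proxy:
     - 1.2.3.4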


@ -5,12 +5,12 @@
Docker Proxy Configuration
==========================

|org| uses publicly available container runtime registries. If you are
behind a corporate firewall or proxy, you need to set proxy settings.

For example, if the |prod| |OAM| interface or network is behind an http/https
proxy, relative to the Docker registries used by |prod| or applications
running on |prod|, then Docker within |prod| must be configured to use
these http/https proxies.

.. contents::
@ -21,24 +21,24 @@ these http/https proxies.
--------------
Proxy overview
--------------

The figure below shows how proxies are used in |prod|.

.. include:: /_includes/docker-proxy-config.rest
   :start-after: figures_begin
   :end-before: figures_end

The items labeled *a* and *b* in the figure indicate two configuration files:

* Configuration file *a* lists sysadmin shell proxy environment variables.
  This file is not required for |prod| bootstrap or any |prod|
  operations. You **must** manually add this file if you are accessing the
  public network via a proxy. You **must** add the following |prod|
  specific IP addresses to the no_proxy list:

  * registry.local
  * {controller |OAM| gateway IP/floating IP/host IP}
  * {controller management floating IP/host IP}
  * {controller cluster gateway IP}
  * 10.96.0.1 {apiserver cluster IP for Kubernetes}
@ -48,7 +48,7 @@ The items labeled *a* and *b* in the figure indicate two configuration files:
* Configuration file *b* lists container runtime proxy variables
  (docker_proxy). Configure these variables in the ``localhost.yml`` file
  before Ansible bootstrap. This file is **required** if you are accessing
  the public network via a proxy. |prod| specific IP addresses will be
  automatically added to the no_proxy list.

The numbered items in the figure indicate the process flow:
@ -65,21 +65,18 @@ The numbered items in the figure indicate the process flow:
   The bootstrap process will push to the registry.local afterwards.

#. After the Kubernetes API server is running, the bootstrap process will
   communicate with it for further |prod| configuration. You **must** ensure
   the cluster network gateway is set for no_proxy in configuration file *a*.

#. After |prod| provisioning is complete, any operations that pull Docker
   images will use configuration file *b*. All other operations, including
   kubectl and system operations, will use the sysadmin shell and
   configuration file *a*.

.. include:: /_includes/docker-proxy-config.rest
   :start-after: r3_begin
   :end-before: r3_end

------------------
Set HTTP proxy URL
------------------
@ -109,7 +106,7 @@ Set no_proxy address list
A no_proxy address list can be provided for registries not on the other side
of the proxies. This list will be added to the default no_proxy list derived
from localhost, loopback, management, and |OAM| floating addresses at runtime.
Due to a Docker restriction, each address in the no_proxy list must not be in
subnet format and it cannot contain a wildcard. For example:
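
.. code-block:: none

   # Individual addresses only; no subnets or wildcards (addresses are illustrative)
   docker_no_proxy:
     - 1.2.3.4
     - 5.6.7.8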


@ -370,7 +370,7 @@ k8s_root_ca_key
   CA certificate has an expiry of at least 5-10 years.

The administrator can also provide values to add to the Kubernetes API server
certificate Subject Alternative Name list using the `apiserver_cert_sans`
override parameter.

apiserver_cert_sans


@ -0,0 +1,60 @@
.. jow1442253584837
.. _accessing-pxe-boot-server-files-for-a-custom-configuration:

=======================================================
Access PXE Boot Server Files for a Custom Configuration
=======================================================

If you prefer, you can create a custom |PXE| boot configuration using the
installation files provided with |prod|.

.. rubric:: |context|

You can use the setup script included with the ISO image to copy the boot
configuration files and distribution content to a working directory. You can
use the contents of the working directory to construct a |PXE| boot environment
according to your own requirements or preferences.

For more information about using a |PXE| boot server, see :ref:`Configure a
PXE Boot Server <configuring-a-pxe-boot-server>`.

.. rubric:: |proc|

.. _accessing-pxe-boot-server-files-for-a-custom-configuration-steps-www-gcz-3t:

#. Copy the ISO image from the source \(product DVD, USB device, or
   |dnload-loc|\) to a temporary location on the |PXE| boot server.

   This example assumes that the copied image file is
   /tmp/TS-host-installer-1.0.iso.

#. Mount the ISO image and make it executable.

   .. code-block:: none

      $ mount -o loop /tmp/TS-host-installer-1.0.iso /media/iso
      $ mount -o remount,exec,dev /media/iso

#. Create and populate a working directory.

   Use a command of the following form:

   .. code-block:: none

      $ /media/iso/pxeboot_setup.sh -u http://<ip-addr>/<symlink> -w <working-dir>

   where:

   **ip-addr**
      is the Apache listening address.

   **symlink**
      is a name for a symbolic link to be created under the Apache document
      root directory, pointing to the directory specified by <working-dir>.

   **working-dir**
      is the path to the working directory.
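
   For example (a sketch; the working directory path is illustrative, and the
   URL mirrors the example used for the scripted setup elsewhere in this
   guide):

   .. code-block:: none

      $ /media/iso/pxeboot_setup.sh -u http://192.168.100.100/BIOS-client -w /export/pxeboot-work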
#. Copy the required files from the working directory to your custom |PXE|
   boot server directory.


@ -0,0 +1,61 @@
.. ulc1552927930507
.. _adding-hosts-in-bulk:

=================
Add Hosts in Bulk
=================

You can add an arbitrary number of hosts using a single CLI command.

.. rubric:: |proc|

#. Prepare an XML file that describes the hosts to be added.

   For more information, see :ref:`Bulk Host XML File Format
   <bulk-host-xml-file-format>`.

   You can also create the XML configuration file from an existing, running
   configuration using the :command:`system host-bulk-export` command.
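
   For example, a sketch of capturing the running configuration for reuse,
   assuming the command writes the XML to standard output (the output file
   name is illustrative):

   .. code-block:: none

      ~(keystone_admin)]$ system host-bulk-export > hosts.xml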
#. Run the :command:`system host-bulk-add` utility.

   The command syntax is:

   .. code-block:: none

      ~(keystone_admin)]$ system host-bulk-add <xml_file>

   where <xml_file> is the name of the prepared XML file.
#. Power on the hosts to be added, if required.

   .. note::
      Hosts can be powered on automatically from board management controllers
      using settings in the XML file.
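
   For reference, a host entry that powers on automatically includes the
   empty ``power_on`` element along with its board management settings
   (values are illustrative, drawn from the sample in :ref:`Bulk Host XML
   File Format <bulk-host-xml-file-format>`):

   .. code-block:: none

      <host>
        <hostname>worker-0</hostname>
        <personality>worker</personality>
        <mgmt_mac>08:00:27:dc:42:46</mgmt_mac>
        <power_on/>
        <bm_ip>10.10.10.101</bm_ip>
        <bm_type>bmc</bm_type>
        <bm_username>tsmith1</bm_username>
        <bm_password>mypass1</bm_password>
      </host>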
.. rubric:: |result|

The hosts are configured. The utility provides a summary report, as shown in
the following example:

.. code-block:: none

   Success:
   worker-0
   worker-1
   Error:
   controller-1: Host-add Rejected: Host with mgmt_mac 08:00:28:A9:54:19 already exists

.. rubric:: |postreq|

After adding the host, you must provision it according to the requirements of
the personality.

.. xbooklink For more information, see :ref:`Installing, Configuring, and
   Unlocking Nodes <installing-configuring-and-unlocking-nodes>`, for your system,
   and follow the *Configure* steps for the appropriate node personality.

.. seealso::

   :ref:`Bulk Host XML File Format <bulk-host-xml-file-format>`


@ -0,0 +1,164 @@
.. pyp1552927946441
.. _adding-hosts-using-the-host-add-command:

====================================
Add Hosts Using the host-add Command
====================================

You can add hosts to the system inventory using the command line.

.. rubric:: |context|

There are several ways to add hosts to |prod|; for an overview, see the
StarlingX Installation Guides,
`https://docs.starlingx.io/deploy_install_guides/index.html
<https://docs.starlingx.io/deploy_install_guides/index.html>`__ for your
system. Instead of powering up each host and then defining its personality and
other characteristics interactively, you can use the :command:`system host-add`
command to define hosts before you power them up. This can be useful for
scripting an initial setup.

.. note::
   On systems that use static IP address assignment on the management network,
   new hosts must be added to the inventory manually and assigned an IP
   address using the :command:`system host-add` command. If a host is not
   added successfully, the host console displays the following message at
   power-on:

   .. code-block:: none

      This system has been configured with static management
      and infrastructure IP address allocation. This requires
      that the node be manually provisioned in System
      Inventory using the 'system host-add' CLI, GUI, or
      stx API equivalent.
.. rubric:: |proc|

#. Add the host to the system inventory.

   .. note::
      The host must be added to the system inventory before it is powered on.

   On **controller-0**, acquire Keystone administrative privileges:

   .. code-block:: none

      $ source /etc/platform/openrc

   Use the :command:`system host-add` command to add a host and specify its
   personality. You can also specify the device used to display messages
   during boot.

   .. note::
      The hostname parameter is required for worker hosts. For controller and
      storage hosts, it is ignored.

   .. code-block:: none

      ~(keystone_admin)]$ system host-add -n <hostname> \
      -p <personality> [-s <subfunctions>] \
      [-l <location>] [-o <install_output> [-c <console>]] [-b <boot_device>] \
      [-r <rootfs_device>] [-m <mgmt_mac>] [-i <mgmt_ip>] [-D <ttys_dcd>] \
      [-T <bm_type> -I <bm_ip> -U <bm_username> -P <bm_password>]
   where

   **<hostname>**
      is a name to assign to the host. This is used for worker nodes only.
      Controller and storage node names are assigned automatically and
      override user input.

   **<personality>**
      is the host type. The following are valid values:

      - controller
      - worker
      - storage

   **<subfunctions>**
      are the host personality subfunctions \(used only for a worker host\).

      For a worker host, the only valid value is worker,lowlatency to enable
      a low-latency performance profile. For a standard performance profile,
      omit this option.

      For more information about performance profiles, see |deploy-doc|:
      :ref:`Worker Function Performance Profiles
      <worker-function-performance-profiles>`.

   **<location>**
      is a string describing the location of the host.

   **<console>**
      is the output device to use for message display on the host \(for
      example, tty0\). The default is ttyS0,115200.

   **<install_output>**
      is the format for console output on the host \(text or graphical\). The
      default is text.

      .. note::
         The graphical option currently has no effect. Text-based
         installation is used regardless of this setting.

   **<boot_device>**
      is the host device for the boot partition, relative to /dev. The
      default is sda.

   **<rootfs_device>**
      is the host device for the rootfs partition, relative to /dev. The
      default is sda.

   **<mgmt_mac>**
      is the |MAC| address of the port connected to the internal management
      or |PXE| boot network.

   **<mgmt_ip>**
      is the IP address of the port connected to the internal management or
      |PXE| boot network, if static IP address allocation is used.

      .. note::
         The <mgmt_ip> option is not used for a controller node.

   **<ttys_dcd>**
      is set to **True** to have any active console session automatically
      logged out when the serial console cable is disconnected, or **False**
      to disable this behavior. The server must support data carrier detect
      on the serial console port.

   **<bm_type>**
      is the board management controller type. Use bmc.

   **<bm_ip>**
      is the board management controller IP address \(used for external
      access to board management controllers over the |OAM| network\).

   **<bm_username>**
      is the username for board management controller access.

   **<bm_password>**
      is the password for board management controller access.
   For example:

   .. code-block:: none

      ~(keystone_admin)]$ system host-add -n compute-0 -p worker -I 10.10.10.100

#. With **controller-0** running, start the host.

   The host is booted and configured with a personality.
.. rubric:: |postreq|

After adding the host, you must provision it according to the requirements of
the personality.

.. xbooklink For more information, see :ref:`Install, Configure, and Unlock
   Nodes <installing-configuring-and-unlocking-nodes>` and follow the *Configure*
   steps for the appropriate node personality.


@ -92,39 +92,37 @@ Configure worker nodes
* Configure the data interfaces

  .. code-block:: bash

     # Execute the following lines with
     export NODE=worker-0
     # and then repeat with
     export NODE=worker-1

     # List inventoried host's ports and identify ports to be used as data interfaces,
     # based on displayed linux port name, pci address and device type.
     system host-port-list ${NODE}

     # List host's auto-configured ethernet interfaces,
     # find the interfaces corresponding to the ports identified in previous step, and
     # take note of their UUID
     system host-if-list -a ${NODE}

     # Modify configuration for these interfaces
     # Configuring them as data class interfaces, MTU of 1500 and named data#
     system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
     system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

     # Previously configured Data Networks
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     # Assign Data Networks to Data Interfaces
     system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
     system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}

* To enable using |SRIOV| network attachments for the above interfaces in
  Kubernetes hosted application containers:


@ -131,24 +131,74 @@ Bootstrap system on controller-0
      admin_password: <admin-password>
      ansible_become_pass: <sysadmin-password>
      EOF

   .. only:: starlingx

      In either of the above options, the bootstrap playbook's default values
      will pull all container images required for the |prod-p| from Docker Hub.

      If you have set up a private Docker registry to use for bootstrapping,
      then you will need to add the following lines in $HOME/localhost.yml:

   .. only:: partner

      .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
         :start-after: docker-reg-begin
         :end-before: docker-reg-end

   .. code-block:: none

      docker_registries:
        quay.io:
          url: myprivateregistry.abc.com:9001/quay.io
        docker.elastic.co:
          url: myprivateregistry.abc.com:9001/docker.elastic.co
        gcr.io:
          url: myprivateregistry.abc.com:9001/gcr.io
        k8s.gcr.io:
          url: myprivateregistry.abc.com:9001/k8s.gcr.io
        docker.io:
          url: myprivateregistry.abc.com:9001/docker.io
        defaults:
          type: docker
          username: <your_myprivateregistry.abc.com_username>
          password: <your_myprivateregistry.abc.com_password>

      # Add the CA Certificate that signed myprivateregistry.abc.com's
      # certificate as a Trusted CA
      ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem

   See :ref:`Use a Private Docker Registry <use-private-docker-registry>`
   for more information.

   .. only:: starlingx

      If a firewall is blocking access to Docker Hub or your private
      registry from your StarlingX deployment, you will need to add the
      following lines in $HOME/localhost.yml (see :ref:`Docker Proxy
      Configuration <docker_proxy_config>` for more details about Docker
      proxy settings):

   .. only:: partner

      .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
         :start-after: firewall-begin
         :end-before: firewall-end

   .. code-block:: none

      # Add these lines to configure Docker to use a proxy server
      docker_http_proxy: http://my.proxy.com:1080
      docker_https_proxy: https://my.proxy.com:1443
      docker_no_proxy:
        - 1.2.3.4

   Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs>`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:
@ -211,40 +261,42 @@ Configure controller-0
This step is optional for Kubernetes: Do this step if using |SRIOV| network
attachments in hosted application containers.

.. only:: starlingx

   .. important::

      This step is **required** for OpenStack.

* Configure the data interfaces

  ::

     export NODE=controller-0

     # List inventoried host's ports and identify ports to be used as data interfaces,
     # based on displayed linux port name, pci address and device type.
     system host-port-list ${NODE}

     # List host's auto-configured ethernet interfaces,
     # find the interfaces corresponding to the ports identified in previous step, and
     # take note of their UUID
     system host-if-list -a ${NODE}

     # Modify configuration for these interfaces
     # Configuring them as data class interfaces, MTU of 1500 and named data#
     system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
     system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

     # Create Data Networks
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     # Assign Data Networks to Data Interfaces
     system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
     system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}

* To enable using |SRIOV| network attachments for the above interfaces in
  Kubernetes hosted application containers:
@ -293,12 +345,22 @@ For host-based Ceph:
#. Add an |OSD| on controller-0 for host-based Ceph:

   .. code-block:: bash

      # List host's disks and identify disks you want to use for Ceph OSDs, taking note of their UUID
      # By default, /dev/sda is being used as system disk and cannot be used for OSD.
      system host-disk-list controller-0

      # Add disk as an OSD storage
      system host-stor-add controller-0 osd <disk-uuid>

      # List OSD storage devices
      system host-stor-list controller-0
.. only:: starlingx

   For Rook container-based Ceph:
@ -317,19 +379,7 @@ For host-based Ceph:
      system host-label-assign controller-0 ceph-mon-placement=enabled
      system host-label-assign controller-0 ceph-mgr-placement=enabled
*************************************
OpenStack-specific host configuration
@ -386,6 +436,7 @@ details about configuring Docker proxy settings.
   ::

      # assign 1 core on processor/numa-node 0 on controller-0 to vswitch
      system host-cpu-modify -f vswitch -p0 1 controller-0
@ -415,6 +466,7 @@ details about configuring Docker proxy settings.
   ::

      # assign 10x 1G huge page on processor/numa-node 0 on controller-0 to applications
      system host-memory-modify -f application -1G 10 controller-0 0
@ -533,40 +585,41 @@ Configure controller-1
This step is optional for Kubernetes. Do this step if using |SRIOV|
network attachments in hosted application containers.

.. only:: starlingx

   .. important::

      This step is **required** for OpenStack.

* Configure the data interfaces

  ::

     export NODE=controller-1

     # List inventoried host's ports and identify ports to be used as data interfaces,
     # based on displayed linux port name, pci address and device type.
     system host-port-list ${NODE}

     # List host's auto-configured ethernet interfaces,
     # find the interfaces corresponding to the ports identified in previous step, and
     # take note of their UUID
     system host-if-list -a ${NODE}

     # Modify configuration for these interfaces
     # Configuring them as data class interfaces, MTU of 1500 and named data#
     system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
     system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

     # Previously created Data Networks
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'

     # Assign Data Networks to Data Interfaces
     system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
     system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}

* To enable using |SRIOV| network attachments for the above interfaces in
  Kubernetes hosted application containers:
@ -601,9 +654,19 @@ For host-based Ceph:
   ::

      # List host's disks and identify disks you want to use for Ceph OSDs, taking note of their UUID
      # By default, /dev/sda is being used as system disk and cannot be used for OSD.
      system host-disk-list controller-1

      # Add disk as an OSD storage
      system host-stor-add controller-1 osd <disk-uuid>

      # List OSD storage devices
      system host-stor-list controller-1
.. only:: starlingx
@ -617,6 +680,7 @@ For host-based Ceph:
      system host-label-assign controller-1 ceph-mon-placement=enabled
      system host-label-assign controller-1 ceph-mgr-placement=enabled

*************************************
OpenStack-specific host configuration
*************************************
@ -788,10 +852,13 @@ machine.
      rook-discover-xndld                        1/1     Running     0          6m2s
      storage-init-rook-ceph-provisioner-t868q   0/1     Completed   0          108s

.. only:: starlingx

   ----------
   Next steps
   ----------

   .. include:: ../kubernetes_install_next.txt

.. include:: /_includes/bootstrapping-and-deploying-starlingx.rest


@ -130,24 +130,74 @@ Bootstrap system on controller-0
      admin_password: <admin-password>
      ansible_become_pass: <sysadmin-password>
      EOF

   .. only:: starlingx

      In either of the above options, the bootstrap playbook's default values
      will pull all container images required for the |prod-p| from Docker Hub.

      If you have set up a private Docker registry to use for bootstrapping,
      then you will need to add the following lines in $HOME/localhost.yml:

   .. only:: partner

      .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
         :start-after: docker-reg-begin
         :end-before: docker-reg-end

   .. code-block:: none

      docker_registries:
        quay.io:
          url: myprivateregistry.abc.com:9001/quay.io
        docker.elastic.co:
          url: myprivateregistry.abc.com:9001/docker.elastic.co
        gcr.io:
          url: myprivateregistry.abc.com:9001/gcr.io
        k8s.gcr.io:
          url: myprivateregistry.abc.com:9001/k8s.gcr.io
        docker.io:
          url: myprivateregistry.abc.com:9001/docker.io
        defaults:
          type: docker
          username: <your_myprivateregistry.abc.com_username>
          password: <your_myprivateregistry.abc.com_password>

      # Add the CA Certificate that signed myprivateregistry.abc.com's
      # certificate as a Trusted CA
      ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem

   See :ref:`Use a Private Docker Registry <use-private-docker-registry>`
   for more information.

   .. only:: starlingx

      If a firewall is blocking access to Docker Hub or your private
      registry from your StarlingX deployment, you will need to add the
      following lines in $HOME/localhost.yml (see :ref:`Docker Proxy
      Configuration <docker_proxy_config>` for more details about Docker
      proxy settings):

   .. only:: partner

      .. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
         :start-after: firewall-begin
         :end-before: firewall-end

   .. code-block:: none

      # Add these lines to configure Docker to use a proxy server
      docker_http_proxy: http://my.proxy.com:1080
      docker_https_proxy: https://my.proxy.com:1443
      docker_no_proxy:
        - 1.2.3.4

   Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs>`
   for information on additional Ansible bootstrap configurations for advanced
   Ansible bootstrap scenarios.

#. Run the Ansible bootstrap playbook:
@ -198,35 +248,36 @@ The newly installed controller needs to be configured.
      This step is **required** for OpenStack.

* Configure the data interfaces.

  ::

     export NODE=controller-0

     # List inventoried host's ports and identify ports to be used as data interfaces,
     # based on displayed linux port name, pci address and device type.
     system host-port-list ${NODE}

     # List host's auto-configured ethernet interfaces,
     # find the interfaces corresponding to the ports identified in previous step, and
     # take note of their UUID
     system host-if-list -a ${NODE}

     # Modify configuration for these interfaces
     # Configuring them as data class interfaces, MTU of 1500 and named data#
     system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
     system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>

     # Create Data Networks
     PHYSNET0='physnet0'
     PHYSNET1='physnet1'
     system datanetwork-add ${PHYSNET0} vlan
     system datanetwork-add ${PHYSNET1} vlan

     # Assign Data Networks to Data Interfaces
     system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
     system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}

* To enable using |SRIOV| network attachments for the above interfaces in
@ -238,7 +289,7 @@ The newly installed controller needs to be configured.
      system host-label-assign controller-0 sriovdp=enabled
* If planning on running |DPDK| in Kubernetes hosted application
  containers on this host, configure the number of 1G Huge pages required
  on both |NUMA| nodes.
@ -265,11 +316,12 @@ A persistent storage backend is required if your application requires
   The StarlingX OpenStack application **requires** |PVCs|.

There are two options for persistent storage backend: the host-based Ceph
solution and the Rook container-based Ceph solution.

For host-based Ceph:

#. Add host-based Ceph backend:

   ::
@ -277,12 +329,19 @@ For host-based Ceph:
#. Add an |OSD| on controller-0 for host-based Ceph:

   .. code-block:: bash

      # List host's disks and identify disks you want to use for Ceph OSDs, taking note of their UUID
      # By default, /dev/sda is being used as system disk and cannot be used for OSD.
      system host-disk-list controller-0

      # Add disk as an OSD storage
      system host-stor-add controller-0 osd <disk-uuid>

      # List OSD storage devices
      system host-stor-list controller-0
.. only:: starlingx

   For Rook container-based Ceph:
@ -301,17 +360,7 @@ For host-based Ceph:
      system host-label-assign controller-0 ceph-mon-placement=enabled
      system host-label-assign controller-0 ceph-mgr-placement=enabled
.. only:: openstack

*************************************
OpenStack-specific host configuration
@ -406,25 +455,26 @@ details about configuring Docker proxy settings.
#. **For OpenStack only:** Set up disk partition for nova-local volume
   group, which is needed for stx-openstack nova ephemeral disks.

   .. code-block:: bash

      export NODE=controller-0

      echo ">>> Getting root disk info"
      ROOT_DISK=$(system host-show ${NODE} | grep rootfs | awk '{print $4}')
      ROOT_DISK_UUID=$(system host-disk-list ${NODE} --nowrap | grep ${ROOT_DISK} | awk '{print $2}')
      echo "Root disk: $ROOT_DISK, UUID: $ROOT_DISK_UUID"

      echo ">>>> Configuring nova-local"
      NOVA_SIZE=34
      NOVA_PARTITION=$(system host-disk-partition-add -t lvm_phys_vol ${NODE} ${ROOT_DISK_UUID} ${NOVA_SIZE})
      NOVA_PARTITION_UUID=$(echo ${NOVA_PARTITION} | grep -ow "| uuid | [a-z0-9\-]* |" | awk '{print $4}')
      system host-lvg-add ${NODE} nova-local
      system host-pv-add ${NODE} nova-local ${NOVA_PARTITION_UUID}
      sleep 2

.. incl-config-controller-0-openstack-specific-aio-simplex-end:

-------------------
Unlock controller-0
-------------------


@ -0,0 +1,52 @@
.. vqr1569420650576
.. _bootstrapping-from-a-private-docker-registry:

============================================
Bootstrapping from a Private Docker Registry
============================================

You can bootstrap controller-0 from a private Docker registry in the event that
your server is isolated from the public Internet.

.. rubric:: |proc|

#. Update your /home/sysadmin/localhost.yml bootstrap overrides file with the
   following lines to use a private Docker registry pre-populated from the
   |org| Docker registry:

   .. code-block:: none

      docker_registries:
        k8s.gcr.io:
          url: <my-registry.io>/k8s.gcr.io
        gcr.io:
          url: <my-registry.io>/gcr.io
        quay.io:
          url: <my-registry.io>/quay.io
        docker.io:
          url: <my-registry.io>/docker.io
        docker.elastic.co:
          url: <my-registry.io>/docker.elastic.co
        defaults:
          type: docker
          username: <your_my-registry.io_username>
          password: <your_my-registry.io_password>

   where ``<your_my-registry.io_username>`` and
   ``<your_my-registry.io_password>`` are your login credentials for the
   ``<my-registry.io>`` private Docker registry.

   .. note::
      ``<my-registry.io>`` must be a DNS name resolvable by the DNS servers
      configured in the ``dns_servers:`` structure of the Ansible bootstrap
      override file /home/sysadmin/localhost.yml.
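
   For reference, a minimal sketch of that structure (the server addresses
   are illustrative):

   .. code-block:: none

      dns_servers:
        - 8.8.8.8
        - 8.8.4.4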
#. For any additional local registry images required, use the full image name
   as shown below.

   .. code-block:: none

      additional_local_registry_images:
        docker.io/wind-river/<imageName>:<tag>


@ -0,0 +1,135 @@
.. hzf1552927866550
.. _bulk-host-xml-file-format:

=========================
Bulk Host XML File Format
=========================

Hosts for bulk addition are described using an XML document.

The document root is **hosts**. Within the root, each host is described using a
**host** node. To provide details, child elements are used, corresponding to
the parameters for the :command:`host-add` command.

The following elements are accepted. Each element takes a text string. For
valid values, refer to the CLI documentation.

.. _bulk-host-xml-file-format-simpletable-tc3-w15-ht:

.. table::
   :widths: auto
   +-----------------+----------------------------------------------------------------+
   | Element         | Remarks                                                        |
   +=================+================================================================+
   | hostname        | A unique name for the host.                                    |
   |                 |                                                                |
   |                 | .. note::                                                      |
   |                 |    Controller and storage node names are assigned              |
   |                 |    automatically and override user input.                      |
   +-----------------+----------------------------------------------------------------+
   | personality     | The type of host.                                              |
   +-----------------+----------------------------------------------------------------+
   | subfunctions    | For a worker host, an optional element to enable a low-latency |
   |                 | performance profile.                                           |
   +-----------------+----------------------------------------------------------------+
   | mgmt_mac        | The MAC address of the management interface.                   |
   +-----------------+----------------------------------------------------------------+
   | mgmt_ip         | The IP address of the management interface.                    |
   +-----------------+----------------------------------------------------------------+
   | bm_ip           | The IP address of the board management controller.             |
   +-----------------+----------------------------------------------------------------+
   | bm_type         | The board management controller type.                          |
   +-----------------+----------------------------------------------------------------+
   | bm_username     | The username for board management controller authentication.   |
   +-----------------+----------------------------------------------------------------+
   | bm_password     | The password for board management controller authentication.   |
   +-----------------+----------------------------------------------------------------+
   | power_on        | An empty element. If present, powers on the host automatically |
   |                 | using the specified board management controller.               |
   +-----------------+----------------------------------------------------------------+
   | install_output  | The display mode to use during installation \(text or          |
   |                 | graphical\). The default is **text**.                          |
   |                 |                                                                |
   |                 | .. note::                                                      |
   |                 |    The graphical option currently has no effect. Text-based    |
   |                 |    installation is used regardless of this setting.            |
   +-----------------+----------------------------------------------------------------+
   | console         | If present, this element specifies the port, and if applicable |
   |                 | the baud, for displaying messages. If the element is empty or  |
   |                 | not present, the default setting **ttyS0,115200** is used.     |
   +-----------------+----------------------------------------------------------------+
   | rootfs_device   | The device to use for the rootfs partition, relative to /dev.  |
   +-----------------+----------------------------------------------------------------+
   | boot_device     | The device to use for the boot partition, relative to /dev.    |
   +-----------------+----------------------------------------------------------------+
   | location        | A description of the host location.                            |
   +-----------------+----------------------------------------------------------------+
The following sample describes a controller, three worker nodes, and two storage nodes:
.. code-block:: none

   <?xml version="1.0" encoding="UTF-8" ?>
   <hosts>
     <host>
       <personality>controller</personality>
       <mgmt_mac>08:00:27:19:b0:c5</mgmt_mac>
       <bm_ip>10.10.10.100</bm_ip>
       <bm_type>bmc</bm_type>
       <bm_username>tsmith1</bm_username>
       <bm_password>mypass1</bm_password>
       <install_output>text</install_output>
       <location>System12/A4</location>
     </host>
     <host>
       <hostname>worker-0</hostname>
       <personality>worker</personality>
       <mgmt_mac>08:00:27:dc:42:46</mgmt_mac>
       <mgmt_ip>192.168.204.50</mgmt_ip>
       <bm_ip>10.10.10.101</bm_ip>
       <bm_username>tsmith1</bm_username>
       <bm_password>mypass1</bm_password>
       <bm_type>bmc</bm_type>
       <install_output>text</install_output>
       <console></console>
     </host>
     <host>
       <hostname>worker-1</hostname>
       <personality>worker</personality>
       <mgmt_mac>08:00:27:87:82:3E</mgmt_mac>
       <mgmt_ip>192.168.204.51</mgmt_ip>
       <bm_ip>10.10.10.102</bm_ip>
       <bm_type>bmc</bm_type>
       <bm_username>tsmith1</bm_username>
       <bm_password>mypass1</bm_password>
       <rootfs_device>sda</rootfs_device>
       <install_output>text</install_output>
     </host>
     <host>
       <hostname>worker-2</hostname>
       <personality>worker</personality>
       <mgmt_mac>08:00:27:b9:16:0d</mgmt_mac>
       <mgmt_ip>192.168.204.52</mgmt_ip>
       <rootfs_device>sda</rootfs_device>
       <install_output>text</install_output>
       <console></console>
       <power_on/>
       <bm_ip>10.10.10.103</bm_ip>
       <bm_type>bmc</bm_type>
       <bm_username>tsmith1</bm_username>
       <bm_password>mypass1</bm_password>
     </host>
     <host>
       <personality>storage</personality>
       <mgmt_mac>08:00:27:dd:e3:3f</mgmt_mac>
       <bm_ip>10.10.10.104</bm_ip>
       <bm_type>bmc</bm_type>
       <bm_username>tsmith1</bm_username>
       <bm_password>mypass1</bm_password>
     </host>
     <host>
       <personality>storage</personality>
       <mgmt_mac>08:00:27:8e:f1:b8</mgmt_mac>
       <bm_ip>10.10.10.105</bm_ip>
       <bm_type>bmc</bm_type>
       <bm_username>tsmith1</bm_username>
       <bm_password>mypass1</bm_password>
     </host>
   </hosts>
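
To apply a file like this, pass it to the bulk-add utility described in
:ref:`Add Hosts in Bulk <adding-hosts-in-bulk>` (the file name is
illustrative):

.. code-block:: none

   ~(keystone_admin)]$ system host-bulk-add hosts.xml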


@ -0,0 +1,200 @@
.. jow1440534908675
.. _configuring-a-pxe-boot-server:

===========================
Configure a PXE Boot Server
===========================

You can optionally set up a |PXE| Boot Server to support **controller-0**
initialization.

.. rubric:: |context|

|prod| includes a setup script to simplify configuring a |PXE| boot server. If
you prefer, you can manually apply a custom configuration; for more
information, see :ref:`Access PXE Boot Server Files for a Custom Configuration
<accessing-pxe-boot-server-files-for-a-custom-configuration>`.

The |prod| setup script accepts a path to the root TFTP directory as a
parameter, and copies all required files for BIOS and |UEFI| clients into this
directory.

The |PXE| boot server serves a boot loader file to the requesting client from a
specified path on the server. The path depends on whether the client uses BIOS
or |UEFI|. The appropriate path is selected by conditional logic in the |DHCP|
configuration file.

The boot loader runs on the client, and reads boot parameters, including the
location of the kernel and initial ramdisk image files, from a boot file
contained on the server. To find the boot file, the boot loader searches a
known directory on the server. This search directory can contain more than one
entry, supporting the use of separate boot files for different clients.

The file names and locations depend on the BIOS or |UEFI| implementation.
.. _configuring-a-pxe-boot-server-table-mgq-xlh-2cb:

.. table:: Table 1. |PXE| boot server file locations for BIOS and |UEFI| implementations
   :widths: auto

   +--------------------------------+------------------------+-------------------------------+
   | Resource                       | BIOS                   | UEFI                          |
   +================================+========================+===============================+
   | **boot loader**                | ./pxelinux.0           | ./EFI/grubx64.efi             |
   +--------------------------------+------------------------+-------------------------------+
   | **boot file search directory** | ./pxelinux.cfg         | ./ or ./EFI                   |
   |                                |                        |                               |
   |                                |                        | \(system-dependent\)          |
   +--------------------------------+------------------------+-------------------------------+
   | **boot file** and path         | ./pxelinux.cfg/default | ./grub.cfg and ./EFI/grub.cfg |
   +--------------------------------+------------------------+-------------------------------+

\(./ indicates the root TFTP directory\)
.. rubric:: |prereq|
Use a Linux workstation as the |PXE| Boot server.
.. _configuring-a-pxe-boot-server-ul-mrz-jlj-dt:
- On the workstation, install the packages required to support |DHCP|, TFTP,
and Apache.
- Configure |DHCP|, TFTP, and Apache according to your system requirements.
For details, refer to the documentation included with the packages.
- Additionally, configure |DHCP| to support both BIOS and |UEFI| client
architectures. For example:
.. code-block:: none
option arch code 93 unsigned integer 16; # ref RFC4578
# ...
subnet 192.168.1.0 netmask 255.255.255.0 {
if option arch = 00:07 {
filename "EFI/grubx64.efi";
# NOTE: substitute the full tftp-boot-dir specified in the setup script
}
else {
filename "pxelinux.0";
}
# ...
}
- Start the |DHCP|, TFTP, and Apache services.
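For example, on a Red Hat-based workstation the services might be started as
follows \(a sketch; service and package names vary by distribution\):
.. code-block:: none
$ sudo systemctl enable --now dhcpd tftp.socket httpd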
- Connect the |PXE| boot server to the |prod| management or |PXE| boot
network.
.. rubric:: |proc|
.. _configuring-a-pxe-boot-server-steps-qfb-kyh-2cb:
#. Copy the ISO image from the source \(product DVD, USB device, or WindShare
`http://windshare.windriver.com <http://windshare.windriver.com>`__\) to a
temporary location on the PXE boot server.
This example assumes that the copied image file is /tmp/TS-host-installer-1.0.iso.
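For example, assuming the ISO is available on a USB device mounted at the
hypothetical path /media/usb:
.. code-block:: none
$ cp /media/usb/TS-host-installer-1.0.iso /tmp/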
#. Mount the ISO image and make it executable.
.. code-block:: none
$ mount -o loop /tmp/TS-host-installer-1.0.iso /media/iso
$ mount -o remount,exec,dev /media/iso
#. Set up the |PXE| boot configuration.
The ISO image includes a setup script, which you can run to complete the
configuration.
.. code-block:: none
$ /media/iso/pxeboot_setup.sh -u http://<ip-addr>/<symlink> \
-t <tftp-boot-dir>
where
``ip-addr``
is the Apache listening address.
``symlink``
is the name of a user-created symbolic link under the Apache document
root directory, pointing to the directory specified by <tftp-boot-dir>.
``tftp-boot-dir``
is the path from which the boot loader is served \(the TFTP root
directory\).
The script creates the directory specified by <tftp-boot-dir>.
For example:
.. code-block:: none
$ /media/iso/pxeboot_setup.sh -u http://192.168.100.100/BIOS-client -t /export/pxeboot
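In this example, ``BIOS-client`` must be a user-created symbolic link under the
Apache document root pointing to the TFTP root directory. Assuming the common
document root /var/www/html \(distribution-dependent\), it could be created as
follows:
.. code-block:: none
$ sudo ln -s /export/pxeboot /var/www/html/BIOS-client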
#. To serve a specific boot file to a specific controller, assign a special
name to the file.
The boot loader searches for a file name that uses a string based on the
client interface |MAC| address. The string uses lower case, substitutes
dashes for colons, and includes the prefix "01-".
- For a BIOS client, use the |MAC| address string as the file name:
.. code-block:: none
$ cd <tftp-boot-dir>/pxelinux.cfg/
$ cp pxeboot.cfg <mac-address-string>
where:
``<tftp-boot-dir>``
is the path from which the boot loader is served.
``<mac-address-string>``
is a lower-case string formed from the |MAC| address of the client
|PXE| boot interface, using dashes instead of colons, and prefixed
by "01-".
For example, to represent the |MAC| address ``08:00:27:d1:63:c9``,
use the string ``01-08-00-27-d1-63-c9`` in the file name.
For example:
.. code-block:: none
$ cd /export/pxeboot/pxelinux.cfg/
$ cp pxeboot.cfg 01-08-00-27-d1-63-c9
If the boot loader does not find a file named using this convention, it
looks for a file with the name default.
- For a |UEFI| client, use the |MAC| address string prefixed by
"grub.cfg-". To ensure the file is found, copy it to both search
directories used by the |UEFI| convention.
.. code-block:: none
$ cd <tftp-boot-dir>
$ cp grub.cfg grub.cfg-<mac-address-string>
$ cp grub.cfg ./EFI/grub.cfg-<mac-address-string>
For example:
.. code-block:: none
$ cd /export/pxeboot
$ cp grub.cfg grub.cfg-01-08-00-27-d1-63-c9
$ cp grub.cfg ./EFI/grub.cfg-01-08-00-27-d1-63-c9
.. note::
Alternatively, you can use symlinks in the search directories to
ensure the file is found.
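For example, assuming the same |MAC| address string as above, a symlink in the
./EFI search directory could point to the file in the TFTP root:
.. code-block:: none
$ cd /export/pxeboot
$ ln -s ../grub.cfg-01-08-00-27-d1-63-c9 ./EFI/grub.cfg-01-08-00-27-d1-63-c9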

View File

@ -107,9 +107,9 @@ Bootstrap system on controller-0
#. Create a minimal user configuration override file.
To use this method, create your override file at ``$HOME/localhost.yml``
and provide the minimum required parameters for the deployment
configuration as shown in the example below. Use the OAM IP SUBNET and IP
ADDRESSing applicable to your deployment environment.
::
@ -131,24 +131,72 @@ Bootstrap system on controller-0
admin_password: <admin-password>
ansible_become_pass: <sysadmin-password>
EOF
.. only:: starlingx
In either of the above options, the bootstrap playbook's default values
will pull all container images required for the |prod-p| from Docker Hub.
If you have set up a private Docker registry to use for bootstrapping,
then you will need to add the following lines in $HOME/localhost.yml:
.. only:: partner
.. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
:start-after: docker-reg-begin
:end-before: docker-reg-end
.. code-block::
docker_registries:
quay.io:
url: myprivateregistry.abc.com:9001/quay.io
docker.elastic.co:
url: myprivateregistry.abc.com:9001/docker.elastic.co
gcr.io:
url: myprivateregistry.abc.com:9001/gcr.io
k8s.gcr.io:
url: myprivateregistry.abc.com:9001/k8s.gcr.io
docker.io:
url: myprivateregistry.abc.com:9001/docker.io
defaults:
type: docker
username: <your_myprivateregistry.abc.com_username>
password: <your_myprivateregistry.abc.com_password>
# Add the CA Certificate that signed myprivateregistry.abc.com's
# certificate as a Trusted CA
ssl_ca_cert: /home/sysadmin/myprivateregistry.abc.com-ca-cert.pem
See :ref:`Use a Private Docker Registry <use-private-docker-registry>`
for more information.
.. only:: starlingx
If a firewall is blocking access to Docker Hub or your private
registry from your StarlingX deployment, you will need to add the
following lines in $HOME/localhost.yml (see :ref:`Docker Proxy
Configuration <docker_proxy_config>` for more details about Docker
proxy settings):
.. only:: partner
.. include:: /_includes/install-kubernetes-bootstrap-playbook.rest
:start-after: firewall-begin
:end-before: firewall-end
.. code-block::
# Add these lines to configure Docker to use a proxy server
docker_http_proxy: http://my.proxy.com:1080
docker_https_proxy: https://my.proxy.com:1443
docker_no_proxy:
- 1.2.3.4
Refer to :ref:`Ansible Bootstrap Configurations <ansible_bootstrap_configs>`
for information on additional Ansible bootstrap configurations for advanced
Ansible bootstrap scenarios.
#. Run the Ansible bootstrap playbook:
@ -177,7 +225,8 @@ Configure controller-0
#. Configure the |OAM| interface of controller-0 and specify the
attached network as "oam".
Use the |OAM| port name that is applicable to your deployment environment,
for example eth0:
::
@ -185,9 +234,11 @@ Configure controller-0
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
#. Configure the MGMT interface of controller-0 and specify the attached
networks of both "mgmt" and "cluster-host".
Use the MGMT port name that is applicable to your deployment environment,
for example eth1:
::
@ -222,17 +273,7 @@ Configure controller-0
system storage-backend-add ceph --confirmed
.. only:: openstack
*************************************
OpenStack-specific host configuration
@ -382,7 +423,8 @@ Configure controller-1
#. Configure the |OAM| interface of controller-1 and specify the
attached network of "oam".
Use the |OAM| port name that is applicable to your deployment environment,
for example eth0:
::
@ -390,18 +432,19 @@ Configure controller-1
system host-if-modify controller-1 $OAM_IF -c platform
system interface-network-assign controller-1 $OAM_IF oam
#. The MGMT interface is partially set up by the network install procedure,
which configures the port used for network install as the MGMT port and
specifies the attached network of "mgmt".
Complete the MGMT interface configuration of controller-1 by specifying the
attached network of "cluster-host".
::
system interface-network-assign controller-1 mgmt0 cluster-host
.. only:: openstack
*************************************
OpenStack-specific host configuration
@ -412,8 +455,8 @@ Configure controller-1
**This step is required only if the StarlingX OpenStack application
(stx-openstack) will be installed.**
**For OpenStack only:** Assign OpenStack host labels to controller-1 in
support of installing the stx-openstack manifest and helm-charts later.
::
@ -494,41 +537,39 @@ Configure worker nodes
* Configure the data interfaces
::
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List inventoried hosts ports and identify ports to be used as data interfaces,
# based on displayed linux port name, pci address and device type.
system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as data class interfaces, MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks
PHYSNET0='physnet0'
PHYSNET1='physnet1'
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the |SRIOV| device plugin:
@ -554,7 +595,7 @@ Configure worker nodes
done
.. only:: openstack
*************************************
OpenStack-specific host configuration

View File

@ -10,9 +10,8 @@
Install Kubernetes Platform on Standard with Dedicated Storage
==============================================================
This section describes the steps to install the |prod| Kubernetes platform on a
**Standard with Dedicated Storage** deployment configuration.
.. contents::
:local:
@ -68,7 +67,8 @@ Unlock controller-0 in order to bring it into service:
system host-unlock controller-0
Controller-0 will reboot in order to apply configuration changes and come into
service. This can take 5-10 minutes, depending on the performance of the host
machine.
-----------------------------------------------------------------
Install software on controller-1, storage nodes, and worker nodes
@ -193,35 +193,37 @@ Configure storage nodes
system interface-network-assign $NODE mgmt0 cluster-host
done
#. Add |OSDs| to storage-0.
::
HOST=storage-0
# List host's disks and identify disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list ${HOST}
# Add disk as an OSD storage
system host-stor-add ${HOST} osd <disk-uuid>
# List OSD storage devices and wait for configuration of newly added OSD to complete.
system host-stor-list ${HOST}
#. Add |OSDs| to storage-1.
::
HOST=storage-1
# List host's disks and identify disks you want to use for Ceph OSDs, taking note of their UUIDs
# By default, /dev/sda is used as the system disk and cannot be used for an OSD.
system host-disk-list ${HOST}
# Add disk as an OSD storage
system host-stor-add ${HOST} osd <disk-uuid>
# List OSD storage devices and wait for configuration of newly added OSD to complete.
system host-stor-list ${HOST}
--------------------
Unlock storage nodes
@ -266,45 +268,44 @@ Configure worker nodes
This step is **required** for OpenStack.
* Configure the data interfaces.
::
# Execute the following lines with
export NODE=worker-0
# and then repeat with
export NODE=worker-1
# List inventoried hosts ports and identify ports to be used as data interfaces,
# based on displayed linux port name, pci address and device type.
system host-port-list ${NODE}
# List hosts auto-configured ethernet interfaces,
# find the interfaces corresponding to the ports identified in previous step, and
# take note of their UUID
system host-if-list -a ${NODE}
# Modify configuration for these interfaces
# Configuring them as data class interfaces, MTU of 1500 and named data#
system host-if-modify -m 1500 -n data0 -c data ${NODE} <data0-if-uuid>
system host-if-modify -m 1500 -n data1 -c data ${NODE} <data1-if-uuid>
# Create Data Networks
PHYSNET0='physnet0'
PHYSNET1='physnet1'
system datanetwork-add ${PHYSNET0} vlan
system datanetwork-add ${PHYSNET1} vlan
# Assign Data Networks to Data Interfaces
system interface-datanetwork-assign ${NODE} <data0-if-uuid> ${PHYSNET0}
system interface-datanetwork-assign ${NODE} <data1-if-uuid> ${PHYSNET1}
* To enable using |SRIOV| network attachments for the above interfaces in
Kubernetes hosted application containers:
* Configure the |SRIOV| device plugin:
::
@ -313,7 +314,7 @@ Configure worker nodes
done
* If planning on running |DPDK| in containers on this host, configure the
number of 1G Huge pages required on both |NUMA| nodes:
::

View File

@ -0,0 +1,53 @@
.. fdm1552927801987
.. _exporting-host-configurations:
==========================
Export Host Configurations
==========================
You can generate a host configuration file from an existing system for
re-installation, upgrade, or maintenance purposes.
.. rubric:: |context|
You can generate a host configuration file using the :command:`system
host-bulk-export` command, and then use this file with the :command:`system
host-bulk-add` command to re-create the system. If required, you can modify the
file before using it.
The configuration settings \(management |MAC| address, BM IP address, and so
on\) for all nodes except **controller-0** are written to the file.
.. note::
To ensure that the hosts are not powered on unexpectedly, the **power-on**
element for each host is commented out by default.
.. rubric:: |prereq|
To perform this procedure, you must be logged in as the **admin** user.
.. rubric:: |proc|
.. _exporting-host-configurations-steps-unordered-ntw-nw1-c2b:
- Run the :command:`system host-bulk-export` command to create the host
configuration file.
.. code-block:: none
system host-bulk-export [--filename <FILENAME>]
where <FILENAME> is the path and name of the output file. If the
``--filename`` option is not present, the default path ./hosts.xml is
used.
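For example, to write the configuration of all hosts to a hypothetical file in
the sysadmin home directory:
.. code-block:: none
system host-bulk-export --filename /home/sysadmin/hosts.xml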
.. rubric:: |postreq|
To use the host configuration file, see :ref:`Reinstall a System Using an
Exported Host Configuration File
<reinstalling-a-system-using-an-exported-host-configuration-file>`.
For details on the structure and elements of the file, see :ref:`Bulk Host XML
File Format <bulk-host-xml-file-format>`.

View File

@ -0,0 +1,39 @@
.. deo1552927844327
.. _reinstalling-a-system-or-a-host:
============================
Reinstall a System or a Host
============================
You can reinstall individual hosts or the entire system if necessary.
Reinstalling host software or deleting and re-adding a host node may be
required to complete certain configuration changes.
.. rubric:: |context|
For a summary of changes that require system or host reinstallation, see
|node-doc|: :ref:`Configuration Changes Requiring Re-installation
<configuration-changes-requiring-re-installation>`.
To reinstall an entire system, refer to the Installation Guide for your system
type \(for example, Standard or All-in-one\).
.. note::
To simplify system reinstallation, you can export and reuse an existing
system configuration. For more information, see :ref:`Reinstalling a System
Using an Exported Host Configuration File
<reinstalling-a-system-using-an-exported-host-configuration-file>`.
To reinstall the software on a host using the Host Inventory controls, see
|node-doc|: :ref:`Host Inventory <hosts-tab>`. In some cases, you must delete
the host instead, and then re-add it using the standard host installation
procedure. This applies if the system inventory record must be corrected to
complete the configuration change \(for example, if the |MAC| address of the
management interface has changed\).
- :ref:`Reinstalling a System Using an Exported Host Configuration File
<reinstalling-a-system-using-an-exported-host-configuration-file>`
- :ref:`Exporting Host Configurations <exporting-host-configurations>`

View File

@ -0,0 +1,45 @@
.. wuh1552927822054
.. _reinstalling-a-system-using-an-exported-host-configuration-file:
============================================================
Reinstall a System Using an Exported Host Configuration File
============================================================
You can reinstall a system using the host configuration file that is generated
using the :command:`host-bulk-export` command.
.. rubric:: |prereq|
For the following procedure, **controller-0** must be the active controller.
.. rubric:: |proc|
#. Create a host configuration file using the :command:`system
host-bulk-export` command, as described in :ref:`Exporting Host
Configurations <exporting-host-configurations>`.
#. Copy the host configuration file to a USB drive or somewhere off the
controller hard disk.
#. Edit the host configuration file as needed, for example to specify power-on
or |BMC| information.
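For example, a host entry might be edited to power on the node through its
|BMC|; this sketch assumes the element names shown in the exported file, with
the commented-out power-on element restored:
.. code-block:: none
<host>
<personality>storage</personality>
<mgmt_mac>08:00:27:dd:e3:3f</mgmt_mac>
<bm_ip>10.10.10.104</bm_ip>
<bm_type>bmc</bm_type>
<bm_username>tsmith1</bm_username>
<bm_password>mypass1</bm_password>
<power_on/>
</host>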
#. Delete all the hosts except **controller-0** from the inventory.
#. Reinstall the |prod| software on **controller-0**, which must be the active
controller.
#. Run the Ansible bootstrap playbook.
#. Follow the instructions for using the :command:`system host-bulk-add`
command, as detailed in :ref:`Adding Hosts in Bulk <adding-hosts-in-bulk>`.
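For example, assuming the edited configuration file was copied back to the
controller as /home/sysadmin/hosts.xml:
.. code-block:: none
system host-bulk-add /home/sysadmin/hosts.xml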
.. rubric:: |postreq|
After adding the host, you must provision it according to the requirements of
the personality.
.. xbooklink For more information, see :ref:`Installing, Configuring, and
Unlocking Nodes <installing-configuring-and-unlocking-nodes>`, for your system,
and follow the *Configure* steps for the appropriate node personality.

View File

@ -38,6 +38,61 @@ Install StarlingX Kubernetes on bare metal
bare_metal/dedicated_storage bare_metal/dedicated_storage
bare_metal/ironic bare_metal/ironic
bare_metal/rook_storage bare_metal/rook_storage
**********
Appendixes
**********
.. _use-private-docker-registry:
Use a private Docker registry
*****************************
.. toctree::
:maxdepth: 1
bare_metal/bootstrapping-from-a-private-docker-registry
Install controller-0 from a PXE boot server
*******************************************
.. toctree::
:maxdepth: 1
bare_metal/configuring-a-pxe-boot-server
bare_metal/accessing-pxe-boot-server-files-for-a-custom-configuration
Add and reinstall a host
************************
.. toctree::
:maxdepth: 1
bare_metal/adding-hosts-using-the-host-add-command
Add hosts in bulk
,,,,,,,,,,,,,,,,,
.. toctree::
:maxdepth: 1
bare_metal/adding-hosts-in-bulk
bare_metal/bulk-host-xml-file-format
Reinstall a system or a host
,,,,,,,,,,,,,,,,,,,,,,,,,,,,
.. toctree::
:maxdepth: 1
bare_metal/reinstalling-a-system-or-a-host
bare_metal/reinstalling-a-system-using-an-exported-host-configuration-file
bare_metal/exporting-host-configurations
.. toctree::
:hidden:

View File

@ -16,7 +16,7 @@ deps =
-c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master} -c{env:TOX_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
-r{toxinidir}/doc/requirements.txt -r{toxinidir}/doc/requirements.txt
commands = commands =
sphinx-build -a -E -W --keep-going -d doc/build/doctrees -t starlingx -t openstack -b html doc/source doc/build/html {posargs}
bash htmlChecks.sh
whitelist_externals = bash
htmlChecks.sh