Apply proposed doc organization and migrate wiki content

1. Updated index page to include toctree for proposed organization of docs. Updated sample content (this needs to be finalized - SAMPLE content only).
2. Added Installation Guide page. Migrated content from wiki to docs (reST).
3. Added Developer Guide page. Migrated content from wiki to docs (reST).
4. Updated Contribute page to link to two child pages:
- Added API Contributor Guide page. Migrated content from wiki to docs (reST).
- Added Release Notes Contributor Guide page. Added draft content from etherpad (reST).
5. Removed trailing white space in files (patch)

Depends-On: https://review.openstack.org/#/c/611078

Change-Id: If4448fcc096f8fdcdf0d88c31b6b42ea94aea1fd
Signed-off-by: Kristal Dale <kristal.dale@intel.com>
This commit is contained in:
Kristal Dale 2018-10-17 15:14:32 -07:00
parent f253a48c45
commit ed5bddca67
12 changed files with 5275 additions and 43 deletions

View File

@ -0,0 +1,266 @@
=====================
API Contributor Guide
=====================
---------
Locations
---------
The OpenStack API working group has defined guidelines to follow for API
documentation when a project provides a REST API service. API
documentation information comes from RST source files stored in the
project repository that, when built, generate HTML files. More details
about the OpenStack API documentation can be found at:
https://docs.openstack.org/doc-contrib-guide/api-guides.html.
StarlingX API Reference documentation exists in the following projects:
- **stx-config:** StarlingX System Configuration Management
- **stx-docs:** StarlingX Documentation
- **stx-python-cinderclient:** only StarlingX-specific extensions to
  the Cinder API are documented here
- **stx-nova:** only StarlingX-specific extensions to the Nova API
  are documented here
- **stx-glance:** only StarlingX-specific extensions to the Glance
  API are documented here
- **stx-neutron:** only StarlingX-specific extensions to the Neutron
  API are documented here
- **stx-distcloud:** StarlingX Distributed Cloud
- **stx-fault:** StarlingX Fault Management
- **stx-ha:** StarlingX High Availability/Process Monitoring/Service
Management
- **stx-metal:** StarlingX Bare Metal and Node Management, Hardware
Maintenance
- **stx-nfv:** StarlingX NFVI Orchestration
--------------------
Directory Structures
--------------------
The directory structure of the API Reference documentation under each
StarlingX project repository is fixed. Here is an example showing
**stx-config**, StarlingX System Configuration Management:
::
stx-config/api-ref/
└── source
├── api-ref-sysinv-v1-config.rst
├── conf.py
└── index.rst
The initial modifications and additions to enable the API Documentation
service in each StarlingX project are as follows:
- **.gitignore** modifications to ignore the building directories and
HTML files for the API reference
- **.zuul.yaml** modifications to add the jobs to build and publish the
api-ref document
- **api-ref/source** directory creation to store your API Reference
project directory
- **api-ref/source/conf.py** configuration file to determine the HTML
theme, Sphinx extensions and project information
- **api-ref/source/index.rst** source file to create your index RST
source file
- **doc/requirements.txt** modifications to add the os-api-ref Sphinx
extension
- **tox.ini** modifications to add the configuration to build the API
reference locally
See stx-config [Doc] OpenStack API Reference Guide as an example of this
first commit: https://review.openstack.org/#/c/603258/
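For reference, the **tox.ini** addition typically looks similar to the
following sketch (based on the common OpenStack pattern; check the linked
commit for the exact settings each project uses):
::
[testenv:api-ref]
basepython = python3
deps = -r{toxinidir}/doc/requirements.txt
commands =
  sphinx-build -W -b html -d api-ref/build/doctrees api-ref/source api-ref/build/html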
----------------------------
Creating the RST Source File
----------------------------
Once the API Documentation service has been enabled, you create the RST
source files that document the API operations under the same API
Reference documentation project directory. The following shows the RST
source file for the **stx-config** StarlingX System Configuration
Management: Configuration API v1
::
stx-config/api-ref/
└── source
└── api-ref-sysinv-v1-config.rst
-----------------------
Creating the Index File
-----------------------
After providing the RST source file as shown in the previous example,
you add the **index.rst** file. This file provides captioning, a brief
description of the document, and the table-of-contents structure with
depth restrictions. The **index.rst** file resides in the same folder as
the RST source file.
Here is an example using the **stx-config** StarlingX System
Configuration Management: Configuration API v1:
::
stx-config/api-ref/
|___source
|___api-ref-sysinv-v1-config.rst
|___index.rst
The syntax of the **index.rst** file is fixed. The following shows the
**index.rst** file used in **stx-config**:
::
========================
stx-config API Reference
========================
StarlingX System Configuration Management
.. toctree::
:maxdepth: 2
api-ref-sysinv-v1-config
Following are explanations for each of the four areas of the
**index.rst** file:
- **Reference title:** Literal title that is used in the rendered
document. In this case it is "stx-config API Reference".
- **Reference summary:** Literal summary of the rendered document. In
this case it is "StarlingX System Configuration Management".
- **Table-of-Contents tree structure and depth parameters:** The
directive to create a TOC and to limit the depth of topics to "2".
- **RST source file root name:** The source file to use as content. In
this case, the file reference is "api-ref-sysinv-v1-config". This
references the **api-ref-sysinv-v1-config.rst** file in the same folder
as the **index.rst** file.
------------------
REST METHOD Syntax
------------------
Following is the syntax for each REST METHOD in the RST source file
(e.g. **api-ref-sysinv-v1-config.rst**).
::
******************************************
Modifies attributes of the System object
******************************************
.. rest_method:: PATCH /v1/isystems
< TEXT - description of the overall REST API >
**Normal response codes**
< TEXT - list of normal response codes >
**Error response codes**
< TEXT - list of error response codes >
**Request parameters**
.. csv-table::
:header: "Parameter", "Style", "Type", "Description"
:widths: 20, 20, 20, 60
"ihosts (Optional)", "plain", "xsd:list", "Links for retreiving the list of hosts for this system."
"name (Optional)", "plain", "xsd:string", "A user-specified name of the cloud system. The default value is the system UUID."
   < etc. >
::
< verbatim list of an example REQUEST body >
[
    {
       "path": "/name",
       "value": "OTTAWA_LAB_WEST",
       "op": "replace"
    },
    {
       "path": "/description",
       "value": "The Ottawa Cloud Test Lab - West Wing.",
       "op": "replace"
    }
]
::
**Response parameters**
.. csv-table::
:header: "Parameter", "Style", "Type", "Description"
:widths: 20, 20, 20, 60
"ihosts (Optional)", "plain", "xsd:list", "Links for retreiving the list of hosts for this system."
"name (Optional)", "plain", "xsd:string", "A user-specified name of the cloud system. The default value is the system UUID."
< etc. >
::
< verbatim list of an example RESPONSE body >
{
   "isystems": [
      {
         "links": [
            {
               "href": "http://192.168.204.2:6385/v1/isystems/5ce48a37-f6f5-4f14-8fbd-ac6393464b19",
               "rel": "self"
            },
            {
               "href": "http://192.168.204.2:6385/isystems/5ce48a37-f6f5-4f14-8fbd-ac6393464b19",
               "rel": "bookmark"
            }
         ],
         "description": "The Ottawa Cloud Test Lab - West Wing.",
         "software_version": "18.03",
         "updated_at": "2017-07-31T17:44:06.051441+00:00",
         "created_at": "2017-07-31T17:35:46.836024+00:00"
      }
   ]
}
------------------------------------
Building the Reference Documentation
------------------------------------
To build the API reference documentation locally in HTML format, use the
following command:
.. code:: sh
$ tox -e api-ref
The resulting directories and HTML files look like:
::
api-ref
└── build
    ├── doctrees
    │   ├── api-ref-sysinv-v1-config.doctree
    │   ...
    └── html
        ├── api-ref-sysinv-v1-config.html
        ├── index.html
        ├── ...
        └── _static
--------------------------------------------
Viewing the Rendered Reference Documentation
--------------------------------------------
To view the rendered HTML API Reference document in a browser, open up
the **index.html** file.
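For example, on a typical Linux desktop you can open it from the command
line (the exact command depends on your platform and browser):
.. code:: sh
$ xdg-open api-ref/build/html/index.html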
**NOTE:** The PDF build uses a different tox environment and is
currently not supported for StarlingX.

View File

@ -1,8 +1,11 @@
=========================
Contributor Documentation
=========================
==========
Contribute
==========
The contributing page.
.. toctree::
:maxdepth: 1
api_contribute_guide
release_note_contribute_guide

View File

@ -0,0 +1,137 @@
===============================
Release Notes Contributor Guide
===============================
**[DRAFT]**
Release notes for StarlingX projects are managed using Reno, allowing release
notes to go through the same review process used for managing code changes.
Release documentation information comes from YAML source files stored in the
project repository that, when built in conjunction with RST source files,
generate HTML files. More details about the Reno release notes manager can
be found at: https://docs.openstack.org/reno
---------
Locations
---------
StarlingX Release Notes documentation exists in the following projects:
- **stx-clients:** StarlingX Client Libraries
- **stx-config:** StarlingX System Configuration Management
- **stx-distcloud:** StarlingX Distributed Cloud
- **stx-distcloud-client:** StarlingX Distributed Cloud Client
- **stx-fault:** StarlingX Fault Management
- **stx-gui:** StarlingX Horizon plugins for new StarlingX services
- **stx-ha:** StarlingX High Availability/Process Monitoring/Service Management
- **stx-integ:** StarlingX Integration and Packaging
- **stx-metal:** StarlingX Bare Metal and Node Management, Hardware Maintenance
- **stx-nfv:** StarlingX NFVI Orchestration
- **stx-tools:** StarlingX Build Tools
- **stx-update:** StarlingX Installation/Update/Patching/Backup/Restore
- **stx-upstream:** StarlingX Upstream Packaging
--------------------
Directory Structures
--------------------
The directory structure of Release documentation under each StarlingX project
repository is fixed. Here is an example showing **stx-config**, StarlingX System
Configuration Management:
::
releasenotes/
├── notes
│ └── release-summary-6738ff2f310f9b57.yaml
└── source
├── conf.py
├── index.rst
└── unreleased.rst
The initial modifications and additions to enable the release notes
documentation in each StarlingX project are as follows:
- **.gitignore** modifications to ignore the building directories and HTML files
for the Release Notes
- **.zuul.yaml** modifications to add the jobs to build and publish the
release notes document
- **releasenotes/notes/** directory creation to store your release notes files
in YAML format
- **releasenotes/source** directory creation to store your release notes
documentation source files
- **releasenotes/source/conf.py** configuration file to determine the HTML theme,
Sphinx extensions and project information
- **releasenotes/source/index.rst** source file to create your index RST source
file
- **releasenotes/source/unreleased.rst** source file to avoid breaking the real
release notes build job on the master branch
- **doc/requirements.txt** modifications to add the Sphinx extension required by Reno
- **tox.ini** modifications to add the configuration to build the release notes
locally
See stx-config [Doc] Release Notes Management as an example of this first commit:
https://review.openstack.org/#/c/603257/
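For reference, the **tox.ini** addition typically looks similar to the
following sketch (based on the common OpenStack pattern; check the linked
commit for the exact settings each project uses):
::
[testenv:releasenotes]
basepython = python3
deps = -r{toxinidir}/doc/requirements.txt
commands =
  sphinx-build -a -E -W -d releasenotes/build/doctrees -b html releasenotes/source releasenotes/build/html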
Once the release notes documentation has been enabled, you can create new
release notes.
-------------------
Release Notes Files
-------------------
The following shows the YAML source file for the stx-config StarlingX System
Configuration Management:
`Release Summary r/2018.10 <http://git.openstack.org/cgit/openstack/stx-config/tree/releasenotes/notes/release-summary-6738ff2f310f9b57.yaml>`_
::
stx-config/releasenotes/
├── notes
│ └── release-summary-6738ff2f310f9b57.yaml
To create a new release note that documents your code changes, use the tox
**newnote** environment:
::
$ tox -e newnote hello-my-change
A YAML source file is created with a unique name under the releasenotes/notes/ directory:
::
stx-config/releasenotes/
├── notes
│ ├── hello-my-change-dcef4b934a670160.yaml
The content is grouped into logical sections based on the default template used by Reno:
::
features
issues
upgrade
deprecations
critical
security
fixes
other
Modify the content in the YAML source file based on
`reStructuredText <http://www.sphinx-doc.org/en/stable/rest.html>`_ format.
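As an illustration only, a minimal release note for the hypothetical
"hello-my-change" example might look like the following, keeping only the
sections that apply to your change:
::
---
features:
  - |
    Added hello-my-change, which does X.
issues:
  - |
    hello-my-change does not yet handle Y.
fixes:
  - |
    Fixed defect Z exposed by hello-my-change.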
------------------
Developer Workflow
------------------
#. Start the common development workflow to create your change: "Hello My Change".
#. Create its release note; this requires little effort, since the title and content
   can often be reused from the git commit information.
#. Add your change, including its release note, and submit it for review (see the
   example below).
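A hypothetical end-to-end example of this flow, assuming the standard Gerrit
workflow with the git-review tool (branch, file, and commit names are
placeholders):
::
$ git checkout -b hello-my-change
# ... edit code ...
$ tox -e newnote hello-my-change
# ... edit the generated releasenotes/notes/hello-my-change-*.yaml ...
$ git add -A
$ git commit -m "Hello My Change"
$ git review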
---------------------
Release Team Workflow
---------------------
#. Start the development work to prepare the release; this might include creating a git tag.
#. Generate the Reno report (see the example below).
#. Add your change and submit it for review.
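One way to generate the Reno report locally is with the reno command line
tool (a sketch; the release team may use a different wrapper or tox
environment):
::
$ pip install --user reno
$ reno report . > RELEASENOTES.rst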

View File

@ -0,0 +1,780 @@
.. _developer-guide:
===============
Developer Guide
===============
This section contains the steps for building a StarlingX ISO from Master
branch.
------------
Requirements
------------
The recommended minimum requirements include:
Hardware Requirements
*********************
A workstation computer with:
- Processor: x86_64 is the only supported architecture
- Memory: At least 32GB RAM
- Hard Disk: 500GB HDD
- Network: Network adapter with active Internet connection
Software Requirements
*********************
A workstation computer with:
- Operating System: Ubuntu 16.04 LTS 64-bit
- Docker
- Android Repo Tool
- Proxy Settings Configured (If Required)
- See
http://lists.starlingx.io/pipermail/starlingx-discuss/2018-July/000136.html
for more details
- Public SSH Key
-----------------------------
Development Environment Setup
-----------------------------
This section describes how to set up a StarlingX development system on a
workstation computer. After completing these steps, you will be able to
build a StarlingX ISO image on the following Linux distribution:
- Ubuntu 16.04 LTS 64-bit
Update Your Operating System
****************************
Before proceeding with the build, ensure your OS is up to date. You'll
first need to update the local database list of available packages:
.. code:: sh
$ sudo apt-get update
Installation Requirements and Dependencies
******************************************
Git
^^^
1. Install the required packages in an Ubuntu host system with:
.. code:: sh
$ sudo apt-get install make git curl
2. Make sure to set up your identity:
.. code:: sh
$ git config --global user.name "Name LastName"
$ git config --global user.email "Email Address"
Docker CE
^^^^^^^^^
3. Install the required Docker CE packages in an Ubuntu host system. See
`Get Docker CE for
Ubuntu <https://docs.docker.com/install/linux/docker-ce/ubuntu/#os-requirements>`__
for more information.
Android Repo Tool
^^^^^^^^^^^^^^^^^
4. Install the required Android Repo Tool in an Ubuntu host system. Follow
the two steps in the "Installing Repo" section of `Installing
Repo <https://source.android.com/setup/build/downloading#installing-repo>`__
to have the Android Repo Tool installed, as sketched below.
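At the time of writing, those two steps amount to roughly the following
(taken from the Android documentation linked above; verify against the
current instructions there):
.. code:: sh
$ mkdir -p ~/bin
$ curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
$ chmod a+x ~/bin/repo
$ export PATH=~/bin:$PATH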
Install Public SSH Key
**********************
#. Follow these instructions on GitHub to `Generate a Public SSH
Key <https://help.github.com/articles/connecting-to-github-with-ssh>`__
and then upload your public key to your GitHub and Gerrit account
profiles:
- `Upload to
Github <https://help.github.com/articles/adding-a-new-ssh-key-to-your-github-account>`__
- `Upload to
Gerrit <https://review.openstack.org/#/settings/ssh-keys>`__
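If you do not already have a key pair, generating one typically looks like
this (key type and comment are your choice); the public key printed by the
second command is what you upload:
.. code:: sh
$ ssh-keygen -t rsa -C "your.email@example.com"
$ cat ~/.ssh/id_rsa.pub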
Install stx-tools project
*************************
#. Under your $HOME directory, clone the *stx-tools* project:
.. code:: sh
$ cd $HOME
$ git clone https://git.starlingx.io/stx-tools
Create a Workspace Directory
****************************
#. Create a *starlingx* workspace directory on your workstation
computer. Usually, you'll want to create it somewhere under your
user's home directory.
.. code:: sh
$ mkdir -p $HOME/starlingx/
----------------------------------
Build the CentOS Mirror Repository
----------------------------------
This section describes how to build the CentOS Mirror Repository.
Setup Repository Docker Container
*********************************
Run the following commands under a terminal identified as "One".
#. Navigate to the *$HOME/stx-tools/centos-mirror-tools* project
directory:
.. code:: sh
$ cd $HOME/stx-tools/centos-mirror-tools/
#. If necessary, set the http/https proxy in your
Dockerfile before building the Docker image:
.. code:: sh
ENV http_proxy " http://your.actual_http_proxy.com:your_port "
ENV https_proxy " https://your.actual_https_proxy.com:your_port "
ENV ftp_proxy " http://your.actual_ftp_proxy.com:your_port "
RUN echo " proxy=http://your-proxy.com:port " >> /etc/yum.conf
#. Build your *<user>:<tag>* base container image, e.g.
*$USER:centos-mirror-repository*:
.. code:: sh
$ docker build --tag $USER:centos-mirror-repository --file Dockerfile .
#. Launch a *<user>* Docker container using the previously created Docker
base container image *<user>:<tag>*, e.g.
*$USER:centos-mirror-repository*. As /localdisk is defined as the workdir
of the container, the same folder name should be used to define the
volume. The container will start to run and populate the logs and
output folders in this directory. The container must be run from the
same directory where the other scripts are stored.
.. code:: sh
$ docker run -itd --name $USER-centos-mirror-repository --volume $(pwd):/localdisk $USER:centos-mirror-repository
**Note**: the above command creates the container in the background,
which means you need to attach to it manually. The advantage of this
is that you can enter and exit the container as many times as you want.
Download Packages
*****************
#. Attach to the Docker container previously created:
::
$ docker exec -it <CONTAINER ID> /bin/bash
#. Inside Repository Docker container, enter the following command to
download the required packages to populate the CentOS Mirror
Repository:
::
# bash download_mirror.sh
#. Monitor the download of packages until it is complete. When download
is complete, the following message is displayed:
::
totally 17 files are downloaded!
step #3: done successfully
IMPORTANT: The following 3 files are just bootstrap versions. Based on them, the workable images
for StarlingX could be generated by running "update-pxe-network-installer" command after "build-iso"
- out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz
Verify Packages
***************
#. Verify there are no missing or failed packages:
::
# cat logs/*_missing_*.log
# cat logs/*_failmove_*.log
#. In case there are missing or failed packages due to network instability
(or timeouts), download them manually to ensure you get
all RPMs listed in
**rpms_3rdparties.lst**/**rpms_centos.lst**/**rpms_centos3rdparties.lst**,
as in the sketch below.
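A sketch of a manual download into the mirror output tree; the URL and
package name below are placeholders, so use the source referenced in the
corresponding .lst file:
.. code:: sh
$ wget -P output/stx-r1/CentOS/pike/Binary/x86_64/ \
    http://some.mirror.example.com/path/<missing-package>.rpm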
Packages Structure
******************
The following is a general overview of the packages structure that you
will have after downloading the packages:
::
/home/<user>/stx-tools/centos-mirror-tools/output
└── stx-r1
└── CentOS
└── pike
├── Binary
│   ├── EFI
│   ├── images
│   ├── isolinux
│   ├── LiveOS
│   ├── noarch
│   └── x86_64
├── downloads
│   ├── integrity
│   └── puppet
└── Source
Create CentOS Mirror Repository
*******************************
Outside your Repository Docker container, in another terminal identified
as "**Two**", run the following commands:
#. From terminal identified as "**Two**", create a *mirror/CentOS*
directory under your *starlingx* workspace directory:
.. code:: sh
$ mkdir -p $HOME/starlingx/mirror/CentOS/
#. Copy the CentOS Mirror Repository built under
*$HOME/stx-tools/centos-mirror-tools* to the *$HOME/starlingx/mirror/*
workspace directory:
.. code:: sh
$ cp -r $HOME/stx-tools/centos-mirror-tools/output/stx-r1/ $HOME/starlingx/mirror/CentOS/
-------------------------
Create StarlingX Packages
-------------------------
Setup Building Docker Container
*******************************
#. From terminal identified as "**Two**", create the workspace folder
.. code:: sh
$ mkdir -p $HOME/starlingx/workspace
#. Navigate to the *$HOME/stx-tools* project directory:
.. code:: sh
$ cd $HOME/stx-tools
#. Copy your git options to the "toCOPY" folder:
.. code:: sh
$ cp ~/.gitconfig toCOPY
#. Create a *localrc* file:
.. code:: sh
$ cat <<- EOF > localrc
# tbuilder localrc
MYUNAME=$USER
PROJECT=starlingx
HOST_PREFIX=$HOME/starlingx/workspace
HOST_MIRROR_DIR=$HOME/starlingx/mirror
EOF
#. If necessary, set the http/https proxy in your
Dockerfile.centos73 before building the Docker image:
.. code:: sh
ENV http_proxy "http://your.actual_http_proxy.com:your_port"
ENV https_proxy "https://your.actual_https_proxy.com:your_port"
ENV ftp_proxy "http://your.actual_ftp_proxy.com:your_port"
RUN echo "proxy=$http_proxy" >> /etc/yum.conf && \
echo -e "export http_proxy=$http_proxy\nexport https_proxy=$https_proxy\n\
export ftp_proxy=$ftp_proxy" >> /root/.bashrc
#. Base container setup. If you are running on a Fedora system, you will
see a ".makeenv:88: \*\*\* missing separator. Stop." error; to
continue:
- delete the functions defined in .makeenv ( module () { ... } )
- delete line 19 in the Makefile ( NULL := $(shell bash -c
"source buildrc ... ).
.. code:: sh
$ make base-build
#. Build container setup
.. code:: sh
$ make build
#. Verify environment variables
.. code:: sh
$ bash tb.sh env
#. Build container run
.. code:: sh
$ bash tb.sh run
#. Execute the built container:
.. code:: sh
$ bash tb.sh exec
Download Source Code Repositories
*********************************
#. From the terminal identified as "**Two**", now inside the Building Docker
container, set up the internal environment:
.. code:: sh
$ eval $(ssh-agent)
$ ssh-add
#. Repo init
.. code:: sh
$ cd $MY_REPO_ROOT_DIR
$ repo init -u https://git.starlingx.io/stx-manifest -m default.xml
#. Repo sync
.. code:: sh
$ repo sync -j`nproc`
#. Tarballs Repository
.. code:: sh
$ ln -s /import/mirrors/CentOS/stx-r1/CentOS/pike/downloads/ $MY_REPO/stx/
Alternatively you can run the populate_downloads.sh script to copy
the tarballs instead of using a symlink.
.. code:: sh
$ populate_downloads.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/
Outside the container:
#. From another terminal identified as "**Three**", copy the mirror binaries:
.. code:: sh
$ mkdir -p $HOME/starlingx/mirror/CentOS/tis-installer
$ cp $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img $HOME/starlingx/mirror/CentOS/tis-installer/initrd.img-stx-0.2
$ cp $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz $HOME/starlingx/mirror/CentOS/tis-installer/vmlinuz-stx-0.2
$ cp $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img $HOME/starlingx/mirror/CentOS/tis-installer/squashfs.img-stx-0.2
Build Packages
**************
#. Go back to the Building Docker container, terminal identified as
"**Two**".
#. **Temporary!** Build-Pkgs errors: be prepared for some missing or
corrupted rpm and tarball packages generated during
`Build the CentOS Mirror Repository`_, which will make the next step
fail. If that happens, please download those missing or
corrupted packages manually.
#. **Update the symbolic links**
.. code:: sh
$ generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/
#. Build-Pkgs
.. code:: sh
$ build-pkgs
#. **Optional!** Generate-Cgcs-Tis-Repo.
This step is optional but will improve performance on subsequent
builds. The cgcs-tis-repo has the dependency information that
sequences the build order; to generate or update the information, the
following command needs to be executed after building modified or new
packages.
.. code:: sh
$ generate-cgcs-tis-repo
-------------------
Build StarlingX ISO
-------------------
#. Build-Iso
.. code:: sh
$ build-iso
---------------
Build installer
---------------
To get your StarlingX ISO ready to use, you will need to create the init
files that are used to boot the ISO, as well as to boot additional
controllers and compute nodes. Note that this procedure is only needed
for your first build and whenever the kernel is upgraded.
Once you have run build-iso, run:
.. code:: sh
$ build-pkgs --installer
This will build *rpm* and *anaconda* packages. Then run:
.. code:: sh
$ update-pxe-network-installer
The *update-pxe-network-installer* covers the steps detailed in
*$MY_REPO/stx/stx-metal/installer/initrd/README*. This script will
create three files on
*/localdisk/loadbuild///pxe-network-installer/output*.
::
new-initrd.img
new-squashfs.img
new-vmlinuz
Then, rename them to:
::
initrd.img-stx-0.2
squashfs.img-stx-0.2
vmlinuz-stx-0.2
There are two ways to use these files:
#. Store the files in the */import/mirror/CentOS/tis-installer/* folder
for future use.
#. Store them in an arbitrary location and modify the
*$MY_REPO/stx/stx-metal/installer/pxe-network-installer/centos/build_srpm.data*
file to point to these files.
Now, the *pxe-network-installer* package needs to be recreated and the
ISO regenerated.
.. code:: sh
$ build-pkgs --clean pxe-network-installer
$ build-pkgs pxe-network-installer
$ build-iso
Now your ISO should be able to boot.
Additional notes
****************
- In order to get the first boot working this complete procedure needs
to be done. However, once the init files are created, these can be
stored in a shared location where different developers can make use
of them. Updating these files is not a frequent task and should be
done whenever the kernel is upgraded.
- StarlingX is in active development, so it is possible that in the
future the **0.2** version will change to a more generic solution.
---------------
Build Avoidance
---------------
Purpose
*******
Greatly reduce build times after a repo sync for designers working
within a regional office. Starting from a new workspace, build-pkgs
typically requires 3+ hours. Build avoidance typically reduces this step
to ~20 minutes.
Limitations
***********
- Little or no benefit for designers who refresh a pre-existing
workspace at least daily (download_mirror.sh, repo sync,
generate-cgcs-centos-repo.sh, build-pkgs, build-iso). In these cases
an incremental build (reuse of the same workspace without a 'build-pkgs
--clean') is often just as efficient.
- Not likely to be useful to solo designers, or teleworkers who wish
to compile on their home computers. Build avoidance downloads build
artifacts from a reference build, and WAN speeds are generally too
slow.
Method (in brief)
*****************
#. Reference Builds
- A server in the regional office performs regular (e.g. daily),
automated builds using existing methods. Call these the reference
builds.
- The builds are timestamped and preserved for some time (a few
weeks).
- A build CONTEXT is captured. This is a file produced by build-pkgs
at location '$MY_WORKSPACE/CONTEXT'. It is a bash script that can
cd to each and every git and check out the SHA that contributed to
the build.
- For each package built, a file shall capture the md5sums of all the
source code inputs to the build of that package. These files are
also produced by build-pkgs at location
'$MY_WORKSPACE//rpmbuild/SOURCES//srpm_reference.md5'.
- All these build products are accessible locally (e.g. within a regional
office) via rsync (other protocols can be added later).
#. Designers
- Request a build avoidance build. Recommended after you have just
done a repo sync. e.g.
::
repo sync
generate-cgcs-centos-repo.sh
populate_downloads.sh
build-pkgs --build-avoidance
- Additional arguments, and/or environment variables, and/or a
config file unique to the regional office, are used to specify a URL
to the reference builds.
- Using a config file to specify location of your reference build
::
mkdir -p $MY_REPO/local-build-data
cat <<- EOF > $MY_REPO/local-build-data/build_avoidance_source
# Optional, these are already the default values.
BUILD_AVOIDANCE_DATE_FORMAT="%Y%m%d"
BUILD_AVOIDANCE_TIME_FORMAT="%H%M%S"
BUILD_AVOIDANCE_DATE_TIME_DELIM="T"
BUILD_AVOIDANCE_DATE_TIME_POSTFIX="Z"
BUILD_AVOIDANCE_DATE_UTC=1
BUILD_AVOIDANCE_FILE_TRANSFER="rsync"
# Required, unique values for each regional office
BUILD_AVOIDANCE_USR="jenkins"
BUILD_AVOIDANCE_HOST="stx-builder.mycompany.com"
BUILD_AVOIDANCE_DIR="/localdisk/loadbuild/jenkins/StarlingX_Reference_Build"
EOF
- Using command line args to specify location of your reference
build
::
build-pkgs --build-avoidance --build-avoidance-dir /localdisk/loadbuild/jenkins/StarlingX_Reference_Build --build-avoidance-host stx-builder.mycompany.com --build-avoidance-user jenkins
- Prior to your build attempt, you need to accept the host key. This will prevent rsync failures on a yes/no prompt. (you should only have to do this once)
::
grep -q $BUILD_AVOIDANCE_HOST $HOME/.ssh/known_hosts
if [ $? != 0 ]; then
ssh-keyscan $BUILD_AVOIDANCE_HOST >> $HOME/.ssh/known_hosts
fi
- build-pkgs will:
- From newest to oldest, scan the CONTEXTs of the various
reference builds. Select the first (most recent) context which
satisfies the following requirement. For every git, the SHA
specified in the CONTEXT is present.
- The selected context might be slightly out of date, but not by
more than a day (assuming daily reference builds).
- If the context has not been previously downloaded, then
download it now. Meaning download select portions of the
reference build workspace into the designer's workspace. This
includes all the SRPMS, RPMS, MD5SUMS, and misc supporting
files. (~10 min over office LAN)
- The designer may have additional commits not present in the
reference build, or uncommitted changes. Affected packages will be
identified by their differing md5sums, and those packages are
re-built. (5+ min, depending on which packages have changed)
- What if no valid reference build is found? Then build-pkgs will fall
back to a regular build.
Reference builds
****************
- The regional office implements an automated build that pulls the
latest StarlingX software and builds it on a regular basis, e.g.
daily. Perhaps implemented by Jenkins, cron, or similar tools.
- Each build is saved to a unique directory, and preserved for a time
that is reflective of how long a designer might be expected to work
on a private branch without synchronizing with the master branch. e.g.
2 weeks.
- The MY_WORKSPACE directory for the build shall have a common root
directory, and a leaf directory that is a sortable time stamp. Suggested
format YYYYMMDDThhmmss. e.g.
.. code:: sh
BUILD_AVOIDANCE_DIR="/localdisk/loadbuild/jenkins/StarlingX_Reference_Build"
BUILD_TIMESTAMP=$(date -u '+%Y%m%dT%H%M%SZ')
MY_WORKSPACE=${BUILD_AVOIDANCE_DIR}/${BUILD_TIMESTAMP}
- Designers can access all build products over the internal network of
the regional office. The current prototype employs rsync. Other
protocols that can efficiently share/copy/transfer large directories
of content can be added as needed.
Advanced usage
^^^^^^^^^^^^^^
Can the reference build itself use build avoidance? Yes.
Can it reference itself? Yes.
In either case we advise caution. To protect against any possible
'divergence from reality', you should limit how many steps removed a
build avoidance build is from a full build.
Suppose we want to implement a self-referencing daily build, except
that a full build occurs every Saturday. To protect ourselves from a
build failure on Saturday, we also want a limit of 7 days since the last
full build. Your build script might look like this ...
::
...
BUILD_AVOIDANCE_DIR="/localdisk/loadbuild/jenkins/StarlingX_Reference_Build"
BUILD_AVOIDANCE_HOST="stx-builder.mycompany.com"
FULL_BUILD_DAY="Saturday"
MAX_AGE_DAYS=7
LAST_FULL_BUILD_LINK="$BUILD_AVOIDANCE_DIR/latest_full_build"
LAST_FULL_BUILD_DAY=""
NOW_DAY=$(date -u "+%A")
BUILD_TIMESTAMP=$(date -u '+%Y%m%dT%H%M%SZ')
MY_WORKSPACE=${BUILD_AVOIDANCE_DIR}/${BUILD_TIMESTAMP}
# update software
repo init -u ${BUILD_REPO_URL} -b ${BUILD_BRANCH}
repo sync --force-sync
$MY_REPO_ROOT_DIR/stx-tools/toCOPY/generate-cgcs-centos-repo.sh
$MY_REPO_ROOT_DIR/stx-tools/toCOPY/populate_downloads.sh
# User can optionally define BUILD_METHOD equal to one of 'FULL', 'AVOIDANCE', or 'AUTO'
# Sanitize BUILD_METHOD
if [ "$BUILD_METHOD" != "FULL" ] && [ "$BUILD_METHOD" != "AVOIDANCE" ]; then
BUILD_METHOD="AUTO"
fi
# First build test
if [ "$BUILD_METHOD" != "FULL" ] && [ ! -L $LAST_FULL_BUILD_LINK ]; then
echo "latest_full_build symlink missing, forcing full build"
BUILD_METHOD="FULL"
fi
# Build day test
if [ "$BUILD_METHOD" == "AUTO" ] && [ "$NOW_DAY" == "$FULL_BUILD_DAY" ]; then
echo "Today is $FULL_BUILD_DAY, forcing full build"
BUILD_METHOD="FULL"
fi
# Build age test
if [ "$BUILD_METHOD" != "FULL" ]; then
LAST_FULL_BUILD_DATE=$(basename $(readlink $LAST_FULL_BUILD_LINK) | cut -d '_' -f 1)
LAST_FULL_BUILD_DAY=$(date -d $LAST_FULL_BUILD_DATE "+%A")
AGE_SECS=$(( $(date "+%s") - $(date -d $LAST_FULL_BUILD_DATE "+%s") ))
AGE_DAYS=$(( $AGE_SECS/60/60/24 ))
if [ $AGE_DAYS -ge $MAX_AGE_DAYS ]; then
echo "Haven't had a full build in $AGE_DAYS days, forcing full build"
BUILD_METHOD="FULL"
else
BUILD_METHOD="AVOIDANCE"
fi
fi
#Build it
if [ "$BUILD_METHOD" == "FULL" ]; then
build-pkgs --no-build-avoidance
else
build-pkgs --build-avoidance --build-avoidance-dir $BUILD_AVOIDANCE_DIR --build-avoidance-host $BUILD_AVOIDANCE_HOST --build-avoidance-user $USER
fi
if [ $? -ne 0 ]; then
echo "Build failed in build-pkgs"
exit 1
fi
build-iso
if [ $? -ne 0 ]; then
echo "Build failed in build-iso"
exit 1
fi
if [ "$BUILD_METHOD" == "FULL" ]; then
# A successful full build. Set last full build symlink.
if [ -L $LAST_FULL_BUILD_LINK ]; then
rm -rf $LAST_FULL_BUILD_LINK
fi
ln -sf $MY_WORKSPACE $LAST_FULL_BUILD_LINK
fi
...
One final wrinkle:
we can ask build avoidance to preferentially use the full build day,
rather than the most recent build, as the reference point of the next
avoidance build via the '--build-avoidance-day <day>' option. For example,
substitute this line into the above.
::
build-pkgs --build-avoidance --build-avoidance-dir $BUILD_AVOIDANCE_DIR --build-avoidance-host $BUILD_AVOIDANCE_HOST --build-avoidance-user $USER --build-avoidance-day $FULL_BUILD_DAY
# or perhaps, with a bit more shuffling of the above script.
build-pkgs --build-avoidance --build-avoidance-dir $BUILD_AVOIDANCE_DIR --build-avoidance-host $BUILD_AVOIDANCE_HOST --build-avoidance-user $USER --build-avoidance-day $LAST_FULL_BUILD_DAY
The advantage is that our build is never more than one step removed
from a full build (assuming the full build was successful).
The disadvantage is that by the end of the week the reference build is
getting rather old. During active weeks, build times might approach
that of a full build.

View File

@ -1,5 +0,0 @@
===============
Getting started
===============
The getting started page.

View File

@ -2,44 +2,34 @@
StarlingX Documentation
=======================
Abstract
--------
This is the general documentation for the StarlingX project.
Welcome to the StarlingX project documentation for [version].
View the VERSION Release Notes <https://docs.starlingx.io/releasenotes>
for release specific details.
The following versions of the documentation are available:
StarlingX v1 | StarlingX v2 | StarlingX v3
StarlingX is provided under ... ToDo.
Source code is maintained at the StarlingX GitHub repo.
Contents
--------
.. toctree::
:maxdepth: 2
:maxdepth: 1
:caption: Contents:
getting_started.rst
contributor/index
Release Notes
-------------
.. toctree::
:maxdepth: 1
Release Notes <https://docs.starlingx.io/releasenotes>
Specs
-----
.. toctree::
:maxdepth: 1
Specs <https://docs.starlingx.io/specs>
API Reference
-------------
.. toctree::
:maxdepth: 1
installation_guide/index
developer_guide/index
API Reference <https://docs.starlingx.io/api-ref/stx-docs>
Release Notes <https://docs.starlingx.io/releasenotes>
contributor/index
Project Specifications <https://docs.starlingx.io/specs>
Contributing
============
@ -49,8 +39,3 @@ The source is hosted on `OpenStack's Gerrit server`_.
.. _`OpenStack's Gerrit server`: https://git.starlingx.io
Indices and tables
==================
* :ref:`genindex`
* :ref:`search`

View File

@ -0,0 +1,943 @@
.. _controller-storage:
===================================================================
StarlingX/Installation Guide Virtual Environment/Controller Storage
===================================================================
-----------------
Preparing Servers
-----------------
Bare Metal
**********
Required Servers:
- Controllers: 2
- Computes: 2 - 100
Hardware Requirements
^^^^^^^^^^^^^^^^^^^^^
The recommended minimum requirements for the physical servers where
StarlingX Controller Storage will be deployed include:
- Minimum Processor:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
- Memory:
- 64 GB Controller
- 32 GB Compute
- BIOS:
- Hyper-Threading Tech: Enabled
- Virtualization Technology: Enabled
- VT for Directed I/O: Enabled
- CPU Power and Performance Policy: Performance
- CPU C State Control: Disabled
- Plug & Play BMC Detection: Disabled
- Primary Disk:
- 500 GB SSD or NVMe Controller
- 120 GB (min. 10K RPM) Compute
- Additional Disks:
- 1 or more 500 GB disks (min. 10K RPM) Compute
- Network Ports\*
- Management: 10GE Controller, Compute
- OAM: 10GE Controller
- Data: n x 10GE Compute
Virtual Environment
*******************
Run the libvirt/qemu setup scripts to set up the virtualized OAM and
management networks:
::
$ bash setup_network.sh
Build the XML definitions of the virtual servers:
::
$ bash setup_standard_controller.sh -i <starlingx iso image>
Accessing Virtual Server Consoles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XML definitions for the virtual servers in the stx-tools repo, under
deployment/libvirt, provide both graphical and text consoles.
Access the graphical console in virt-manager by right-clicking on the
domain (the server) and selecting "Open".
Access the textual console with the command "virsh console $DOMAIN",
where DOMAIN is the name of the server shown in virsh.
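For example, assuming the domain is named controller-0 in virsh:
::
$ virsh console controller-0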
When booting the controller-0 for the first time, both the serial and
graphical consoles will present the initial configuration menu for the
cluster. One can select serial or graphical console for controller-0.
For the other nodes, however, only the serial console is used, regardless
of which option is selected.
Open the graphical console on all servers before powering them on to
observe the boot device selection and PXE boot progress. Run the "virsh
console $DOMAIN" command promptly after power-on to see the initial boot
sequence that follows the boot device selection. You have only a few
seconds to do this.
------------------------------
Controller-0 Host Installation
------------------------------
Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes Controller-0.
Procedure:
#. Power on the server that will be controller-0 with the StarlingX ISO
on a USB in a bootable USB slot.
#. Configure the controller using the config_controller script.
Initializing Controller-0
*************************
This section describes how to initialize StarlingX in host Controller-0.
Except where noted, all the commands must be executed from a console of
the host.
Power on the host to be configured as Controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:
- **Standard Controller Configuration**
- When the installer is loaded and the installer welcome screen
appears in the Controller-0 host, select the type of installation
"Standard Controller Configuration".
- **Graphical Console**
- Select the "Graphical Console" as the console to use during
installation.
- **Standard Security Boot Profile**
- Select "Standard Security Boot Profile" as the Security Profile.
Monitor the initialization. When it is complete, a reboot is initiated
on the Controller-0 host; it briefly displays a GNU GRUB screen and then
boots automatically into the StarlingX image.
Log into Controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):
::
Changing password for wrsroot.
(current) UNIX Password:
Enter a new password for the wrsroot account:
::
New password:
Enter the new password again to confirm it:
::
Retype new password:
Controller-0 is initialized with StarlingX, and is ready for
configuration.
Configuring Controller-0
************************
This section describes how to perform the Controller-0 configuration
interactively, just to bootstrap the system with the minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:
- For the Virtual Environment, you can accept all the default values
immediately after system date and time.
- For a Physical Deployment, answer the bootstrap configuration
questions with answers applicable to your particular physical setup.
The script is used to configure the first controller in the StarlingX
cluster as controller-0. The prompts are grouped by configuration
area. To start the script interactively, use the following command
with no parameters:
::
controller-0:~$ sudo config_controller
System Configuration
================
Enter ! at any prompt to abort...
...
Accept all the default values immediately after system date and time.
::
...
Applying configuration (this will take several minutes):
01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05:08: Creating system configuration ... DONE
06:08: Applying controller manifest ... DONE
07:08: Finalize controller configuration ... DONE
08:08: Waiting for service activation ... DONE
Configuration was applied
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
After config_controller bootstrap configuration, REST API, CLI and
Horizon interfaces are enabled on the controller-0 OAM IP Address. The
remaining installation instructions will use the CLI.
---------------------------------
Controller-0 and System Provision
---------------------------------
On Controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Configuring Provider Networks at Installation
*********************************************
You must set up provider networks at installation so that you can attach
data interfaces and unlock the compute nodes.
Set up one provider network of the vlan type, named providernet-a:
::
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
Configuring Cinder on Controller Disk
*************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| uuid | device_no | device_ | device_ | size_ | available_ | rpm |...
| | de | num | type | gib | gib | |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| 004f4c09-2f61-46c5-8def-99b2bdeed83c | /dev/sda | 2048 | HDD | 200.0 | 0.0 | |...
| 89694799-0dd8-4532-8636-c0d8aabfe215 | /dev/sdb | 2064 | HDD | 200.0 | 199.997 | |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
Create the 'cinder-volumes' local volume group
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 cinder-volumes
+-----------------+--------------------------------------+
| Property | Value |
+-----------------+--------------------------------------+
| lvm_vg_name | cinder-volumes |
| vg_state | adding |
| uuid | ece4c755-241c-4363-958e-85e9e3d12917 |
| ihost_uuid | 150284e2-fb60-4169-ae75-7f444b8ca9bf |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gib | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-22T03:59:30.685718+00:00 |
| updated_at | None |
| parameters | {u'lvm_type': u'thin'} |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 89694799-0dd8-4532-8636-c0d8aabfe215 199 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| device_node | /dev/sdb1 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 203776 |
| uuid | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80 |
| ihost_uuid | 150284e2-fb60-4169-ae75-7f444b8ca9bf |
| idisk_uuid | 89694799-0dd8-4532-8636-c0d8aabfe215 |
| ipv_uuid | None |
| status | Creating |
| created_at | 2018-08-22T04:03:40.761221+00:00 |
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready)
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 89694799-0dd8-4532-8636-c0d8aabfe215
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| uuid |...| device_nod |...| type_name | size_mib | status |
| |...| e |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80 |...| /dev/sdb1 |...| LVM Physical Volume | 199.0 | Ready |
| |...| |...| | | |
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 cinder-volumes 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80
+--------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------------+
| uuid | 060dc47e-bc17-40f4-8f09-5326ef0e86a5 |
| pv_state | adding |
| pv_type | partition |
| disk_or_part_uuid | 9ba2d76a-6ae2-4bfa-ad48-57b62d102e80 |
| disk_or_part_device_node | /dev/sdb1 |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| lvm_pv_name | /dev/sdb1 |
| lvm_vg_name | cinder-volumes |
| lvm_pv_uuid | None |
| lvm_pv_size_gib | 0.0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | 150284e2-fb60-4169-ae75-7f444b8ca9bf |
| created_at | 2018-08-22T04:06:54.008632+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------------+
Enable LVM Backend.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder --confirmed
Wait for the storage backend to leave "configuring" state. Confirm LVM
Backend storage is configured:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+------------+---------+------------+------+----------+...
| uuid | name | backend | state | task | services |...
+--------------------------------------+------------+---------+------------+------+----------+...
| 1daf3e5b-4122-459f-9dba-d2e92896e718 | file-store | file | configured | None | glance |...
| a4607355-be7e-4c5c-bf87-c71a0e2ad380 | lvm-store | lvm | configured | None | cinder |...
+--------------------------------------+------------+---------+------------+------+----------+...
Unlocking Controller-0
**********************
You must unlock controller-0 so that you can use it to install the
remaining hosts. On Controller-0, acquire Keystone administrative
privileges. Use the system host-unlock command:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
The host is rebooted. During the reboot, the command line is
unavailable, and any ssh connections are dropped. To monitor the
progress of the reboot, use the controller-0 console.
Verifying the Controller-0 Configuration
****************************************
On Controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Verify that the StarlingX controller services are running:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system service-list
+-----+-------------------------------+--------------+----------------+
| id | service_name | hostname | state |
+-----+-------------------------------+--------------+----------------+
...
| 1 | oam-ip | controller-0 | enabled-active |
| 2 | management-ip | controller-0 | enabled-active |
...
+-----+-------------------------------+--------------+----------------+
Verify that controller-0 is unlocked, enabled, and available:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
-----------------------------------------
Controller-1 / Compute Hosts Installation
-----------------------------------------
After initializing and configuring an active controller, you can add and
configure a backup controller and additional compute hosts. For each
host do the following:
Initializing Host
*****************
Power on Host. In host console you will see:
::
Waiting for this node to be configured.
Please configure the personality for this node from the
controller node in order to proceed.
Updating Host Name and Personality
***************************************
On Controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Wait for Controller-0 to discover the new host, and list the hosts until
the new UNKNOWN host shows up in the table:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
Use the system host-update command to update the host personality attribute:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 2 personality=controller hostname=controller-1
Or for compute-0:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-update 3 personality=compute hostname=compute-0
See also: 'system help host-update'
Unless it is known that the host's configuration can support the
installation of more than one node, it is recommended that the
installation and configuration of each node be serialized. For example,
if the entire cluster has its virtual disks hosted on the host's root
disk which happens to be a single rotational type hard disk, then the
host cannot (reliably) support parallel node installation.
Monitoring Host
***************
On Controller-0, you can monitor the installation progress by running
the system host-show command for the host periodically. Progress is
shown in the install_state field.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show <host> | grep install
| install_output | text |
| install_state | booting |
| install_state_info | None |
Wait while the host is configured and rebooted. Up to 20 minutes may be
required for a reboot, depending on hardware. When the reboot is
complete, the host is reported as Locked, Disabled, and Online.
Listing Hosts
*************
Once all Nodes have been installed, configured and rebooted, on
Controller-0 list the hosts:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | locked | disabled | online |
| 3 | compute-0 | compute | locked | disabled | online |
| 4 | compute-1 | compute | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
-------------------------
Controller-1 Provisioning
-------------------------
On Controller-0, list hosts
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
...
| 2 | controller-1 | controller | locked | disabled | online |
...
+----+--------------+-------------+----------------+-------------+--------------+
Provisioning Network Interfaces on Controller-1
***********************************************
To list the hardware port names, types, and PCI addresses that have
been discovered:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list controller-1
Provision the oam interface for Controller-1.
**Temporary** changes to the host-if-modify command: check the help with
'system help host-if-modify'. If the help text lists the '-c' option, then
execute the following command; otherwise use the form with '-nt' listed below:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -c platform --networks oam controller-1 <oam interface>
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -nt oam controller-1 <oam interface>
Provisioning Storage on Controller-1
************************************
Review the available disk space and capacity and obtain the uuid of the
physical disk
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-1
+--------------------------------------+-----------+---------+---------+-------+------------+
| uuid | device_no | device_ | device_ | size_ | available_ |
| | de | num | type | gib | gib |
+--------------------------------------+-----------+---------+---------+-------+------------+
| f7ce53db-7843-457e-8422-3c8f9970b4f2 | /dev/sda | 2048 | HDD | 200.0 | 0.0 |
| 70b83394-968e-4f0d-8a99-7985cd282a21 | /dev/sdb | 2064 | HDD | 200.0 | 199.997 |
+--------------------------------------+-----------+---------+---------+-------+------------+
Assign Cinder storage to the physical disk
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-1 cinder-volumes
+-----------------+--------------------------------------+
| Property | Value |
+-----------------+--------------------------------------+
| lvm_vg_name | cinder-volumes |
| vg_state | adding |
| uuid | 22d8b94a-200a-4fd5-b1f5-7015ddf10d0b |
| ihost_uuid | 06827025-eacb-45e6-bb88-1a649f7404ec |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gib | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-22T05:33:44.608913+00:00 |
| updated_at | None |
| parameters | {u'lvm_type': u'thin'} |
+-----------------+--------------------------------------+
Create a disk partition to add to the volume group based on uuid of the
physical disk
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-1 70b83394-968e-4f0d-8a99-7985cd282a21 199 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| device_node | /dev/sdb1 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 203776 |
| uuid | 16a1c5cb-620c-47a3-be4b-022eafd122ee |
| ihost_uuid | 06827025-eacb-45e6-bb88-1a649f7404ec |
| idisk_uuid | 70b83394-968e-4f0d-8a99-7985cd282a21 |
| ipv_uuid | None |
| status | Creating (on unlock) |
| created_at | 2018-08-22T05:36:42.123770+00:00 |
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e. status=Ready)
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-1 --disk 70b83394-968e-4f0d-8a99-7985cd282a21
+--------------------------------------+...+------------+...+-------+--------+----------------------+
| uuid |...| device_nod | ... | size_g | status |
| |...| e | ... | ib | |
+--------------------------------------+...+------------+ ... +--------+----------------------+
| 16a1c5cb-620c-47a3-be4b-022eafd122ee |...| /dev/sdb1 | ... | 199.0 | Creating (on unlock) |
| |...| | ... | | |
| |...| | ... | | |
+--------------------------------------+...+------------+...+--------+----------------------+
Add the partition to the volume group
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-1 cinder-volumes 16a1c5cb-620c-47a3-be4b-022eafd122ee
+--------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------------+
| uuid | 01d79ed2-717f-428e-b9bc-23894203b35b |
| pv_state | adding |
| pv_type | partition |
| disk_or_part_uuid | 16a1c5cb-620c-47a3-be4b-022eafd122ee |
| disk_or_part_device_node | /dev/sdb1 |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0-part1 |
| lvm_pv_name | /dev/sdb1 |
| lvm_vg_name | cinder-volumes |
| lvm_pv_uuid | None |
| lvm_pv_size_gib | 0.0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | 06827025-eacb-45e6-bb88-1a649f7404ec |
| created_at | 2018-08-22T05:44:34.715289+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------------+
Unlocking Controller-1
**********************
Unlock Controller-1
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1
Wait while Controller-1 is rebooted. Up to 10 minutes may be
required for a reboot, depending on hardware.
**REMARK:** Controller-1 will remain in the 'degraded' state until
data-syncing is complete. The duration is dependent on the
virtualization host's configuration, i.e., the number and configuration
of physical disks used to host the nodes' virtual disks. Also, the
management network is expected to have a link speed of 10000 Mbps (1000
Mbps is not supported due to excessive data-sync time). Use 'fm
alarm-list' to confirm status.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
...
----------------------
Compute Host Provision
----------------------
You must configure the network interfaces and the storage disks on a
host before you can unlock it. For each Compute Host do the following:
On Controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Provisioning Network Interfaces on a Compute Host
*************************************************
On Controller-0, list the hardware port names, types, and PCI addresses
that have been discovered:
- **Only in Virtual Environment**: Ensure that the interface used is
one of those attached to a host bridge with model type "virtio" (i.e.,
eth1000 and eth1001). Devices emulated with model type "e1000" will not
work for provider networks:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list compute-0
Provision the data interface for the Compute host.
**Temporary** changes to the host-if-modify command: check the help text
with 'system help host-if-modify'. If it lists the '-c' option, execute
the first command below; otherwise use the second form with '-nt':
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 eth1000
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 eth1000
VSwitch Virtual Environment
***************************
**Only in Virtual Environment**. If the compute host has more than 4
CPUs, the system will auto-configure the vswitch to use 2 cores.
However, some virtual environments do not properly support the
multi-queue feature required in a multi-CPU environment. Therefore, run
the following command to reduce the vswitch cores to 1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-cpu-modify compute-0 -f vswitch -p0 1
+--------------------------------------+-------+-----------+-------+--------+...
| uuid | log_c | processor | phy_c | thread |...
| | ore | | ore | |...
+--------------------------------------+-------+-----------+-------+--------+...
| a3b5620c-28b1-4fe0-9e97-82950d8582c2 | 0 | 0 | 0 | 0 |...
| f2e91c2b-bfc5-4f2a-9434-bceb7e5722c3 | 1 | 0 | 1 | 0 |...
| 18a98743-fdc4-4c0c-990f-3c1cb2df8cb3 | 2 | 0 | 2 | 0 |...
| 690d25d2-4f99-4ba1-a9ba-0484eec21cc7 | 3 | 0 | 3 | 0 |...
+--------------------------------------+-------+-----------+-------+--------+...
Provisioning Storage on a Compute Host
**************************************
Review the available disk space and capacity and obtain the uuid(s) of
the physical disk(s) to be used for nova local:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list compute-0
+--------------------------------------+-----------+---------+---------+-------+------------+...
| uuid | device_no | device_ | device_ | size_ | available_ |...
| | de | num | type | gib | gib |...
+--------------------------------------+-----------+---------+---------+-------+------------+...
| 8a9d2c09-d3a7-4781-bd06-f7abf603713a | /dev/sda | 2048 | HDD | 200.0 | 172.164 |...
| 5ad61bd1-795a-4a76-96ce-39433ef55ca5 | /dev/sdb | 2064 | HDD | 200.0 | 199.997 |...
+--------------------------------------+-----------+---------+---------+-------+------------+...
Create the 'nova-local' local volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add compute-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property | Value |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name | nova-local |
| vg_state | adding |
| uuid | 18898640-c8b7-4bbd-a323-4bf3e35fee4d |
| ihost_uuid | da1cbe93-cec5-4f64-b211-b277e4860ab3 |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gib | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-22T08:00:51.945160+00:00 |
| updated_at | None |
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Add the physical volume to the 'nova-local' volume group, based on the
uuid of the physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add compute-0 nova-local 5ad61bd1-795a-4a76-96ce-39433ef55ca5
+--------------------------+--------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------+
| uuid | 4c81745b-286a-4850-ba10-305e19cee78c |
| pv_state | adding |
| pv_type | disk |
| disk_or_part_uuid | 5ad61bd1-795a-4a76-96ce-39433ef55ca5 |
| disk_or_part_device_node | /dev/sdb |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:03.0-ata-2.0 |
| lvm_pv_name | /dev/sdb |
| lvm_vg_name | nova-local |
| lvm_pv_uuid | None |
| lvm_pv_size_gib | 0.0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | da1cbe93-cec5-4f64-b211-b277e4860ab3 |
| created_at | 2018-08-22T08:07:14.205690+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------+
Specify the local storage space as local copy-on-write image volumes in
nova-local:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b image -s 10240 compute-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property | Value |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name | nova-local |
| vg_state | adding |
| uuid | 18898640-c8b7-4bbd-a323-4bf3e35fee4d |
| ihost_uuid | da1cbe93-cec5-4f64-b211-b277e4860ab3 |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gib | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-22T08:00:51.945160+00:00 |
| updated_at | None |
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Unlocking a Compute Host
************************
On Controller-0, use the system host-unlock command to unlock the
Compute node:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
Wait while the Compute node is rebooted. Up to 10 minutes may be
required for a reboot, depending on hardware. The host is rebooted, and
its Availability State is reported as In-Test, followed by
unlocked/enabled.
-------------------
System Health Check
-------------------
Listing StarlingX Nodes
***********************
On Controller-0, after a few minutes, all nodes should be reported as
Unlocked, Enabled, and Available:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
| 3 | compute-0 | compute | unlocked | enabled | available |
| 4 | compute-1 | compute | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
System Alarm List
*****************
When all nodes are Unlocked, Enabled, and Available, check 'fm alarm-list' for issues.
Your StarlingX deployment is now up and running with 2x HA Controllers with Cinder
Storage, 2x Computes, and all OpenStack services running. You can now proceed
with standard OpenStack APIs, CLIs, and/or Horizon to load Glance images, configure
Nova flavors, configure Neutron networks, and launch Nova virtual machines.
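For example, a minimal smoke test (a sketch only, assuming the unified
openstack client is available, a locally downloaded CirrOS image file
named cirros.img, and the providernet-a provider network created
earlier; all names and values are illustrative) could look like:
::
[wrsroot@controller-0 ~(keystone_admin)]$ openstack image create --disk-format qcow2 --container-format bare --file cirros.img cirros
[wrsroot@controller-0 ~(keystone_admin)]$ openstack flavor create --ram 512 --vcpus 1 --disk 1 m1.tiny
[wrsroot@controller-0 ~(keystone_admin)]$ openstack network create --provider-network-type vlan --provider-physical-network providernet-a --provider-segment 100 net-a
[wrsroot@controller-0 ~(keystone_admin)]$ openstack subnet create --network net-a --subnet-range 192.168.101.0/24 subnet-a
[wrsroot@controller-0 ~(keystone_admin)]$ openstack server create --image cirros --flavor m1.tiny --network net-a vm-test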

View File

@ -0,0 +1,871 @@
.. _dedicated-storage:
==================================================================
StarlingX/Installation Guide Virtual Environment/Dedicated Storage
==================================================================
-----------------
Preparing Servers
-----------------
Bare Metal
**********
Required Servers:
- Controllers: 2
- Storage
- Replication factor of 2: 2 - 8
- Replication factor of 3: 3 - 9
- Computes: 2 - 100
Hardware Requirements
^^^^^^^^^^^^^^^^^^^^^
The recommended minimum requirements for the physical servers where
StarlingX Dedicated Storage will be deployed include:
- Minimum Processor:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
- Memory:
- 64 GB Controller, Storage
- 32 GB Compute
- BIOS:
- Hyper-Threading Tech: Enabled
- Virtualization Technology: Enabled
- VT for Directed I/O: Enabled
- CPU Power and Performance Policy: Performance
- CPU C State Control: Disabled
- Plug & Play BMC Detection: Disabled
- Primary Disk:
- 500 GB SSD or NVMe Controller
- 120 GB (min. 10K RPM) Compute, Storage
- Additional Disks:
- 1 or more 500 GB disks (min. 10K RPM) Storage, Compute
- Network Ports
- Management: 10GE Controller, Storage, Compute
- OAM: 10GE Controller
- Data: n x 10GE Compute
Virtual Environment
*******************
Run the libvirt/qemu setup scripts to set up the virtualized OAM and
Management networks:
::
$ bash setup_network.sh
Build the XML definitions for the virtual servers:
::
$ bash setup_standard_controller.sh -i <starlingx iso image>
Accessing Virtual Server Consoles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XML definitions for the virtual servers in the stx-tools repo, under
deployment/libvirt, provide both graphical and text consoles.
Access the graphical console in virt-manager by right-clicking on the
domain (the server) and selecting "Open".
Access the textual console with the command "virsh console $DOMAIN",
where DOMAIN is the name of the server shown in virsh.
When booting controller-0 for the first time, both the serial and
graphical consoles will present the initial configuration menu for the
cluster. You can select the serial or graphical console for controller-0.
For the other nodes, however, only the serial console is used,
regardless of which option is selected.
Open the graphical console on all servers before powering them on to
observe the boot device selection and PXE boot progress. Run the "virsh
console $DOMAIN" command promptly after power-on to see the initial boot
sequence, which follows the boot device selection. You have only a few
seconds to do this.
------------------------------
Controller-0 Host Installation
------------------------------
Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes Controller-0.
Procedure:
#. Power on the server that will be controller-0 with the StarlingX ISO
on a USB in a bootable USB slot.
#. Configure the controller using the config_controller script.
Initializing Controller-0
*************************
This section describes how to initialize StarlingX in host Controller-0.
Except where noted, all the commands must be executed from a console of
the host.
Power on the host to be configured as Controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:
- **Standard Controller Configuration**
- When the installer is loaded and the installer welcome screen
appears in the Controller-0 host, select the type of installation
"Standard Controller Configuration".
- **Graphical Console**
- Select the "Graphical Console" as the console to use during
installation.
- **Standard Security Boot Profile**
- Select "Standard Security Boot Profile" as the Security Profile.
Monitor the initialization. When it is complete, a reboot is initiated
on the Controller-0 host, briefly displays a GNU GRUB screen, and then
boots automatically into the StarlingX image.
Log into Controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):
::
Changing password for wrsroot.
(current) UNIX Password:
Enter a new password for the wrsroot account:
::
New password:
Enter the new password again to confirm it:
::
Retype new password:
Controller-0 is initialized with StarlingX, and is ready for
configuration.
Configuring Controller-0
************************
This section describes how to perform the Controller-0 configuration
interactively, just to bootstrap the system with the minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:
- For the Virtual Environment, you can accept all the default values
immediately after system date and time.
- For a Physical Deployment, answer the bootstrap configuration
questions with answers applicable to your particular physical setup.
The script is used to configure the first controller in the StarlingX
cluster as controller-0. The prompts are grouped by configuration
area. To start the script interactively, use the following command
with no parameters:
::
controller-0:~$ sudo config_controller
System Configuration
================
Enter ! at any prompt to abort...
...
Accept all the default values immediately after system date and time
::
...
Applying configuration (this will take several minutes):
01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05:08: Creating system configuration ... DONE
06:08: Applying controller manifest ... DONE
07:08: Finalize controller configuration ... DONE
08:08: Waiting for service activation ... DONE
Configuration was applied
Please complete any out of service commissioning steps with system commands and unlock controller to proceed.
After config_controller bootstrap configuration, REST API, CLI and
Horizon interfaces are enabled on the controller-0 OAM IP Address. The
remaining installation instructions will use the CLI.
---------------------------------
Controller-0 and System Provision
---------------------------------
On Controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Configuring Provider Networks at Installation
*********************************************
You must set up provider networks at installation so that you can attach
data interfaces and unlock the compute nodes.
Set up one provider network of the vlan type, named providernet-a:
::
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
Adding a Ceph Storage Backend at Installation
*********************************************
Add CEPH Storage backend:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova
WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED.
By confirming this operation, Ceph backend will be created.
A minimum of 2 storage nodes are required to complete the configuration.
Please set the 'confirmed' field to execute this operation for the ceph backend.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add ceph -s cinder,glance,swift,nova --confirmed
System configuration has changed.
Please follow the administrator guide to complete configuring the system.
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
| uuid | name | backend | state | task | services |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph | configuring | applying-manifests | cinder, |...
| | | | | | glance, |...
| | | | | | swift |...
| | | | | | nova |...
| | | | | | |...
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file | configured | None | glance |...
+--------------------------------------+------------+---------+-------------+--------------------+----------+...
Confirm CEPH storage is configured
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
| uuid | name | backend | state | task | services |...
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
| 48ddb10a-206c-42da-bb3f-f7160a356724 | ceph-store | ceph | configured | provision-storage | cinder, |...
| | | | | | glance, |...
| | | | | | swift |...
| | | | | | nova |...
| | | | | | |...
| 55f49f86-3e01-4d03-a014-42e1b55ba487 | file-store | file | configured | None | glance |...
+--------------------------------------+------------+---------+------------+-------------------+-----------+...
Unlocking Controller-0
**********************
You must unlock controller-0 so that you can use it to install the
remaining hosts. Use the system host-unlock command:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
The host is rebooted. During the reboot, the command line is
unavailable, and any ssh connections are dropped. To monitor the
progress of the reboot, use the controller-0 console.
Verifying the Controller-0 Configuration
****************************************
On Controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Verify that the StarlingX controller services are running:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system service-list
+-----+-------------------------------+--------------+----------------+
| id | service_name | hostname | state |
+-----+-------------------------------+--------------+----------------+
...
| 1 | oam-ip | controller-0 | enabled-active |
| 2 | management-ip | controller-0 | enabled-active |
...
+-----+-------------------------------+--------------+----------------+
Verify that controller-0 is unlocked, enabled, and available:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
Provisioning Filesystem Storage
*******************************
List the controller filesystems with status and current sizes
::
[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-list
+--------------------------------------+-----------------+------+--------------------+------------+-------+
| UUID | FS Name | Size | Logical Volume | Replicated | State |
| | | in | | | |
| | | GiB | | | |
+--------------------------------------+-----------------+------+--------------------+------------+-------+
| 4e31c4ea-6970-4fc6-80ba-431fdcdae15f | backup | 5 | backup-lv | False | None |
| 6c689cd7-2bef-4755-a2fb-ddd9504692f3 | database | 5 | pgsql-lv | True | None |
| 44c7d520-9dbe-41be-ac6a-5d02e3833fd5 | extension | 1 | extension-lv | True | None |
| 809a5ed3-22c0-4385-9d1e-dd250f634a37 | glance | 8 | cgcs-lv | True | None |
| 9c94ef09-c474-425c-a8ba-264e82d9467e | gnocchi | 5 | gnocchi-lv | False | None |
| 895222b3-3ce5-486a-be79-9fe21b94c075 | img-conversions | 8 | img-conversions-lv | False | None |
| 5811713f-def2-420b-9edf-6680446cd379 | scratch | 8 | scratch-lv | False | None |
+--------------------------------------+-----------------+------+--------------------+------------+-------+
Modify filesystem sizes
::
[wrsroot@controller-0 ~(keystone_admin)]$ system controllerfs-modify backup=42 database=12 img-conversions=12
---------------------------------------------------------
Controller-1 / Storage Hosts / Compute Hosts Installation
---------------------------------------------------------
After initializing and configuring an active controller, you can add and
configure a backup controller and additional compute or storage hosts.
For each host do the following:
Initializing Host
*****************
Power on the host. In the host console you will see:
::
Waiting for this node to be configured.
Please configure the personality for this node from the
controller node in order to proceed.
Updating Host Name and Personality
**********************************
On Controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Wait for Controller-0 to discover the new host; list the hosts until the
new UNKNOWN host shows up in the table:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | None | None | locked | disabled | offline |
+----+--------------+-------------+----------------+-------------+--------------+
Use the system host-add command to update the host personality attribute:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-add -n <controller_name> -p <personality> -m <mac address>
**REMARK:** Use the MAC address of the specific network interface the
node will be connected through, e.g. the OAM network interface for the
"Controller-1" node, and the Management network interface for the
"Compute" and "Storage" nodes.
In a virtual environment, check the NIC MAC address in the Virtual
Machine Manager GUI under "Show virtual hardware details" --> NIC -->
the MAC Address field of the specific "Bridge name".
Monitoring Host
***************
On Controller-0, you can monitor the installation progress by running
the system host-show command for the host periodically. Progress is
shown in the install_state field.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show <host> | grep install
| install_output | text |
| install_state | booting |
| install_state_info | None |
Wait while the host is configured and rebooted. Up to 20 minutes may be
required for a reboot, depending on hardware. When the reboot is
complete, the host is reported as Locked, Disabled, and Online.
Listing Hosts
*************
Once all Nodes have been installed, configured and rebooted, on
Controller-0 list the hosts:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 3 | controller-1 | controller | locked | disabled | online |
| 4 | compute-0 | compute | locked | disabled | online |
| 5 | storage-0 | storage | locked | disabled | online |
| 6 | storage-1 | storage | locked | disabled | online |
| 7 | storage-2 | storage | locked | disabled | online |
+----+--------------+-------------+----------------+-------------+--------------+
-------------------------
Controller-1 Provisioning
-------------------------
On Controller-0, list hosts
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
...
| 2 | controller-1 | controller | locked | disabled | online |
...
+----+--------------+-------------+----------------+-------------+--------------+
Provisioning Network Interfaces on Controller-1
***********************************************
List the hardware port names, types, and PCI addresses that have been
discovered:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list controller-1
Provision the OAM interface for Controller-1.
**Temporary** changes to the host-if-modify command: check the help text
with 'system help host-if-modify'. If it lists the '-c' option, execute
the first command below; otherwise use the second form with '-nt':
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -c platform --networks oam controller-1 <oam interface>
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -n <oam interface> -nt oam controller-1 <oam interface>
Unlocking Controller-1
**********************
Unlock Controller-1
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-1
Wait while Controller-1 is rebooted. Up to 10 minutes may be
required for a reboot, depending on hardware.
**REMARK:** Controller-1 will remain in the 'degraded' state until
data-syncing is complete. The duration is dependent on the
virtualization host's configuration, i.e., the number and configuration
of physical disks used to host the nodes' virtual disks. Also, the
management network is expected to have a link speed of 10000 Mbps (1000
Mbps is not supported due to excessive data-sync time). Use 'fm
alarm-list' to confirm status.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 2 | controller-1 | controller | unlocked | enabled | available |
...
-------------------------
Storage Host Provisioning
-------------------------
Provisioning Storage on a Storage Host
**************************************
List the available physical disks in Storage-N:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-0
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| uuid | device_no | device_ | device_ | size_ | available_ | rpm |...
| | de | num | type | gib | gib | |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
| a2bbfe1f-cf91-4d39-a2e8-a9785448aa56 | /dev/sda | 2048 | HDD | 292. | 0.0 | Undetermined |...
| | | | | 968 | | |...
| | | | | | | |...
| c7cc08e6-ff18-4229-a79d-a04187de7b8d | /dev/sdb | 2064 | HDD | 100.0 | 99.997 | Undetermined |...
| | | | | | | |...
| | | | | | | |...
| 1ece5d1b-5dcf-4e3c-9d10-ea83a19dd661 | /dev/sdc | 2080 | HDD | 4.0 | 3.997 |...
| | | | | | | |...
| | | | | | | |...
+--------------------------------------+-----------+---------+---------+-------+------------+--------------+...
List the available storage tiers in the Ceph cluster:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-tier-list ceph_cluster
+--------------------------------------+---------+--------+--------------------------------------+
| uuid | name | status | backend_using |
+--------------------------------------+---------+--------+--------------------------------------+
| 4398d910-75e4-4e99-a57f-fc147fb87bdb | storage | in-use | 5131a848-25ea-4cd8-bbce-0d65c84183df |
+--------------------------------------+---------+--------+--------------------------------------+
Create a storage function (an OSD) in Storage-N
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-0 c7cc08e6-ff18-4229-a79d-a04187de7b8d
+------------------+--------------------------------------------------+
| Property | Value |
+------------------+--------------------------------------------------+
| osdid | 0 |
| function | osd |
| journal_location | 34989bad-67fc-49ea-9e9c-38ca4be95fad |
| journal_size_gib | 1024 |
| journal_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part2 |
| journal_node | /dev/sdb2 |
| uuid | 34989bad-67fc-49ea-9e9c-38ca4be95fad |
| ihost_uuid | 4a5ed4fc-1d2b-4607-acf9-e50a3759c994 |
| idisk_uuid | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
| tier_uuid | 4398d910-75e4-4e99-a57f-fc147fb87bdb |
| tier_name | storage |
| created_at | 2018-08-16T00:39:44.409448+00:00 |
| updated_at | 2018-08-16T00:40:07.626762+00:00 |
+------------------+--------------------------------------------------+
Create the remaining storage functions (OSDs) in Storage-N, based on
the number of available physical disks.
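For example, each additional unused disk on storage-0 would be added
with the same command, substituting that disk's uuid from the
host-disk-list output above:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-0 <additional disk uuid>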
List the OSDs:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-list storage-0
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| uuid | function | osdid | capabilities | idisk_uuid |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
| 34989bad-67fc-49ea-9e9c-38ca4be95fad | osd | 0 | {} | c7cc08e6-ff18-4229-a79d-a04187de7b8d |
+--------------------------------------+----------+-------+--------------+--------------------------------------+
Unlock Storage-N
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-0
**REMARK:** Before you continue, repeat the Provisioning Storage steps
on the remaining storage nodes.
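The same sequence applies to each remaining storage host, for example
storage-1 (the disk uuid must be taken from that host's own
host-disk-list output):
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list storage-1
[wrsroot@controller-0 ~(keystone_admin)]$ system host-stor-add storage-1 <disk uuid>
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock storage-1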
----------------------
Compute Host Provision
----------------------
You must configure the network interfaces and the storage disks on a
host before you can unlock it. For each Compute Host do the following:
On Controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Provisioning Network Interfaces on a Compute Host
*************************************************
On Controller-0, list the hardware port names, types, and PCI addresses
that have been discovered:
- **Only in Virtual Environment**: Ensure that the interface used is
one of those attached to a host bridge with model type "virtio" (i.e.,
eth1000 and eth1001). Devices emulated with model type "e1000" will not
work for provider networks.
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-port-list compute-0
Provision the data interface for the Compute host.
**Temporary** changes to the host-if-modify command: check the help text
with 'system help host-if-modify'. If it lists the '-c' option, execute
the first command below; otherwise use the second form with '-nt':
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -c data compute-0 eth1000
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -p providernet-a -nt data compute-0 eth1000
VSwitch Virtual Environment
***************************
**Only in Virtual Environment**. If the compute host has more than 4
CPUs, the system will auto-configure the vswitch to use 2 cores.
However, some virtual environments do not properly support the
multi-queue feature required in a multi-CPU environment. Therefore, run
the following command to reduce the vswitch cores to 1:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-cpu-modify compute-0 -f vswitch -p0 1
+--------------------------------------+-------+-----------+-------+--------+...
| uuid | log_c | processor | phy_c | thread |...
| | ore | | ore | |...
+--------------------------------------+-------+-----------+-------+--------+...
| a3b5620c-28b1-4fe0-9e97-82950d8582c2 | 0 | 0 | 0 | 0 |...
| f2e91c2b-bfc5-4f2a-9434-bceb7e5722c3 | 1 | 0 | 1 | 0 |...
| 18a98743-fdc4-4c0c-990f-3c1cb2df8cb3 | 2 | 0 | 2 | 0 |...
| 690d25d2-4f99-4ba1-a9ba-0484eec21cc7 | 3 | 0 | 3 | 0 |...
+--------------------------------------+-------+-----------+-------+--------+...
Provisioning Storage on a Compute Host
**************************************
Review the available disk space and capacity and obtain the uuid(s) of
the physical disk(s) to be used for nova local:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list compute-0
+--------------------------------------+-----------+---------+---------+-------+------------+...
| uuid | device_no | device_ | device_ | size_ | available_ |...
| | de | num | type | gib | gib |...
+--------------------------------------+-----------+---------+---------+-------+------------+
| 14e52a55-f6a7-40ad-a0b1-11c2c3b6e7e9 | /dev/sda | 2048 | HDD | 292. | 265.132 |...
| a639914b-23a9-4071-9f25-a5f1960846cc | /dev/sdb | 2064 | HDD | 100.0 | 99.997 |...
+--------------------------------------+-----------+---------+---------+-------+------------+...
Create the 'nova-local' local volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add compute-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property | Value |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name | nova-local |
| vg_state | adding |
| uuid | 37f4c178-f0fe-422d-b66e-24ae057da674 |
| ihost_uuid | f56921a6-8784-45ac-bd72-c0372cd95964 |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size_gib | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-16T00:57:46.340454+00:00 |
| updated_at | None |
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Add the physical volume to the 'nova-local' volume group, based on the
uuid of the physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add compute-0 nova-local a639914b-23a9-4071-9f25-a5f1960846cc
+--------------------------+--------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------+
| uuid | 56fdb63a-1078-4394-b1ce-9a0b3bff46dc |
| pv_state | adding |
| pv_type | disk |
| disk_or_part_uuid | a639914b-23a9-4071-9f25-a5f1960846cc |
| disk_or_part_device_node | /dev/sdb |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0 |
| lvm_pv_name | /dev/sdb |
| lvm_vg_name | nova-local |
| lvm_pv_uuid | None |
| lvm_pv_size_gib | 0.0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | f56921a6-8784-45ac-bd72-c0372cd95964 |
| created_at | 2018-08-16T01:05:59.013257+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------+
Remote RAW Ceph storage will be used to back the nova-local ephemeral
volumes:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b remote compute-0 nova-local
Unlocking a Compute Host
************************
On Controller-0, use the system host-unlock command to unlock the
Compute-N:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0
Wait while Compute-N is rebooted. Up to 10 minutes may be required
for a reboot, depending on hardware. The host is rebooted, and its
Availability State is reported as In-Test, followed by unlocked/enabled.
-------------------
System Health Check
-------------------
Listing StarlingX Nodes
***********************
On Controller-0, after a few minutes, all nodes should be reported as
Unlocked, Enabled, and Available:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
| 3 | controller-1 | controller | unlocked | enabled | available |
| 4 | compute-0 | compute | unlocked | enabled | available |
| 5 | storage-0 | storage | unlocked | enabled | available |
| 6 | storage-1 | storage | unlocked | enabled | available |
| 7 | storage-2 | storage | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
[wrsroot@controller-0 ~(keystone_admin)]$
Checking StarlingX CEPH Health
******************************
::
[wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
cluster e14ebfd6-5030-4592-91c3-7e6146b3c910
health HEALTH_OK
monmap e1: 3 mons at {controller-0=192.168.204.3:6789/0,controller-1=192.168.204.4:6789/0,storage-0=192.168.204.204:6789/0}
election epoch 22, quorum 0,1,2 controller-0,controller-1,storage-0
osdmap e84: 2 osds: 2 up, 2 in
flags sortbitwise,require_jewel_osds
pgmap v168: 1600 pgs, 5 pools, 0 bytes data, 0 objects
87444 kB used, 197 GB / 197 GB avail
1600 active+clean
controller-0:~$
System Alarm List
*****************
When all nodes are Unlocked, Enabled, and Available, check 'fm alarm-list' for issues.
Your StarlingX deployment is now up and running with 2x HA Controllers, 3x Storage
nodes providing Ceph-backed Cinder storage, 1x Compute, and all OpenStack services
running. You can now proceed with standard OpenStack APIs, CLIs, and/or Horizon to
load Glance images, configure Nova flavors, configure Neutron networks, and launch
Nova virtual machines.

File diff suppressed because it is too large

View File

@ -0,0 +1,228 @@
==================
Installation Guide
==================
-----
Intro
-----
StarlingX may be installed in:
- **Bare Metal**: Real deployments of StarlingX are only supported on
physical servers.
- **Virtual Environment**: It should only be used for evaluation or
development purposes.
StarlingX installed in virtual environments has two options:
- :ref:`Libvirt/QEMU <Installation-libvirt-qemu>`
- VirtualBox
------------
Requirements
------------
Different use cases require different configurations.
Bare Metal
**********
The minimum requirements for the physical servers where StarlingX might
be deployed include:
- **Controller Hosts**
- Minimum Processor is:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8
cores/socket
- Minimum Memory: 64 GB
- Hard Drives:
- Primary Hard Drive, minimum 500 GB for OS and system databases.
- Secondary Hard Drive, minimum 500 GB for persistent VM storage.
- 2 physical Ethernet interfaces: OAM and MGMT Network.
- USB boot support.
- PXE boot support.
- **Storage Hosts**
- Minimum Processor is:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8
cores/socket.
- Minimum Memory: 64 GB.
- Hard Drives:
- Primary Hard Drive, minimum 500 GB for OS.
- 1 or more additional Hard Drives for CEPH OSD storage, and
- Optionally 1 or more SSD or NVMe Drives for CEPH Journals.
- 1 physical Ethernet interface: MGMT Network
- PXE boot support.
- **Compute Hosts**
- Minimum Processor is:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8
cores/socket.
- Minimum Memory: 32 GB.
- Hard Drives:
- Primary Hard Drive, minimum 500 GB for OS.
- 1 or more additional Hard Drives for ephemeral VM Storage.
- 2 or more physical Ethernet interfaces: MGMT Network and 1 or more
Provider Networks.
- PXE boot support.
The recommended minimum requirements for the physical servers are
described later in each StarlingX Deployment Options guide.
Virtual Environment
*******************
The recommended minimum requirements for the workstation hosting the
Virtual Machine(s) where StarlingX will be deployed include:
Hardware Requirements
^^^^^^^^^^^^^^^^^^^^^
A workstation computer with:
- Processor: x86_64 is the only supported architecture, with hardware
virtualization extensions enabled in the BIOS
- Cores: 8 (4 with careful monitoring of cpu load)
- Memory: At least 32GB RAM
- Hard Disk: 500GB HDD
- Network: Two network adapters with active Internet connection
Software Requirements
^^^^^^^^^^^^^^^^^^^^^
A workstation computer with:
- Operating System: Freshly installed Ubuntu 16.04 LTS 64-bit
- Proxy settings configured (if applies)
- Git
- KVM/VirtManager
- Libvirt Library
- QEMU Full System Emulation Binaries
- stx-tools project
- StarlingX ISO Image
Deployment Environment Setup
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This section describes how to set up the workstation computer which will
host the Virtual Machine(s) where StarlingX will be deployed.
Updating Your Operating System
''''''''''''''''''''''''''''''
Before proceeding with the build, ensure your OS is up to date. You'll
first need to update the local list of available packages:
::
$ sudo apt-get update
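Optionally, you can then apply the available package upgrades as well
(not strictly required; follow your own update policy):
::
$ sudo apt-get upgrade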
Install stx-tools project
'''''''''''''''''''''''''
Clone the stx-tools project. Usually you'll want to clone it under your
user's home directory.
::
$ cd $HOME
$ git clone git://git.openstack.org/openstack/stx-tools
Installing Requirements and Dependencies
''''''''''''''''''''''''''''''''''''''''
Navigate to the stx-tools installation libvirt directory:
::
$ cd $HOME/stx-tools/deployment/libvirt/
Install the required packages:
::
$ bash install_packages.sh
Disabling Firewall
''''''''''''''''''
Unload firewall and disable firewall on boot:
::
$ sudo ufw disable
Firewall stopped and disabled on system startup
$ sudo ufw status
Status: inactive
-------------------------------
Getting the StarlingX ISO Image
-------------------------------
Follow the instructions from the :ref:`developer-guide` to build a
StarlingX ISO image.
Bare Metal
**********
A bootable USB flash drive containing the StarlingX ISO image.
Virtual Environment
*******************
Copy the StarlingX ISO Image to the stx-tools deployment libvirt project
directory:
::
$ cp <starlingx iso image> $HOME/stx-tools/deployment/libvirt/
------------------
Deployment Options
------------------
- Standard Controller
- :ref:`StarlingX Cloud with Dedicated Storage <dedicated-storage>`
- :ref:`StarlingX Cloud with Controller Storage <controller-storage>`
- All-in-one
- :ref:`StarlingX Cloud Duplex <duplex>`
- :ref:`StarlingX Cloud Simplex <simplex>`
.. toctree::
:hidden:
installation_libvirt_qemu
controller_storage
dedicated_storage
duplex
simplex

View File

@ -0,0 +1,193 @@
.. _Installation-libvirt-qemu:
=========================
Installation libvirt qemu
=========================
Installation for StarlingX using Libvirt/QEMU virtualization.
---------------------
Hardware Requirements
---------------------
A workstation computer with:
- Processor: x86_64 is the only supported architecture, with hardware
virtualization extensions enabled in the BIOS
- Memory: At least 32GB RAM
- Hard Disk: 500GB HDD
- Network: One network adapter with active Internet connection
---------------------
Software Requirements
---------------------
A workstation computer with:
- Operating System: This process is known to work on Ubuntu 16.04 and
is likely to work on other Linux OS's with some appropriate
adjustments.
- Proxy settings configured (if applies)
- Git
- KVM/VirtManager
- Libvirt Library
- QEMU Full System Emulation Binaries
- stx-tools project
- StarlingX ISO Image
Deployment Environment Setup
****************************
-------------
Configuration
-------------
These scripts are configured using environment variables that all have
built-in defaults. On shared systems you probably do not want to use the
defaults. The simplest way to handle this is to keep an rc file that can
be sourced into an interactive shell that configures everything. Here's
an example called madcloud.rc:
::
export CONTROLLER=madcloud
export COMPUTE=madnode
export BRIDGE_INTERFACE=madbr
export INTERNAL_NETWORK=172.30.20.0/24
export INTERNAL_IP=172.30.20.1/24
export EXTERNAL_NETWORK=192.168.20.0/24
export EXTERNAL_IP=192.168.20.1/24
This rc file shows the defaults baked into the scripts:
::
export CONTROLLER=controller
export COMPUTE=compute
export BRIDGE_INTERFACE=stxbr
export INTERNAL_NETWORK=10.10.10.0/24
export INTERNAL_IP=10.10.10.1/24
export EXTERNAL_NETWORK=192.168.204.0/24
export EXTERNAL_IP=192.168.204.1/24
-------------------------
Install stx-tools project
-------------------------
Clone the stx-tools project into a working directory.
::
git clone git://git.openstack.org/openstack/stx-tools.git
It will be convenient to set up a shortcut to the deployment script
directory:
::
SCRIPTS=$(pwd)/stx-tools/deployment/libvirt
Load the configuration (if you created one) from madcloud.rc:
::
source madcloud.rc
----------------------------------------
Installing Requirements and Dependencies
----------------------------------------
Install the required packages and configure QEMU. This only needs to be
done once per host. (NOTE: this script only knows about Ubuntu at this
time):
::
$SCRIPTS/install_packages.sh
------------------
Disabling Firewall
------------------
Unload firewall and disable firewall on boot:
::
sudo ufw disable
sudo ufw status
------------------
Configure Networks
------------------
Configure the network bridges using setup_network.sh before doing
anything else. It will create 4 bridges named stxbr1, stxbr2, stxbr3 and
stxbr4. Set the BRIDGE_INTERFACE environment variable if you need to
change stxbr to something unique.
::
$SCRIPTS/setup_network.sh
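After the script completes, the bridges can be verified with standard
Linux tooling (an optional sanity check; adjust the prefix if you
changed BRIDGE_INTERFACE):
::
$ ip link show type bridge | grep stxbr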
The destroy_network.sh script does the reverse, and should not be used
lightly. It should also only be used after all of the VMs created below
have been destroyed.
There is also a script cleanup_network.sh that will remove networking
configuration from libvirt.
---------------------
Configure Controllers
---------------------
There are two scripts for creating the controllers: setup_allinone.sh
and setup_standard_controller.sh. They are operated in the same manner
but build different StarlingX cloud configurations. Choose wisely.
You need an ISO file for the installation; these scripts take its name
with the -i option:
::
$SCRIPTS/setup_allinone.sh -i stx-2018-08-28-93.iso
The setup will then begin. The scripts create one or more VMs and start
the boot of the first controller, named, oddly enough, ``controller-0``.
If you have X Windows available, virt-manager will be started. If not,
Ctrl-C out of that attempt if it doesn't return to a shell prompt.
Then connect to the serial console:
::
virsh console madcloud-0
Continue the usual StarlingX installation from this point forward.
Tear down the VMs using destroy_allinone.sh and
destroy_standard_controller.sh.
Continue
********
Pick up the installation in one of the existing guides at the
'Initializing Controller-0' step.
- Standard Controller
- :ref:`StarlingX Cloud with Dedicated Storage Virtual Environment <dedicated-storage>`
- :ref:`StarlingX Cloud with Controller Storage Virtual Environment <controller-storage>`
- All-in-one
- :ref:`StarlingX Cloud Duplex Virtual Environment <duplex>`
- :ref:`StarlingX Cloud Simplex Virtual Environment <simplex>`

View File

@ -0,0 +1,660 @@
.. _simplex:
========================================================
StarlingX/Installation Guide Virtual Environment/Simplex
========================================================
-----------------
Preparing Servers
-----------------
Bare Metal
**********
Required Server:
- Combined Server (Controller + Compute): 1
Hardware Requirements
^^^^^^^^^^^^^^^^^^^^^
The recommended minimum requirements for the physical servers where
StarlingX Simplex will be deployed include:
- Minimum Processor:
- Dual-CPU Intel® Xeon® E5 26xx Family (SandyBridge) 8 cores/socket
- Memory: 64 GB
- BIOS:
- Hyper-Threading Tech: Enabled
- Virtualization Technology: Enabled
- VT for Directed I/O: Enabled
- CPU Power and Performance Policy: Performance
- CPU C State Control: Disabled
- Plug & Play BMC Detection: Disabled
- Primary Disk:
- 500 GB SSD or NVMe
- Additional Disks:
- 1 or more 500 GB disks (min. 10K RPM)
- Network Ports
- Management: 10GE
- OAM: 10GE
Virtual Environment
*******************
Run the libvirt/qemu setup scripts to set up the virtualized OAM and
Management networks:
::
$ bash setup_network.sh
Build the XML definitions for the virtual servers:
::
$ bash setup_allinone.sh -i <starlingx iso image>
Accessing Virtual Server Consoles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The XML definitions for the virtual servers in the stx-tools repo, under
deployment/libvirt, provide both graphical and text consoles.
Access the graphical console in virt-manager by right-clicking on the
domain (the server) and selecting "Open".
Access the textual console with the command "virsh console $DOMAIN",
where DOMAIN is the name of the server shown in virsh.
When booting controller-0 for the first time, both the serial and
graphical consoles will present the initial configuration menu for the
cluster. You can select the serial or graphical console for controller-0.
For the other nodes, however, only the serial console is used,
regardless of which option is selected.
Open the graphical console on all servers before powering them on to
observe the boot device selection and PXE boot progress. Run the "virsh
console $DOMAIN" command promptly after power-on to see the initial boot
sequence, which follows the boot device selection. You have only a few
seconds to do this.
------------------------------
Controller-0 Host Installation
------------------------------
Installing controller-0 involves initializing a host with software and
then applying a bootstrap configuration from the command line. The
configured bootstrapped host becomes Controller-0.
Procedure:
#. Power on the server that will be controller-0 with the StarlingX ISO
on a USB in a bootable USB slot.
#. Configure the controller using the config_controller script.
Initializing Controller-0
*************************
This section describes how to initialize StarlingX in host Controller-0.
Except where noted, all the commands must be executed from a console of
the host.
Power on the host to be configured as Controller-0, with the StarlingX
ISO on a USB in a bootable USB slot. Wait for the console to show the
StarlingX ISO booting options:
- **All-in-one Controller Configuration**
- When the installer is loaded and the installer welcome screen
appears in the Controller-0 host, select the type of installation
"All-in-one Controller Configuration".
- **Graphical Console**
- Select the "Graphical Console" as the console to use during
installation.
- **Standard Security Boot Profile**
- Select "Standard Security Boot Profile" as the Security Profile.
Monitor the initialization. When it is complete, a reboot is initiated
on the Controller-0 host, briefly displays a GNU GRUB screen, and then
boots automatically into the StarlingX image.
Log into Controller-0 as user wrsroot, with password wrsroot. The
first time you log in as wrsroot, you are required to change your
password. Enter the current password (wrsroot):
::
Changing password for wrsroot.
(current) UNIX Password:
Enter a new password for the wrsroot account:
::
New password:
Enter the new password again to confirm it:
::
Retype new password:
Controller-0 is initialized with StarlingX, and is ready for
configuration.
Configuring Controller-0
************************
This section describes how to perform the Controller-0 configuration
interactively, just to bootstrap the system with the minimum critical data.
Except where noted, all the commands must be executed from the console
of the active controller (here assumed to be controller-0).
When run interactively, the config_controller script presents a series
of prompts for initial configuration of StarlingX:
- For the Virtual Environment, you can accept all the default values
immediately after system date and time.
- For a Physical Deployment, answer the bootstrap configuration
questions with answers applicable to your particular physical setup.
The script is used to configure the first controller in the StarlingX
cluster as controller-0. The prompts are grouped by configuration
area. To start the script interactively, use the following command
with no parameters:
::
controller-0:~$ sudo config_controller
System Configuration
================
Enter ! at any prompt to abort...
...
Select [y] for System Date and Time:
::
System date and time:
-----------------------------
Is the current date and time correct? [y/N]: y
Accept all the default values immediately after the system date and
time; for System mode, choose "simplex":
::
...
1) duplex-direct: two node-redundant configuration. Management and
infrastructure networks are directly connected to peer ports
2) duplex - two node redundant configuration
3) simplex - single node non-redundant configuration
System mode [duplex-direct]: 3
...
Applying configuration (this will take several minutes):
01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05:08: Creating system configuration ... DONE
06:08: Applying controller manifest ... DONE
07:08: Finalize controller configuration ... DONE
08:08: Waiting for service activation ... DONE
Configuration was applied
Please complete any out of service commissioning steps with system
commands and unlock controller to proceed.
After config_controller bootstrap configuration, REST API, CLI and
Horizon interfaces are enabled on the controller-0 OAM IP Address. The
remaining installation instructions will use the CLI.
---------------------------
Controller-0 Host Provision
---------------------------
On Controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Configuring Provider Networks at Installation
*********************************************
Set up one provider network of the vlan type, named providernet-a:
::
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot@controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
Providing Data Interfaces on Controller-0
*****************************************
List all interfaces
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-list -a controller-0
+--------------------------------------+---------+----------+...+------+--------------+------+---------+------------+..
| uuid | name | class |...| vlan | ports | uses | used by | attributes |..
| | | |...| id | | i/f | i/f | |..
+--------------------------------------+----------+---------+...+------+--------------+------+---------+------------+..
| 49fd8938-e76f-49f1-879e-83c431a9f1af | enp0s3 | platform |...| None | [u'enp0s3'] | [] | [] | MTU=1500 |..
| 8957bb2c-fec3-4e5d-b4ed-78071f9f781c | eth1000 | None |...| None | [u'eth1000'] | [] | [] | MTU=1500 |..
| bf6f4cad-1022-4dd7-962b-4d7c47d16d54 | eth1001 | None |...| None | [u'eth1001'] | [] | [] | MTU=1500 |..
| f59b9469-7702-4b46-bad5-683b95f0a1cb | enp0s8 | platform |...| None | [u'enp0s8'] | [] | [] | MTU=1500 |..
+--------------------------------------+---------+----------+...+------+--------------+------+---------+------------+..
Configure the data interfaces:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-if-modify -c data controller-0 eth1000 -p providernet-a
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| ifname | eth1000 |
| iftype | ethernet |
| ports | [u'eth1000'] |
| providernetworks | providernet-a |
| imac | 08:00:27:c4:ad:3e |
| imtu | 1500 |
| ifclass | data |
| aemode | None |
| schedpolicy | None |
| txhashpolicy | None |
| uuid | 8957bb2c-fec3-4e5d-b4ed-78071f9f781c |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| vlan_id | None |
| uses | [] |
| used_by | [] |
| created_at | 2018-08-28T12:50:51.820151+00:00 |
| updated_at | 2018-08-28T14:46:18.333109+00:00 |
| sriov_numvfs | 0 |
| ipv4_mode | disabled |
| ipv6_mode | disabled |
| accelerated | [True] |
+------------------+--------------------------------------+
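If you want to re-check a single interface after modifying it, a hedged
example using the interface configured above is:

::

  [wrsroot@controller-0 ~(keystone_admin)]$ system host-if-show controller-0 eth1000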
Configuring Cinder on Controller Disk
*************************************
Review the available disk space and capacity, and obtain the UUID of the
physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+---------+------------+...
| uuid | device_no | device_ | device_ | size_mi | available_ |...
| | de | num | type | b | mib |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
| 6b42c9dc-f7c0-42f1-a410-6576f5f069f1 | /dev/sda | 2048 | HDD | 600000 | 434072 |...
| | | | | | |...
| | | | | | |...
| 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 | /dev/sdb | 2064 | HDD | 16240 | 16237 |...
| | | | | | |...
| | | | | | |...
| 146195b2-f3d7-42f9-935d-057a53736929 | /dev/sdc | 2080 | HDD | 16240 | 16237 |...
| | | | | | |...
| | | | | | |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
Create the 'cinder-volumes' local volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 cinder-volumes
+-----------------+--------------------------------------+
| lvm_vg_name | cinder-volumes |
| vg_state | adding |
| uuid | 61cb5cd2-171e-4ef7-8228-915d3560cdc3 |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-28T13:45:20.218905+00:00 |
| updated_at | None |
| parameters | {u'lvm_type': u'thin'} |
+-----------------+--------------------------------------+
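Optionally, confirm that the volume group was registered by listing the
local volume groups on the host (a hedged example; output columns vary by
release):

::

  [wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-list controller-0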
Create a disk partition to add to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 16237 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part1 |
| device_node | /dev/sdb1 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 16237 |
| uuid | 0494615f-bd79-4490-84b9-dcebbe5f377a |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| idisk_uuid | 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 |
| ipv_uuid | None |
| status | Creating |
| created_at | 2018-08-28T13:45:48.512226+00:00 |
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e., status=Ready):
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 534352d8-fec2-4ca5-bda7-0e0abe5a8e17
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| uuid |...| device_nod |...| type_name | size_mib | status |
| |...| e |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| 0494615f-bd79-4490-84b9-dcebbe5f377a |...| /dev/sdb1 |...| LVM Physical Volume | 16237 | Ready |
| |...| |...| | | |
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 cinder-volumes 0494615f-bd79-4490-84b9-dcebbe5f377a
+--------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------------+
| uuid | 9a0ad568-0ace-4d57-9e03-e7a63f609cf2 |
| pv_state | adding |
| pv_type | partition |
| disk_or_part_uuid | 0494615f-bd79-4490-84b9-dcebbe5f377a |
| disk_or_part_device_node | /dev/sdb1 |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-2.0-part1 |
| lvm_pv_name | /dev/sdb1 |
| lvm_vg_name | cinder-volumes |
| lvm_pv_uuid | None |
| lvm_pv_size | 0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| created_at | 2018-08-28T13:47:39.450763+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------------+
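Optionally, list the physical volumes on the host to confirm the partition
was added to 'cinder-volumes' (a hedged example):

::

  [wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-list controller-0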
Adding an LVM Storage Backend at Installation
*********************************************
Ensure requirements are met to add LVM storage:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder
WARNING : THIS OPERATION IS NOT REVERSIBLE AND CANNOT BE CANCELLED.
By confirming this operation, the LVM backend will be created.
Please refer to the system admin guide for minimum spec for LVM
storage. Set the 'confirmed' field to execute this operation
for the lvm backend.
Add the LVM storage backend:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder --confirmed
System configuration has changed.
Please follow the administrator guide to complete configuring the system.
+--------------------------------------+------------+---------+-------------+...+----------+--------------+
| uuid | name | backend | state |...| services | capabilities |
+--------------------------------------+------------+---------+-------------+...+----------+--------------+
| 6d750a68-115a-4c26-adf4-58d6e358a00d | file-store | file | configured |...| glance | {} |
| e2697426-2d79-4a83-beb7-2eafa9ceaee5 | lvm-store | lvm | configuring |...| cinder | {} |
+--------------------------------------+------------+---------+-------------+...+----------+--------------+
Wait for the LVM storage backend to be configured (i.e.,
state=Configured):
::
[wrsroot@controller-0 ~(keystone_admin)]$ system storage-backend-list
+--------------------------------------+------------+---------+------------+------+----------+--------------+
| uuid | name | backend | state | task | services | capabilities |
+--------------------------------------+------------+---------+------------+------+----------+--------------+
| 6d750a68-115a-4c26-adf4-58d6e358a00d | file-store | file | configured | None | glance | {} |
| e2697426-2d79-4a83-beb7-2eafa9ceaee5 | lvm-store | lvm | configured | None | cinder | {} |
+--------------------------------------+------------+---------+------------+------+----------+--------------+
Configuring VM Local Storage on Controller Disk
***********************************************
Review the available disk space and capacity, and obtain the UUID of the
physical disk:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-----------+---------+---------+---------+------------+...
| uuid | device_no | device_ | device_ | size_mi | available_ |...
| | de | num | type | b | mib |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
| 6b42c9dc-f7c0-42f1-a410-6576f5f069f1 | /dev/sda | 2048 | HDD | 600000 | 434072 |...
| | | | | | |...
| | | | | | |...
| 534352d8-fec2-4ca5-bda7-0e0abe5a8e17 | /dev/sdb | 2064 | HDD | 16240 | 0 |...
| | | | | | |...
| | | | | | |...
| 146195b2-f3d7-42f9-935d-057a53736929 | /dev/sdc | 2080 | HDD | 16240 | 16237 |...
| | | | | | |...
| | | | | | |...
+--------------------------------------+-----------+---------+---------+---------+------------+...
Create the 'nova-local' volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-add controller-0 nova-local
+-----------------+-------------------------------------------------------------------+
| Property | Value |
+-----------------+-------------------------------------------------------------------+
| lvm_vg_name | nova-local |
| vg_state | adding |
| uuid | 517d313e-8aa0-4b4d-92e6-774b9085f336 |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| lvm_vg_access | None |
| lvm_max_lv | 0 |
| lvm_cur_lv | 0 |
| lvm_max_pv | 0 |
| lvm_cur_pv | 0 |
| lvm_vg_size | 0.00 |
| lvm_vg_total_pe | 0 |
| lvm_vg_free_pe | 0 |
| created_at | 2018-08-28T14:02:58.486716+00:00 |
| updated_at | None |
| parameters | {u'concurrent_disk_operations': 2, u'instance_backing': u'image'} |
+-----------------+-------------------------------------------------------------------+
Create a disk partition to add to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-add controller-0 146195b2-f3d7-42f9-935d-057a53736929 16237 -t lvm_phys_vol
+-------------+--------------------------------------------------+
| Property | Value |
+-------------+--------------------------------------------------+
| device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part1 |
| device_node | /dev/sdc1 |
| type_guid | ba5eba11-0000-1111-2222-000000000001 |
| type_name | None |
| start_mib | None |
| end_mib | None |
| size_mib | 16237 |
| uuid | 009ce3b1-ed07-46e9-9560-9d2371676748 |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| idisk_uuid | 146195b2-f3d7-42f9-935d-057a53736929 |
| ipv_uuid | None |
| status | Creating |
| created_at | 2018-08-28T14:04:29.714030+00:00 |
| updated_at | None |
+-------------+--------------------------------------------------+
Wait for the new partition to be created (i.e., status=Ready):
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-disk-partition-list controller-0 --disk 146195b2-f3d7-42f9-935d-057a53736929
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| uuid |...| device_nod |...| type_name | size_mib | status |
| |...| e |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
| 009ce3b1-ed07-46e9-9560-9d2371676748 |...| /dev/sdc1 |...| LVM Physical Volume | 16237 | Ready |
| |...| |...| | | |
| |...| |...| | | |
+--------------------------------------+...+------------+...+---------------------+----------+--------+
Add the partition to the volume group:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-pv-add controller-0 nova-local 009ce3b1-ed07-46e9-9560-9d2371676748
+--------------------------+--------------------------------------------------+
| Property | Value |
+--------------------------+--------------------------------------------------+
| uuid | 830c9dc8-c71a-4cb2-83be-c4d955ef4f6b |
| pv_state | adding |
| pv_type | partition |
| disk_or_part_uuid | 009ce3b1-ed07-46e9-9560-9d2371676748 |
| disk_or_part_device_node | /dev/sdc1 |
| disk_or_part_device_path | /dev/disk/by-path/pci-0000:00:0d.0-ata-3.0-part1 |
| lvm_pv_name | /dev/sdc1 |
| lvm_vg_name | nova-local |
| lvm_pv_uuid | None |
| lvm_pv_size | 0 |
| lvm_pe_total | 0 |
| lvm_pe_alloced | 0 |
| ihost_uuid | 9c332b27-6f22-433b-bf51-396371ac4608 |
| created_at | 2018-08-28T14:06:05.705546+00:00 |
| updated_at | None |
+--------------------------+--------------------------------------------------+
Unlocking Controller-0
**********************
You must unlock controller-0 before it can be used to host services and
workloads. Use the system host-unlock command:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock controller-0
The host is rebooted. During the reboot, the command line is
unavailable, and any ssh connections are dropped. To monitor the
progress of the reboot, use the controller-0 console.
Verifying the Controller-0 Configuration
****************************************
On Controller-0, acquire Keystone administrative privileges:
::
controller-0:~$ source /etc/nova/openrc
Verify that the controller-0 services are running:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system service-list
+-----+-------------------------------+--------------+----------------+
| id | service_name | hostname | state |
+-----+-------------------------------+--------------+----------------+
...
| 1 | oam-ip | controller-0 | enabled-active |
| 2 | management-ip | controller-0 | enabled-active |
...
+-----+-------------------------------+--------------+----------------+
Verify that controller-0 has controller and compute subfunctions:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-show 1 | grep subfunctions
| subfunctions | controller,compute |
Verify that controller-0 is unlocked, enabled, and available:
::
[wrsroot@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1 | controller-0 | controller | unlocked | enabled | available |
+----+--------------+-------------+----------------+-------------+--------------+
System Alarm List
*****************
When all nodes are unlocked, enabled, and available, check 'fm alarm-list'
for issues.
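For example (the exact alarms listed, if any, depend on your system
state):

::

  [wrsroot@controller-0 ~(keystone_admin)]$ fm alarm-list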
Your StarlingX deployment is now up and running with one controller with
Cinder storage and all OpenStack services enabled. You can now use the
standard OpenStack APIs, CLIs, and/or Horizon to load Glance images,
configure Nova flavors, configure Neutron networks, and launch Nova
virtual machines.
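As an illustrative sketch of those next steps only (the image file,
flavor, network, and instance names below are placeholders, not
prescribed by this guide):

::

  # Hypothetical examples -- adjust names, files, and sizes for your environment.
  [wrsroot@controller-0 ~(keystone_admin)]$ openstack image create --disk-format qcow2 --container-format bare --file cirros.qcow2 cirros
  [wrsroot@controller-0 ~(keystone_admin)]$ openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.tiny
  [wrsroot@controller-0 ~(keystone_admin)]$ openstack network create net-a
  [wrsroot@controller-0 ~(keystone_admin)]$ openstack subnet create --network net-a --subnet-range 192.168.10.0/24 subnet-a
  [wrsroot@controller-0 ~(keystone_admin)]$ openstack server create --image cirros --flavor m1.tiny --network net-a vm-1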