Adding system_inventory domain

System inventory tests verify hardware and node configurations/discovery.

Removing trailing spaces

Changing 'compute' to 'workers'. Adding format to command lines.

Adjusting bulleted sub-lists. Fixing typos. Removing "change the size of the image storage pool on ceph" as it already exists on the storage test plan.

Change-Id: I88fba446c5dc8c880cc19274ccccd498a81756ad
Signed-off-by: Ada Cabrales <ada.cabrales@intel.com>
Ada Cabrales 2019-03-22 16:54:21 -06:00
parent e0ef9ee162
commit d03a87b0a2
8 changed files with 1756 additions and 0 deletions


@ -25,3 +25,4 @@ For more information about StarlingX, see https://docs.starlingx.io/.
backup_restore/index
installation_and_configuration/index
distributed_cloud/index
system_inventory/index


@ -0,0 +1,26 @@
================
System Inventory
================
System Inventory manages hardware discovery and configuration of nodes.
-----------------
Test Requirements
-----------------
For most of the test cases, a working configuration is required.
----------
Subdomains
----------
.. toctree::
:maxdepth: 2
system_inventory_check
system_inventory_config
system_inventory_containers
system_inventory_host_ops
system_inventory_installs
system_inventory_modify


@ -0,0 +1,460 @@
======================
System Inventory/Check
======================
Verify different system settings
.. contents::
:local:
:depth: 1
-----------------------
sysinv_check_01
-----------------------
:Test ID: sysinv_check_01
:Test Title: get the information of the software version and patch level using cli
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Verify the software version and patch level using cli
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
System up and running
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Get the software version with the command
.. code:: bash
system show
2. Get the applied patches with the command
.. code:: bash
sudo sw-patch query
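Step 1 can also be checked mechanically by parsing the table that ``system show`` prints; a minimal sketch using hypothetical sample output (the field layout is assumed from the usual CLI table format, and the version value is illustrative):

```shell
# Hypothetical 'system show' output; on a live system this would be:
#   output=$(system show)
output="| software_version | 19.01 |
| system_mode      | duplex |"

# Extract the software_version value from the table row
version=$(echo "$output" | awk -F'|' '/software_version/ {gsub(/ /, "", $3); print $3}')
echo "$version"
```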
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
1. "software_version" row lists the correct version.
2. Patch ID column lists the current patch level.
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_check_02
-----------------------
:Test ID: sysinv_check_02
:Test Title: query the system type using cli
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Check system_mode and system_type using CLI
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
System up and running
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Authenticate with platform keystone
2. Query system_mode and system_type
.. code:: bash
system show | grep -e system_mode -e system_type
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
Simplex: system_mode simplex, system_type All-in-one
Duplex: system_mode duplex, system_type All-in-one
Standard: system_mode duplex, system_type Standard
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_check_03
-----------------------
:Test ID: sysinv_check_03
:Test Title: resynchronize a host to the ntp server
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Resynchronize a node to the NTP server. If a time discrepancy greater than ~17 min is found, the ntpd service is stopped.
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
System up and running.
A reachable NTP server.
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Make sure the node has a working NTP server defined.
2. Change the time on worker-0, with a difference of 20 min.
3. Lock and unlock the host
.. code:: bash
system host-lock worker-0; system host-unlock worker-0
4. Wait for the node to come back and verify the time has been fixed.
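The 20-minute skew in step 2 is chosen because ntpd's default panic threshold is 1000 seconds (roughly 17 minutes); beyond that, ntpd refuses to step the clock and exits. The relationship can be sketched as:

```shell
# On the target node, the clock could be skewed with something like
# (GNU date syntax, assumed):  sudo date -s "$(date -d '+20 minutes')"
skew_seconds=$((20 * 60))
panic_threshold=1000          # ntpd default panic threshold, in seconds

# A skew above the threshold is what makes ntpd stop, triggering alarm 200.006
if [ "$skew_seconds" -gt "$panic_threshold" ]; then
    echo "skew exceeds ntpd panic threshold"
fi
```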
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
2. Alarms 250.001 (configuration is out-of-date) and 200.006 (ntpd process has failed) are raised.
3. Alarms are cleared.
4. The time has been synchronized.
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_check_04
-----------------------
:Test ID: sysinv_check_04
:Test Title: swact active controller using rest api via floating oam ip
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Execute a swact using REST API + OAM floating IP
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
TBD
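The steps are still to be defined; one possible sketch is a JSON-patch PATCH against the sysinv REST API on the OAM floating IP. The IP, the host UUID placeholder, and the assumption that a keystone token is already held in ``$TOKEN`` are all illustrative, not a verified procedure:

```shell
# Illustrative request construction only; the commented curl line shows how
# it would be sent against a live system.
OAM_FLOATING_IP="10.10.10.2"       # assumption: OAM floating IP
HOST_UUID="<controller-uuid>"      # placeholder, from 'system host-show'

body='[{"op": "replace", "path": "/action", "value": "swact"}]'
url="https://${OAM_FLOATING_IP}:6385/v1/ihosts/${HOST_UUID}"

# curl -k -X PATCH -H "X-Auth-Token: $TOKEN" \
#      -H "Content-Type: application/json" -d "$body" "$url"
echo "$body"
```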
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_check_05
-----------------------
:Test ID: sysinv_check_05
:Test Title: verify VM is consuming hugepage memory from the affined NUMA node
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Verify the instance created with cpu pinning consumes hugepages from the NUMA node associated to the CPU.
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Create a flavor with extra spec: 'hw:cpu_policy': 'dedicated'
2. lock a worker to boot the vm
3. Launch a vm
4. Check the memory consumed by the vm and verify it is on the same NUMA node as the pinned CPU
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
1. the flavor is created without any error
2. the worker is locked without any error
3. the vm booted successfully
4. both huge-page memory and the pinned cpu are on the same numa node
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_check_06
-----------------------
:Test ID: sysinv_check_06
:Test Title: verify wrong interface profiles will be rejected
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Wrong interface profiles are rejected
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Create an interface profile of a worker node
.. code:: bash
system ifprofile-add <profile_name> <worker-n>
2. Apply the profile you just created to a worker node with mismatching network interfaces
.. code:: bash
system host-apply-ifprofile <worker-y> <profile_name>
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
2. the action is rejected with an error message
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_check_07
-----------------------
:Test ID: sysinv_check_07
:Test Title: Check Resource Usage panel is working properly
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Resource usage in Horizon works as expected.
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Login to OpenStack Horizon using 'admin'
2. Go to Admin / Overview
3. Download a CSV summary
4. Check the file contains the right information.
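Step 4 can be checked mechanically once the report is downloaded; a sketch over hypothetical CSV content (the column names below are assumptions, not the exact report schema):

```shell
# Hypothetical CSV summary content; a real run would read the downloaded file,
# e.g.: csv=$(cat usage_report.csv)
csv="Project Name,VCPUs,Disk,RAM
admin,4,20,8192
demo,2,10,4096"

# Basic sanity check: the header is present and there is one row per project
header=$(echo "$csv" | head -n 1)
rows=$(echo "$csv" | tail -n +2 | wc -l)
echo "$header"
echo "rows=$rows"
```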
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
2. Reports should be displayed without issue
3. csv report should be downloaded.
4. report contains the same information as displayed.
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_check_08
-----------------------
:Test ID: sysinv_check_08
:Test Title: Delete the mgmt. interface and re-add it to the same port
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Delete the mgmt. interface and re-add it to the same port
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
On a working configuration, use a worker node
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Lock the worker node
.. code:: bash
system host-lock worker-1
2. Delete the mgmt interface
.. code:: bash
system host-if-list worker-1 | grep mgmt
system host-if-delete worker-1 <mgmt UUID>
3. Re-add the mgmt interface
.. code:: bash
system host-if-add -c platform worker-1 mgmt0 <name or UUID interface>
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
The mgmt interface is successfully added and communication over the mgmt. interface is working.
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_check_09
-----------------------
:Test ID: sysinv_check_09
:Test Title: verify that the cpu data can be seen via cli
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
host-cpu-list shows the right information
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. On a worker node, list the cpu processors using
.. code:: bash
system host-cpu-list worker-1
2. show the detailed information of a specific logical core
.. code:: bash
system host-cpu-show worker-1 <logical_cpu_number>
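A quick way to tally cores per assigned function from the step 1 listing; the sample rows below are hypothetical and reduced to the relevant columns (logical core, NUMA node, assigned function):

```shell
# Hypothetical, simplified 'system host-cpu-list worker-1' rows
cpulist="0 0 Platform
1 0 vSwitch
2 0 Applications
3 0 Applications"

# Count cores per assigned function
platform=$(echo "$cpulist" | awk '$3 == "Platform"' | wc -l)
apps=$(echo "$cpulist" | awk '$3 == "Applications"' | wc -l)
echo "platform=$platform apps=$apps"
```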
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
1. get the list without errors
2. the information about numa_node, physical_core, assigned_function, etc. is displayed correctly
~~~~~~~~~~
References
~~~~~~~~~~
N/A


@ -0,0 +1,418 @@
=======================
System Inventory/Config
=======================
.. contents::
:local:
:depth: 1
-----------------------
sysinv_conf_01
-----------------------
:Test ID: sysinv_conf_01
:Test Title: change the dns server ip addresses using gui
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Modify the DNS servers list after installation using Horizon
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
System up and running
a DNS server
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Login to platform horizon with the user 'admin'
2. Go to admin / platform / system configuration / dns / edit DNS
3. In the dialog box, edit the IP addresses and click Save
4. Check the DNS servers are set and the order is preserved
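Step 4 can be verified from the CLI by comparing the entered list against what ``system dns-show`` reports; a self-contained sketch with sample addresses (the values are illustrative):

```shell
# Server list as entered in the Horizon dialog (sample values)
entered="8.8.8.8,1.1.1.1"
# On a live system this would come from: system dns-show
shown="8.8.8.8,1.1.1.1"

# Order matters: an exact string match confirms the order was preserved
if [ "$entered" = "$shown" ]; then
    echo "order preserved"
fi
```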
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
- DNS servers listed in the same order they were entered.
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_conf_03
-----------------------
:Test ID: sysinv_conf_03
:Test Title: change the ntp server ip addresses using gui
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Modify the NTP server using Horizon
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
A system up and running
A valid NTP server
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Login to platform horizon with the user 'admin'
2. Go to admin / platform / system configuration / NTP / edit NTP
3. In the dialog box, edit the domain names or IP addresses and click Save.
4. Lock and unlock the standby controller.
5. Perform a swact
6. Lock and unlock the original controller.
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
3.
- NTP servers are changed
- alarms 250.001 (configuration is out-of-date) are created
4.
- alarm is cleared on standby controller
6.
- alarm is cleared on original controller
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_conf_04
-----------------------
:Test ID: sysinv_conf_04
:Test Title: change the ntp server ip addresses using cli
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Modify the NTP server using CLI
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
A system up and running
A valid NTP server
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Authenticate with platform keystone
2. Change the NTP IP with the ntp-modify command.
.. code:: bash
system ntp-modify ntpservers=server1[,server2,server3]
3. Check the list
.. code:: bash
system ntp-show
4. Lock and unlock the standby controller.
5. Perform a swact
6. Lock and unlock the original controller.
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
2.
- NTP servers are changed
- alarms 250.001 (configuration is out-of-date) are created
4.
- alarm is cleared on standby controller
6.
- alarm is cleared on original controller
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_conf_05
-----------------------
:Test ID: sysinv_conf_05
:Test Title: Enable the ptp service using cli
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Enable the PTP service using CLI
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Disable NTP service
.. code:: bash
system ntp-modify enabled=false
2. Enable PTP service as legacy
.. code:: bash
system ptp-modify mode=legacy enabled=true
3. lock and unlock all the hosts to clear out of config alarms.
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
3. Hosts should be recovered correctly
- Verify that hosts stay alive and there are no repeated reboots
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_conf_06
-----------------------
:Test ID: sysinv_conf_06
:Test Title: Enable the ptp service using gui
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Enable the PTP service using Horizon
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Login to platform horizon with the user 'admin'
2. Go to admin / platform / system configuration / PTP / edit PTP
3. In the dialog box, click on the "Enabled" button and click Save.
4. Lock and unlock the standby controller.
5. Perform a swact
6. Lock and unlock the original controller.
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
2.
- PTP service is enabled
- alarms 250.001 (configuration is out-of-date) are created
4.
- alarm is cleared on standby controller
6.
- alarm is cleared on original controller
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_conf_07
-----------------------
:Test ID: sysinv_conf_07
:Test Title: change the oam ip addresses using cli
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Modify the OAM IP address using CLI
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Authenticate with platform keystone
2. Check there are no system alarms
3. Change the IP address on controller-1
.. code:: bash
system oam-modify oam_c1_ip=<new-ip-address>
4. Lock and unlock standby controller
5. Perform a swact
6. Lock and unlock the original controller.
7. Check IPs are correctly set
.. code:: bash
system oam-show
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
3.
- IP address is changed
- alarms 250.001 (configuration is out-of-date) are raised
4.
- alarm is cleared on standby controller
6.
- alarm is cleared on original controller
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_conf_08
-----------------------
:Test ID: sysinv_conf_08
:Test Title: change the oam ip addresses using gui
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Modify the OAM IP address using Horizon
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Login to platform horizon with the user 'admin'
2. Go to admin / platform / system configuration / OAM IP / edit OAM IP
3. In the dialog box, edit the IP address of Controller-1 and click Save.
4. Lock and unlock the standby controller.
5. Perform a swact
6. Lock and unlock the original controller.
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
3.
- controller-1 IP address is changed
- alarms 250.001 (configuration is out-of-date) are raised
4.
- alarm is cleared on standby controller
6.
- alarm is cleared on original controller
~~~~~~~~~~
References
~~~~~~~~~~
N/A


@ -0,0 +1,187 @@
===========================
System Inventory/Containers
===========================
.. contents::
:local:
:depth: 1
-----------------------
sysinv_cont_01
-----------------------
:Test ID: sysinv_cont_01
:Test Title: Bring down services (uninstall the application)
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Use sysinv to uninstall the application.
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
A system up and running with a container application running
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Uninstall the application
.. code:: bash
system application-remove stx-openstack
2. check the status
.. code:: bash
system application-list
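Step 2 is usually repeated until the application reaches a terminal state; a polling sketch (the ``get_status`` function is a mocked stand-in for parsing the status column of ``system application-list``, so the example is self-contained):

```shell
# Stand-in for: system application-list | awk '/stx-openstack/ {print $<status-col>}'
get_status() { echo "uninstalled"; }

# Poll a few times, stopping as soon as the terminal state is reached
for i in 1 2 3; do
    status=$(get_status)
    [ "$status" = "uninstalled" ] && break
    sleep 5
done
echo "$status"
```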
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
2. the application is removed
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_cont_02
-----------------------
:Test ID: sysinv_cont_02
:Test Title: Delete services (delete application definition)
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Use sysinv to delete the application definition.
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
Application has been previously removed
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Delete the application definition
.. code:: bash
system application-delete stx-openstack
2. check the status
.. code:: bash
system application-list
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
2. Once finished, the command returns no output.
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_cont_03
-----------------------
:Test ID: sysinv_cont_03
:Test Title: update and delete a helm chart
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Verify overriding a value is accepted on a helm chart
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Choose a helm chart to be updated
.. code:: bash
system helm-override-list
2. Show current values
.. code:: bash
system helm-override-show horizon openstack
3. Change a setting
.. code:: bash
system helm-override-update --set lockout_retries_num=5 horizon openstack
4. verify value has been updated (added as user_overrides)
.. code:: bash
system helm-override-show horizon openstack
5. Verify the change is working
6. Delete the overrides to return the chart to its original values
.. code:: bash
system helm-override-delete horizon openstack
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
1. The list of all present helm charts with their corresponding namespaces
2. Property and values are shown
3. change is accepted
4. New Value is added as user_overrides
5. Setting works as expected
6. the changed override no longer appears
~~~~~~~~~~
References
~~~~~~~~~~
N/A

View File

@ -0,0 +1,203 @@
=========================
System Inventory/Host Ops
=========================
.. contents::
:local:
:depth: 1
-----------------------
sysinv_ho_01
-----------------------
:Test ID: sysinv_ho_01
:Test Title: export hosts information using host-bulk-export cli
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Export the information of current hosts using cli
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Authenticate with platform keystone
2. export the hosts information using
.. code:: bash
system host-bulk-export --filename <hosts-file-name>
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
2. <hosts-file-name> is generated, containing the information of all hosts in the current configuration
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_ho_02
-----------------------
:Test ID: sysinv_ho_02
:Test Title: host operations (bulk-add; list; show; delete) in parallel on multiple hosts
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
host operations (bulk-add, list, show, delete) in parallel on multiple hosts
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Add multiple hosts using controller-0 (up to the max. limit e.g. currently 50)
.. code:: bash
system host-bulk-add <host-file.xml>
2. From controller-1, delete hosts at the same time
.. code:: bash
system host-delete host1
system host-delete host2
3. repeat 1 and 2
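The host-bulk-add input file referenced above is an XML description of the hosts to add; a minimal illustrative sketch follows (the element names and values are assumptions for illustration, not a verified schema):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<hosts>
  <host>
    <hostname>worker-0</hostname>
    <personality>worker</personality>
    <mgmt_mac>08:00:27:aa:bb:cc</mgmt_mac>
  </host>
  <host>
    <hostname>worker-1</hostname>
    <personality>worker</personality>
    <mgmt_mac>08:00:27:dd:ee:ff</mgmt_mac>
  </host>
</hosts>
```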
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
1. verify the hosts are added without errors
2. the hosts are deleted without errors
3. verify the operations are successfully executed
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_ho_03
-----------------------
:Test ID: sysinv_ho_03
:Test Title: Test BMC functionality
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Test BMC functionality (host-reset; power-on; power-off) - requires BMC network on management VLAN
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
BMC must be configured with its static IP address, using BMC tools. BMC should be reachable from the controllers.
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Configure BMC on controller-1 (assuming it is the standby controller)
- Login to platform Horizon
- Go to Admin / Platform / Host Inventory, "Hosts" tab
- click on "Edit host", then Board Management, and fill it out with the BMC info.
2. Verify Power off / on works as expected
- Lock controller-1
.. code:: bash
system host-lock controller-1
- Power it off
.. code:: bash
system host-power-off controller-1
- Wait until it's off
- turn it on
.. code:: bash
system host-power-on controller-1
3. Verify reset works
- Send the reset signal
.. code:: bash
system host-reset controller-1
- wait until it becomes 'online'
4. Unlock the controller
.. code:: bash
system host-unlock controller-1
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
1. BMC is configured and reachable from controller-0
2. node powers off, then on
3. Node reboots
Check alarms are raised and cleared as expected
~~~~~~~~~~
References
~~~~~~~~~~
N/A

View File

@ -0,0 +1,257 @@
=========================
System Inventory/Installs
=========================
.. contents::
:local:
:depth: 1
-----------------------
sysinv_inst_01
-----------------------
:Test ID: sysinv_inst_01
:Test Title: Install with dynamic IP addressing
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Install using dynamic IPs
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Install controller-0, configure it with system configuration file, using DYNAMIC_ALLOCATION IP addressing
2. Bring up controller-1:
- Verify the DHCP discover FM log occurs as expected on host discovery
- Verify that Mgmt. interface FM alarms are raised and cleared as expected
- Check the mgmt IP addresses assigned are in expected range (as specified in /etc/dnsmasq.conf)
3. Bring up the system and check the system functioning as expected
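The range check in step 2 can be done by parsing the dhcp-range entry in /etc/dnsmasq.conf; the sample line below illustrates the dnsmasq syntax and is not taken from a real deployment:

```shell
# Hypothetical dhcp-range line; on the active controller it would come from:
#   line=$(grep '^dhcp-range' /etc/dnsmasq.conf)
line="dhcp-range=set:pxeboot,192.168.202.2,192.168.202.254,255.255.255.0,1h"

# Extract the start and end of the management DHCP range
start=$(echo "$line" | cut -d, -f2)
end=$(echo "$line" | cut -d, -f3)
echo "$start $end"
```

The assigned mgmt IPs from step 2 should then fall between ``$start`` and ``$end``.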
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
1. the active controller is up and all services are running
2. DHCP discover FM log occurs as expected on host discovery
- mgmt. or infra. interface FM alarms are raised and cleared as expected
- mgmt. and infra. IP addresses assigned are in expected range
3. Config is up and running
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_inst_02
-----------------------
:Test ID: sysinv_inst_02
:Test Title: Install with static addressing and pxeboot
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Install using static IPs
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Install controller-0, configure it with system configuration file with STATIC_ALLOCATION IP addressing and pxeboot
2. Bring up controller-1
- Add the host to the inventory
.. code:: bash
system host-add -n controller-1 -p controller -i <mgmt_ip>
- power on the node and configure it after install finishes
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
1. active controller is up and all services are running
2. mgmt. interface FM alarms are raised and cleared as expected
- static IPs in the mgmt. range are accepted when updating the personality of the node
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_inst_03
-----------------------
:Test ID: sysinv_inst_03
:Test Title: Reinstall a node on a dynamic addressing system
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Reinstalling a node using dynamic IPs
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
A configuration already working, using dynamic IPs
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Lock the host to be reinstalled - worker-1 in this case
.. code:: bash
system host-lock worker-1
2. Delete the host from inventory
.. code:: bash
system host-delete worker-1
3. Power off the host
4. Power it on (make sure it boots from management NIC)
5. Configure the node personality
.. code:: bash
system host-update <id> personality=worker hostname=worker-1
6. Allow the node to be installed and proceed to configure it.
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
1. node gets locked
2. node doesn't show up in host-list
5. node with worker personality assigned
6. node working in the config
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_inst_04
-----------------------
:Test ID: sysinv_inst_04
:Test Title: Reinstall a node on a static addressing system
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Reinstalling a node using static IPs
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
A configuration already working, using static IPs
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Lock the host to be reinstalled - worker-1 in this case
.. code:: bash
system host-lock worker-1
2. Delete the host from inventory
.. code:: bash
system host-delete worker-1
3. Power off the host
4. Add the host to the inventory
.. code:: bash
system host-add -n worker-1 -p worker -i <mgmt_ip>
5. Power it on (make sure it boots from management NIC)
6. Allow the node to be installed and proceed to configure it.
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
1. node gets locked
2. node doesn't show up in host-list
4. node with worker personality assigned
6. node working in the config
~~~~~~~~~~
References
~~~~~~~~~~
N/A

View File

@ -0,0 +1,204 @@
=======================
System Inventory/Modify
=======================
.. contents::
:local:
:depth: 1
-----------------------
sysinv_mod_01
-----------------------
:Test ID: sysinv_mod_01
:Test Title: change the mtu value of the data interface using gui
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Change the mtu value of the data interface using Horizon
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Login to platform horizon with 'admin' user
2. Lock a worker node using gui
- Go to Platform / Host Inventory / Hosts
- from the edit menu for the worker-0, select lock host.
3. Change the MTU
- click the name of the host, and then go to "Interfaces" tab. Click "Edit" on data0.
- Modify the mtu field (use a value of 3000). Click save
4. unlock the node
5. repeat the above steps on each worker node
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
2. the node gets locked without any error
3. the MTU value changes to the value specified
4.
- worker node is unlocked after boot
- network works with no issues
5. the rest of the worker nodes are unlocked and enabled after boot
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_mod_02
-----------------------
:Test ID: sysinv_mod_02
:Test Title: change the mtu value of the data interface using cli
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Change the MTU value of the data interface using CLI
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Authenticate with platform keystone
2. Lock worker-0
3. Change the MTU value using "system host-if-modify"
.. code:: bash
system host-if-modify -m 3000 worker-0 eth1000
4. unlock the node
5. repeat the above steps on each worker node
Note:
- For Duplex use the second controller
- MTU must be greater than or equal to the MTU of the underlying provider network
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
2. the node gets locked without any error
3. the MTU value changes to the value specified
4.
- worker node is unlocked after boot
- network works with no issues
5. the rest of the worker nodes are unlocked and enabled after boot
~~~~~~~~~~
References
~~~~~~~~~~
N/A
-----------------------
sysinv_mod_03
-----------------------
:Test ID: sysinv_mod_03
:Test Title: modify number of hugepages using Horizon
:Tags: sysinv
~~~~~~~~~~~~~~~~~~
Testcase Objective
~~~~~~~~~~~~~~~~~~
Change the Application hugepages on a worker node
~~~~~~~~~~~~~~~~~~~
Test Pre-Conditions
~~~~~~~~~~~~~~~~~~~
N/A
~~~~~~~~~~
Test Steps
~~~~~~~~~~
1. Login to platform horizon using 'admin'
2. Go to Admin / Platform / Host Inventory, "Hosts" tab
3. Lock worker-1 using the "Edit Host" button
4. Click on worker-1 to go to "host detail"
5. Select "Memory" tab and click on "Update Memory"
6. Update the Application hugepages to the maximum number allowed.
7. Unlock worker-1
8. Launch VMs on worker-1 using hugepage memory
.. code:: bash
openstack flavor set m1.small --property hw:mem_page_size=1GB
openstack server create --image cirros --flavor m1.small --nic net-id=net3 testvm
openstack server show testvm
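Before step 8 it is worth confirming the node has enough free 1G hugepages for the flavor; a back-of-the-envelope sketch with sample numbers (on a host, the free-page count lives under /sys/devices/system/node/node<N>/hugepages/hugepages-1048576kB/free_hugepages):

```shell
# Sample values; a real check would read them from /sys and the flavor
free_1g_pages=10
vm_ram_mb=4096

# Round the VM's RAM up to whole 1G pages
needed_pages=$(( (vm_ram_mb + 1023) / 1024 ))
if [ "$needed_pages" -le "$free_1g_pages" ]; then
    echo "fits: needs $needed_pages x 1G pages"
fi
```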
~~~~~~~~~~~~~~~~~
Expected Behavior
~~~~~~~~~~~~~~~~~
3. worker-1 locked
6. 1G hugepages are in pending status
7. the worker boots and is available
8. The VMs are consuming hugepage memory from the correct numa node in worker-1
~~~~~~~~~~
References
~~~~~~~~~~
N/A