System Inventory/Modify
sysinv_mod_01
- Test ID
-
sysinv_mod_01
- Test Title
-
Change the MTU value of the data interface using the GUI
- Tags
-
sysinv
Testcase Objective
Change the MTU value of the data interface using Horizon
Test Pre-Conditions
N/A
Test Steps
- Log in to the platform Horizon with the 'admin' user
- Lock a worker node using the GUI
- Go to Platform / Host Inventory / Hosts
- From the Edit menu for worker-0, select Lock Host
- Change the MTU
- Click the name of the host, then go to the "Interfaces" tab. Click "Edit" on data0
- Modify the MTU field (use a value of 3000). Click Save
- Unlock the node
- Repeat the above steps on each worker node
Expected Behavior
- The node gets locked without any error
- The MTU value changes to the value specified
- The worker node is unlocked after boot
- The network works with no issues
- The rest of the worker nodes are unlocked and enabled after boot
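Once the node is back, the MTU and data-path expectations can be spot-checked from the worker's shell. This is a hedged sketch, not part of the test procedure: the interface name data0 is taken from the steps above, PEER_IP is a hypothetical reachable address on the data network, and with DRY_RUN=1 (the default here) the commands are only printed, not executed.

```shell
#!/bin/sh
# Sketch of a post-unlock check for the expected behavior above.
# Assumptions: the data interface shows up as "data0", and PEER_IP is a
# reachable address on the same data network (placeholder value below).
PEER_IP=${PEER_IP:-192.168.1.2}
DRY_RUN=${DRY_RUN:-1}   # 1 = print commands only; 0 = execute them
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}
# The link should now report mtu 3000
run ip link show data0
# A 3000-byte MTU minus 28 bytes of IP+ICMP headers leaves a 2972-byte
# payload; -M do sets Don't Fragment so an unapplied MTU fails loudly
run ping -M do -s 2972 -c 3 "$PEER_IP"
```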
References
N/A
sysinv_mod_02
- Test ID
-
sysinv_mod_02
- Test Title
-
Change the MTU value of the data interface using the CLI
- Tags
-
sysinv
Testcase Objective
Change the MTU value of the data interface using CLI
Test Pre-Conditions
N/A
Test Steps
- Authenticate with the platform Keystone
- Lock worker-0
- Change the MTU value using "system host-if-modify"
system host-if-modify -m 3000 worker-0 eth1000
- Unlock the node
- Repeat the above steps on each worker node
Note:
- For Duplex, use the second controller
- The MTU must be greater than or equal to the MTU of the underlying provider network
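The lock / modify / unlock loop above can be sketched as a short shell script. This is a non-authoritative sketch: the interface name eth1000 and MTU 3000 come from the example command in the steps, the worker list is a placeholder, and with DRY_RUN=1 (the default here) the commands are printed rather than executed, so the flow can be reviewed without a live system.

```shell
#!/bin/sh
# Sketch of the lock / host-if-modify / unlock loop from the steps above.
# Interface name (eth1000) and MTU (3000) come from the example command;
# the node list is a placeholder. DRY_RUN=1 prints instead of executing.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}
apply_mtu() {
    for node in "$@"; do
        run system host-lock "$node"
        run system host-if-modify -m 3000 "$node" eth1000
        run system host-unlock "$node"
    done
}
apply_mtu worker-0 worker-1
```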
Expected Behavior
- The node gets locked without any error
- The MTU value changes to the value specified
- The worker node is unlocked after boot
- The network works with no issues
- The rest of the worker nodes are unlocked and enabled after boot
References
N/A
sysinv_mod_03
- Test ID
-
sysinv_mod_03
- Test Title
-
Modify the number of hugepages using Horizon
- Tags
-
sysinv
Testcase Objective
Change the Application hugepages on a worker node
Test Pre-Conditions
N/A
Test Steps
- Log in to the platform Horizon with the 'admin' user
- Go to Admin / Platform / Host Inventory, "Hosts" tab
- Lock worker-1 using the "Edit Host" button
- Click on worker-1 to go to "Host Detail"
- Select the "Memory" tab and click on "Update Memory"
- Update the Application hugepages to the maximum number allowed
- Unlock worker-1
- Launch VMs on worker-1 using hugepage memory
openstack flavor set m1.small --property hw:mem_page_size=1GB
openstack server create --image cirros --flavor m1.small --nic net-id=net3 testvm
openstack server show testvm
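The VM-launch step above can be bracketed by a check of worker-1's per-NUMA-node hugepage counts. This is a hedged sketch: it assumes `system host-memory-list` is available on the active controller for listing hugepage usage, and with DRY_RUN=1 (the default here) the commands are only printed, not executed.

```shell
#!/bin/sh
# Sketch: compare worker-1's per-NUMA-node hugepage counts before and
# after the VM launch; the consumed pages should come from one node.
# "system host-memory-list" is assumed available on the active controller.
# DRY_RUN=1 prints commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}
run system host-memory-list worker-1      # before: free 1G pages per node
run openstack flavor set m1.small --property hw:mem_page_size=1GB
run openstack server create --image cirros --flavor m1.small \
    --nic net-id=net3 testvm
run openstack server show testvm
run system host-memory-list worker-1      # after: free count drops on one node
```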
Expected Behavior
- Worker-1 is locked
- The 1G hugepages are in 'pending' status
- The worker boots and is available
- The VMs consume hugepage memory from the correct NUMA node on worker-1
References
N/A