Initial submission for starlingx pytest framework.

Includes:

- util modules, such as table_parser, ssh/localhost clients, cli module,
  exception, logger, etc. Util modules are mostly used by keywords.
- keywords modules. These are helper functions that are used directly by
  test functions.
- platform (with platform or platform_sanity marker) and stx-openstack
  (with sanity, sx_sanity, cpe_sanity, or storage_sanity marker) sanity
  testcases
- pytest config conftest, and test fixture modules
- test config file template/example

Required packages:

- python3.4 or python3.5
- pytest >=3.10,<4.0
- pexpect
- requests
- pyyaml
- selenium (firefox, ffmpeg, pyvirtualdisplay, Xvfb or Xephyr or Xvnc)

Limitations:

- Anything that requires copying from the Test File Server will not work
  until a public share is configured to share test files. These tests are
  skipped for now.

Co-Authored-By: Maria Yousaf <maria.yousaf@windriver.com>
Co-Authored-By: Marvin Huang <marvin.huang@windriver.com>
Co-Authored-By: Yosief Gebremariam <yosief.gebremariam@windriver.com>
Co-Authored-By: Paul Warner <paul.warner@windriver.com>
Co-Authored-By: Xueguang Ma <Xueguang.Ma@windriver.com>
Co-Authored-By: Charles Chen <charles.chen@windriver.com>
Co-Authored-By: Daniel Graziano <Daniel.Graziano@windriver.com>
Co-Authored-By: Jordan Li <jordan.li@windriver.com>
Co-Authored-By: Nimalini Rasa <nimalini.rasa@windriver.com>
Co-Authored-By: Senthil Mukundakumar <senthil.mukundakumar@windriver.com>
Co-Authored-By: Anuejyan Manokeran <anujeyan.manokeran@windriver.com>
Co-Authored-By: Peng Peng <peng.peng@windriver.com>
Co-Authored-By: Chris Winnicki <chris.winnicki@windriver.com>
Co-Authored-By: Joe Vimar <Joe.Vimar@windriver.com>
Co-Authored-By: Alex Kozyrev <alex.kozyrev@windriver.com>
Co-Authored-By: Jack Ding <jack.ding@windriver.com>
Co-Authored-By: Ming Lei <ming.lei@windriver.com>
Co-Authored-By: Ankit Jain <ankit.jain@windriver.com>
Co-Authored-By: Eric Barrett <eric.barrett@windriver.com>
Co-Authored-By: William Jia <william.jia@windriver.com>
Co-Authored-By: Joseph Richard <Joseph.Richard@windriver.com>
Co-Authored-By: Aldo Mcfarlane <aldo.mcfarlane@windriver.com>

Story: 2005892
Task: 33750
Signed-off-by: Yang Liu <yang.liu@windriver.com>
Change-Id: I7a88a47e09733d39f024144530f5abb9aee8cad2
parent d999d831d9
commit 33756ac899

README.rst (26 changes)
@@ -1,5 +1,25 @@

========
stx-test
========

StarlingX Test repository for manual and automated test cases.

Contribute
----------

- Clone the repo
- A Gerrit hook needs to be added for code review purposes.

.. code-block:: bash

    # Generate an SSH key if needed
    ssh-keygen -t rsa -C "<your email address>"
    ssh-add $private_keyfile_path

    # Add the SSH key to your settings at https://review.opendev.org/#/q/project:starlingx/test
    cd <stx-test repo>
    git remote add gerrit ssh://<your gerrit username>@review.opendev.org/starlingx/test.git
    git review -s

- When you are ready, create your commit with a detailed commit message, and submit it for review.
@@ -0,0 +1,76 @@

====================================
StarlingX Integration Test Framework
====================================

The project contains integration test cases that can be executed on an
installed and configured StarlingX system.

Supported test cases:

- CLI tests over an SSH connection to the StarlingX system via the OAM floating IP
- Platform RestAPI test cases via external endpoints
- Horizon test cases


Packages Required
-----------------
- python >=3.4.3,<3.7
- pytest >=3.1.0,<4.0
- pexpect
- pyyaml
- requests (used by RestAPI test cases only)
- selenium (used by Horizon test cases only)
- Firefox (used by Horizon test cases only)
- pyvirtualdisplay (used by Horizon test cases only)
- ffmpeg (used by Horizon test cases only)
- Xvfb, Xephyr, or Xvnc (used by pyvirtualdisplay for Horizon test cases only)


Setup Test Tool
---------------
This is an off-box test tool that needs to be set up once on a Linux server
that can reach the StarlingX system under test (e.g., SSH to the STX
system, send/receive RestAPI requests, open Horizon pages).

- Install the above packages
- Clone the stx-test repo
- Add the absolute path of automated-pytest-suite to the PYTHONPATH
  environment variable


Execute Test Cases
------------------
Precondition: the STX system under test should be installed and configured.

- | A customized config can be provided via --testcase-config <config_file>.
  | A config template can be found at ${project_root}/stx-test_template.conf.
- Test cases can be selected by specifying -m <markers>
- | If stx-openstack is not deployed, a platform-specific marker should be specified,
  | e.g., -m "platform_sanity or platform"
- | Automation logs are created in the ${HOME}/AUTOMATION_LOGS directory by default.
  | The log directory can also be specified with the --resultlog=${LOG_DIR} command-line option.
- Examples:

.. code-block:: bash

    export project_root=<automated-pytest-suite dir>

    # Include $project_root in PYTHONPATH if not already done
    export PYTHONPATH=${PYTHONPATH}:${project_root}

    cd $project_root

    # Example 1: Run all platform_sanity test cases under testcases/
    pytest -m platform_sanity --testcase-config=~/my_config.conf testcases/

    # Example 2: Run platform_sanity or sanity (requires stx-openstack) test cases
    # on a StarlingX virtual box system that is already saved in consts/lab.py,
    # and save automation logs to /tmp/AUTOMATION_LOGS
    pytest --resultlog=/tmp/ -m sanity --lab=vbox --natbox=localhost testcases/

    # Example 3: List (not execute) the test cases with "migrate" in the name
    pytest --collect-only -k "migrate" --lab=<stx_oam_fip> testcases/


Contribute
----------

- In order to contribute, python3.4 is required to avoid producing code that
  is incompatible with python3.4.
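To make the ``-m <markers>`` selection above concrete, here is a minimal sketch of how a pytest-style marker expression picks up tests. The marker names (platform_sanity, platform, sanity, cpe_sanity) come from this README; the evaluator itself is illustrative only, not pytest's actual implementation.

```python
def matches(marker_expr, test_markers):
    """Return True if a test carrying `test_markers` satisfies `marker_expr`.

    `marker_expr` is a boolean expression over marker names, as passed to
    pytest's -m option, e.g. "platform_sanity or platform".
    """
    # Collect the marker names referenced by the expression.
    words = marker_expr.replace('(', ' ').replace(')', ' ').split()
    names = {w for w in words if w not in ('and', 'or', 'not')}
    # Map each name to whether the test actually carries that marker.
    namespace = {name: (name in test_markers) for name in names}
    # The expression contains only marker names and boolean operators,
    # so it can be evaluated directly in that namespace.
    return eval(marker_expr, {'__builtins__': {}}, namespace)
```

For example, a test marked only ``platform`` is selected by ``-m "platform_sanity or platform"`` but not by ``-m sanity``.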
@@ -0,0 +1,693 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import logging
import os
from time import strftime, gmtime
# import threading    # Used for formatting logger

import pytest   # Don't remove. Used in eval

import setups
from consts.proj_vars import ProjVar
from utils.tis_log import LOG
from utils import parse_log

tc_start_time = None
tc_end_time = None
has_fail = False
repeat_count = -1
stress_count = -1
count = -1
no_teardown = False
tracebacks = []
region = None
test_count = 0
console_log = True

################################
# Process and log test results #
################################


class MakeReport:
    nodeid = None
    instances = {}

    def __init__(self, item):
        MakeReport.nodeid = item.nodeid
        self.test_pass = None
        self.test_results = {}
        MakeReport.instances[item.nodeid] = self

    def update_results(self, call, report):
        if report.failed:
            global has_fail
            has_fail = True
            msg = "***Failure at test {}: {}".format(call.when, call.excinfo)
            print(msg)
            LOG.debug(msg + "\n***Details: {}".format(report.longrepr))
            global tracebacks
            tracebacks.append(str(report.longrepr))
            self.test_results[call.when] = ['Failed', call.excinfo]
        elif report.skipped:
            sep = 'Skipped: '
            skipreason_list = str(call.excinfo).split(sep=sep)[1:]
            skipreason_str = sep.join(skipreason_list)
            self.test_results[call.when] = ['Skipped', skipreason_str]
        elif report.passed:
            self.test_results[call.when] = ['Passed', '']

    def get_results(self):
        return self.test_results

    @classmethod
    def get_report(cls, item):
        if item.nodeid == cls.nodeid:
            return cls.instances[cls.nodeid]
        else:
            return cls(item)


class TestRes:
    PASSNUM = 0
    FAILNUM = 0
    SKIPNUM = 0
    TOTALNUM = 0


def _write_results(res_in_tests, test_name):
    global tc_start_time
    with open(ProjVar.get_var("TCLIST_PATH"), mode='a') as f:
        f.write('\n{}\t{}\t{}'.format(res_in_tests, tc_start_time, test_name))
    global test_count
    test_count += 1
    # reset tc_start and end time for next test case
    tc_start_time = None


def pytest_runtest_makereport(item, call, __multicall__):
    report = __multicall__.execute()
    my_rep = MakeReport.get_report(item)
    my_rep.update_results(call, report)

    test_name = item.nodeid.replace('::()::',
                                    '::')  # .replace('testcases/', '')
    res_in_tests = ''
    res = my_rep.get_results()

    # Write final result to test_results.log
    if report.when == 'teardown':
        res_in_log = 'Test Passed'
        fail_at = []
        for key, val in res.items():
            if val[0] == 'Failed':
                fail_at.append('test ' + key)
            elif val[0] == 'Skipped':
                res_in_log = 'Test Skipped\nReason: {}'.format(val[1])
                res_in_tests = 'SKIP'
                break
        if fail_at:
            fail_at = ', '.join(fail_at)
            res_in_log = 'Test Failed at {}'.format(fail_at)

        # Log test result
        testcase_log(msg=res_in_log, nodeid=test_name, log_type='tc_res')

        if 'Test Passed' in res_in_log:
            res_in_tests = 'PASS'
        elif 'Test Failed' in res_in_log:
            res_in_tests = 'FAIL'
            if ProjVar.get_var('PING_FAILURE'):
                setups.add_ping_failure(test_name=test_name)

        if not res_in_tests:
            res_in_tests = 'UNKNOWN'

        # count testcases by status
        TestRes.TOTALNUM += 1
        if res_in_tests == 'PASS':
            TestRes.PASSNUM += 1
        elif res_in_tests == 'FAIL':
            TestRes.FAILNUM += 1
        elif res_in_tests == 'SKIP':
            TestRes.SKIPNUM += 1

        _write_results(res_in_tests=res_in_tests, test_name=test_name)

    if repeat_count > 0:
        for key, val in res.items():
            if val[0] == 'Failed':
                global tc_end_time
                tc_end_time = strftime("%Y%m%d %H:%M:%S", gmtime())
                _write_results(res_in_tests='FAIL', test_name=test_name)
                TestRes.FAILNUM += 1
                if ProjVar.get_var('PING_FAILURE'):
                    setups.add_ping_failure(test_name=test_name)

                try:
                    parse_log.parse_test_steps(ProjVar.get_var('LOG_DIR'))
                except Exception as e:
                    LOG.warning(
                        "Unable to parse test steps. \nDetails: {}".format(
                            e.__str__()))

                pytest.exit(
                    "Skip rest of the iterations upon stress test failure")

    if no_teardown and report.when == 'call':
        for key, val in res.items():
            if val[0] == 'Skipped':
                break
        else:
            pytest.exit("No teardown and skip rest of the tests if any")

    return report


def pytest_runtest_setup(item):
    global tc_start_time
    # tc_start_time = setups.get_tis_timestamp(con_ssh)
    tc_start_time = strftime("%Y%m%d %H:%M:%S", gmtime())
    print('')
    message = "Setup started:"
    testcase_log(message, item.nodeid, log_type='tc_setup')
    # set test name for ping vm failure
    test_name = 'test_{}'.format(
        item.nodeid.rsplit('::test_', 1)[-1].replace('/', '_'))
    ProjVar.set_var(TEST_NAME=test_name)
    ProjVar.set_var(PING_FAILURE=False)


def pytest_runtest_call(item):
    separator = \
        '++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++'
    message = "Test steps started:"
    testcase_log(message, item.nodeid, separator=separator, log_type='tc_start')


def pytest_runtest_teardown(item):
    print('')
    message = 'Teardown started:'
    testcase_log(message, item.nodeid, log_type='tc_teardown')


def testcase_log(msg, nodeid, separator=None, log_type=None):
    if separator is None:
        separator = '-----------'

    print_msg = separator + '\n' + msg
    logging_msg = '\n{}{} {}'.format(separator, msg, nodeid)
    if console_log:
        print(print_msg)
    if log_type == 'tc_res':
        global tc_end_time
        tc_end_time = strftime("%Y%m%d %H:%M:%S", gmtime())
        LOG.tc_result(msg=msg, tc_name=nodeid)
    elif log_type == 'tc_start':
        LOG.tc_func_start(nodeid)
    elif log_type == 'tc_setup':
        LOG.tc_setup_start(nodeid)
    elif log_type == 'tc_teardown':
        LOG.tc_teardown_start(nodeid)
    else:
        LOG.debug(logging_msg)


########################
# Command line options #
########################
@pytest.mark.tryfirst
def pytest_configure(config):
    config.addinivalue_line("markers",
                            "features(feature_name1, feature_name2, "
                            "...): mark impacted feature(s) for a test case.")
    config.addinivalue_line("markers",
                            "priorities(, cpe_sanity, p2, ...): mark "
                            "priorities for a test case.")
    config.addinivalue_line("markers",
                            "known_issue(LP-xxxx): mark known issue with "
                            "LP ID or description if no LP needed.")

    if config.getoption('help'):
        return

    # Common reporting params
    collect_all = config.getoption('collectall')
    always_collect = config.getoption('alwayscollect')
    session_log_dir = config.getoption('sessiondir')
    resultlog = config.getoption('resultlog')

    # Test case params on installed system
    testcase_config = config.getoption('testcase_config')
    lab_arg = config.getoption('lab')
    natbox_arg = config.getoption('natbox')
    tenant_arg = config.getoption('tenant')
    horizon_visible = config.getoption('horizon_visible')
    is_vbox = config.getoption('is_vbox')

    global repeat_count
    repeat_count = config.getoption('repeat')
    global stress_count
    stress_count = config.getoption('stress')
    global count
    if repeat_count > 0:
        count = repeat_count
    elif stress_count > 0:
        count = stress_count

    global no_teardown
    no_teardown = config.getoption('noteardown')
    if repeat_count > 0 or no_teardown:
        ProjVar.set_var(NO_TEARDOWN=True)

    collect_netinfo = config.getoption('netinfo')

    # Determine lab value.
    lab = natbox = None
    if lab_arg:
        lab = setups.get_lab_dict(lab_arg)
    if natbox_arg:
        natbox = setups.get_natbox_dict(natbox_arg)

    lab, natbox = setups.setup_testcase_config(testcase_config, lab=lab,
                                               natbox=natbox)
    tenant = tenant_arg.upper() if tenant_arg else 'TENANT1'

    # Log collection params
    collect_all = True if collect_all else False
    always_collect = True if always_collect else False

    # If floating ip cannot be reached, whether to try to ping/ssh
    # controller-0 unit IP, etc.
    if collect_netinfo:
        ProjVar.set_var(COLLECT_SYS_NET_INFO=True)

    horizon_visible = True if horizon_visible else False

    if session_log_dir:
        log_dir = session_log_dir
    else:
        # compute directory for all logs based on resultlog arg, lab,
        # and timestamp on local machine
        resultlog = resultlog if resultlog else os.path.expanduser("~")
        if '/AUTOMATION_LOGS' in resultlog:
            resultlog = resultlog.split(sep='/AUTOMATION_LOGS')[0]
        resultlog = os.path.join(resultlog, 'AUTOMATION_LOGS')
        lab_name = lab['short_name']
        time_stamp = strftime('%Y%m%d%H%M')
        log_dir = '{}/{}/{}'.format(resultlog, lab_name, time_stamp)
    os.makedirs(log_dir, exist_ok=True)

    # set global constants, which will be used for the entire test session, etc
    ProjVar.init_vars(lab=lab, natbox=natbox, logdir=log_dir, tenant=tenant,
                      collect_all=collect_all,
                      always_collect=always_collect,
                      horizon_visible=horizon_visible)

    if lab.get('central_region'):
        ProjVar.set_var(IS_DC=True,
                        PRIMARY_SUBCLOUD=config.getoption('subcloud'))

    if is_vbox:
        ProjVar.set_var(IS_VBOX=True)

    config_logger(log_dir, console=console_log)

    # set resultlog save location
    config.option.resultlog = ProjVar.get_var("PYTESTLOG_PATH")

    # Repeat test params
    file_or_dir = config.getoption('file_or_dir')
    origin_file_dir = list(file_or_dir)
    if count > 1:
        print("Repeat following tests {} times: {}".format(count, file_or_dir))
        del file_or_dir[:]
        for f_or_d in origin_file_dir:
            for i in range(count):
                file_or_dir.append(f_or_d)


def pytest_addoption(parser):
    testconf_help = "Absolute path for testcase config file. Template can be " \
                    "found at automated-pytest-suite/stx-test_template.conf"
    lab_help = "STX system to connect to. Valid value: 1) short_name or name " \
               "of an existing dict entry in consts.Labs; Or 2) OAM floating " \
               "ip of the STX system under test"
    tenant_help = "Default tenant to use when unspecified. Valid values: " \
                  "tenant1, tenant2, or admin"
    natbox_help = "NatBox IP or name. If automated tests are executed from " \
                  "NatBox, --natbox=localhost can be used. " \
                  "If username/password are required to SSH to NatBox, " \
                  "please specify them in test config file."
    vbox_help = "Specify if StarlingX system is installed in virtual " \
                "environment."
    collect_all_help = "Run collect all on STX system at the end of test " \
                       "session if any test fails."
    logdir_help = "Directory to store test session logs. If this is " \
                  "specified, then --resultlog will be ignored."
    stress_help = "Number of iterations to run specified testcase(s). Abort " \
                  "rest of the test session on first failure"
    count_help = "Repeat tests x times - NO stop on failure"
    horizon_visible_help = "Display horizon on screen"
    no_console_log = 'Print minimal console logs'

    # Test session options on installed and configured STX system:
    parser.addoption('--testcase-config', action='store',
                     metavar='testcase_config', default=None,
                     help=testconf_help)
    parser.addoption('--lab', action='store', metavar='lab', default=None,
                     help=lab_help)
    parser.addoption('--tenant', action='store', metavar='tenantname',
                     default=None, help=tenant_help)
    parser.addoption('--natbox', action='store', metavar='natbox', default=None,
                     help=natbox_help)
    parser.addoption('--vm', '--vbox', action='store_true', dest='is_vbox',
                     help=vbox_help)

    # Debugging/Log collection options:
    parser.addoption('--sessiondir', '--session_dir', '--session-dir',
                     action='store', dest='sessiondir',
                     metavar='sessiondir', default=None, help=logdir_help)
    parser.addoption('--collectall', '--collect_all', '--collect-all',
                     dest='collectall', action='store_true',
                     help=collect_all_help)
    parser.addoption('--alwayscollect', '--always-collect', '--always_collect',
                     dest='alwayscollect',
                     action='store_true', help=collect_all_help)
    parser.addoption('--repeat', action='store', metavar='repeat', type=int,
                     default=-1, help=stress_help)
    parser.addoption('--stress', metavar='stress', action='store', type=int,
                     default=-1, help=count_help)
    parser.addoption('--no-teardown', '--no_teardown', '--noteardown',
                     dest='noteardown', action='store_true')
    parser.addoption('--netinfo', '--net-info', dest='netinfo',
                     action='store_true',
                     help="Collect system networking info if scp keyfile fails")
    parser.addoption('--horizon-visible', '--horizon_visible',
                     action='store_true', dest='horizon_visible',
                     help=horizon_visible_help)
    parser.addoption('--noconsolelog', '--noconsole', '--no-console-log',
                     '--no_console_log', '--no-console',
                     '--no_console', action='store_true', dest='noconsolelog',
                     help=no_console_log)


def config_logger(log_dir, console=True):
    # logger for log saved in file
    file_name = log_dir + '/TIS_AUTOMATION.log'
    logging.Formatter.converter = gmtime
    log_format = '[%(asctime)s] %(lineno)-5d%(levelname)-5s %(threadName)-8s ' \
                 '%(module)s.%(funcName)-8s:: %(message)s'
    tis_formatter = logging.Formatter(log_format)
    LOG.setLevel(logging.NOTSET)

    tmp_path = os.path.join(os.path.expanduser('~'), '.tmp_log')
    # clear the tmp log with best effort so it won't keep growing
    try:
        os.remove(tmp_path)
    except:
        pass
    logging.basicConfig(level=logging.NOTSET, format=log_format,
                        filename=tmp_path, filemode='w')

    # file handler:
    file_handler = logging.FileHandler(file_name)
    file_handler.setFormatter(tis_formatter)
    file_handler.setLevel(logging.DEBUG)
    LOG.addHandler(file_handler)

    # logger for stream output
    console_level = logging.INFO if console else logging.CRITICAL
    stream_hdler = logging.StreamHandler()
    stream_hdler.setFormatter(tis_formatter)
    stream_hdler.setLevel(console_level)
    LOG.addHandler(stream_hdler)

    print("LOG DIR: {}".format(log_dir))


def pytest_unconfigure(config):
    # collect all if needed
    if config.getoption('help'):
        return

    try:
        natbox_ssh = ProjVar.get_var('NATBOX_SSH')
        natbox_ssh.close()
    except:
        pass

    version_and_patch = ''
    try:
        version_and_patch = setups.get_version_and_patch_info()
    except Exception as e:
        LOG.debug(e)
    log_dir = ProjVar.get_var('LOG_DIR')
    if not log_dir:
        try:
            from utils.clients.ssh import ControllerClient
            ssh_list = ControllerClient.get_active_controllers(fail_ok=True)
            for con_ssh_ in ssh_list:
                con_ssh_.close()
        except:
            pass
        return

    try:
        tc_res_path = log_dir + '/test_results.log'
        build_info = ProjVar.get_var('BUILD_INFO')
        build_id = build_info.get('BUILD_ID', '')
        build_job = build_info.get('JOB', '')
        build_server = build_info.get('BUILD_HOST', '')
        system_config = ProjVar.get_var('SYS_TYPE')
        session_str = ''
        total_exec = TestRes.PASSNUM + TestRes.FAILNUM
        # pass_rate = fail_rate = '0'
        if total_exec > 0:
            pass_rate = "{}%".format(
                round(TestRes.PASSNUM * 100 / total_exec, 2))
            fail_rate = "{}%".format(
                round(TestRes.FAILNUM * 100 / total_exec, 2))
            with open(tc_res_path, mode='a') as f:
                # Append general info to result log
                f.write('\n\nLab: {}\n'
                        'Build ID: {}\n'
                        'Job: {}\n'
                        'Build Server: {}\n'
                        'System Type: {}\n'
                        'Automation LOGs DIR: {}\n'
                        'Ends at: {}\n'
                        '{}'  # test session id and tag
                        '{}'.format(ProjVar.get_var('LAB_NAME'), build_id,
                                    build_job, build_server, system_config,
                                    ProjVar.get_var('LOG_DIR'), tc_end_time,
                                    session_str, version_and_patch))
                # Add result summary to beginning of the file
                f.write(
                    '\nSummary:\nPassed: {} ({})\nFailed: {} ({})\nTotal '
                    'Executed: {}\n'.
                    format(TestRes.PASSNUM, pass_rate, TestRes.FAILNUM,
                           fail_rate, total_exec))
                if TestRes.SKIPNUM > 0:
                    f.write('------------\nSkipped: {}'.format(TestRes.SKIPNUM))

            LOG.info("Test Results saved to: {}".format(tc_res_path))
            with open(tc_res_path, 'r') as fin:
                print(fin.read())
    except Exception as e:
        LOG.exception(
            "Failed to add session summary to test_results.py. "
            "\nDetails: {}".format(e.__str__()))
    # Below needs con_ssh to be initialized
    try:
        from utils.clients.ssh import ControllerClient
        con_ssh = ControllerClient.get_active_controller()
    except:
        LOG.warning("No con_ssh found")
        return

    try:
        parse_log.parse_test_steps(ProjVar.get_var('LOG_DIR'))
    except Exception as e:
        LOG.warning(
            "Unable to parse test steps. \nDetails: {}".format(e.__str__()))

    if test_count > 0 and (ProjVar.get_var('ALWAYS_COLLECT') or (
            has_fail and ProjVar.get_var('COLLECT_ALL'))):
        # Collect tis logs if collect all required upon test(s) failure.
        # Failure on collect all would not change the result of the last test
        # case.
        try:
            setups.collect_tis_logs(con_ssh)
        except Exception as e:
            LOG.warning("'collect all' failed. {}".format(e.__str__()))

    ssh_list = ControllerClient.get_active_controllers(fail_ok=True,
                                                       current_thread_only=True)
    for con_ssh_ in ssh_list:
        try:
            con_ssh_.close()
        except:
            pass


def pytest_collection_modifyitems(items):
    # print("Collection modify")
    move_to_last = []
    absolute_last = []

    for item in items:
        # re-order tests:
        trylast_marker = item.get_closest_marker('trylast')
        abslast_marker = item.get_closest_marker('abslast')

        if abslast_marker:
            absolute_last.append(item)
        elif trylast_marker:
            move_to_last.append(item)

        priority_marker = item.get_closest_marker('priorities')
        if priority_marker is not None:
            priorities = priority_marker.args
            for priority in priorities:
                item.add_marker(eval("pytest.mark.{}".format(priority)))

        feature_marker = item.get_closest_marker('features')
        if feature_marker is not None:
            features = feature_marker.args
            for feature in features:
                item.add_marker(eval("pytest.mark.{}".format(feature)))

        # known issue marker
        known_issue_mark = item.get_closest_marker('known_issue')
        if known_issue_mark is not None:
            issue = known_issue_mark.args[0]
            msg = "{} has a workaround due to {}".format(item.nodeid, issue)
            print(msg)
            LOG.debug(msg=msg)
            item.add_marker(eval("pytest.mark.known_issue"))

        # add dc marker to all tests that start with test_dc_xxx
        dc_maker = item.get_marker('dc')
        if not dc_maker and 'test_dc_' in item.nodeid:
            item.add_marker(pytest.mark.dc)

    # add trylast tests to the end
    for item in move_to_last:
        items.remove(item)
        items.append(item)

    for i in absolute_last:
        items.remove(i)
        items.append(i)


def pytest_generate_tests(metafunc):
    # Prefix 'remote_cli' to test names so they are reported as a different
    # testcase
    if ProjVar.get_var('REMOTE_CLI'):
        metafunc.parametrize('prefix_remote_cli', ['remote_cli'])


##############################################################
# Manipulating fixture orders based on following pytest rules
# session > module > class > function
# autouse > non-autouse
# alphabetic after full-filling above criteria
#
# Orders we want on fixtures of same scope:
# check_alarms > delete_resources > config_host
#############################################################

@pytest.fixture(scope='session')
def check_alarms():
    LOG.debug("Empty check alarms")
    return


@pytest.fixture(scope='session')
def config_host_class():
    LOG.debug("Empty config host class")
    return


@pytest.fixture(scope='session')
def config_host_module():
    LOG.debug("Empty config host module")


@pytest.fixture(autouse=True)
def a1_fixture(check_alarms):
    return


@pytest.fixture(scope='module', autouse=True)
def c1_fixture(config_host_module):
    return


@pytest.fixture(scope='class', autouse=True)
def c2_fixture(config_host_class):
    return


@pytest.fixture(scope='session', autouse=True)
def prefix_remote_cli():
    return


def __params_gen(index):
    return 'iter{}'.format(index)


@pytest.fixture(scope='session')
def global_setup():
    os.makedirs(ProjVar.get_var('TEMP_DIR'), exist_ok=True)
    os.makedirs(ProjVar.get_var('PING_FAILURE_DIR'), exist_ok=True)
    os.makedirs(ProjVar.get_var('GUEST_LOGS_DIR'), exist_ok=True)

    if region:
        setups.set_region(region=region)


#####################################
# End of fixture order manipulation #
#####################################


def pytest_sessionfinish():
    if ProjVar.get_var('TELNET_THREADS'):
        threads, end_event = ProjVar.get_var('TELNET_THREADS')
        end_event.set()
        for thread in threads:
            thread.join()

    if repeat_count > 0 and has_fail:
        # _thread.interrupt_main()
        print('Printing traceback: \n' + '\n'.join(tracebacks))
        pytest.exit("\n========== Test failed - "
                    "Test session aborted without teardown to leave the "
                    "system in state ==========")

    if no_teardown:
        pytest.exit(
            "\n========== Test session stopped without teardown after first "
            "test executed ==========")
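The repeat/stress handling at the end of ``pytest_configure`` above works by duplicating each selected file/dir argument ``count`` times so pytest collects (and therefore runs) it repeatedly. A standalone restatement of that expansion, with the in-place list mutation pulled out into a hypothetical helper for clarity:

```python
def expand_repeats(file_or_dir, count):
    """Duplicate each entry of `file_or_dir` `count` times, in place,
    preserving the original order (mirrors pytest_configure above)."""
    origin = list(file_or_dir)
    # pytest holds a reference to this list, so it must be mutated in place.
    del file_or_dir[:]
    for f_or_d in origin:
        for _ in range(count):
            file_or_dir.append(f_or_d)
    return file_or_dir
```

With ``--repeat=3`` and arguments ``['testcases/a', 'testcases/b']``, pytest would then collect each path three times in a row.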
@@ -0,0 +1,348 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


class Tenant:
    __PASSWORD = 'St8rlingX*'
    __REGION = 'RegionOne'
    __URL_PLATFORM = 'http://192.168.204.2:5000/v3/'
    __URL_CONTAINERS = 'http://keystone.openstack.svc.cluster.local/v3'
    __DC_MAP = {'SystemController': {'region': 'SystemController',
                                     'auth_url': __URL_PLATFORM},
                'RegionOne': {'region': 'RegionOne',
                              'auth_url': __URL_PLATFORM}}

    # Platform openstack user - admin
    __ADMIN_PLATFORM = {
        'user': 'admin',
        'password': __PASSWORD,
        'tenant': 'admin',
        'domain': 'Default',
        'platform': True,
    }

    # Containerized openstack users - admin, and two test users/tenants
    __ADMIN = {
        'user': 'admin',
        'password': __PASSWORD,
        'tenant': 'admin',
        'domain': 'Default'
    }

    __TENANT1 = {
        'user': 'tenant1',
        'password': __PASSWORD,
        'tenant': 'tenant1',
        'domain': 'Default',
        'nova_keypair': 'keypair-tenant1'
    }

    __TENANT2 = {
        'user': 'tenant2',
        'password': __PASSWORD,
        'tenant': 'tenant2',
        'domain': 'Default',
        'nova_keypair': 'keypair-tenant2'
    }

    __tenants = {
        'ADMIN_PLATFORM': __ADMIN_PLATFORM,
        'ADMIN': __ADMIN,
        'TENANT1': __TENANT1,
        'TENANT2': __TENANT2}

    @classmethod
    def add_dc_region(cls, region_info):
        cls.__DC_MAP.update(region_info)

    @classmethod
    def set_platform_url(cls, url, central_region=False):
        """
        Set auth_url for platform keystone
        Args:
            url (str):
            central_region (bool)
        """
        if central_region:
            cls.__DC_MAP.get('SystemController')['auth_url'] = url
            cls.__DC_MAP.get('RegionOne')['auth_url'] = url
        else:
            cls.__URL_PLATFORM = url

    @classmethod
    def set_region(cls, region):
        """
        Set default region for all tenants
        Args:
            region (str): e.g., SystemController, subcloud-2

        """
        cls.__REGION = region

    @classmethod
    def add(cls, tenantname, dictname=None, username=None, password=None,
            region=None, auth_url=None, domain='Default'):
        tenant_dict = dict(tenant=tenantname)
        tenant_dict['user'] = username if username else tenantname
        tenant_dict['password'] = password if password else cls.__PASSWORD
        tenant_dict['domain'] = domain
        if region:
            tenant_dict['region'] = region
        if auth_url:
            tenant_dict['auth_url'] = auth_url

        dictname = dictname.upper() if dictname else tenantname.upper().\
            replace('-', '_')
        cls.__tenants[dictname] = tenant_dict
        return tenant_dict

    __primary = 'TENANT1'

    @classmethod
    def get(cls, tenant_dictname, dc_region=None):
        """
        Get tenant auth dict that can be passed to auth_info in cli cmd
        Args:
            tenant_dictname (str): e.g., tenant1, TENANT2, system_controller
            dc_region (None|str): key for dc_region added via add_dc_region.
|
||||
Used to update auth_url and region
|
||||
e.g., SystemController, RegionOne, subcloud-2
|
||||
|
||||
Returns (dict): mutable dictionary. If changed, DC map or tenant dict
|
||||
will update as well.
|
||||
|
||||
"""
|
||||
tenant_dictname = tenant_dictname.upper().replace('-', '_')
|
||||
tenant_dict = cls.__tenants.get(tenant_dictname)
|
||||
if dc_region:
|
||||
region_dict = cls.__DC_MAP.get(dc_region, None)
|
||||
if not region_dict:
|
||||
raise ValueError(
|
||||
'Distributed cloud region {} is not added to '
|
||||
'DC_MAP yet. DC_MAP: {}'.format(dc_region, cls.__DC_MAP))
|
||||
tenant_dict.update({'region': region_dict['region']})
|
||||
else:
|
||||
tenant_dict.pop('region', None)
|
||||
|
||||
return tenant_dict
|
||||
|
||||
@classmethod
|
||||
def get_region_and_url(cls, platform=False, dc_region=None):
|
||||
auth_region_and_url = {
|
||||
'auth_url':
|
||||
cls.__URL_PLATFORM if platform else cls.__URL_CONTAINERS,
|
||||
'region': cls.__REGION
|
||||
}
|
||||
|
||||
if dc_region:
|
||||
region_dict = cls.__DC_MAP.get(dc_region, None)
|
||||
if not region_dict:
|
||||
raise ValueError(
|
||||
'Distributed cloud region {} is not added to DC_MAP yet. '
|
||||
'DC_MAP: {}'.format(dc_region, cls.__DC_MAP))
|
||||
auth_region_and_url['region'] = region_dict.get('region')
|
||||
if platform:
|
||||
auth_region_and_url['auth_url'] = region_dict.get('auth_url')
|
||||
|
||||
return auth_region_and_url
|
||||
|
||||
@classmethod
|
||||
def set_primary(cls, tenant_dictname):
|
||||
"""
|
||||
should be called after _set_region and _set_url
|
||||
Args:
|
||||
tenant_dictname (str): Tenant dict name
|
||||
|
||||
Returns:
|
||||
|
||||
"""
|
||||
cls.__primary = tenant_dictname.upper()
|
||||
|
||||
@classmethod
|
||||
def get_primary(cls):
|
||||
return cls.get(tenant_dictname=cls.__primary)
|
||||
|
||||
@classmethod
|
||||
def get_secondary(cls):
|
||||
secondary = 'TENANT1' if cls.__primary != 'TENANT1' else 'TENANT2'
|
||||
return cls.get(tenant_dictname=secondary)
|
||||
|
||||
@classmethod
|
||||
def update(cls, tenant_dictname, username=None, password=None, tenant=None,
|
||||
**kwargs):
|
||||
tenant_dict = cls.get(tenant_dictname)
|
||||
|
||||
if not isinstance(tenant_dict, dict):
|
||||
raise ValueError("{} dictionary does not exist in "
|
||||
"consts/auth.py".format(tenant_dictname))
|
||||
|
||||
if not username and not password and not tenant and not kwargs:
|
||||
raise ValueError("Please specify username, password, tenant, "
|
||||
"and/or domain to update for {} dict".
|
||||
format(tenant_dictname))
|
||||
|
||||
if username:
|
||||
kwargs['user'] = username
|
||||
if password:
|
||||
kwargs['password'] = password
|
||||
if tenant:
|
||||
kwargs['tenant'] = tenant
|
||||
tenant_dict.update(kwargs)
|
||||
cls.__tenants[tenant_dictname] = tenant_dict
|
||||
|
||||
@classmethod
|
||||
def get_dc_map(cls):
|
||||
return cls.__DC_MAP
|
||||
|
||||
|
||||
class HostLinuxUser:
|
||||
|
||||
__SYSADMIN = {
|
||||
'user': 'sysadmin',
|
||||
'password': 'St8rlingX*'
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def get_user(cls):
|
||||
return cls.__SYSADMIN['user']
|
||||
|
||||
@classmethod
|
||||
def get_password(cls):
|
||||
return cls.__SYSADMIN['password']
|
||||
|
||||
@classmethod
|
||||
def get_home(cls):
|
||||
return cls.__SYSADMIN.get('home', '/home/{}'.format(cls.get_user()))
|
||||
|
||||
@classmethod
|
||||
def set_user(cls, username):
|
||||
cls.__SYSADMIN['user'] = username
|
||||
|
||||
@classmethod
|
||||
def set_password(cls, password):
|
||||
cls.__SYSADMIN['password'] = password
|
||||
|
||||
@classmethod
|
||||
def set_home(cls, home):
|
||||
if home:
|
||||
cls.__SYSADMIN['home'] = home
|
||||
|
||||
|
||||
class Guest:
|
||||
CREDS = {
|
||||
'tis-centos-guest': {
|
||||
'user': 'root',
|
||||
'password': 'root'
|
||||
},
|
||||
|
||||
'cgcs-guest': {
|
||||
'user': 'root',
|
||||
'password': 'root'
|
||||
},
|
||||
|
||||
'ubuntu': {
|
||||
'user': 'ubuntu',
|
||||
'password': None
|
||||
},
|
||||
|
||||
'centos_6': {
|
||||
'user': 'centos',
|
||||
'password': None
|
||||
},
|
||||
|
||||
'centos_7': {
|
||||
'user': 'centos',
|
||||
'password': None
|
||||
},
|
||||
|
||||
# This image has some issue where it usually fails to boot
|
||||
'opensuse_13': {
|
||||
'user': 'root',
|
||||
'password': None
|
||||
},
|
||||
|
||||
# OPV image has root/root enabled
|
||||
'rhel': {
|
||||
'user': 'root',
|
||||
'password': 'root'
|
||||
},
|
||||
|
||||
'cirros': {
|
||||
'user': 'cirros',
|
||||
'password': 'cubswin:)'
|
||||
},
|
||||
|
||||
'win_2012': {
|
||||
'user': 'Administrator',
|
||||
'password': 'Li69nux*'
|
||||
},
|
||||
|
||||
'win_2016': {
|
||||
'user': 'Administrator',
|
||||
'password': 'Li69nux*'
|
||||
},
|
||||
|
||||
'ge_edge': {
|
||||
'user': 'root',
|
||||
'password': 'root'
|
||||
},
|
||||
|
||||
'vxworks': {
|
||||
'user': 'root',
|
||||
'password': 'root'
|
||||
},
|
||||
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def set_user(cls, image_name, username):
|
||||
cls.CREDS[image_name]['user'] = username
|
||||
|
||||
@classmethod
|
||||
def set_password(cls, image_name, password):
|
||||
cls.CREDS[image_name]['password'] = password
|
||||
|
||||
|
||||
class TestFileServer:
|
||||
# Place holder for shared file server in future.
|
||||
SERVER = 'server_name_or_ip_that_can_ssh_to'
|
||||
USER = 'username'
|
||||
PASSWORD = 'password'
|
||||
HOME = 'my_home'
|
||||
HOSTNAME = 'hostname'
|
||||
PROMPT = r'[\[]?.*@.*\$[ ]?'
|
||||
|
||||
|
||||
class CliAuth:
|
||||
|
||||
__var_dict = {
|
||||
'OS_AUTH_URL': 'http://192.168.204.2:5000/v3',
|
||||
'OS_ENDPOINT_TYPE': 'internalURL',
|
||||
'CINDER_ENDPOINT_TYPE': 'internalURL',
|
||||
'OS_USER_DOMAIN_NAME': 'Default',
|
||||
'OS_PROJECT_DOMAIN_NAME': 'Default',
|
||||
'OS_IDENTITY_API_VERSION': '3',
|
||||
'OS_REGION_NAME': 'RegionOne',
|
||||
'OS_INTERFACE': 'internal',
|
||||
'HTTPS': False,
|
||||
'OS_KEYSTONE_REGION_NAME': None,
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def set_vars(cls, **kwargs):
|
||||
|
||||
for key in kwargs:
|
||||
cls.__var_dict[key.upper()] = kwargs[key]
|
||||
|
||||
@classmethod
|
||||
def get_var(cls, var_name):
|
||||
var_name = var_name.upper()
|
||||
valid_vars = cls.__var_dict.keys()
|
||||
if var_name not in valid_vars:
|
||||
raise ValueError("Invalid var_name. Valid vars: {}".
|
||||
format(valid_vars))
|
||||
|
||||
return cls.__var_dict[var_name]
|
|
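The `Tenant.get` docstring notes that the returned dict is mutable, and that changes to it propagate back into the class-level tenant map. A minimal, self-contained sketch of that behavior (the `MiniTenant` class here is an illustrative stand-in, not the framework's class):

```python
class MiniTenant:
    # Simplified stand-in for consts.auth.Tenant's private tenant map
    _tenants = {'TENANT1': {'user': 'tenant1', 'tenant': 'tenant1'}}

    @classmethod
    def get(cls, name):
        # Same normalization as Tenant.get: upper-case, dashes to underscores
        return cls._tenants[name.upper().replace('-', '_')]


auth = MiniTenant.get('tenant1')
auth['region'] = 'subcloud-2'
# The class-level map sees the change, since get() returns a live reference
print(MiniTenant._tenants['TENANT1']['region'])  # subcloud-2
```

This shared-reference design is why `Tenant.get` pops the `region` key when no `dc_region` is given: a previous caller's region update would otherwise leak into later lookups.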
@@ -0,0 +1,192 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


class VCPUSchedulerErr:
    CANNOT_SET_VCPU0 = "vcpu 0 cannot be specified"
    VCPU_VAL_OUT_OF_RANGE = "vcpu value out of range"
    INVALID_PRIORITY = "priority must be between 1-99"
    PRIORITY_NOT_INTEGER = "priority must be an integer"
    INVALID_FORMAT = "invalid format"
    UNSUPPORTED_POLICY = "not a supported policy"
    POLICY_MUST_SPECIFIED_LAST = "policy/priority for all vcpus must be " \
                                 "specified last"
    MISSING_PARAMETER = "missing required parameter"
    TOO_MANY_PARAMETERS = "too many parameters"
    VCPU_MULTIPLE_ASSIGNMENT = "specified multiple times, specification is " \
                               "ambiguous"
    CPU_MODEL_UNAVAIL = "No valid host was found.*Host VCPU model.*required.*"
    CPU_MODEL_CONFLICT = "Image vCPU model is not permitted to override " \
                         "configuration set against the flavor"


class NumaErr:
    GENERAL_ERR_PIKE = 'Requested instance NUMA topology cannot fit the ' \
                       'given host NUMA topology'
    # NUMA_AFFINITY_MISMATCH = " not match requested NUMA: {}"
    NUMA_VSWITCH_MISMATCH = 'vswitch not configured.* does not match ' \
                            'requested NUMA'
    NUMA_NODE_EXCLUDED = "NUMA: {} excluded"
    # UNINITIALIZED = '(NUMATopologyFilter) Uninitialized'
    TWO_NUMA_ONE_VSWITCH = 'vswitch not configured'
    FLV_UNDEVISIBLE = 'ERROR (Conflict): flavor vcpus not evenly divisible ' \
                      'by the specified hw:numa_nodes value'
    FLV_CPU_OR_MEM_UNSPECIFIED = 'ERROR (Conflict): CPU and memory ' \
                                 'allocation must be provided for all ' \
                                 'NUMA nodes'
    INSUFFICIENT_CORES = 'Not enough free cores to schedule the instance'


class MinCPUErr:
    VAL_LARGER_THAN_VCPUS = "min_vcpus must be less than or equal to " \
                            "the flavor vcpus value"
    VAL_LESS_THAN_1 = "min_vcpus must be greater than or equal to 1"
    CPU_POLICY_NOT_DEDICATED = "min_vcpus is only valid when hw:cpu_policy " \
                               "is dedicated"


class ScaleErr:
    SCALE_LIMIT_HIT = "When scaling, cannot scale beyond limits"


class CpuAssignment:
    VSWITCH_TOO_MANY_CORES = "The vswitch function can only be assigned up to" \
                             " 8 core"
    TOTAL_TOO_MANY_CORES = "More total logical cores requested than present " \
                           "on 'Processor {}'"
    NO_VM_CORE = "There must be at least one unused core for VMs."
    VSWITCH_INSUFFICIENT_CORES = "The vswitch function must have at least {} " \
                                 "core(s)"


class CPUThreadErr:
    INVALID_POLICY = "invalid hw:cpu_thread_policy '{}', must be one of " \
                     "prefer, isolate, require"
    DEDICATED_CPU_REQUIRED_FLAVOR = 'ERROR (Conflict): hw:cpu_thread_policy ' \
                                    'is only valid when hw:cpu_policy is ' \
                                    'dedicated. Either unset ' \
                                    'hw:cpu_thread_policy or set ' \
                                    'hw:cpu_policy to dedicated.'
    DEDICATED_CPU_REQUIRED_BOOT_VM = 'ERROR (BadRequest): Cannot set cpu ' \
                                     'thread pinning policy in a non ' \
                                     'dedicated ' \
                                     'cpu pinning policy'
    VCPU_NUM_UNDIVISIBLE = "(NUMATopologyFilter) Cannot use 'require' cpu " \
                           "threads policy as requested #VCPUs: {}, " \
                           "is not divisible by number of threads: 2"
    INSUFFICIENT_CORES_FOR_ISOLATE = "{}: (NUMATopologyFilter) Cannot use " \
                                     "isolate cpu thread policy as requested " \
                                     "VCPUS: {} is greater than available " \
                                     "CPUs with all siblings free"
    HT_HOST_UNAVAIL = "(NUMATopologyFilter) Host not useable. Requested " \
                      "threads policy: '{}'; from flavor or image " \
                      "is not allowed on non-hyperthreaded host"
    UNSET_SHARED_VCPU = "Cannot set hw:cpu_thread_policy to {} if " \
                        "hw:wrs:shared_vcpu is set. Either unset " \
                        "hw:cpu_thread_policy, set it to prefer, or unset " \
                        "hw:wrs:shared_vcpu"
    UNSET_MIN_VCPUS = "Cannot set hw:cpu_thread_policy to {} if " \
                      "hw:wrs:min_vcpus is set. Either unset " \
                      "hw:cpu_thread_policy, set it to another policy, " \
                      "or unset hw:wrs:min_vcpus"
    CONFLICT_FLV_IMG = "Image property 'hw_cpu_thread_policy' is not " \
                       "permitted to override CPU thread pinning policy " \
                       "set against the flavor"


class CPUPolicyErr:
    CONFLICT_FLV_IMG = "Image property 'hw_cpu_policy' is not permitted to " \
                       "override CPU pinning policy set against " \
                       "the flavor "


class SharedCPUErr:
    DEDICATED_CPU_REQUIRED = "hw:wrs:shared_vcpu is only valid when " \
                             "hw:cpu_policy is dedicated"
    INVALID_VCPU_ID = "hw:wrs:shared_vcpu must be greater than or equal to 0"
    MORE_THAN_FLAVOR = "hw:wrs:shared_vcpu value ({}) must be less than " \
                       "flavor vcpus ({})"


class ResizeVMErr:
    RESIZE_ERR = "Error resizing server"
    SHARED_NOT_ENABLED = 'Shared vCPU not enabled .*, required by instance ' \
                         'cell {}'


class ColdMigErr:
    HT_HOST_REQUIRED = "(NUMATopologyFilter) Host not useable. Requested " \
                       "threads policy: '[{}, {}]'; from flavor or " \
                       "image is not allowed on non-hyperthreaded host"


class LiveMigErr:
    BLOCK_MIG_UNSUPPORTED = "is not on local storage: Block migration can " \
                            "not be used with shared storage"
    GENERAL_NO_HOST = "No valid host was found. There are not enough hosts " \
                      "available."
    BLOCK_MIG_UNSUPPORTED_LVM = 'Block live migration is not supported for ' \
                                'hosts with LVM backed storage'
    LVM_PRECHECK_ERROR = 'Live migration can not be used with LVM backed ' \
                         'storage except a booted from volume VM ' \
                         'which does not have a local disk'


class NetworkingErr:
    INVALID_VXLAN_VNI_RANGE = "exceeds 16777215"
    INVALID_MULTICAST_IP_ADDRESS = "is not a valid multicast IP address."
    INVALID_VXLAN_PROVISION_PORTS = "Invalid input for port"
    VXLAN_TTL_RANGE_MISSING = "VXLAN time-to-live attribute missing"
    VXLAN_TTL_RANGE_TOO_LARGE = "is too large - must be no larger than '255'."
    VXLAN_TTL_RANGE_TOO_SMALL = "is too small - must be at least '1'."
    OVERLAP_SEGMENTATION_RANGE = "segmentation id range overlaps with"
    INVALID_MTU_VALUE = "requires an interface MTU value of at least"
    VXLAN_MISSING_IP_ON_INTERFACE = "requires an IP address"
    WRONG_IF_ADDR_MODE = "interface address mode must be 'static'"
    SET_IF_ADDR_MODE_WHEN_IP_EXIST = "addresses still exist on interfac"
    NULL_IP_ADDR = "Address must not be null"
    NULL_NETWORK_ADDR = "Network must not be null"
    NULL_GATEWAY_ADDR = "Gateway address must not be null"
    NULL_HOST_PARTION_ADDR = "Host bits must not be zero"
    NOT_UNICAST_ADDR = "Address must be a unicast address"
    NOT_BROADCAST_ADDR = "Address cannot be the network broadcast address"
    DUPLICATE_IP_ADDR = "already exists"
    INVALID_IP_OR_PREFIX = "Invalid IP address and prefix"
    INVALID_IP_NETWORK = "Invalid IP network"
    ROUTE_GATEWAY_UNREACHABLE = "not reachable"
    IP_VERSION_NOT_MATCH = "Network and gateway IP versions must match"
    GATEWAY_IP_IN_SUBNET = "Gateway address must not be within destination " \
                           "subnet"
    NETWORK_IP_EQUAL_TO_GATEWAY = "Network and gateway IP addresses must be " \
                                  "different"


class PciAddrErr:
    NONE_ZERO_DOMAIN = 'Only domain 0000 is supported'
    LARGER_THAN_MAX_BUS = 'PCI bus maximum value is 8'
    NONE_ZERO_FUNCTION = 'Only function 0 is supported'
    RESERVED_SLOTS_BUS0 = 'Slots 0,1 are reserved for PCI bus 0'
    RESERVED_SLOT_ANY_BUS = 'Slots 0 is reserved for any PCI bus'
    LARGER_THAN_MAX_SLOT = 'PCI slot maximum value is 31'
    BAD_FORMAT = 'Bad PCI address format'
    WRONG_BUS_VAL = 'Wrong bus value for PCI address'


class SrvGrpErr:
    EXCEEDS_GRP_SIZE = 'Action would result in server group {} exceeding the ' \
                       'group size of {}'
    HOST_UNAVAIL_ANTI_AFFINITY = '(ServerGroupAntiAffinityFilter) ' \
                                 'Anti-affinity server group specified, ' \
                                 'but this host is already used by that group'


class CpuRtErr:
    RT_AND_ORD_REQUIRED = 'Realtime policy needs vCPU.* mask configured with ' \
                          'at least 1 RT vCPU and 1 ordinary vCPU'
    DED_CPU_POL_REQUIRED = 'Cannot set realtime policy in a non dedicated cpu' \
                           ' pinning policy'
    RT_MASK_SHARED_VCPU_CONFLICT = 'hw:wrs:shared_vcpu .* is not a subset of ' \
                                   'non-realtime vCPUs'
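Several of the message constants above embed regex fragments (e.g. the `.*` runs in `CPU_MODEL_UNAVAIL`), so test code would match them against CLI output with `re.search` rather than exact string comparison. A small illustration; the fault text below is invented for the example:

```python
import re

# Constant copied from VCPUSchedulerErr above; note the embedded '.*'
CPU_MODEL_UNAVAIL = "No valid host was found.*Host VCPU model.*required.*"

# Hypothetical nova fault message, for illustration only
fault = ("No valid host was found. There are not enough hosts available. "
         "Host VCPU model Skylake required by the instance.")

match = re.search(CPU_MODEL_UNAVAIL, fault)
print(bool(match))  # True
```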
@@ -0,0 +1,55 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


class StxPath:
    TIS_UBUNTU_PATH = '~/userdata/ubuntu_if_config.sh'
    TIS_CENTOS_PATH = '~/userdata/centos_if_config.sh'
    USERDATA = '~/userdata/'
    IMAGES = '~/images/'
    HEAT = '~/heat/'
    BACKUPS = '/opt/backups'
    CUSTOM_HEAT_TEMPLATES = '~/custom_heat_templates/'
    HELM_CHARTS_DIR = '/www/pages/helm_charts/'
    DOCKER_CONF = '/etc/docker-distribution/registry/config.yml'
    DOCKER_REPO = '/var/lib/docker-distribution/docker/registry/v2/repositories'


class VMPath:
    VM_IF_PATH_UBUNTU = '/etc/network/interfaces.d/'
    ETH_PATH_UBUNTU = '/etc/network/interfaces.d/{}.cfg'
    # Below two paths are common for CentOS, OpenSUSE, and RHEL
    VM_IF_PATH_CENTOS = '/etc/sysconfig/network-scripts/'
    ETH_PATH_CENTOS = '/etc/sysconfig/network-scripts/ifcfg-{}'

    # CentOS paths for ipv4:
    RT_TABLES = '/etc/iproute2/rt_tables'
    ETH_RT_SCRIPT = '/etc/sysconfig/network-scripts/route-{}'
    ETH_RULE_SCRIPT = '/etc/sysconfig/network-scripts/rule-{}'
    ETH_ARP_ANNOUNCE = '/proc/sys/net/ipv4/conf/{}/arp_announce'
    ETH_ARP_FILTER = '/proc/sys/net/ipv4/conf/{}/arp_filter'


class UserData:
    ADDUSER_TO_GUEST = 'cloud_config_adduser.txt'
    DPDK_USER_DATA = 'dpdk_user_data.txt'


class TestServerPath:
    USER_DATA = '/home/svc-cgcsauto/userdata/'
    TEST_SCRIPT = '/home/svc-cgcsauto/test_scripts/'
    CUSTOM_HEAT_TEMPLATES = '/sandbox/custom_heat_templates/'
    CUSTOM_APPS = '/sandbox/custom_apps/'


class PrivKeyPath:
    OPT_PLATFORM = '/opt/platform/id_rsa'
    SYS_HOME = '~/.ssh/id_rsa'


class SysLogPath:
    DC_MANAGER = '/var/log/dcmanager/dcmanager.log'
    DC_ORCH = '/var/log/dcorch/dcorch.log'
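Path constants with `{}` placeholders, such as `VMPath.ETH_PATH_CENTOS`, are templates filled in with `str.format` at use time. For example (interface names below are illustrative):

```python
# Constants copied from VMPath above
ETH_PATH_CENTOS = '/etc/sysconfig/network-scripts/ifcfg-{}'
ETH_ARP_FILTER = '/proc/sys/net/ipv4/conf/{}/arp_filter'

print(ETH_PATH_CENTOS.format('eth0'))
# /etc/sysconfig/network-scripts/ifcfg-eth0
print(ETH_ARP_FILTER.format('eth1'))
# /proc/sys/net/ipv4/conf/eth1/arp_filter
```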
@@ -0,0 +1,8 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


test_result = False
@@ -0,0 +1,162 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


class Labs:
    # Place for existing stx systems for convenience.
    # --lab <short_name> can be used in cmdline to specify an existing system

    EXAMPLE = {
        'short_name': 'my_server',
        'name': 'my_server.com',
        'floating ip': '10.10.10.2',
        'controller-0 ip': '10.10.10.3',
        'controller-1 ip': '10.10.10.4',
    }


def update_lab(lab_dict_name=None, lab_name=None, floating_ip=None, **kwargs):
    """
    Update/Add lab dict params for specified lab
    Args:
        lab_dict_name (str|None):
        lab_name (str|None): lab short_name. This is used only if
            lab_dict_name is not specified
        floating_ip (str|None):
        **kwargs: Some possible keys: subcloud-1, name, etc

    Returns (dict): updated lab dict

    """

    if not lab_name and not lab_dict_name:
        from consts.proj_vars import ProjVar
        lab_name = ProjVar.get_var('LAB').get('short_name', None)
        if not lab_name:
            raise ValueError("lab_dict_name or lab_name needs to be specified")

    if floating_ip:
        kwargs.update(**{'floating ip': floating_ip})

    if not kwargs:
        raise ValueError("Please specify floating_ip and/or kwargs")

    if not lab_dict_name:
        attr_names = [attr for attr in dir(Labs) if not attr.startswith('__')]
        lab_names = [getattr(Labs, attr).get('short_name') for attr in
                     attr_names]
        lab_index = lab_names.index(lab_name.lower().strip())
        lab_dict_name = attr_names[lab_index]
    else:
        lab_dict_name = lab_dict_name.upper().replace('-', '_')

    lab_dict = getattr(Labs, lab_dict_name)
    lab_dict.update(kwargs)
    return lab_dict


def get_lab_dict(lab, key='short_name'):
    """

    Args:
        lab: lab name or fip
        key: unique identifier to locate a lab. Valid values: short_name,
            name, floating ip

    Returns (dict|None): lab dict or None if no matching lab found
    """
    __lab_attr_list = [attr for attr in dir(Labs) if not attr.startswith('__')]
    __lab_list = [getattr(Labs, attr) for attr in __lab_attr_list]
    __lab_list = [lab for lab in __lab_list if isinstance(lab, dict)]

    lab_info = None
    for lab_ in __lab_list:
        if lab.lower().replace('-', '_') == lab_.get(key).lower().replace(
                '-', '_'):
            lab_info = lab_
            break

    return lab_info


def add_lab_entry(floating_ip, dict_name=None, short_name=None, name=None,
                  **kwargs):
    """
    Add a new lab dictionary to Labs class
    Args:
        floating_ip (str): floating ip of a lab to be added
        dict_name: name of the entry, such as 'PV0'
        short_name: short name of the TiS system, such as ip_1_4
        name: name of the STX system, such as 'yow-cgcs-pv-0'
        **kwargs: other information of the lab such as controllers' ips, etc

    Returns:
        dict: lab dict added to Labs class

    """
    for attr in dir(Labs):
        lab = getattr(Labs, attr)
        if isinstance(lab, dict):
            if lab['floating ip'] == floating_ip:
                raise ValueError(
                    "Entry for {} already exists in Labs class!".format(
                        floating_ip))

    if dict_name and dict_name in dir(Labs):
        raise ValueError(
            "Entry for {} already exists in Labs class!".format(dict_name))

    if not short_name:
        short_name = floating_ip

    if not name:
        name = floating_ip

    if not dict_name:
        dict_name = floating_ip

    lab_dict = {'name': name,
                'short_name': short_name,
                'floating ip': floating_ip,
                }

    lab_dict.update(kwargs)
    setattr(Labs, dict_name, lab_dict)
    return lab_dict


class NatBoxes:
    # Place for existing NatBox that are already configured
    NAT_BOX_HW_EXAMPLE = {
        'name': 'nat_hw',
        'ip': '10.10.10.10',
        'user': 'natbox_user',
        'password': 'natbox_password'
    }

    # Following example is for when localhost is configured as the natbox,
    # and test cases are also run from the same localhost
    NAT_BOX_VBOX_EXAMPLE = {
        'name': 'localhost',
        'ip': 'localhost',
        'user': None,
        'password': None,
    }

    @staticmethod
    def add_natbox(ip, user=None, password=None, prompt=None):
        user = user if user else 'svc-cgcsauto'
        password = password if password else ')OKM0okm'

        nat_dict = {'ip': ip,
                    'name': ip,
                    'user': user,
                    'password': password,
                    }
        if prompt:
            nat_dict['prompt'] = prompt
        setattr(NatBoxes, 'NAT_NEW', nat_dict)
        return nat_dict
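`get_lab_dict` locates a lab by normalizing both the query and the stored value (lower-cased, dashes folded to underscores) before comparing, so `--lab MY-SERVER` matches a `short_name` of `my_server`. The same matching rule in isolation (the lab data and helper names here are illustrative):

```python
def normalize(val):
    # Matching rule used by get_lab_dict: case- and dash-insensitive
    return val.lower().replace('-', '_')


# Illustrative lab entries, shaped like Labs.EXAMPLE above
labs = [{'short_name': 'my_server', 'floating ip': '10.10.10.2'}]


def find_lab(query, key='short_name'):
    for lab in labs:
        if normalize(query) == normalize(lab.get(key)):
            return lab
    return None


print(find_lab('MY-SERVER'))  # {'short_name': 'my_server', 'floating ip': '10.10.10.2'}
```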
@@ -0,0 +1,87 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


# Please DO NOT import any modules


class ProjVar:
    __var_dict = {'BUILD_PATH': None,
                  'LOG_DIR': None,
                  'SOURCE_OPENRC': False,
                  'SW_VERSION': [],
                  'PATCH': None,
                  'SESSION_ID': None,
                  'CGCS_DB': True,
                  'IS_SIMPLEX': False,
                  'KEYSTONE_DEBUG': False,
                  'TEST_NAME': None,
                  'PING_FAILURE': False,
                  'LAB': None,
                  'ALWAYS_COLLECT': False,
                  'REGION': 'RegionOne',
                  'COLLECT_TELNET': False,
                  'TELNET_THREADS': None,
                  'SYS_TYPE': None,
                  'COLLECT_SYS_NET_INFO': False,
                  'IS_VBOX': False,
                  'RELEASE': 'R6',
                  'REMOTE_CLI': False,
                  'USER_FILE_DIR': '~/',
                  'NO_TEARDOWN': False,
                  'VSWITCH_TYPE': None,
                  'IS_DC': False,
                  'PRIMARY_SUBCLOUD': None,
                  'BUILD_INFO': {},
                  'TEMP_DIR': '',
                  'INSTANCE_BACKING': {},
                  'OPENSTACK_DEPLOYED': None,
                  'DEFAULT_INSTANCE_BACKING': None,
                  'STX_KEYFILE_PATH': '~/.ssh/id_rsa'
                  }

    @classmethod
    def init_vars(cls, lab, natbox, logdir, tenant, collect_all, always_collect,
                  horizon_visible):

        labname = lab['short_name']

        cls.__var_dict.update(**{
            'NATBOX_KEYFILE_PATH': '~/priv_keys/keyfile_{}.pem'.format(labname),
            'STX_KEYFILE_SYS_HOME': '~/keyfile_{}.pem'.format(labname),
            'LOG_DIR': logdir,
            'TCLIST_PATH': logdir + '/test_results.log',
            'PYTESTLOG_PATH': logdir + '/pytestlog.log',
            'LAB_NAME': lab['short_name'],
            'TEMP_DIR': logdir + '/tmp_files/',
            'PING_FAILURE_DIR': logdir + '/ping_failures/',
            'GUEST_LOGS_DIR': logdir + '/guest_logs/',
            'PRIMARY_TENANT': tenant,
            'LAB': lab,
            'NATBOX': natbox,
            'COLLECT_ALL': collect_all,
            'ALWAYS_COLLECT': always_collect,
            'HORIZON_VISIBLE': horizon_visible
        })

    @classmethod
    def set_var(cls, append=False, **kwargs):
        for key, val in kwargs.items():
            if append:
                cls.__var_dict[key.upper()].append(val)
            else:
                cls.__var_dict[key.upper()] = val

    @classmethod
    def get_var(cls, var_name):
        var_name = var_name.upper()
        valid_vars = cls.__var_dict.keys()
        if var_name not in valid_vars:
            raise ValueError(
                "Invalid var_name: {}. Valid vars: {}".format(var_name,
                                                              valid_vars))

        return cls.__var_dict[var_name]
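`ProjVar.set_var(append=True, ...)` appends to a list-valued var such as `SW_VERSION` instead of replacing it, while keys are always upper-cased on both set and get. A stripped-down sketch of that behavior (`MiniProjVar` is an illustrative stand-in):

```python
class MiniProjVar:
    # Simplified stand-in for consts.proj_vars.ProjVar
    _vars = {'SW_VERSION': [], 'RELEASE': 'R6'}

    @classmethod
    def set_var(cls, append=False, **kwargs):
        for key, val in kwargs.items():
            if append:
                # Var must already be list-valued, e.g. SW_VERSION
                cls._vars[key.upper()].append(val)
            else:
                cls._vars[key.upper()] = val


MiniProjVar.set_var(append=True, sw_version='19.01')
MiniProjVar.set_var(release='R7')
print(MiniProjVar._vars['SW_VERSION'])  # ['19.01']
print(MiniProjVar._vars['RELEASE'])  # R7
```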
@@ -0,0 +1,41 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


class SkipStorageSpace:
    SMALL_CINDER_VOLUMES_POOL = "Cinder Volumes Pool is less than 30G"
    INSUFFICIENT_IMG_CONV = 'Insufficient image-conversion space to convert ' \
                            '{} image to raw format'


class SkipStorageBacking:
    LESS_THAN_TWO_HOSTS_WITH_BACKING = "Less than two hosts with {} instance " \
                                       "storage backing exist on system"
    NO_HOST_WITH_BACKING = "No host with {} instance storage backing exists " \
                           "on system"


class SkipHypervisor:
    LESS_THAN_TWO_HYPERVISORS = "Less than two hypervisors available"


class SkipHyperthreading:
    LESS_THAN_TWO_HT_HOSTS = "Less than two hyperthreaded hosts available"
    MORE_THAN_ONE_HT_HOSTS = "More than one hyperthreaded hosts available"


class SkipHostIf:
    PCI_IF_UNAVAIL = "SRIOV or PCI-passthrough interface unavailable"
    PCIPT_IF_UNAVAIL = "PCI-passthrough interface unavailable"
    SRIOV_IF_UNAVAIL = "SRIOV interface unavailable"
    MGMT_INFRA_UNAVAIL = 'traffic control class is not defined in this lab'


class SkipSysType:
    SMALL_FOOTPRINT = "Skip for small footprint lab"
    LESS_THAN_TWO_CONTROLLERS = "Less than two controllers on system"
    SIMPLEX_SYSTEM = 'Not applicable to Simplex system'
    SIMPLEX_ONLY = 'Only applicable to Simplex system'
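These skip-reason constants are plain format strings: a fixture or test fills in the placeholder and passes the result to `pytest.skip`. A sketch of the formatting step only, with a hypothetical helper (the real checks live in the test fixtures):

```python
# Constant copied from SkipStorageBacking above
NO_HOST_WITH_BACKING = ("No host with {} instance storage backing exists "
                        "on system")


def skip_reason_for_backing(hosts_with_backing, backing):
    # Hypothetical helper: return a skip reason string, or None to run the test
    if not hosts_with_backing:
        return NO_HOST_WITH_BACKING.format(backing)
    return None


print(skip_reason_for_backing([], 'remote'))
# No host with remote instance storage backing exists on system
print(skip_reason_for_backing(['compute-0'], 'remote'))  # None
```

In a fixture, a non-None reason would be passed straight to `pytest.skip(reason)`.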
@@ -0,0 +1,681 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from consts.proj_vars import ProjVar

# output of date. such as: Tue Mar 1 18:20:29 UTC 2016
DATE_OUTPUT = r'[0-2]\d:[0-5]\d:[0-5]\d\s[A-Z]{3,}\s\d{4}$'

EXT_IP = '8.8.8.8'

# such as in string '5 packets transmitted, 0 received, 100% packet loss,
# time 4031ms', number 100 will be found
PING_LOSS_RATE = r'\, (\d{1,3})\% packet loss\,'

# vshell ping loss rate pattern. 3 packets transmitted, 0 received, 0 total,
# 100.00%% loss
VSHELL_PING_LOSS_RATE = r'\, (\d{1,3}).\d{1,2}[%]% loss'

# Matches 8-4-4-4-12 hexadecimal digits. Lower case only
UUID = r'[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'

# Match name and uuid.
# Such as: 'ubuntu_14 (a764c205-eb82-4f18-bda6-6c8434223eb5)'
NAME_UUID = r'(.*) \((' + UUID + r')\)'

# Message to indicate boot from volume from nova show
BOOT_FROM_VOLUME = 'Attempt to boot from volume - no image supplied'

METADATA_SERVER = '169.254.169.254'

# Heat template path
HEAT_PATH = 'heat/hot/simple/'
HEAT_SCENARIO_PATH = 'heat/hot/scenarios/'
HEAT_FLAVORS = ['small_ded', 'small_float']
HEAT_CUSTOM_TEMPLATES = 'custom_heat_templates'

# special NIC patterns
MELLANOX_DEVICE = 'MT27500|MT27710'
MELLANOX4 = 'MT.*ConnectX-4'

PLATFORM_AFFINE_INCOMPLETE = '/etc/platform/.task_affining_incomplete'
PLATFORM_CONF_PATH = '/etc/platform/platform.conf'

SUBCLOUD_PATTERN = 'subcloud'

PLATFORM_NET_TYPES = ('mgmt', 'oam', 'infra', 'pxeboot')

TIMEZONES = [
    "Asia/Hong_Kong",  # UTC+8
    "America/Los_Angeles",  # UTC-8, DST:UTC-7
    "Canada/Eastern",  # UTC-5, DST:UTC-4
    "Canada/Central",  # UTC-6, DST:UTC-5
    # "Europe/London",  # UTC, DST:UTC+1
    "Europe/Berlin",  # UTC+1, DST:UTC+2
    "UTC"
]

STORAGE_AGGREGATE = {
    # 'local_lvm': 'local_storage_lvm_hosts',
    'local_image': 'local_storage_image_hosts',
    'remote': 'remote_storage_hosts',
}


class NtpPool:
    NTP_POOL_1 = '2.pool.ntp.org,1.pool.ntp.org,0.pool.ntp.org'
    NTP_POOL_2 = '1.pool.ntp.org,2.pool.ntp.org,2.pool.ntp.org'
    NTP_POOL_3 = '3.ca.pool.ntp.org,2.ca.pool.ntp.org,1.ca.pool.ntp.org'
    NTP_POOL_TOO_LONG = '3.ca.pool.ntp.org,2.ca.pool.ntp.org,' \
                        '1.ca.pool.ntp.org,1.com,2.com,3.com'
    NTP_NAME_TOO_LONG = 'garbage_' * 30


class GuestImages:
    TMP_IMG_DIR = '/opt/backups'
    DEFAULT = {
        'image_dir': '{}/images'.format(ProjVar.get_var('USER_FILE_DIR')),
        'image_dir_file_server': '/sandbox/images',
        'guest': 'tis-centos-guest'
    }
    TIS_GUEST_PATTERN = 'cgcs-guest|tis-centos-guest'
    GUESTS_NO_RM = ['ubuntu_14', 'tis-centos-guest', 'cgcs-guest']
    # Image files name and size from TestFileServer
    # <glance_image_name>: <source_file_name>, <root disk size>,
    # <dest_file_name>, <disk_format>, <container_format>
    IMAGE_FILES = {
        'ubuntu_14': (
            'ubuntu-14.04-server-cloudimg-amd64-disk1.img', 3,
            'ubuntu_14.qcow2', 'qcow2', 'bare'),
        'ubuntu_12': (
            'ubuntu-12.04-server-cloudimg-amd64-disk1.img', 8,
            'ubuntu_12.qcow2', 'qcow2', 'bare'),
        'ubuntu_16': (
            'ubuntu-16.04-xenial-server-cloudimg-amd64-disk1.img', 8,
            'ubuntu_16.qcow2', 'qcow2', 'bare'),
        'centos_6': (
            'CentOS-6.8-x86_64-GenericCloud-1608.qcow2', 8,
            'centos_6.qcow2', 'qcow2', 'bare'),
        'centos_7': (
            'CentOS-7-x86_64-GenericCloud.qcow2', 8,
            'centos_7.qcow2', 'qcow2', 'bare'),
        'rhel_6': (
            'rhel-6.5-x86_64.qcow2', 11, 'rhel_6.qcow2', 'qcow2', 'bare'),
        'rhel_7': (
            'rhel-7.2-x86_64.qcow2', 11, 'rhel_7.qcow2', 'qcow2', 'bare'),
        'opensuse_11': (
            'openSUSE-11.3-x86_64.qcow2', 11,
            'opensuse_11.qcow2', 'qcow2', 'bare'),
        'opensuse_12': (
            'openSUSE-12.3-x86_64.qcow2', 21,
            'opensuse_12.qcow2', 'qcow2', 'bare'),
        'opensuse_13': (
            'openSUSE-13.2-OpenStack-Guest.x86_64-0.0.10-Build2.94.qcow2', 16,
            'opensuse_13.qcow2', 'qcow2', 'bare'),
        'win_2012': (
            'win2012r2_cygwin_compressed.qcow2', 13,
            'win2012r2.qcow2', 'qcow2', 'bare'),
        'win_2016': (
            'win2016_cygwin_compressed.qcow2', 29,
            'win2016.qcow2', 'qcow2', 'bare'),
        'ge_edge': (
            'edgeOS.hddirect.qcow2', 5,
            'ge_edge.qcow2', 'qcow2', 'bare'),
        'cgcs-guest': (
            'cgcs-guest.img', 1, 'cgcs-guest.img', 'raw', 'bare'),
        'vxworks': (
            'vxworks-tis.img', 1, 'vxworks.img', 'raw', 'bare'),
        'tis-centos-guest': (
            None, 2, 'tis-centos-guest.img', 'raw', 'bare'),
        'tis-centos-guest-rt': (
            None, 2, 'tis-centos-guest-rt.img', 'raw', 'bare'),
        'tis-centos-guest-qcow2': (
            None, 2, 'tis-centos-guest.qcow2', 'qcow2', 'bare'),
        'centos_gpu': (
            'centos-67-cloud-gpu.img', 8,
            'centos_6_gpu.qcow2', 'qcow2', 'bare'),
|
||||
'debian-8-m-agent': (
|
||||
'debian-8-m-agent.qcow2', 1.8,
|
||||
'debian-8-m-agent.qcow2', 'qcow2', 'bare'),
|
||||
'trusty_uefi': (
|
||||
'trusty-server-cloudimg-amd64-uefi1.img', 2.2,
|
||||
'trusty-uefi.qcow2', 'qcow2', 'bare'),
|
||||
'uefi_shell': (
|
||||
'uefi_shell.iso', 2, 'uefi_shell.iso', 'raw', 'bare'),
|
||||
}
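Each `IMAGE_FILES` entry above is a 5-tuple in the documented field order. A minimal sketch (using an assumed one-entry subset of the mapping) of how a keyword would unpack an entry into named fields:

```python
# Assumed subset of the IMAGE_FILES mapping above, for illustration only.
IMAGE_FILES = {
    'ubuntu_14': (
        'ubuntu-14.04-server-cloudimg-amd64-disk1.img', 3,
        'ubuntu_14.qcow2', 'qcow2', 'bare'),
}


def image_file_info(glance_image_name):
    """Unpack a 5-tuple entry into a dict keyed by the documented fields."""
    src, root_gb, dest, disk_fmt, container_fmt = \
        IMAGE_FILES[glance_image_name]
    return {
        'source_file_name': src,
        'root_disk_size': root_gb,
        'dest_file_name': dest,
        'disk_format': disk_fmt,
        'container_format': container_fmt,
    }
```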
|
||||
|
||||
|
||||
class Networks:
|
||||
INFRA_NETWORK_CIDR = "192.168.205.0/24"
|
||||
IPV4_IP = r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}'
|
||||
|
||||
__NEUTRON_NET_NAME_PATTERN = {
|
||||
'mgmt': r'tenant\d-mgmt-net',
|
||||
'data': r'tenant\d-net',
|
||||
'internal': 'internal',
|
||||
'external': 'external',
|
||||
}
|
||||
__NEUTRON_NET_IP_PATTERN = {
|
||||
'data': r'172.\d{1,3}.\d{1,3}.\d{1,3}',
|
||||
'mgmt': r'192.168.\d{3}\.\d{1,3}|192.168.[89]\d\.\d{1,3}',
|
||||
'internal': r'10.\d{1,3}.\d{1,3}.\d{1,3}',
|
||||
'external': r'192.168.\d\.\d{1,3}|192.168.[1-5]\d\.\d{1,3}'
            r'|10.10.\d{1,3}\.\d{1,3}'
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def get_nenutron_net_patterns(cls, net_type='mgmt'):
|
||||
return cls.__NEUTRON_NET_NAME_PATTERN.get(
|
||||
net_type), cls.__NEUTRON_NET_IP_PATTERN.get(net_type)
|
||||
|
||||
@classmethod
|
||||
def set_neutron_net_patterns(cls, net_type, net_name_pattern=None,
|
||||
net_ip_pattern=None):
|
||||
if net_type not in cls.__NEUTRON_NET_NAME_PATTERN:
|
||||
raise ValueError("Unknown net_type {}. Select from: {}".format(
|
||||
net_type, list(cls.__NEUTRON_NET_NAME_PATTERN.keys())))
|
||||
|
||||
if net_name_pattern is not None:
|
||||
cls.__NEUTRON_NET_NAME_PATTERN[net_type] = net_name_pattern
|
||||
if net_ip_pattern is not None:
|
||||
cls.__NEUTRON_NET_IP_PATTERN[net_type] = net_ip_pattern
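A minimal sketch of how keywords typically consume these name/IP pattern pairs, using a simplified copy of the mgmt patterns (the helper below is illustrative, not part of the framework):

```python
import re

# Simplified copies of the tenant mgmt network patterns above.
MGMT_NAME_PATTERN = r'tenant\d-mgmt-net'
MGMT_IP_PATTERN = r'192.168.\d{3}\.\d{1,3}|192.168.[89]\d\.\d{1,3}'


def is_mgmt_net(net_name, ip):
    """Return True if both the net name and IP look like a tenant mgmt net."""
    return bool(re.search(MGMT_NAME_PATTERN, net_name)
                and re.match(MGMT_IP_PATTERN, ip))
```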
|
||||
|
||||
|
||||
class SystemType:
|
||||
CPE = 'All-in-one'
|
||||
STANDARD = 'Standard'
|
||||
|
||||
|
||||
class StorageAggregate:
|
||||
LOCAL_LVM = 'local_storage_lvm_hosts'
|
||||
LOCAL_IMAGE = 'local_storage_image_hosts'
|
||||
REMOTE = 'remote_storage_hosts'
|
||||
|
||||
|
||||
class VMStatus:
|
||||
# under http://docs.openstack.org/developer/nova/vmstates.html
|
||||
ACTIVE = 'ACTIVE'
|
||||
BUILD = 'BUILDING'
|
||||
REBUILD = 'REBUILD'
|
||||
VERIFY_RESIZE = 'VERIFY_RESIZE'
|
||||
RESIZE = 'RESIZED'
|
||||
ERROR = 'ERROR'
|
||||
SUSPENDED = 'SUSPENDED'
|
||||
PAUSED = 'PAUSED'
|
||||
NO_STATE = 'NO STATE'
|
||||
HARD_REBOOT = 'HARD REBOOT'
|
||||
SOFT_REBOOT = 'REBOOT'
|
||||
STOPPED = "SHUTOFF"
|
||||
MIGRATING = 'MIGRATING'
|
||||
|
||||
|
||||
class ImageStatus:
|
||||
QUEUED = 'queued'
|
||||
ACTIVE = 'active'
|
||||
SAVING = 'saving'
|
||||
|
||||
|
||||
class HostAdminState:
|
||||
UNLOCKED = 'unlocked'
|
||||
LOCKED = 'locked'
|
||||
|
||||
|
||||
class HostOperState:
|
||||
ENABLED = 'enabled'
|
||||
DISABLED = 'disabled'
|
||||
|
||||
|
||||
class HostAvailState:
|
||||
DEGRADED = 'degraded'
|
||||
OFFLINE = 'offline'
|
||||
ONLINE = 'online'
|
||||
AVAILABLE = 'available'
|
||||
FAILED = 'failed'
|
||||
POWER_OFF = 'power-off'
|
||||
|
||||
|
||||
class HostTask:
|
||||
BOOTING = 'Booting'
|
||||
REBOOTING = 'Rebooting'
|
||||
POWERING_ON = 'Powering-on'
|
||||
POWER_CYCLE = 'Critical Event Power-Cycle'
|
||||
POWER_DOWN = 'Critical Event Power-Down'
|
||||
|
||||
|
||||
class Prompt:
|
||||
CONTROLLER_0 = r'.*controller\-0[:| ].*\$'
|
||||
CONTROLLER_1 = r'.*controller\-1[:| ].*\$'
|
||||
CONTROLLER_PROMPT = r'.*controller\-[01][:| ].*\$ '
|
||||
|
||||
VXWORKS_PROMPT = '-> '
|
||||
|
||||
ADMIN_PROMPT = r'\[.*@controller\-[01].*\(keystone_admin\)\]\$'
|
||||
TENANT1_PROMPT = r'\[.*@controller\-[01] .*\(keystone_tenant1\)\]\$ '
|
||||
TENANT2_PROMPT = r'\[.*@controller\-[01] .*\(keystone_tenant2\)\]\$ '
|
||||
# general prompt; fill in the tenant name
TENANT_PROMPT = r'\[.*@controller\-[01] .*\(keystone_{}\)\]\$ '
|
||||
REMOTE_CLI_PROMPT = r'\(keystone_{}\)\]\$ ' # remote cli prompt
|
||||
|
||||
COMPUTE_PROMPT = r'.*compute\-([0-9]){1,}\:~\$'
|
||||
STORAGE_PROMPT = r'.*storage\-([0-9]){1,}\:~\$'
|
||||
PASSWORD_PROMPT = r'.*assword\:[ ]?$|assword for .*:[ ]?$'
|
||||
LOGIN_PROMPT = "ogin:"
|
||||
SUDO_PASSWORD_PROMPT = 'Password: '
|
||||
BUILD_SERVER_PROMPT_BASE = r'{}@{}\:~.*'
|
||||
TEST_SERVER_PROMPT_BASE = r'\[{}@.*\]\$ '
|
||||
# TIS_NODE_PROMPT_BASE = r'{}\:~\$ '
|
||||
TIS_NODE_PROMPT_BASE = r'{}[: ]?~.*$'
|
||||
ADD_HOST = r'.*\(yes/no\).*'
|
||||
ROOT_PROMPT = '.*root@.*'
|
||||
Y_N_PROMPT = r'.*\(y/n\)\?.*'
|
||||
YES_N_PROMPT = r'.*\[yes/N\]\: ?'
|
||||
CONFIRM_PROMPT = '.*confirm: ?'
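These prompt regexes are fed to pexpect-style expect loops by the ssh clients. A minimal sketch (using verbatim copies of two patterns above; the classifier itself is hypothetical) of how a line of console output would be classified:

```python
import re

# Copies of two Prompt patterns above.
CONTROLLER_0 = r'.*controller\-0[:| ].*\$'
PASSWORD_PROMPT = r'.*assword\:[ ]?$|assword for .*:[ ]?$'


def classify_line(line):
    """Rough sketch of how an expect loop decides what a line of output is."""
    if re.match(PASSWORD_PROMPT, line):
        return 'password'
    if re.match(CONTROLLER_0, line):
        return 'shell'
    return 'output'
```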
|
||||
|
||||
|
||||
class NovaCLIOutput:
|
||||
VM_ACTION_ACCEPTED = "Request to {} server (.*) has been accepted."
|
||||
VM_START_ACCEPTED = "Request to start server (.*) has been accepted."
|
||||
VM_STOP_ACCEPTED = "Request to stop server (.*) has been accepted."
|
||||
VM_DELETE_REJECTED_NOT_EXIST = "No server with a name or ID of '(.*)' " \
|
||||
"exists."
|
||||
VM_DELETE_ACCEPTED = "Request to delete server (.*) has been accepted."
|
||||
VM_BOOT_REJECT_MEM_PAGE_SIZE_FORBIDDEN = "Page size .* forbidden against .*"
|
||||
SRV_GRP_DEL_REJ_NOT_EXIST = "Delete for server group (.*) failed"
|
||||
SRV_GRP_DEL_SUCC = "Server group (.*) has been successfully deleted."
|
||||
|
||||
|
||||
class FlavorSpec:
|
||||
CPU_POLICY = 'hw:cpu_policy'
|
||||
VCPU_MODEL = 'hw:cpu_model'
|
||||
SHARED_VCPU = 'hw:wrs:shared_vcpu'
|
||||
CPU_THREAD_POLICY = 'hw:cpu_thread_policy'
|
||||
VCPU_SCHEDULER = 'hw:wrs:vcpu:scheduler'
|
||||
MIN_VCPUS = "hw:wrs:min_vcpus"
|
||||
STORAGE_BACKING = 'aggregate_instance_extra_specs:stx_storage'
|
||||
DISK_READ_BYTES = 'quota:disk_read_bytes_sec'
|
||||
DISK_READ_IOPS = 'quota:disk_read_iops_sec'
|
||||
DISK_WRITE_BYTES = 'quota:disk_write_bytes_sec'
|
||||
DISK_WRITE_IOPS = 'quota:disk_write_iops_sec'
|
||||
DISK_TOTAL_BYTES = 'quota:disk_total_bytes_sec'
|
||||
DISK_TOTAL_IOPS = 'quota:disk_total_iops_sec'
|
||||
NUMA_NODES = 'hw:numa_nodes'
|
||||
NUMA_0 = 'hw:numa_node.0'
|
||||
NUMA_1 = 'hw:numa_node.1'
|
||||
NUMA0_CPUS = 'hw:numa_cpus.0'
|
||||
NUMA1_CPUS = 'hw:numa_cpus.1'
|
||||
NUMA0_MEM = 'hw:numa_mem.0'
|
||||
NUMA1_MEM = 'hw:numa_mem.1'
|
||||
VSWITCH_NUMA_AFFINITY = 'hw:wrs:vswitch_numa_affinity'
|
||||
MEM_PAGE_SIZE = 'hw:mem_page_size'
|
||||
AUTO_RECOVERY = 'sw:wrs:auto_recovery'
|
||||
GUEST_HEARTBEAT = 'sw:wrs:guest:heartbeat'
|
||||
SRV_GRP_MSG = "sw:wrs:srv_grp_messaging"
|
||||
NIC_ISOLATION = "hw:wrs:nic_isolation"
|
||||
PCI_NUMA_AFFINITY = "hw:pci_numa_affinity_policy"
|
||||
PCI_PASSTHROUGH_ALIAS = "pci_passthrough:alias"
|
||||
PCI_IRQ_AFFINITY_MASK = "hw:pci_irq_affinity_mask"
|
||||
CPU_REALTIME = 'hw:cpu_realtime'
|
||||
CPU_REALTIME_MASK = 'hw:cpu_realtime_mask'
|
||||
HPET_TIMER = 'sw:wrs:guest:hpet'
|
||||
NESTED_VMX = 'hw:wrs:nested_vmx'
|
||||
NUMA0_CACHE_CPUS = 'hw:cache_vcpus.0'
|
||||
NUMA1_CACHE_CPUS = 'hw:cache_vcpus.1'
|
||||
NUMA0_L3_CACHE = 'hw:cache_l3.0'
|
||||
NUMA1_L3_CACHE = 'hw:cache_l3.1'
|
||||
LIVE_MIG_TIME_OUT = 'hw:wrs:live_migration_timeout'
|
||||
|
||||
|
||||
class ImageMetadata:
|
||||
MEM_PAGE_SIZE = 'hw_mem_page_size'
|
||||
AUTO_RECOVERY = 'sw_wrs_auto_recovery'
|
||||
VIF_MODEL = 'hw_vif_model'
|
||||
CPU_THREAD_POLICY = 'hw_cpu_thread_policy'
|
||||
CPU_POLICY = 'hw_cpu_policy'
|
||||
CPU_RT_MASK = 'hw_cpu_realtime_mask'
|
||||
CPU_RT = 'hw_cpu_realtime'
|
||||
CPU_MODEL = 'hw_cpu_model'
|
||||
FIRMWARE_TYPE = 'hw_firmware_type'
|
||||
|
||||
|
||||
class VMMetaData:
|
||||
EVACUATION_PRIORITY = 'sw:wrs:recovery_priority'
|
||||
|
||||
|
||||
class InstanceTopology:
|
||||
NODE = r'node:(\d),'
|
||||
PGSIZE = r'pgsize:(\d{1,3}),'
|
||||
VCPUS = r'vcpus:(\d{1,2}),'
|
||||
PCPUS = r'pcpus:(\d{1,2}),\s'  # entries separated by ', ' on multi-numa vms
|
||||
CPU_POLICY = 'pol:(.*),'
|
||||
SIBLINGS = 'siblings:(.*),'
|
||||
THREAD_POLICY = 'thr:(.*)$|thr:(.*),'
|
||||
TOPOLOGY = r'\d{1,2}s,\d{1,2}c,\d{1,2}t'
|
||||
|
||||
|
||||
class RouterStatus:
|
||||
ACTIVE = 'ACTIVE'
|
||||
DOWN = 'DOWN'
|
||||
|
||||
|
||||
class EventLogID:
|
||||
PATCH_INSTALL_FAIL = '900.002'
|
||||
PATCH_IN_PROGRESS = '900.001'
|
||||
CINDER_IO_CONGEST = '800.101'
|
||||
STORAGE_LOR = '800.011'
|
||||
STORAGE_POOLQUOTA = '800.003'
|
||||
STORAGE_ALARM_COND = '800.001'
|
||||
HEARTBEAT_CHECK_FAILED = '700.215'
|
||||
HEARTBEAT_ENABLED = '700.211'
|
||||
REBOOT_VM_COMPLETE = '700.186'
|
||||
REBOOT_VM_INPROGRESS = '700.182'
|
||||
REBOOT_VM_ISSUED = '700.181' # soft-reboot or hard-reboot in reason text
|
||||
VM_DELETED = '700.114'
|
||||
VM_DELETING = '700.110'
|
||||
VM_CREATED = '700.108'
|
||||
MULTI_NODE_RECOVERY = '700.016'
|
||||
HEARTBEAT_DISABLED = '700.015'
|
||||
VM_REBOOTING = '700.005'
|
||||
VM_FAILED = '700.001'
|
||||
IMA = '500.500'
|
||||
SERVICE_GROUP_STATE_CHANGE = '401.001'
|
||||
LOSS_OF_REDUNDANCY = '400.002'
|
||||
CON_DRBD_SYNC = '400.001'
|
||||
PROVIDER_NETWORK_FAILURE = '300.005'
|
||||
NETWORK_AGENT_NOT_RESPOND = '300.003'
|
||||
CONFIG_OUT_OF_DATE = '250.001'
|
||||
INFRA_NET_FAIL = '200.009'
|
||||
BMC_SENSOR_ACTION = '200.007'
|
||||
STORAGE_DEGRADE = '200.006'
|
||||
# 200.004 compute-0 experienced a service-affecting failure.
|
||||
# Auto-recovery in progress.
|
||||
# host=compute-0 critical April 7, 2017, 2:34 p.m.
|
||||
HOST_RECOVERY_IN_PROGRESS = '200.004'
|
||||
HOST_LOCK = '200.001'
|
||||
NTP_ALARM = '100.114'
|
||||
INFRA_PORT_FAIL = '100.110'
|
||||
FS_THRESHOLD_EXCEEDED = '100.104'
|
||||
CPU_USAGE_HIGH = '100.101'
|
||||
MNFA_MODE = '200.020'
|
||||
|
||||
|
||||
class NetworkingVmMapping:
|
||||
VSWITCH = {
|
||||
'vif': 'avp',
|
||||
'flavor': 'medium.dpdk',
|
||||
}
|
||||
AVP = {
|
||||
'vif': 'avp',
|
||||
'flavor': 'small',
|
||||
}
|
||||
VIRTIO = {
|
||||
'vif': 'avp',
|
||||
'flavor': 'small',
|
||||
}
|
||||
|
||||
|
||||
class VifMapping:
|
||||
VIF_MAP = {'vswitch': 'DPDKAPPS',
|
||||
'avp': 'AVPAPPS',
|
||||
'virtio': 'VIRTIOAPPS',
|
||||
'vhost': 'VHOSTAPPS',
|
||||
'sriov': 'SRIOVAPPS',
|
||||
'pcipt': 'PCIPTAPPS'
|
||||
}
|
||||
|
||||
|
||||
class LocalStorage:
|
||||
DIR_PROFILE = 'storage_profiles'
|
||||
TYPE_STORAGE_PROFILE = ['storageProfile', 'localstorageProfile']
|
||||
|
||||
|
||||
class VMNetwork:
|
||||
NET_IF = r"auto {}\niface {} inet dhcp\n"
|
||||
IFCFG_DHCP = """
|
||||
DEVICE={}
|
||||
BOOTPROTO=dhcp
|
||||
ONBOOT=yes
|
||||
TYPE=Ethernet
|
||||
USERCTL=yes
|
||||
PEERDNS=yes
|
||||
IPV6INIT={}
|
||||
PERSISTENT_DHCLIENT=1
|
||||
"""
|
||||
|
||||
IFCFG_STATIC = """
|
||||
DEVICE={}
|
||||
BOOTPROTO=static
|
||||
ONBOOT=yes
|
||||
TYPE=Ethernet
|
||||
USERCTL=yes
|
||||
PEERDNS=yes
|
||||
IPV6INIT={}
|
||||
PERSISTENT_DHCLIENT=1
|
||||
IPADDR={}
|
||||
"""
|
||||
|
||||
|
||||
class HTTPPort:
|
||||
NEUTRON_PORT = 9696
|
||||
NEUTRON_VER = "v2.0"
|
||||
CEIL_PORT = 8777
|
||||
CEIL_VER = "v2"
|
||||
GNOCCHI_PORT = 8041
|
||||
GNOCCHI_VER = 'v1'
|
||||
SYS_PORT = 6385
|
||||
SYS_VER = "v1"
|
||||
CINDER_PORT = 8776
|
||||
CINDER_VER = "v3" # v1 and v2 are also supported
|
||||
GLANCE_PORT = 9292
|
||||
GLANCE_VER = "v2"
|
||||
HEAT_PORT = 8004
|
||||
HEAT_VER = "v1"
|
||||
HEAT_CFN_PORT = 8000
|
||||
HEAT_CFN_VER = "v1"
|
||||
NOVA_PORT = 8774
|
||||
NOVA_VER = "v2.1" # v3 also supported
|
||||
NOVA_EC2_PORT = 8773
|
||||
NOVA_EC2_VER = "v2"
|
||||
PATCHING_PORT = 15491
|
||||
PATCHING_VER = "v1"
|
||||
|
||||
|
||||
class QoSSpec:
|
||||
READ_BYTES = 'read_bytes_sec'
|
||||
WRITE_BYTES = 'write_bytes_sec'
|
||||
TOTAL_BYTES = 'total_bytes_sec'
|
||||
READ_IOPS = 'read_iops_sec'
|
||||
WRITE_IOPS = 'write_iops_sec'
|
||||
TOTAL_IOPS = 'total_iops_sec'
|
||||
|
||||
|
||||
class DevClassID:
|
||||
QAT_VF = '0b4000'
|
||||
GPU = '030000'
|
||||
USB = '0c0320|0c0330'
|
||||
|
||||
|
||||
class MaxVmsSupported:
|
||||
SX = 10
|
||||
XEON_D = 4
|
||||
DX = 10
|
||||
VBOX = 2
|
||||
|
||||
|
||||
class CpuModel:
|
||||
CPU_MODELS = (
|
||||
'Skylake-Server', 'Skylake-Client',
|
||||
'Broadwell', 'Broadwell-noTSX',
|
||||
'Haswell-noTSX-IBRS', 'Haswell',
|
||||
'IvyBridge', 'SandyBridge',
|
||||
'Westmere', 'Nehalem', 'Penryn', 'Conroe')
|
||||
|
||||
|
||||
class BackendState:
|
||||
CONFIGURED = 'configured'
|
||||
CONFIGURING = 'configuring'
|
||||
|
||||
|
||||
class BackendTask:
|
||||
RECONFIG_CONTROLLER = 'reconfig-controller'
|
||||
APPLY_MANIFEST = 'applying-manifests'
|
||||
|
||||
|
||||
class PartitionStatus:
|
||||
READY = 'Ready'
|
||||
MODIFYING = 'Modifying'
|
||||
DELETING = 'Deleting'
|
||||
CREATING = 'Creating'
|
||||
IN_USE = 'In-Use'
|
||||
|
||||
|
||||
class SysType:
|
||||
AIO_DX = 'AIO-DX'
|
||||
AIO_SX = 'AIO-SX'
|
||||
STORAGE = 'Storage'
|
||||
REGULAR = 'Regular'
|
||||
MULTI_REGION = 'Multi-Region'
|
||||
DISTRIBUTED_CLOUD = 'Distributed_Cloud'
|
||||
|
||||
|
||||
class HeatStackStatus:
|
||||
CREATE_FAILED = 'CREATE_FAILED'
|
||||
CREATE_COMPLETE = 'CREATE_COMPLETE'
|
||||
UPDATE_COMPLETE = 'UPDATE_COMPLETE'
|
||||
UPDATE_FAILED = 'UPDATE_FAILED'
|
||||
DELETE_FAILED = 'DELETE_FAILED'
|
||||
|
||||
|
||||
class VimEventID:
|
||||
LIVE_MIG_BEGIN = 'instance-live-migrate-begin'
|
||||
LIVE_MIG_END = 'instance-live-migrated'
|
||||
COLD_MIG_BEGIN = 'instance-cold-migrate-begin'
|
||||
COLD_MIG_END = 'instance-cold-migrated'
|
||||
COLD_MIG_CONFIRM_BEGIN = 'instance-cold-migrate-confirm-begin'
|
||||
COLD_MIG_CONFIRMED = 'instance-cold-migrate-confirmed'
|
||||
|
||||
|
||||
class MigStatus:
|
||||
COMPLETED = 'completed'
|
||||
RUNNING = 'running'
|
||||
PREPARING = 'preparing'
|
||||
PRE_MIG = 'pre-migrating'
|
||||
POST_MIG = 'post-migrating'
|
||||
|
||||
|
||||
class TrafficControl:
|
||||
CLASSES = {'1:40': 'default', '1:1': 'root', '1:10': 'hiprio',
|
||||
'1:20': 'storage', '1:30': 'migration',
|
||||
'1:50': 'drbd'}
|
||||
|
||||
RATE_PATTERN_ROOT = r'class htb 1:1 root rate (\d+)([GMK])bit ceil (\d+)(' \
|
||||
r'[GMK])bit burst \d+b cburst \d+b'
|
||||
RATE_PATTERN = r'class htb (1:\d+) parent 1:1 leaf \d+: prio \d+ rate (' \
|
||||
r'\d+)([GMK])bit ceil (\d+)([GMK])bit ' \
|
||||
r'burst \d+b cburst \d+b'
|
||||
|
||||
# no infra
|
||||
MGMT_NO_INFRA = {
|
||||
'config': 'no infra',
|
||||
'root': (1, 1),
|
||||
'default': (0.1, 0.2),
|
||||
'hiprio': (0.1, 0.2),
|
||||
'storage': (0.5, 1),
|
||||
'migration': (0.3, 1),
|
||||
'drbd': (0.8, 1)}
|
||||
|
||||
# infra must be sep
|
||||
MGMT_SEP = {
|
||||
'config': 'separate mgmt',
|
||||
'root': (1, 1),
|
||||
'default': (0.1, 1),
|
||||
'hiprio': (0.1, 1)}
|
||||
|
||||
# infra could be sep or over pxe
|
||||
MGMT_USES_PXE = {
|
||||
'config': 'mgmt consolidated over pxeboot',
|
||||
'root': (1, 1),
|
||||
'default': (0.1, 0.2),
|
||||
'hiprio': (0.1, 0.2)}
|
||||
|
||||
# infra over mgmt
|
||||
MGMT_USED_BY_INFRA = {
|
||||
'config': 'infra consolidated over mgmt',
|
||||
'root': (1, 1),
|
||||
'default': (0.1, 0.2),
|
||||
'hiprio': (0.1, 0.2),
|
||||
'storage': (0.5, 1),
|
||||
'migration': (0.3, 1),
|
||||
'drbd': (0.8, 1)}
|
||||
|
||||
# infra over mgmt
|
||||
INFRA_USES_MGMT = {
|
||||
'config': 'infra consolidated over mgmt',
|
||||
'root': (0.99, 0.99),
|
||||
'default': (0.99 * 0.1, 0.99 * 0.2),
|
||||
'hiprio': (0.99 * 0.1, 0.99 * 0.2),
|
||||
'storage': (0.99 * 0.5, 0.99 * 1),
|
||||
'migration': (0.99 * 0.3, 0.99 * 1),
|
||||
'drbd': (0.99 * 0.8, 0.99 * 1)}
|
||||
|
||||
# mgmt could be sep or over pxe
|
||||
INFRA_SEP = {
|
||||
'config': 'separate infra',
|
||||
'root': (1, 1),
|
||||
'default': (0.1, 0.2),
|
||||
'hiprio': (0.1, 0.2),
|
||||
'storage': (0.5, 1),
|
||||
'migration': (0.3, 1),
|
||||
'drbd': (0.8, 1)}
|
||||
|
||||
# mgmt must be over pxe
|
||||
INFRA_USES_PXE = {
|
||||
'config': 'infra and mgmt consolidated over pxeboot',
|
||||
'root': (1, 1),
|
||||
'default': (0.99 * 0.1, 0.99 * 0.2), # 0.1, 0.2 is the ratio for mgmt
|
||||
'hiprio': (0.99 * 0.1, 0.99 * 0.2), # 0.1, 0.2 is the ratio for mgmt
|
||||
'storage': (0.99 * 0.5, 0.99),
|
||||
'migration': (0.99 * 0.3, 0.99),
|
||||
'drbd': (0.99 * 0.8, 0.99)}
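The rate patterns above are matched against `tc class show` output lines. A quick sketch, with `RATE_PATTERN_ROOT` copied verbatim and a sample output line assumed for illustration:

```python
import re

# Copy of RATE_PATTERN_ROOT above; captures (rate, unit, ceil, unit).
RATE_PATTERN_ROOT = r'class htb 1:1 root rate (\d+)([GMK])bit ceil (\d+)(' \
                    r'[GMK])bit burst \d+b cburst \d+b'

# Assumed sample line of `tc class show` output.
line = 'class htb 1:1 root rate 10Gbit ceil 10Gbit burst 1500b cburst 1500b'
m = re.match(RATE_PATTERN_ROOT, line)
rate, rate_unit, ceil, ceil_unit = m.groups()
```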
|
||||
|
||||
|
||||
class SubcloudStatus:
|
||||
AVAIL_ONLINE = "online"
|
||||
AVAIL_OFFLINE = "offline"
|
||||
MGMT_MANAGED = "managed"
|
||||
MGMT_UNMANAGED = "unmanaged"
|
||||
SYNCED = 'in-sync'
|
||||
UNSYNCED = 'out-of-sync'
|
||||
|
||||
|
||||
class PodStatus:
|
||||
RUNNING = 'Running'
|
||||
COMPLETED = 'Completed'
|
||||
CRASH = 'CrashLoopBackOff'
|
||||
POD_INIT = 'PodInitializing'
|
||||
INIT = 'Init:0/1'
|
||||
PENDING = 'Pending'
|
||||
|
||||
|
||||
class AppStatus:
|
||||
UPLOADING = 'uploading'
|
||||
UPLOADED = 'uploaded'
|
||||
UPLOAD_FAILED = 'upload-failed'
|
||||
APPLIED = 'applied'
|
||||
APPLY_FAILED = 'apply-failed'
|
||||
REMOVE_FAILED = 'remove-failed'
|
||||
DELETE_FAILED = 'delete-failed'
|
||||
|
||||
|
||||
class VSwitchType:
|
||||
OVS_DPDK = 'ovs-dpdk'
|
||||
AVS = 'avs'
|
||||
NONE = 'none'
|
||||
|
||||
|
||||
class Container:
|
||||
LOCAL_DOCKER_REG = 'registry.local:9001'
|
|
@ -0,0 +1,160 @@
|
|||
#
|
||||
# Copyright (c) 2019 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
|
||||
CLI_TIMEOUT = 600
|
||||
|
||||
|
||||
class HostTimeout:
|
||||
# Host in online state after locked
|
||||
ONLINE_AFTER_LOCK = 1200
|
||||
# Compute host reaches enabled/available state after system host-unlock
|
||||
# returned
|
||||
COMPUTE_UNLOCK = 840
|
||||
# Host reaches enabled/available state after system host-unlock returned
|
||||
CONTROLLER_UNLOCK = 1360
|
||||
# Host reaches enabled/available state after sudo reboot -f from host
|
||||
REBOOT = 2400
|
||||
# Active controller switched and being able to run openstack CLI after
|
||||
# system host-swact returned
|
||||
SWACT = 180
|
||||
# Host in locked state after system host-lock cli returned
|
||||
LOCK = 900
|
||||
# Task clears in system host-show after host reaches enabled/available state
|
||||
TASK_CLEAR = 600
|
||||
# Host in offline or failed state via system host-show after sudo reboot
|
||||
# -f returned
|
||||
FAIL_AFTER_REBOOT = 120
|
||||
# Hypervisor in enabled/up state after host in available state and task
|
||||
# clears
|
||||
HYPERVISOR_UP = 300
|
||||
# Web service up in sudo sm-dump after host in available state and task
|
||||
# clears
|
||||
WEB_SERVICE_UP = 180
|
||||
PING_TIMEOUT = 60
|
||||
TIMEOUT_BUFFER = 2
|
||||
# subfunctions go enabled/available after host admin/avail states go
|
||||
# enabled/available
|
||||
SUBFUNC_READY = 300
|
||||
SYSTEM_RESTORE = 3600 # System restore complete
|
||||
SYSTEM_BACKUP = 1800 # system backup complete
|
||||
BACKUP_COPY_USB = 600
|
||||
INSTALL_CLONE = 3600
|
||||
INSTALL_CLONE_STATUS = 60
|
||||
INSTALL_CONTROLLER = 2400
|
||||
INSTALL_LOAD = 3600
|
||||
POST_INSTALL_SCRIPTS = 3600
|
||||
CONFIG_CONTROLLER_TIMEOUT = 1800
|
||||
CEPH_MON_ADD_CONFIG = 300
|
||||
NODES_STATUS_READY = 7200
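A minimal sketch of how keyword helpers typically consume these timeout constants: poll a condition until it holds or the time budget is exhausted (the `wait_for` helper is hypothetical, not part of the framework):

```python
import time


def wait_for(condition, timeout, interval=0.01):
    """Poll condition() until it returns truthy or timeout seconds elapse."""
    end_time = time.time() + timeout
    while time.time() < end_time:
        if condition():
            return True
        time.sleep(interval)
    return False
```

A caller would pass a constant such as `HostTimeout.LOCK` as the `timeout` argument.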
|
||||
|
||||
|
||||
class InstallTimeout:
|
||||
# Host reaches enabled/available state after system host-unlock returned
|
||||
CONTROLLER_UNLOCK = 9000
|
||||
CONFIG_CONTROLLER_TIMEOUT = 1800
|
||||
# REBOOT = 2000 # Host reaches enabled/available state after sudo
|
||||
# reboot -f from host
|
||||
UPGRADE = 7200
|
||||
WIPE_DISK_TIMEOUT = 30
|
||||
SYSTEM_RESTORE = 3600 # System restore complete
|
||||
SYSTEM_BACKUP = 1800 # system backup complete
|
||||
BACKUP_COPY_USB = 600
|
||||
INSTALL_CLONE = 3600
|
||||
INSTALL_CLONE_STATUS = 60
|
||||
INSTALL_CONTROLLER = 2400
|
||||
INSTALL_LOAD = 3600
|
||||
POST_INSTALL_SCRIPTS = 3600
|
||||
|
||||
|
||||
class VMTimeout:
|
||||
STATUS_CHANGE = 300
|
||||
STATUS_VERIFY_RESIZE = 30
|
||||
LIVE_MIGRATE_COMPLETE = 240
|
||||
COLD_MIGRATE_CONFIRM = 600
|
||||
BOOT_VM = 1800
|
||||
DELETE = 180
|
||||
VOL_ATTACH = 60
|
||||
SSH_LOGIN = 90
|
||||
AUTO_RECOVERY = 600
|
||||
REBOOT = 180
|
||||
PAUSE = 180
|
||||
IF_ADD = 30
|
||||
REBUILD = 300
|
||||
DHCP_IP_ASSIGN = 30
|
||||
DHCP_RETRY = 500
|
||||
PING_VM = 200
|
||||
|
||||
|
||||
class VolumeTimeout:
|
||||
STATUS_CHANGE = 2700 # Windows guest takes a long time
|
||||
DELETE = 90
|
||||
|
||||
|
||||
class SysInvTimeout:
|
||||
RETENTION_PERIOD_SAVED = 30
|
||||
RETENTION_PERIOD_MODIFY = 60
|
||||
DNS_SERVERS_SAVED = 30
|
||||
DNS_MODIFY = 60
|
||||
PARTITION_CREATE = 120
|
||||
PARTITION_DELETE = 120
|
||||
PARTITION_MODIFY = 120
|
||||
|
||||
|
||||
class CMDTimeout:
|
||||
HOST_CPU_MODIFY = 600
|
||||
RESOURCE_LIST = 60
|
||||
REBOOT_VM = 60
|
||||
CPU_PROFILE_APPLY = 30
|
||||
|
||||
|
||||
class ImageTimeout:
|
||||
CREATE = 1800
|
||||
STATUS_CHANGE = 60
|
||||
DELETE = 120
|
||||
|
||||
|
||||
class EventLogTimeout:
|
||||
HEARTBEAT_ESTABLISH = 300
|
||||
HEALTH_CHECK_FAIL = 60
|
||||
VM_REBOOT = 60
|
||||
NET_AGENT_NOT_RESPOND_CLEAR = 120
|
||||
|
||||
|
||||
class MTCTimeout:
|
||||
KILL_PROCESS_HOST_CHANGE_STATUS = 40
|
||||
KILL_PROCESS_HOST_KEEP_STATUS = 20
|
||||
KILL_PROCESS_SWACT_NOT_START = 20
|
||||
KILL_PROCESS_SWACT_START = 40
|
||||
KILL_PROCESS_SWACT_COMPLETE = 40
|
||||
|
||||
|
||||
class CeilTimeout:
|
||||
EXPIRE = 300
|
||||
|
||||
|
||||
class OrchestrationPhaseTimeout:
|
||||
INITIAL = 20
|
||||
BUILD = 60
|
||||
ABORT = 7200
|
||||
APPLY = 86400
|
||||
|
||||
|
||||
class DCTimeout:
|
||||
SYNC = 660 # 10 minutes + 1
|
||||
SUBCLOUD_AUDIT = 600  # 10 minutes
|
||||
PATCH_AUDIT = 240 # 3 minutes + 1
|
||||
|
||||
|
||||
class MiscTimeout:
|
||||
# timeout for two audits; 'sudo ntpq' is polled every 10 minutes in
# /var/log/user.log
|
||||
NTPQ_UPDATE = 1260
|
||||
|
||||
|
||||
class K8sTimeout:
|
||||
APP_UPLOAD = 300
|
||||
APP_APPLY = 600
|
|
@ -0,0 +1,10 @@
|
|||
#!/bin/bash
|
||||
|
||||
# Ubuntu cloud-init user data script to be executed after ubuntu vm
|
||||
# initialization
|
||||
|
||||
# sudo does not apply to shell redirections, so append via tee instead
echo -e "auto eth1\niface eth1 inet dhcp\n\nauto eth2\niface eth2 inet dhcp" | sudo tee -a /etc/network/interfaces
|
||||
sudo ifup eth1
|
||||
sudo ifup eth2
|
||||
|
||||
ip addr
|
|
@ -0,0 +1,67 @@
|
|||
#
|
||||
# Copyright (c) 2019 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
|
||||
from consts.auth import Tenant
|
||||
from utils import table_parser, cli
|
||||
from utils.clients.ssh import ControllerClient
|
||||
from utils.tis_log import LOG
|
||||
|
||||
|
||||
def get_alarms(header='alarm_id', name=None, strict=False,
|
||||
auth_info=Tenant.get('admin'), con_ssh=None):
|
||||
"""
|
||||
|
||||
Args:
|
||||
header
|
||||
name:
|
||||
strict:
|
||||
auth_info:
|
||||
con_ssh:
|
||||
|
||||
Returns:
|
||||
|
||||
"""
|
||||
|
||||
table_ = table_parser.table(cli.openstack('alarm list',
|
||||
ssh_client=con_ssh,
|
||||
auth_info=auth_info)[1],
|
||||
combine_multiline_entry=True)
|
||||
if name is None:
|
||||
return table_parser.get_column(table_, header)
|
||||
|
||||
return table_parser.get_values(table_, header, Name=name, strict=strict)
|
||||
|
||||
|
||||
def get_events(event_type, limit=None, header='message_id', con_ssh=None,
|
||||
auth_info=None, **filters):
|
||||
"""
|
||||
|
||||
Args:
|
||||
event_type:
|
||||
limit
|
||||
header:
|
||||
con_ssh:
|
||||
auth_info:
|
||||
|
||||
Returns:
|
||||
|
||||
"""
|
||||
args = ''
|
||||
if limit:
|
||||
args = '--limit {}'.format(limit)
|
||||
|
||||
if event_type or filters:
|
||||
if event_type:
|
||||
filters['event_type'] = event_type
|
||||
|
||||
extra_args = ['{}={}'.format(k, v) for k, v in filters.items()]
|
||||
args += ' --filter {}'.format(';'.join(extra_args))
|
||||
|
||||
table_ = table_parser.table(cli.openstack('event list', args,
|
||||
ssh_client=con_ssh,
|
||||
auth_info=auth_info)[1])
|
||||
return table_parser.get_values(table_, header)
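The argument string that `get_events` hands to `openstack event list` can be sanity-checked without an ssh session. A standalone sketch of the same `--limit`/`--filter` construction logic:

```python
def build_event_list_args(event_type=None, limit=None, **filters):
    """Replicate get_events' CLI argument construction, stand-alone."""
    args = ''
    if limit:
        args = '--limit {}'.format(limit)

    if event_type or filters:
        if event_type:
            filters['event_type'] = event_type

        extra_args = ['{}={}'.format(k, v) for k, v in filters.items()]
        args += ' --filter {}'.format(';'.join(extra_args))
    return args
```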
|
|
@ -0,0 +1,635 @@
|
|||
#
|
||||
# Copyright (c) 2019 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
|
||||
###############################################################
# Check functions for test result verification
# assert is used to fail a check
# LOG.tc_step is used to log the info
# These should be called directly by test functions
###############################################################
|
||||
import re
|
||||
import time
|
||||
import copy
|
||||
|
||||
from utils.tis_log import LOG
|
||||
from utils.rest import Rest
|
||||
from consts.auth import Tenant
|
||||
from consts.stx import GuestImages, EventLogID
|
||||
from keywords import host_helper, system_helper, vm_helper, common, \
|
||||
glance_helper, storage_helper
|
||||
|
||||
SEP = '\n------------------------------------ '
|
||||
|
||||
|
||||
def check_topology_of_vm(vm_id, vcpus, prev_total_cpus=None, numa_num=None,
|
||||
vm_host=None, cpu_pol=None,
|
||||
cpu_thr_pol=None, expt_increase=None, min_vcpus=None,
|
||||
current_vcpus=None,
|
||||
prev_siblings=None, shared_vcpu=None, con_ssh=None,
|
||||
guest=None):
|
||||
"""
|
||||
Check vm has the correct topology based on the number of vcpus,
|
||||
cpu policy, cpu threads policy, number of numa nodes
|
||||
|
||||
Check is done via vm-topology, nova host-describe, virsh vcpupin (on vm
|
||||
host), nova-compute.log (on vm host),
|
||||
/sys/devices/system/cpu/<cpu#>/topology/thread_siblings_list (on vm)
|
||||
|
||||
Args:
|
||||
vm_id (str):
|
||||
vcpus (int): number of vcpus specified in flavor
|
||||
prev_total_cpus (float): such as 37.0000, 37.0625
|
||||
numa_num (int): number of numa nodes vm vcpus are on. Default is 1 if
|
||||
unset in flavor.
|
||||
vm_host (str):
|
||||
cpu_pol (str): dedicated or shared
|
||||
cpu_thr_pol (str): isolate, require, or prefer
|
||||
expt_increase (int): expected total vcpu increase on vm host compared
|
||||
to prev_total_cpus
|
||||
min_vcpus (None|int): min vcpu flavor spec. vcpu scaling specific
|
||||
current_vcpus (None|int): current number of vcpus. vcpu scaling specific
|
||||
prev_siblings (list): list of siblings total. Usually used when
|
||||
checking vm topology after live migration
|
||||
con_ssh (SSHClient)
|
||||
shared_vcpu (int): which vcpu is shared
|
||||
guest (str|None): guest os. e.g., ubuntu_14. Default guest is assumed
|
||||
when None.
|
||||
|
||||
"""
|
||||
LOG.info(
|
||||
"------ Check topology of vm {} on controller, hypervisor and "
|
||||
"vm".format(
|
||||
vm_id))
|
||||
cpu_pol = cpu_pol if cpu_pol else 'shared'
|
||||
|
||||
if vm_host is None:
|
||||
vm_host = vm_helper.get_vm_host(vm_id, con_ssh=con_ssh)
|
||||
|
||||
log_cores_siblings = host_helper.get_logcore_siblings(host=vm_host,
|
||||
con_ssh=con_ssh)
|
||||
|
||||
if prev_total_cpus is not None:
|
||||
if expt_increase is None:
|
||||
expt_increase = vcpus
|
||||
|
||||
LOG.info(
|
||||
"{}Check total vcpus for vm host is increased by {} via "
|
||||
"'openstack hypervisor show'".format(
|
||||
SEP, expt_increase))
|
||||
expt_used_vcpus = prev_total_cpus + expt_increase
|
||||
end_time = time.time() + 70
|
||||
while time.time() < end_time:
|
||||
post_hosts_cpus = host_helper.get_vcpus_for_computes(
|
||||
hosts=vm_host, field='vcpus_used')
|
||||
if expt_used_vcpus == post_hosts_cpus[vm_host]:
|
||||
break
|
||||
time.sleep(10)
|
||||
else:
|
||||
post_hosts_cpus = host_helper.get_vcpus_for_computes(
|
||||
hosts=vm_host, field='used_now')
|
||||
assert expt_used_vcpus == post_hosts_cpus[
|
||||
vm_host], "Used vcpus on host {} is not as expected. " \
|
||||
"Expected: {}; Actual: {}".format(vm_host,
|
||||
expt_used_vcpus,
|
||||
post_hosts_cpus[
|
||||
vm_host])
|
||||
|
||||
LOG.info(
|
||||
"{}Check vm vcpus, pcpus on vm host via nova-compute.log and virsh "
|
||||
"vcpupin".format(SEP))
|
||||
# Note: floating vm pcpus will not be checked via virsh vcpupin
|
||||
vm_host_cpus, vm_siblings = _check_vm_topology_on_host(
|
||||
vm_id, vcpus=vcpus, vm_host=vm_host, cpu_pol=cpu_pol,
|
||||
cpu_thr_pol=cpu_thr_pol,
|
||||
host_log_core_siblings=log_cores_siblings,
|
||||
shared_vcpu=shared_vcpu)
|
||||
|
||||
LOG.info(
|
||||
"{}Check vm vcpus, siblings on vm via "
|
||||
"/sys/devices/system/cpu/<cpu>/topology/thread_siblings_list".
|
||||
format(SEP))
|
||||
check_sibling = True if shared_vcpu is None else False
|
||||
_check_vm_topology_on_vm(vm_id, vcpus=vcpus, siblings_total=vm_siblings,
|
||||
current_vcpus=current_vcpus,
|
||||
prev_siblings=prev_siblings, guest=guest,
|
||||
check_sibling=check_sibling)
|
||||
|
||||
return vm_host_cpus, vm_siblings
|
||||
|
||||
|
||||
def _check_vm_topology_on_host(vm_id, vcpus, vm_host, cpu_pol, cpu_thr_pol,
                               host_log_core_siblings=None, shared_vcpu=None,
                               shared_host_cpus=None):
    """

    Args:
        vm_id (str):
        vcpus (int):
        vm_host (str):
        cpu_pol (str):
        cpu_thr_pol (str):
        host_log_core_siblings (list|None):
        shared_vcpu (int|None):
        shared_host_cpus (None|list):

    Returns: None

    """
    if not host_log_core_siblings:
        host_log_core_siblings = host_helper.get_logcore_siblings(host=vm_host)

    if shared_vcpu and not shared_host_cpus:
        shared_cpus_ = host_helper.get_host_cpu_cores_for_function(
            func='Shared', hostname=vm_host, thread=None)
        shared_host_cpus = []
        for proc, shared_cores in shared_cpus_.items():
            shared_host_cpus += shared_cores

    LOG.info(
        '======= Check vm topology from vm_host via: virsh vcpupin, taskset')
    instance_name = vm_helper.get_vm_instance_name(vm_id)

    with host_helper.ssh_to_host(vm_host) as host_ssh:
        vcpu_cpu_map = vm_helper.get_vcpu_cpu_map(host_ssh=host_ssh)
        used_host_cpus = []
        vm_host_cpus = []
        vcpus_list = list(range(vcpus))
        for instance_name_, instance_map in vcpu_cpu_map.items():
            used_host_cpus += list(instance_map.values())
            if instance_name_ == instance_name:
                for vcpu in vcpus_list:
                    vm_host_cpus.append(instance_map[vcpu])
        used_host_cpus = list(set(used_host_cpus))
        vm_siblings = None
        # Check vm sibling pairs
        if 'ded' in cpu_pol and cpu_thr_pol in ('isolate', 'require'):
            if len(host_log_core_siblings[0]) == 1:
                assert cpu_thr_pol != 'require', \
                    "cpu_thread_policy 'require' must be used on a HT host"
                vm_siblings = [[vcpu_] for vcpu_ in vcpus_list]
            else:
                vm_siblings = []
                for vcpu_index in vcpus_list:
                    vm_host_cpu = vm_host_cpus[vcpu_index]
                    for host_sibling in host_log_core_siblings:
                        if vm_host_cpu in host_sibling:
                            other_cpu = host_sibling[0] if \
                                vm_host_cpu == host_sibling[1] else \
                                host_sibling[1]
                            if cpu_thr_pol == 'require':
                                assert other_cpu in vm_host_cpus, \
                                    "'require' vm uses only 1 of the " \
                                    "sibling cores"
                                vm_siblings.append(sorted(
                                    [vcpu_index,
                                     vm_host_cpus.index(other_cpu)]))
                            else:
                                assert other_cpu not in used_host_cpus, \
                                    "sibling core was not reserved for " \
                                    "'isolate' vm"
                                vm_siblings.append([vcpu_index])

        LOG.info("{}Check vcpus for vm via sudo virsh vcpupin".format(SEP))
        vcpu_pins = host_helper.get_vcpu_pins_for_instance_via_virsh(
            host_ssh=host_ssh,
            instance_name=instance_name)
        assert vcpus == len(vcpu_pins), \
            'Actual vm cpus number - {} is not as expected - {} in sudo ' \
            'virsh vcpupin'.format(len(vcpu_pins), vcpus)

        virsh_cpus_sets = []
        for vcpu_pin in vcpu_pins:
            vcpu = int(vcpu_pin['vcpu'])
            cpu_set = common.parse_cpus_list(vcpu_pin['cpuset'])
            virsh_cpus_sets += cpu_set
            if shared_vcpu is not None and vcpu == shared_vcpu:
                assert len(cpu_set) == 1, \
                    "shared vcpu is pinned to more than 1 host cpu"
                assert cpu_set[0] in shared_host_cpus, \
                    "shared vcpu is not pinned to shared host cpu"

        if 'ded' in cpu_pol:
            assert set(vm_host_cpus) == set(virsh_cpus_sets), \
                "pinned cpus in virsh vcpupin are not the same as ps"
        else:
            assert set(vm_host_cpus) < set(virsh_cpus_sets), \
                "floating vm should be affined to all available host cpus"

        LOG.info("{}Get cpu affinity list for vm via taskset -pc".format(SEP))
        ps_affined_cpus = \
            vm_helper.get_affined_cpus_for_vm(vm_id,
                                              host_ssh=host_ssh,
                                              vm_host=vm_host,
                                              instance_name=instance_name)
        assert set(ps_affined_cpus) == set(virsh_cpus_sets), \
            "Actual affined cpu in taskset is different than virsh"
    return vm_host_cpus, vm_siblings

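The virsh checks above rely on `common.parse_cpus_list` to expand a libvirt-style cpuset string into a flat list of host cpu ids. The helper below is a hypothetical stand-in (the real implementation lives in the `common` util module and is not shown in this diff), assuming the usual cpuset format such as `"0-3,8,10"`:

```python
# Hypothetical sketch of common.parse_cpus_list, assuming libvirt-style
# cpuset strings: comma-separated single cpus and inclusive ranges.
def parse_cpus_list(cpus_str):
    cpus = []
    for chunk in cpus_str.strip().split(','):
        if '-' in chunk:
            start, end = chunk.split('-')
            cpus.extend(range(int(start), int(end) + 1))  # range is inclusive
        elif chunk:
            cpus.append(int(chunk))
    return cpus
```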
def _check_vm_topology_on_vm(vm_id, vcpus, siblings_total, current_vcpus=None,
                             prev_siblings=None, guest=None,
                             check_sibling=True):
    siblings_total_ = None
    if siblings_total:
        siblings_total_ = copy.deepcopy(siblings_total)
    # Check from vm in /proc/cpuinfo and
    # /sys/devices/.../cpu#/topology/thread_siblings_list
    if not guest:
        guest = ''
    if not current_vcpus:
        current_vcpus = int(vcpus)

    LOG.info(
        '=== Check vm topology from within the vm via: /sys/devices/system/cpu')
    actual_sibs = []
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
    with vm_helper.ssh_to_vm_from_natbox(vm_id) as vm_ssh:

        win_expt_cores_per_sib = win_log_count_per_sibling = None
        if 'win' in guest:
            LOG.info(
                "{}Check windows guest cores via wmic cpu get cmds".format(SEP))
            offline_cores_count = 0
            log_cores_count, win_log_count_per_sibling = \
                get_procs_and_siblings_on_windows(vm_ssh)
            online_cores_count = present_cores_count = log_cores_count
        else:
            LOG.info(
                "{}Check vm present|online|offline cores from inside vm via "
                "/sys/devices/system/cpu/".format(SEP))
            present_cores, online_cores, offline_cores = \
                vm_helper.get_proc_nums_from_vm(vm_ssh)
            present_cores_count = len(present_cores)
            online_cores_count = len(online_cores)
            offline_cores_count = len(offline_cores)

        assert vcpus == present_cores_count, \
            "Number of vcpus: {}, present cores: {}".format(
                vcpus, present_cores_count)
        assert current_vcpus == online_cores_count, \
            "Current vcpus for vm: {}, online cores: {}".format(
                current_vcpus, online_cores_count)

        expt_total_cores = online_cores_count + offline_cores_count
        assert expt_total_cores in [present_cores_count, 512], \
            "Number of present cores: {}. online+offline cores: {}".format(
                present_cores_count, expt_total_cores)

        if check_sibling and siblings_total_ and \
                online_cores_count == present_cores_count:
            if not siblings_total_:
                expt_sibs_list = [[vcpu] for vcpu in
                                  range(present_cores_count)]
            else:
                expt_sibs_list = siblings_total_

            expt_sibs_list = [sorted(expt_sibs_list)]
            if prev_siblings:
                # siblings_total may get modified here
                expt_sibs_list.append(sorted(prev_siblings))

            if 'win' in guest:
                LOG.info("{}Check windows guest siblings via wmic cpu get "
                         "cmds".format(SEP))
                expt_cores_list = []
                for sib_list in expt_sibs_list:
                    win_expt_cores_per_sib = [len(sib) for sib in sib_list]
                    expt_cores_list.append(win_expt_cores_per_sib)
                assert win_log_count_per_sibling in expt_cores_list, \
                    "Expected log cores count per sibling: {}, actual: " \
                    "{}".format(win_expt_cores_per_sib,
                                win_log_count_per_sibling)
            else:
                LOG.info(
                    "{}Check vm /sys/devices/system/cpu/["
                    "cpu#]/topology/thread_siblings_list".format(SEP))
                for cpu in ['cpu{}'.format(i) for i in
                            range(online_cores_count)]:
                    actual_sibs_for_cpu = \
                        vm_ssh.exec_cmd(
                            'cat /sys/devices/system/cpu/{}/topology/thread_'
                            'siblings_list'.format(cpu), fail_ok=False)[1]

                    sib_for_cpu = common.parse_cpus_list(actual_sibs_for_cpu)
                    if sib_for_cpu not in actual_sibs:
                        actual_sibs.append(sib_for_cpu)

                assert sorted(actual_sibs) in expt_sibs_list, \
                    "Expt sib lists: {}, actual sib list: {}".format(
                        expt_sibs_list, sorted(actual_sibs))

def get_procs_and_siblings_on_windows(vm_ssh):
    cmd = 'wmic cpu get {}'

    procs = []
    for param in ['NumberOfCores', 'NumberOfLogicalProcessors']:
        output = vm_ssh.exec_cmd(cmd.format(param), fail_ok=False)[1].strip()
        num_per_proc = [int(line.strip()) for line in output.splitlines() if
                        line.strip()
                        and not re.search('{}|x'.format(param), line)]
        procs.append(num_per_proc)
    procs = zip(procs[0], procs[1])
    log_procs_per_phy = [nums[0] * nums[1] for nums in procs]
    total_log_procs = sum(log_procs_per_phy)

    LOG.info(
        "Windows guest total logical cores: {}, logical_cores_per_phy_core: "
        "{}".format(total_log_procs, log_procs_per_phy))
    return total_log_procs, log_procs_per_phy

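The aggregation above pairs each physical processor's core count with its logical-processor count and multiplies them. A minimal offline sketch of that parsing, using canned `wmic cpu get` style output instead of a live Windows guest (the sample values are illustrative, not from a real host):

```python
import re

# Sketch of the wmic output aggregation: for each query, drop the header
# line and keep the numeric rows, then multiply cores x logical per cpu.
def parse_wmic_counts(outputs):
    procs = []
    for param, output in outputs.items():
        nums = [int(line.strip()) for line in output.splitlines()
                if line.strip() and not re.search('{}|x'.format(param), line)]
        procs.append(nums)
    per_phy = [cores * logical for cores, logical in zip(procs[0], procs[1])]
    return sum(per_phy), per_phy

# Canned output for a 2-socket guest, 2 cores and 4 logical procs per socket
outputs = {
    'NumberOfCores': 'NumberOfCores\n2\n2\n',
    'NumberOfLogicalProcessors': 'NumberOfLogicalProcessors\n4\n4\n',
}
```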
def check_vm_vswitch_affinity(vm_id, on_vswitch_nodes=True):
    vm_host, vm_numa_nodes = vm_helper.get_vm_host_and_numa_nodes(vm_id)
    vswitch_cores_dict = host_helper.get_host_cpu_cores_for_function(
        vm_host, func='vSwitch')
    vswitch_procs = [proc for proc in vswitch_cores_dict if
                     vswitch_cores_dict[proc]]
    if not vswitch_procs:
        return

    if on_vswitch_nodes:
        assert set(vm_numa_nodes) <= set(vswitch_procs), \
            "VM {} is on numa nodes {} instead of vswitch numa nodes " \
            "{}".format(vm_id, vm_numa_nodes, vswitch_procs)
    else:
        assert not (set(vm_numa_nodes) & set(vswitch_procs)), \
            "VM {} is on vswitch numa node(s). VM numa nodes: {}, vSwitch " \
            "numa nodes: {}".format(vm_id, vm_numa_nodes, vswitch_procs)

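Both branches above reduce to plain set algebra on NUMA node ids: subset (VM confined to vSwitch nodes) versus empty intersection (VM kept off them). A minimal illustration with made-up node ids:

```python
# Illustrative node ids only: VM on numa node 0, vSwitch on nodes 0 and 1.
vm_numa_nodes = [0]
vswitch_procs = [0, 1]

# on_vswitch_nodes=True case: every VM numa node must be a vswitch node
on_nodes = set(vm_numa_nodes) <= set(vswitch_procs)

# on_vswitch_nodes=False case: VM must share no numa node with vswitch
off_nodes = not (set(vm_numa_nodes) & set(vswitch_procs))
```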
def check_fs_sufficient(guest_os, boot_source='volume'):
    """
    Check if volume pool, image storage, and/or image conversion space is
    sufficient to launch vm

    Args:
        guest_os (str): e.g., tis-centos-guest, win_2016
        boot_source (str): volume or image

    Returns (str): image id

    """
    LOG.info("Check if storage fs is sufficient to launch boot-from-{} vm "
             "with {}".format(boot_source, guest_os))
    check_disk = True if 'win' in guest_os else False
    cleanup = None if re.search(
        'ubuntu_14|{}'.format(GuestImages.TIS_GUEST_PATTERN),
        guest_os) else 'function'
    img_id = glance_helper.get_guest_image(guest_os, check_disk=check_disk,
                                           cleanup=cleanup)
    return img_id

def check_vm_files(vm_id, storage_backing, ephemeral, swap, vm_type, file_paths,
                   content, root=None, vm_action=None,
                   prev_host=None, post_host=None, disks=None, post_disks=None,
                   guest_os=None,
                   check_volume_root=False):
    """
    Check the files on vm after specified action. This is to check the disks
    in the basic nova matrix table.

    Args:
        vm_id (str):
        storage_backing (str): local_image, local_lvm, or remote
        root (int): root disk size in flavor. e.g., 2, 5
        ephemeral (int): e.g., 0, 1
        swap (int): e.g., 0, 512
        vm_type (str): image, volume, image_with_vol, vol_with_vol
        file_paths (list): list of file paths to check
        content (str): content of the files (assume all files have the same
            content)
        vm_action (str|None): live_migrate, cold_migrate, resize, evacuate,
            None (expect no data loss)
        prev_host (None|str): vm host prior to vm_action. This is used to
            check if vm host has changed when needed.
        post_host (None|str): vm host after vm_action.
        disks (dict): disks that are returned from
            vm_helper.get_vm_devices_via_virsh()
        post_disks (dict): only used in resize case
        guest_os (str|None): default guest assumed for None. e.g., ubuntu_16
        check_volume_root (bool): whether to check root disk size even if vm
            is booted from image

    Returns:

    """
    final_disks = post_disks if post_disks else disks
    final_paths = list(file_paths)
    if not disks:
        disks = vm_helper.get_vm_devices_via_virsh(vm_id=vm_id)

    eph_disk = disks.get('eph', {})
    if not eph_disk:
        if post_disks:
            eph_disk = post_disks.get('eph', {})
    swap_disk = disks.get('swap', {})
    if not swap_disk:
        if post_disks:
            swap_disk = post_disks.get('swap', {})

    disk_check = 'no_loss'
    if vm_action in [None, 'live_migrate']:
        disk_check = 'no_loss'
    elif vm_type == 'volume':
        # boot-from-vol, non-live migrate actions
        disk_check = 'no_loss'
        if storage_backing == 'local_lvm' and (eph_disk or swap_disk):
            disk_check = 'eph_swap_loss'
        elif storage_backing == 'local_image' and vm_action == 'evacuate' and (
                eph_disk or swap_disk):
            disk_check = 'eph_swap_loss'
    elif storage_backing == 'local_image':
        # local_image, boot-from-image, non-live migrate actions
        disk_check = 'no_loss'
        if vm_action == 'evacuate':
            disk_check = 'local_loss'
    elif storage_backing == 'local_lvm':
        # local_lvm, boot-from-image, non-live migrate actions
        disk_check = 'local_loss'
        if vm_action == 'resize':
            post_host = post_host if post_host else vm_helper.get_vm_host(vm_id)
            if post_host == prev_host:
                disk_check = 'eph_swap_loss'

    LOG.info("disk check type: {}".format(disk_check))
    loss_paths = []
    if disk_check == 'no_loss':
        no_loss_paths = final_paths
    else:
        # If there's any loss, we must not have remote storage. And any
        # ephemeral/swap disks will be local.
        disks_to_check = disks.get('eph', {})
        # skip swap type checking for data loss since it's not a regular
        # filesystem
        # swap_disks = disks.get('swap', {})
        # disks_to_check.update(swap_disks)

        for path_ in final_paths:
            # For tis-centos-guest, ephemeral disk is mounted to /mnt after
            # vm launch.
            if str(path_).rsplit('/', 1)[0] == '/mnt':
                loss_paths.append(path_)
                break

        for disk in disks_to_check:
            for path in final_paths:
                if disk in path:
                    # We mount disk vdb to /mnt/vdb, so this is looking for
                    # vdb in the mount path
                    loss_paths.append(path)
                    break

        if disk_check == 'local_loss':
            # if vm booted from image, then the root disk is also local disk
            root_img = disks.get('root_img', {})
            if root_img:
                LOG.info(
                    "Auto mount vm disks again since root disk was local with "
                    "data loss expected")
                vm_helper.auto_mount_vm_disks(vm_id=vm_id, disks=final_disks)
                file_name = final_paths[0].rsplit('/')[-1]
                root_path = '/{}'.format(file_name)
                loss_paths.append(root_path)
                assert root_path in final_paths, \
                    "root_path:{}, file_paths:{}".format(root_path,
                                                         final_paths)

        no_loss_paths = list(set(final_paths) - set(loss_paths))

    LOG.info("loss_paths: {}, no_loss_paths: {}, total_file_paths: {}".format(
        loss_paths, no_loss_paths, final_paths))
    res_files = {}
    with vm_helper.ssh_to_vm_from_natbox(vm_id=vm_id,
                                         vm_image_name=guest_os) as vm_ssh:
        vm_ssh.exec_sudo_cmd('cat /etc/fstab')
        vm_ssh.exec_sudo_cmd("mount | grep --color=never '/dev'")

        for file_path in loss_paths:
            vm_ssh.exec_sudo_cmd('touch {}2'.format(file_path), fail_ok=False)
            vm_ssh.exec_sudo_cmd('echo "{}" >> {}2'.format(content, file_path),
                                 fail_ok=False)

        for file_path in no_loss_paths:
            output = vm_ssh.exec_sudo_cmd('cat {}'.format(file_path),
                                          fail_ok=False)[1]
            res = '' if content in output else 'content mismatch'
            res_files[file_path] = res

        for file, error in res_files.items():
            assert not error, "Check {} failed: {}".format(file, error)

        swap_disk = final_disks.get('swap', {})
        if swap_disk:
            disk_name = list(swap_disk.keys())[0]
            partition = '/dev/{}'.format(disk_name)
            if disk_check != 'local_loss' and not disks.get('swap', {}):
                mount_on, fs_type = storage_helper.mount_partition(
                    ssh_client=vm_ssh, disk=disk_name,
                    partition=partition, fs_type='swap')
                storage_helper.auto_mount_fs(ssh_client=vm_ssh, fs=partition,
                                             mount_on=mount_on,
                                             fs_type=fs_type)

            LOG.info("Check swap disk is on")
            swap_output = vm_ssh.exec_sudo_cmd(
                'cat /proc/swaps | grep --color=never {}'.format(partition))[1]
            assert swap_output, "Expect swapon for {}. Actual output: {}". \
                format(partition, vm_ssh.exec_sudo_cmd('cat /proc/swaps')[1])

            LOG.info("Check swap disk size")
            _check_disk_size(vm_ssh, disk_name=disk_name, expt_size=swap)

        eph_disk = final_disks.get('eph', {})
        if eph_disk:
            LOG.info("Check ephemeral disk size")
            eph_name = list(eph_disk.keys())[0]
            _check_disk_size(vm_ssh, eph_name, expt_size=ephemeral * 1024)

        if root:
            image_root = final_disks.get('root_img', {})
            root_name = ''
            if image_root:
                root_name = list(image_root.keys())[0]
            elif check_volume_root:
                root_name = list(final_disks.get('root_vol').keys())[0]

            if root_name:
                LOG.info("Check root disk size")
                _check_disk_size(vm_ssh, disk_name=root_name,
                                 expt_size=root * 1024)

def _check_disk_size(vm_ssh, disk_name, expt_size):
    partition = vm_ssh.exec_sudo_cmd(
        'cat /proc/partitions | grep --color=never "{}$"'.format(disk_name))[1]
    actual_size = int(
        int(partition.split()[-2].strip()) / 1024) if partition else 0
    expt_size = int(expt_size)
    assert actual_size == expt_size, "Expected disk size: {}M. Actual: {}M".\
        format(expt_size, actual_size)

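The size check above reads the second-to-last column of the matching `/proc/partitions` row, which is the `#blocks` column in 1 KiB blocks, and converts it to MiB. A minimal offline sketch of that arithmetic (the sample row is illustrative, not from a real guest):

```python
# Illustrative /proc/partitions row: major, minor, #blocks (1 KiB), name.
# 524288 one-KiB blocks == 512 MiB.
line = ' 253        1     524288 vdb'
actual_size_mib = int(int(line.split()[-2].strip()) / 1024) if line else 0
```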
def check_alarms(before_alarms, timeout=300,
                 auth_info=Tenant.get('admin_platform'), con_ssh=None,
                 fail_ok=False):
    after_alarms = system_helper.get_alarms(auth_info=auth_info,
                                            con_ssh=con_ssh)
    new_alarms = []
    check_interval = 5
    for item in after_alarms:
        if item not in before_alarms:
            alarm_id, entity_id = item.split('::::')
            if alarm_id == EventLogID.CPU_USAGE_HIGH:
                check_interval = 45
            elif alarm_id == EventLogID.NTP_ALARM:
                # NTP alarm handling
                LOG.info("NTP alarm found, checking ntpq stats")
                host = entity_id.split('host=')[1].split('.ntp')[0]
                system_helper.wait_for_ntp_sync(host=host, fail_ok=False,
                                                auth_info=auth_info,
                                                con_ssh=con_ssh)
                continue

            new_alarms.append((alarm_id, entity_id))

    res = True
    remaining_alarms = None
    if new_alarms:
        LOG.info("New alarms detected. Waiting for new alarms to clear.")
        res, remaining_alarms = \
            system_helper.wait_for_alarms_gone(new_alarms,
                                               fail_ok=True,
                                               timeout=timeout,
                                               check_interval=check_interval,
                                               auth_info=auth_info,
                                               con_ssh=con_ssh)

    if not res:
        msg = "New alarm(s) found and did not clear within {} seconds. " \
              "Alarm IDs and Entity IDs: {}".format(timeout, remaining_alarms)
        LOG.warning(msg)
        if not fail_ok:
            assert res, msg

    return res, remaining_alarms

def check_rest_api():
    LOG.info("Check sysinv REST API")
    sysinv_rest = Rest('sysinv', platform=True)
    resource = '/controller_fs'
    status_code, text = sysinv_rest.get(resource=resource, auth=True)
    message = "Retrieved: status_code: {} message: {}"
    LOG.debug(message.format(status_code, text))

    LOG.info("Check status_code of 200 is received")
    message = "Expected status_code of 200 - received {} and message {}"
    assert status_code == 200, message.format(status_code, text)

@@ -0,0 +1,787 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


#############################################################
# DO NOT import anything from helper modules to this module #
#############################################################

import os
import re
import time
from contextlib import contextmanager
from datetime import datetime

import pexpect
from pytest import skip

from consts.auth import Tenant, TestFileServer, HostLinuxUser
from consts.stx import Prompt
from consts.proj_vars import ProjVar
from utils import exceptions
from utils.clients.ssh import ControllerClient, NATBoxClient, SSHClient, \
    get_cli_client
from utils.tis_log import LOG

def scp_from_test_server_to_user_file_dir(source_path, dest_dir, dest_name=None,
                                          timeout=900, con_ssh=None,
                                          central_region=False):
    if con_ssh is None:
        con_ssh = get_cli_client(central_region=central_region)
    if dest_name is None:
        dest_name = source_path.split(sep='/')[-1]

    if ProjVar.get_var('USER_FILE_DIR') == ProjVar.get_var('TEMP_DIR'):
        LOG.info("Copy file from test server to localhost")
        source_server = TestFileServer.SERVER
        source_user = TestFileServer.USER
        source_password = TestFileServer.PASSWORD
        dest_path = dest_dir if not dest_name else os.path.join(dest_dir,
                                                                dest_name)
        LOG.info('Check if file already exists on TiS')
        if con_ssh.file_exists(file_path=dest_path):
            LOG.info('dest path {} already exists. Return existing path'.format(
                dest_path))
            return dest_path

        os.makedirs(dest_dir, exist_ok=True)
        con_ssh.scp_on_dest(source_user=source_user, source_ip=source_server,
                            source_path=source_path,
                            dest_path=dest_path, source_pswd=source_password,
                            timeout=timeout)
        return dest_path
    else:
        LOG.info("Copy file from test server to active controller")
        return scp_from_test_server_to_active_controller(
            source_path=source_path, dest_dir=dest_dir,
            dest_name=dest_name, timeout=timeout, con_ssh=con_ssh)

def _scp_from_remote_to_active_controller(source_server, source_path,
                                          dest_dir, dest_name=None,
                                          source_user=None,
                                          source_password=None,
                                          timeout=900, con_ssh=None,
                                          is_dir=False):
    """
    SCP file or files under a directory from remote server to TiS server

    Args:
        source_server (str): remote server ip or host name
        source_path (str): remote server file path or directory path
        dest_dir (str): destination directory. should end with '/'
        dest_name (str): destination file name if not dir
        source_user (str): username on the remote server
        source_password (str): password on the remote server
        timeout (int):
        con_ssh:
        is_dir (bool):

    Returns (str|None): destination file/dir path if scp successful else None

    """
    if con_ssh is None:
        con_ssh = ControllerClient.get_active_controller()

    if not source_user:
        source_user = TestFileServer.USER
    if not source_password:
        source_password = TestFileServer.PASSWORD

    if dest_name is None and not is_dir:
        dest_name = source_path.split(sep='/')[-1]

    dest_path = dest_dir if not dest_name else os.path.join(dest_dir, dest_name)

    LOG.info('Check if file already exists on TiS')
    if not is_dir and con_ssh.file_exists(file_path=dest_path):
        LOG.info('dest path {} already exists. Return existing path'.format(
            dest_path))
        return dest_path

    LOG.info('Create destination directory on tis server if not already exists')
    cmd = 'mkdir -p {}'.format(dest_dir)
    con_ssh.exec_cmd(cmd, fail_ok=False)

    nat_name = ProjVar.get_var('NATBOX')
    if nat_name:
        nat_name = nat_name.get('name')
    if nat_name and ProjVar.get_var('IS_VBOX'):
        LOG.info('VBox detected, performing intermediate scp')

        nat_dest_path = '/tmp/{}'.format(dest_name)
        nat_ssh = NATBoxClient.get_natbox_client()

        if not nat_ssh.file_exists(nat_dest_path):
            LOG.info("scp file from {} to NatBox {}".format(source_server,
                                                            nat_name))
            nat_ssh.scp_on_dest(source_user=source_user,
                                source_ip=source_server,
                                source_path=source_path,
                                dest_path=nat_dest_path,
                                source_pswd=source_password, timeout=timeout,
                                is_dir=is_dir)

        LOG.info(
            'scp file from natbox {} to active controller'.format(nat_name))
        dest_user = HostLinuxUser.get_user()
        dest_pswd = HostLinuxUser.get_password()
        dest_ip = ProjVar.get_var('LAB').get('floating ip')
        nat_ssh.scp_on_source(source_path=nat_dest_path, dest_user=dest_user,
                              dest_ip=dest_ip, dest_path=dest_path,
                              dest_password=dest_pswd, timeout=timeout,
                              is_dir=is_dir)

    else:  # if not a VBox lab, scp from remote server directly to TiS server
        LOG.info("scp file(s) from {} to tis".format(source_server))
        con_ssh.scp_on_dest(source_user=source_user, source_ip=source_server,
                            source_path=source_path,
                            dest_path=dest_path, source_pswd=source_password,
                            timeout=timeout, is_dir=is_dir)

    return dest_path

def scp_from_test_server_to_active_controller(source_path, dest_dir,
                                              dest_name=None, timeout=900,
                                              con_ssh=None,
                                              is_dir=False):
    """
    SCP file or files under a directory from test server to TiS server

    Args:
        source_path (str): test server file path or directory path
        dest_dir (str): destination directory. should end with '/'
        dest_name (str): destination file name if not dir
        timeout (int):
        con_ssh:
        is_dir (bool)

    Returns (str|None): destination file/dir path if scp successful else None

    """
    skip('Shared Test File Server is not ready')
    if con_ssh is None:
        con_ssh = ControllerClient.get_active_controller()

    source_server = TestFileServer.SERVER
    source_user = TestFileServer.USER
    source_password = TestFileServer.PASSWORD

    return _scp_from_remote_to_active_controller(
        source_server=source_server,
        source_path=source_path,
        dest_dir=dest_dir,
        dest_name=dest_name,
        source_user=source_user,
        source_password=source_password,
        timeout=timeout,
        con_ssh=con_ssh,
        is_dir=is_dir)

def scp_from_active_controller_to_test_server(source_path, dest_dir,
                                              dest_name=None, timeout=900,
                                              is_dir=False,
                                              con_ssh=None):
    """
    SCP file or files under a directory from TiS server to test server

    Args:
        source_path (str): TiS server file path or directory path
        dest_dir (str): destination directory. should end with '/'
        dest_name (str): destination file name if not dir
        timeout (int):
        is_dir (bool):
        con_ssh:

    Returns (str|None): destination file/dir path if scp successful else None

    """
    skip('Shared Test File Server is not ready')
    if con_ssh is None:
        con_ssh = ControllerClient.get_active_controller()

    dir_option = '-r ' if is_dir else ''
    dest_server = TestFileServer.SERVER
    dest_user = TestFileServer.USER
    dest_password = TestFileServer.PASSWORD

    dest_path = dest_dir if not dest_name else os.path.join(dest_dir, dest_name)

    scp_cmd = 'scp -oStrictHostKeyChecking=no -o ' \
              'UserKnownHostsFile=/dev/null ' \
              '{}{} {}@{}:{}'.\
        format(dir_option, source_path, dest_user, dest_server, dest_path)

    LOG.info("scp file(s) from tis server to test server")
    con_ssh.send(scp_cmd)
    index = con_ssh.expect(
        [con_ssh.prompt, Prompt.PASSWORD_PROMPT, Prompt.ADD_HOST],
        timeout=timeout)
    if index == 2:
        con_ssh.send('yes')
        index = con_ssh.expect([con_ssh.prompt, Prompt.PASSWORD_PROMPT],
                               timeout=timeout)
    if index == 1:
        con_ssh.send(dest_password)
        index = con_ssh.expect(timeout=timeout)

    assert index == 0, "Failed to scp files"

    exit_code = con_ssh.get_exit_code()
    assert 0 == exit_code, "scp not fully succeeded"

    return dest_path

def scp_from_localhost_to_active_controller(
        source_path, dest_path=None,
        dest_user=None,
        dest_password=None,
        timeout=900, is_dir=False):

    active_cont_ip = ControllerClient.get_active_controller().host
    if not dest_path:
        dest_path = HostLinuxUser.get_home()
    if not dest_user:
        dest_user = HostLinuxUser.get_user()
    if not dest_password:
        dest_password = HostLinuxUser.get_password()

    return scp_from_local(source_path, active_cont_ip, dest_path=dest_path,
                          dest_user=dest_user, dest_password=dest_password,
                          timeout=timeout, is_dir=is_dir)

def scp_from_active_controller_to_localhost(
        source_path, dest_path='',
        src_user=None,
        src_password=None,
        timeout=900, is_dir=False):

    active_cont_ip = ControllerClient.get_active_controller().host
    if not src_user:
        src_user = HostLinuxUser.get_user()
    if not src_password:
        src_password = HostLinuxUser.get_password()

    return scp_to_local(source_path=source_path, source_ip=active_cont_ip,
                        source_user=src_user, source_password=src_password,
                        dest_path=dest_path, timeout=timeout, is_dir=is_dir)

def scp_from_local(source_path, dest_ip, dest_path,
                   dest_user,
                   dest_password,
                   timeout=900, is_dir=False):
    """
    Scp file(s) from localhost (i.e., from where the automated tests are
    executed).

    Args:
        source_path (str): source file/directory path
        dest_ip (str): ip of the destination host
        dest_user (str): username of destination host.
        dest_password (str): password of destination host
        dest_path (str): destination directory path to copy the file(s) to
        timeout (int): max time to wait for scp finish in seconds
        is_dir (bool): whether to copy a single file or a directory

    """
    dir_option = '-r ' if is_dir else ''

    cmd = 'scp -oStrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ' \
          '{}{} {}@{}:{}'. \
        format(dir_option, source_path, dest_user, dest_ip, dest_path)

    _scp_on_local(cmd, remote_password=dest_password, timeout=timeout)

def scp_to_local(source_path, source_ip, source_user, source_password,
                 dest_path, timeout=900, is_dir=False):
    """
    Scp file(s) to localhost (i.e., to where the automated tests are executed).

    Args:
        source_path (str): source file/directory path
        source_ip (str): ip of the source host.
        source_user (str): username of source host.
        source_password (str): password of source host
        dest_path (str): destination directory path to copy the file(s) to
        timeout (int): max time to wait for scp finish in seconds
        is_dir (bool): whether to copy a single file or a directory

    """
    dir_option = '-r ' if is_dir else ''
    cmd = 'scp -oStrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null ' \
          '{}{}@{}:{} {}'.\
        format(dir_option, source_user, source_ip, source_path, dest_path)

    _scp_on_local(cmd, remote_password=source_password, timeout=timeout)

def _scp_on_local(cmd, remote_password, logdir=None, timeout=900):
    LOG.debug('scp cmd: {}'.format(cmd))

    logdir = logdir or ProjVar.get_var('LOG_DIR')
    logfile = os.path.join(logdir, 'scp_files.log')

    with open(logfile, mode='a') as f:
        local_child = pexpect.spawn(command=cmd, encoding='utf-8', logfile=f)
        index = local_child.expect([pexpect.EOF, 'assword:', 'yes/no'],
                                   timeout=timeout)

        if index == 2:
            local_child.sendline('yes')
            index = local_child.expect([pexpect.EOF, 'assword:'],
                                       timeout=timeout)

        if index == 1:
            local_child.sendline(remote_password)
            local_child.expect(pexpect.EOF, timeout=timeout)

def get_tenant_name(auth_info=None):
    """
    Get name of given tenant. If None is given, primary tenant name will be
    returned.

    Args:
        auth_info (dict|None): Tenant dict

    Returns:
        str: name of the tenant

    """
    if auth_info is None:
        auth_info = Tenant.get_primary()
    return auth_info['tenant']

class Count:
|
||||
__vm_count = 0
|
||||
__flavor_count = 0
|
||||
__volume_count = 0
|
||||
__image_count = 0
|
||||
__server_group = 0
|
||||
__router = 0
|
||||
__subnet = 0
|
||||
__other = 0
|
||||
|
||||
@classmethod
|
||||
def get_vm_count(cls):
|
||||
cls.__vm_count += 1
|
||||
return cls.__vm_count
|
||||
|
||||
@classmethod
|
||||
def get_flavor_count(cls):
|
||||
cls.__flavor_count += 1
|
||||
return cls.__flavor_count
|
||||
|
||||
@classmethod
|
||||
def get_volume_count(cls):
|
||||
cls.__volume_count += 1
|
||||
return cls.__volume_count
|
||||
|
||||
@classmethod
|
||||
def get_image_count(cls):
|
||||
cls.__image_count += 1
|
||||
return cls.__image_count
|
||||
|
||||
@classmethod
|
||||
def get_sever_group_count(cls):
|
||||
cls.__server_group += 1
|
||||
return cls.__server_group
|
||||
|
||||
@classmethod
|
||||
def get_router_count(cls):
|
||||
cls.__router += 1
|
||||
return cls.__router
|
||||
|
||||
@classmethod
|
||||
def get_subnet_count(cls):
|
||||
cls.__subnet += 1
|
||||
return cls.__subnet
|
||||
|
||||
@classmethod
|
||||
def get_other_count(cls):
|
||||
cls.__other += 1
|
||||
return cls.__other


class NameCount:
    __names_count = {
        'vm': 0,
        'flavor': 0,
        'volume': 0,
        'image': 0,
        'server_group': 0,
        'subnet': 0,
        'heat_stack': 0,
        'qos': 0,
        'other': 0,
    }

    @classmethod
    def get_number(cls, resource_type='other'):
        cls.__names_count[resource_type] += 1
        return cls.__names_count[resource_type]

    @classmethod
    def get_valid_types(cls):
        return list(cls.__names_count.keys())


def get_unique_name(name_str, existing_names=None, resource_type='other'):
    """
    Get a unique name string by appending a number to given name_str

    Args:
        name_str (str): partial name string
        existing_names (list): names to avoid
        resource_type (str): type of resource. Valid types are given by
            NameCount.get_valid_types()

    Returns (str): unique name

    """
    valid_types = NameCount.get_valid_types()
    if resource_type not in valid_types:
        raise ValueError(
            "Invalid resource_type provided. Valid types: {}".format(
                valid_types))

    if existing_names:
        if resource_type in ['image', 'volume', 'flavor']:
            unique_name = name_str
        else:
            unique_name = "{}-{}".format(name_str, NameCount.get_number(
                resource_type=resource_type))

        for i in range(50):
            if unique_name not in existing_names:
                return unique_name

            unique_name = "{}-{}".format(name_str, NameCount.get_number(
                resource_type=resource_type))
        else:
            raise LookupError("Cannot find unique name.")
    else:
        unique_name = "{}-{}".format(name_str, NameCount.get_number(
            resource_type=resource_type))

    return unique_name
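# The suffix-counter scheme above can be exercised standalone. The sketch
# below is a simplified, framework-free restatement (assumption: the
# hypothetical _counters dict stands in for NameCount):

```python
# Framework-free sketch of the suffix-counter naming scheme used by
# get_unique_name (assumption: _counters stands in for NameCount).
_counters = {'vm': 0, 'other': 0}


def unique_name_sketch(name_str, existing_names, resource_type='other'):
    # Append an increasing counter until an unused name is found.
    for _ in range(50):
        _counters[resource_type] += 1
        candidate = '{}-{}'.format(name_str, _counters[resource_type])
        if candidate not in existing_names:
            return candidate
    raise LookupError("Cannot find unique name.")
```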


def parse_cpus_list(cpus):
    """
    Convert a human friendly cpu list to a list of integers.
    e.g., '5-7,41-43, 43, 45' >> [5, 6, 7, 41, 42, 43, 43, 45]

    Args:
        cpus (str):

    Returns (list): list of integers

    """
    if isinstance(cpus, str):
        if cpus.strip() == '':
            return []

        cpus = cpus.split(sep=',')

    cpus_list = list(cpus)

    for val in cpus:
        # convert '3-6' to [3, 4, 5, 6]
        if '-' in val:
            cpus_list.remove(val)
            min_, max_ = val.split(sep='-')

            # unpinned:20; pinned_cpulist:-, unpinned_cpulist:10-19,30-39
            if min_ != '':
                cpus_list += list(range(int(min_), int(max_) + 1))

    return sorted([int(val) for val in cpus_list])
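# Quick standalone check of the range-expansion behaviour documented above
# (pure stdlib; a simplified restatement for illustration, not the helper
# itself):

```python
# Simplified, standalone restatement of parse_cpus_list's range expansion.
def parse_cpus_sketch(cpus):
    out = []
    for val in cpus.split(','):
        if '-' in val:
            lo, hi = val.split('-')
            if lo.strip():
                out.extend(range(int(lo), int(hi) + 1))
        else:
            out.append(int(val))
    return sorted(out)
```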


def get_timedelta_for_isotimes(time1, time2):
    """
    Calculate the time difference between two timestamps.

    Args:
        time1 (str): such as "2016-08-16T12:59:45.440697+00:00"
        time2 (str):

    Returns (datetime.timedelta): time2 - time1

    """

    def _parse_time(time_):
        time_ = time_.strip().split(sep='.')[0].split(sep='+')[0]
        if 'T' in time_:
            pattern = "%Y-%m-%dT%H:%M:%S"
        elif ' ' in time_:
            pattern = "%Y-%m-%d %H:%M:%S"
        else:
            raise ValueError("Unknown time format: {}".format(time_))
        time_datetime = datetime.strptime(time_, pattern)
        return time_datetime

    time1_datetime = _parse_time(time_=time1)
    time2_datetime = _parse_time(time_=time2)

    return time2_datetime - time1_datetime
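# Standalone example of the timestamp-delta computation above (stdlib only;
# fractional seconds and the UTC offset are stripped first, mirroring
# _parse_time):

```python
# Compute the delta between two ISO-like timestamps after stripping the
# fractional-second and timezone-offset suffixes.
from datetime import datetime

t1 = "2016-08-16T12:59:45.440697+00:00"
t2 = "2016-08-16T13:01:15.000000+00:00"
fmt = "%Y-%m-%dT%H:%M:%S"
d1 = datetime.strptime(t1.split('.')[0].split('+')[0], fmt)
d2 = datetime.strptime(t2.split('.')[0].split('+')[0], fmt)
delta = d2 - d1  # timedelta of 90 seconds
```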


def _execute_with_openstack_cli():
    """
    DO NOT USE THIS IN TEST FUNCTIONS!
    """
    return ProjVar.get_var('OPENSTACK_CLI')


def get_date_in_format(ssh_client=None, date_format="%Y%m%d %T"):
    """
    Get date in given format.

    Args:
        ssh_client (SSHClient):
        date_format (str): Please see date --help for valid format strings

    Returns (str): date output in given format

    """
    if ssh_client is None:
        ssh_client = ControllerClient.get_active_controller()
    return ssh_client.exec_cmd("date +'{}'".format(date_format),
                               fail_ok=False)[1]


def write_to_file(file_path, content, mode='a'):
    """
    Write content to specified local file

    Args:
        file_path (str): file path on localhost
        content (str): content to write to file
        mode (str): file operation mode. Default is 'a' (append to end of
            file).

    Returns: None

    """
    time_stamp = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime())
    with open(file_path, mode=mode) as f:
        f.write('\n-----------------[{}]-----------------\n{}\n'.format(
            time_stamp, content))


def collect_software_logs(con_ssh=None):
    """
    Collect logs from all hosts via 'collect all', then copy the resulting
    tarball from the active controller to the local log directory.
    """
    if not con_ssh:
        con_ssh = ControllerClient.get_active_controller()
    LOG.info("Collecting all hosts logs...")
    con_ssh.exec_cmd('source /etc/platform/openrc', get_exit_code=False)
    con_ssh.send('collect all')

    expect_list = ['.*password for sysadmin:', 'collecting data.',
                   con_ssh.prompt]
    index_1 = con_ssh.expect(expect_list, timeout=20)
    if index_1 == 2:
        LOG.error(
            "Something is wrong with collect all. Check ssh console log for "
            "detail.")
        return
    elif index_1 == 0:
        con_ssh.send(con_ssh.password)
        con_ssh.expect('collecting data')

    index_2 = con_ssh.expect(['/scratch/ALL_NODES.*', con_ssh.prompt],
                             timeout=1200)
    if index_2 == 0:
        output = con_ssh.cmd_output
        con_ssh.expect()
        logpath = re.findall('.*(/scratch/ALL_NODES_.*.tar).*', output)[0]
        LOG.info(
            "\n################### TiS server log path: {}".format(logpath))
    else:
        LOG.error("Collecting logs failed. No ALL_NODES logs found.")
        return

    dest_path = ProjVar.get_var('LOG_DIR')
    try:
        LOG.info("Copying log file from active controller to local {}".format(
            dest_path))
        scp_from_active_controller_to_localhost(
            source_path=logpath, dest_path=dest_path, timeout=300)
        LOG.info("{} is successfully copied to local directory: {}".format(
            logpath, dest_path))
    except Exception as e:
        LOG.warning("Failed to copy log file to localhost.")
        LOG.error(e, exc_info=True)


def parse_args(args_dict, repeat_arg=False, vals_sep=' '):
    """
    Parse args dictionary and convert it to a string

    Args:
        args_dict (dict): key/value pairs
        repeat_arg: if value is tuple, list, dict, should the arg be repeated.
            e.g., True for --nic in nova boot. False for -m in gnocchi
            measures aggregation
        vals_sep (str): separator to join multiple vals. Only applicable when
            repeat_arg=False.

    Returns (str):

    """

    def convert_val_dict(key__, vals_dict, repeat_key):
        vals_ = []
        for k, v in vals_dict.items():
            # only quote string values that contain spaces
            if isinstance(v, str) and ' ' in v:
                v = '"{}"'.format(v)
            vals_.append('{}={}'.format(k, v))
        if repeat_key:
            args_str = ' ' + ' '.join(
                ['{} {}'.format(key__, v_) for v_ in vals_])
        else:
            args_str = ' {} {}'.format(key__, vals_sep.join(vals_))
        return args_str

    args = ''
    for key, val in args_dict.items():
        if val is None:
            continue

        key = key if key.startswith('-') else '--{}'.format(key)
        if isinstance(val, str):
            if ' ' in val:
                val = '"{}"'.format(val)
            args += ' {}={}'.format(key, val)
        elif isinstance(val, bool):
            if val:
                args += ' {}'.format(key)
        elif isinstance(val, (int, float)):
            args += ' {}={}'.format(key, val)
        elif isinstance(val, dict):
            args += convert_val_dict(key__=key, vals_dict=val,
                                     repeat_key=repeat_arg)
        elif isinstance(val, (list, tuple)):
            if repeat_arg:
                for val_ in val:
                    if isinstance(val_, dict):
                        args += convert_val_dict(key__=key, vals_dict=val_,
                                                 repeat_key=False)
                    else:
                        args += ' {}={}'.format(key, val_)
            else:
                args += ' {}={}'.format(key, vals_sep.join(val))
        else:
            raise ValueError(
                "Unrecognized value type. Key: {}; value: {}".format(key, val))

    return args.strip()
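# Minimal standalone illustration of the dict-to-CLI-args conversion above
# (covers only str/bool/int values and None-skipping; a sketch, not the full
# helper):

```python
# Convert a dict of options into a CLI argument string, skipping None values
# and emitting bare flags for True booleans.
def parse_args_sketch(args_dict):
    parts = []
    for key, val in args_dict.items():
        if val is None:
            continue
        key = key if key.startswith('-') else '--{}'.format(key)
        if isinstance(val, bool):
            if val:
                parts.append(key)
        else:
            parts.append('{}={}'.format(key, val))
    return ' '.join(parts)
```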


def get_symlink(ssh_client, file_path):
    code, output = ssh_client.exec_cmd(
        'ls -l {} | grep --color=never ""'.format(file_path))
    if code != 0:
        LOG.warning('{} not found!'.format(file_path))
        return None

    res = re.findall('> (.*)', output)
    if not res:
        LOG.warning('No symlink found for {}'.format(file_path))
        return None

    link = res[0].strip()
    return link


def is_file(filename, ssh_client):
    code = ssh_client.exec_cmd('test -f {}'.format(filename), fail_ok=True)[0]
    return 0 == code


def is_directory(dirname, ssh_client):
    code = ssh_client.exec_cmd('test -d {}'.format(dirname), fail_ok=True)[0]
    return 0 == code


def lab_time_now(con_ssh=None, date_format='%Y-%m-%dT%H:%M:%S'):
    if not con_ssh:
        con_ssh = ControllerClient.get_active_controller()

    date_cmd_format = date_format + '.%N'
    timestamp = get_date_in_format(ssh_client=con_ssh,
                                   date_format=date_cmd_format)
    with_milliseconds = timestamp.split('.')[0] + '.{}'.format(
        int(int(timestamp.split('.')[1]) / 1000))
    format1 = date_format + '.%f'
    parsed = datetime.strptime(with_milliseconds, format1)

    return with_milliseconds.split('.')[0], parsed


@contextmanager
def ssh_to_remote_node(host, username=None, password=None, prompt=None,
                       ssh_client=None, use_telnet=False,
                       telnet_session=None):
    """
    ssh to an external node from sshclient.

    Args:
        host (str|None): hostname or ip address of remote node to ssh to.
        username (str):
        password (str):
        prompt (str):
        ssh_client (SSHClient): client to ssh from
        use_telnet:
        telnet_session:

    Returns (SSHClient): ssh client of the host

    Examples:
        with ssh_to_remote_node('128.224.150.92') as remote_ssh:
            remote_ssh.exec_cmd(cmd)

    """

    if not host:
        raise exceptions.SSHException(
            "Remote node hostname or ip address must be provided")

    if use_telnet and not telnet_session:
        raise exceptions.SSHException(
            "Telnet session cannot be none if using telnet.")

    if not ssh_client and not use_telnet:
        ssh_client = ControllerClient.get_active_controller()

    if not use_telnet:
        from keywords.security_helper import LinuxUser
        default_user, default_password = LinuxUser.get_current_user_password()
    else:
        default_user = HostLinuxUser.get_user()
        default_password = HostLinuxUser.get_password()

    user = username if username else default_user
    password = password if password else default_password
    if use_telnet:
        original_host = telnet_session.exec_cmd('hostname')[1]
    else:
        original_host = ssh_client.host

    if not prompt:
        prompt = '.*' + host + r'\:~\$'

    remote_ssh = SSHClient(host, user=user, password=password,
                           initial_prompt=prompt)
    remote_ssh.connect()
    current_host = remote_ssh.host
    if not current_host == host:
        raise exceptions.SSHException(
            "Current host is {} instead of {}".format(current_host, host))
    try:
        yield remote_ssh
    finally:
        if current_host != original_host:
            remote_ssh.close()


@@ -0,0 +1,853 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


"""
Container/Application related helper functions for non-kubectl commands.
For example:
    - docker commands
    - system application-xxx commands
    - helm commands

"""

import os
import time

import yaml

from utils import cli, exceptions, table_parser
from utils.tis_log import LOG
from utils.clients.ssh import ControllerClient
from consts.auth import Tenant
from consts.proj_vars import ProjVar
from consts.stx import AppStatus, Prompt, EventLogID, Container
from consts.filepaths import StxPath
from keywords import system_helper, host_helper


def exec_helm_upload_cmd(tarball, repo=None, timeout=120, con_ssh=None,
                         fail_ok=False):
    """Upload given helm chart tarball to given repo via helm-upload cmd."""
    if not con_ssh:
        con_ssh = ControllerClient.get_active_controller()

    if not repo:
        repo = 'starlingx'
    cmd = 'helm-upload {} {}'.format(repo, tarball)
    con_ssh.send(cmd)
    pw_prompt = Prompt.PASSWORD_PROMPT
    prompts = [con_ssh.prompt, pw_prompt]

    index = con_ssh.expect(prompts, timeout=timeout, searchwindowsize=100,
                           fail_ok=fail_ok)
    if index == 1:
        con_ssh.send(con_ssh.password)
        prompts.remove(pw_prompt)
        con_ssh.expect(prompts, timeout=timeout, searchwindowsize=100,
                       fail_ok=fail_ok)

    code, output = con_ssh._process_exec_result(rm_date=True,
                                                get_exit_code=True)
    if code != 0 and not fail_ok:
        raise exceptions.SSHExecCommandFailed(
            "Non-zero return code for cmd: {}. Output: {}".format(cmd, output))

    return code, output


def exec_docker_cmd(sub_cmd, args, timeout=120, con_ssh=None, fail_ok=False):
    if not con_ssh:
        con_ssh = ControllerClient.get_active_controller()

    cmd = 'docker {} {}'.format(sub_cmd, args)
    code, output = con_ssh.exec_sudo_cmd(cmd, expect_timeout=timeout,
                                         fail_ok=fail_ok)

    return code, output


def upload_helm_charts(tar_file, repo=None, delete_first=False, con_ssh=None,
                       timeout=120, fail_ok=False):
    """
    Upload helm charts via helm-upload cmd

    Args:
        tar_file:
        repo:
        delete_first:
        con_ssh:
        timeout:
        fail_ok:

    Returns (tuple):
        (0, <path_to_charts>)
        (1, <std_err>)
        (2, <hostname for host that does not have helm charts in expected dir>)

    """
    if not con_ssh:
        con_ssh = ControllerClient.get_active_controller()

    helm_dir = os.path.normpath(StxPath.HELM_CHARTS_DIR)
    if not repo:
        repo = 'starlingx'
    file_path = os.path.join(helm_dir, repo, os.path.basename(tar_file))
    current_host = con_ssh.get_hostname()
    controllers = [current_host]
    if not system_helper.is_aio_simplex(con_ssh=con_ssh):
        con_name = 'controller-1' if controllers[0] == 'controller-0' \
            else 'controller-0'
        controllers.append(con_name)

    if delete_first:
        for host in controllers:
            with host_helper.ssh_to_host(hostname=host,
                                         con_ssh=con_ssh) as host_ssh:
                if host_ssh.file_exists(file_path):
                    host_ssh.exec_sudo_cmd('rm -f {}'.format(file_path))

    code, output = exec_helm_upload_cmd(tarball=tar_file, repo=repo,
                                        timeout=timeout, con_ssh=con_ssh,
                                        fail_ok=fail_ok)
    if code != 0:
        return 1, output

    file_exist = con_ssh.file_exists(file_path)
    if not file_exist:
        raise exceptions.ContainerError(
            "{} not found on {} after helm-upload".format(file_path,
                                                          current_host))

    LOG.info("Helm charts {} uploaded successfully".format(file_path))
    return 0, file_path


def upload_app(tar_file, app_name=None, app_version=None, check_first=True,
               fail_ok=False, uploaded_timeout=300,
               con_ssh=None, auth_info=Tenant.get('admin_platform')):
    """
    Upload an application via 'system application-upload'

    Args:
        tar_file:
        app_name:
        app_version:
        check_first:
        fail_ok:
        uploaded_timeout:
        con_ssh:
        auth_info:

    Returns (tuple):

    """
    if check_first and get_apps(application=app_name, con_ssh=con_ssh,
                                auth_info=auth_info):
        msg = '{} already exists. Do nothing.'.format(app_name)
        LOG.info(msg)
        return -1, msg

    args = ''
    if app_name:
        args += '-n {} '.format(app_name)
    if app_version:
        args += '-v {} '.format(app_version)
    args = '{}{}'.format(args, tar_file)
    code, output = cli.system('application-upload', args, ssh_client=con_ssh,
                              fail_ok=fail_ok, auth_info=auth_info)

    if code > 0:
        return 1, output

    res = wait_for_apps_status(apps=app_name, status=AppStatus.UPLOADED,
                               timeout=uploaded_timeout,
                               con_ssh=con_ssh, auth_info=auth_info,
                               fail_ok=fail_ok)[0]
    if not res:
        return 2, "{} failed to upload".format(app_name)

    msg = '{} uploaded successfully'.format(app_name)
    LOG.info(msg)
    return 0, msg


def get_apps(field='status', application=None, con_ssh=None,
             auth_info=Tenant.get('admin_platform'),
             rtn_dict=False, **kwargs):
    """
    Get application values for given apps and fields via
    system application-list

    Args:
        field (str|list|tuple):
        application (str|list|tuple):
        con_ssh:
        auth_info:
        rtn_dict:
        **kwargs: extra filters other than application

    Returns (list|dict):
        list of list, or
        dict with app name(str) as key and values(list) for given fields for
        each app as value

    """
    table_ = table_parser.table(
        cli.system('application-list', ssh_client=con_ssh,
                   auth_info=auth_info)[1])
    if application:
        kwargs['application'] = application

    return table_parser.get_multi_values(table_, fields=field,
                                         rtn_dict=rtn_dict, zip_values=True,
                                         **kwargs)


def get_app_values(app_name, fields, con_ssh=None,
                   auth_info=Tenant.get('admin_platform')):
    """
    Get values from system application-show

    Args:
        app_name:
        fields (str|list|tuple):
        con_ssh:
        auth_info:

    Returns (list):

    """
    if isinstance(fields, str):
        fields = [fields]

    table_ = table_parser.table(
        cli.system('application-show', app_name, ssh_client=con_ssh,
                   auth_info=auth_info)[1],
        combine_multiline_entry=True)
    values = table_parser.get_multi_values_two_col_table(table_, fields=fields)
    return values


def wait_for_apps_status(apps, status, timeout=360, check_interval=5,
                         fail_ok=False, con_ssh=None,
                         auth_info=Tenant.get('admin_platform')):
    """
    Wait for applications to reach expected status via system application-list

    Args:
        apps:
        status:
        timeout:
        check_interval:
        fail_ok:
        con_ssh:
        auth_info:

    Returns (tuple): (<res>(bool), <failed_apps>(list|None))

    """
    status = '' if not status else status
    if isinstance(apps, str):
        apps = [apps]
    apps_to_check = list(apps)
    check_failed = []
    end_time = time.time() + timeout

    LOG.info(
        "Wait for {} application(s) to reach status: {}".format(apps, status))
    while time.time() < end_time:
        apps_status = get_apps(application=apps_to_check,
                               field=('application', 'status'),
                               con_ssh=con_ssh, auth_info=auth_info)
        apps_status = {item[0]: item[1] for item in apps_status if item}

        checked = []
        for app in apps_to_check:
            current_app_status = apps_status.get(app, '')
            if current_app_status == status:
                checked.append(app)
            elif current_app_status.endswith('ed'):
                check_failed.append(app)
                checked.append(app)

        apps_to_check = list(set(apps_to_check) - set(checked))
        if not apps_to_check:
            if check_failed:
                msg = '{} failed to reach status - {}'.format(check_failed,
                                                              status)
                LOG.warning(msg)
                if fail_ok:
                    return False, check_failed
                else:
                    raise exceptions.ContainerError(msg)

            LOG.info("{} reached expected status {}".format(apps, status))
            return True, None

        time.sleep(check_interval)

    check_failed += apps_to_check
    msg = '{} did not reach status {} within {}s'.format(check_failed, status,
                                                         timeout)
    LOG.warning(msg)
    if fail_ok:
        return False, check_failed
    raise exceptions.ContainerError(msg)
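# The poll-until-status pattern above can be sketched generically (stdlib
# only; the hypothetical get_status callable stands in for the real CLI
# query):

```python
# Poll a status callable until it returns the expected value or the timeout
# expires; returns True on success, False on timeout.
import time


def wait_for_status(get_status, expected, timeout=10, check_interval=1):
    end_time = time.time() + timeout
    while time.time() < end_time:
        if get_status() == expected:
            return True
        time.sleep(check_interval)
    return False
```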


def apply_app(app_name, check_first=False, fail_ok=False, applied_timeout=300,
              check_interval=10,
              wait_for_alarm_gone=True, con_ssh=None,
              auth_info=Tenant.get('admin_platform')):
    """
    Apply/Re-apply application via system application-apply. Check that the
    status reaches 'applied'.

    Args:
        app_name (str):
        check_first:
        fail_ok:
        applied_timeout:
        check_interval:
        wait_for_alarm_gone (bool):
        con_ssh:
        auth_info:

    Returns (tuple):
        (-1, "<app_name> is already applied. Do nothing.")    # only returned
            if check_first=True.
        (0, "<app_name> (re)applied successfully")
        (1, <std_err>)    # cli rejected
        (2, "<app_name> failed to apply")    # did not reach applied status
            after apply.

    """
    if check_first:
        app_status = get_apps(application=app_name, field='status',
                              con_ssh=con_ssh, auth_info=auth_info)
        if app_status and app_status[0] == AppStatus.APPLIED:
            msg = '{} is already applied. Do nothing.'.format(app_name)
            LOG.info(msg)
            return -1, msg

    LOG.info("Apply application: {}".format(app_name))
    code, output = cli.system('application-apply', app_name,
                              ssh_client=con_ssh, fail_ok=fail_ok,
                              auth_info=auth_info)
    if code > 0:
        return 1, output

    res = wait_for_apps_status(apps=app_name, status=AppStatus.APPLIED,
                               timeout=applied_timeout,
                               check_interval=check_interval, con_ssh=con_ssh,
                               auth_info=auth_info, fail_ok=fail_ok)[0]
    if not res:
        return 2, "{} failed to apply".format(app_name)

    if wait_for_alarm_gone:
        alarm_id = EventLogID.CONFIG_OUT_OF_DATE
        if system_helper.wait_for_alarm(alarm_id=alarm_id,
                                        entity_id='controller',
                                        timeout=15, fail_ok=True,
                                        auth_info=auth_info,
                                        con_ssh=con_ssh)[0]:
            system_helper.wait_for_alarm_gone(alarm_id=alarm_id,
                                              entity_id='controller',
                                              timeout=120,
                                              check_interval=10,
                                              con_ssh=con_ssh,
                                              auth_info=auth_info)

    msg = '{} (re)applied successfully'.format(app_name)
    LOG.info(msg)
    return 0, msg


def delete_app(app_name, check_first=True, fail_ok=False, applied_timeout=300,
               con_ssh=None,
               auth_info=Tenant.get('admin_platform')):
    """
    Delete an application via system application-delete. Verify application
    is no longer listed.

    Args:
        app_name:
        check_first:
        fail_ok:
        applied_timeout:
        con_ssh:
        auth_info:

    Returns (tuple):
        (-1, "<app_name> does not exist. Do nothing.")
        (0, "<app_name> deleted successfully")
        (1, <std_err>)
        (2, "<app_name> failed to delete")

    """

    if check_first:
        app_vals = get_apps(application=app_name, field='status',
                            con_ssh=con_ssh, auth_info=auth_info)
        if not app_vals:
            msg = '{} does not exist. Do nothing.'.format(app_name)
            LOG.info(msg)
            return -1, msg

    code, output = cli.system('application-delete', app_name,
                              ssh_client=con_ssh, fail_ok=fail_ok,
                              auth_info=auth_info)
    if code > 0:
        return 1, output

    res = wait_for_apps_status(apps=app_name, status=None,
                               timeout=applied_timeout,
                               con_ssh=con_ssh, auth_info=auth_info,
                               fail_ok=fail_ok)[0]
    if not res:
        return 2, "{} failed to delete".format(app_name)

    msg = '{} deleted successfully'.format(app_name)
    LOG.info(msg)
    return 0, msg


def remove_app(app_name, check_first=True, fail_ok=False, applied_timeout=300,
               con_ssh=None,
               auth_info=Tenant.get('admin_platform')):
    """
    Remove applied application via system application-remove. Verify it is in
    'uploaded' status.

    Args:
        app_name (str):
        check_first:
        fail_ok:
        applied_timeout:
        con_ssh:
        auth_info:

    Returns (tuple):
        (-1, "<app_name> is not applied. Do nothing.")
        (0, "<app_name> removed successfully")
        (1, <std_err>)
        (2, "<app_name> failed to remove")    # Did not reach uploaded status

    """

    if check_first:
        app_vals = get_apps(application=app_name, field='status',
                            con_ssh=con_ssh, auth_info=auth_info)
        if not app_vals or app_vals[0] in (AppStatus.UPLOADED,
                                           AppStatus.UPLOAD_FAILED):
            msg = '{} is not applied. Do nothing.'.format(app_name)
            LOG.info(msg)
            return -1, msg

    code, output = cli.system('application-remove', app_name,
                              ssh_client=con_ssh, fail_ok=fail_ok,
                              auth_info=auth_info)
    if code > 0:
        return 1, output

    res = wait_for_apps_status(apps=app_name, status=AppStatus.UPLOADED,
                               timeout=applied_timeout,
                               con_ssh=con_ssh, auth_info=auth_info,
                               fail_ok=fail_ok)[0]
    if not res:
        return 2, "{} failed to remove".format(app_name)

    msg = '{} removed successfully'.format(app_name)
    LOG.info(msg)
    return 0, msg


def get_docker_reg_addr(con_ssh=None):
    """
    Get local docker registry ip address from the docker conf file.

    Args:
        con_ssh:

    Returns (str):

    """
    if not con_ssh:
        con_ssh = ControllerClient.get_active_controller()

    output = con_ssh.exec_cmd(
        'grep --color=never "addr: " {}'.format(StxPath.DOCKER_CONF),
        fail_ok=False)[1]
    reg_addr = output.split('addr: ')[1].strip()
    return reg_addr


def pull_docker_image(name, tag=None, digest=None, con_ssh=None, timeout=300,
                      fail_ok=False):
    """
    Pull docker image via docker image pull. Verify the image is listed in
    docker image list.

    Args:
        name:
        tag:
        digest:
        con_ssh:
        timeout:
        fail_ok:

    Returns (tuple):
        (0, <docker image ID>)
        (1, <std_err>)

    """

    args = '{}'.format(name.strip())
    if tag:
        args += ':{}'.format(tag)
    elif digest:
        args += '@{}'.format(digest)

    LOG.info("Pull docker image {}".format(args))
    code, out = exec_docker_cmd('image pull', args, timeout=timeout,
                                fail_ok=fail_ok, con_ssh=con_ssh)
    if code != 0:
        return 1, out

    image_id = get_docker_images(repo=name, tag=tag, field='IMAGE ID',
                                 con_ssh=con_ssh, fail_ok=False)[0]
    LOG.info(
        'docker image {} successfully pulled. ID: {}'.format(args, image_id))

    return 0, image_id


def login_to_docker(registry=None, user=None, password=None, con_ssh=None,
                    fail_ok=False):
    """
    Login to docker registry

    Args:
        registry (str|None): default docker registry will be used when None
        user (str|None): admin user will be used when None
        password (str|None): admin password will be used when None
        con_ssh (SSHClient|None):
        fail_ok (bool):

    Returns (tuple):
        (0, <cmd_args>(str))    # login succeeded
        (1, <std_err>(str))    # login failed

    """
    if not user:
        user = 'admin'
    if not password:
        password = Tenant.get('admin_platform').get('password')
    if not registry:
        registry = Container.LOCAL_DOCKER_REG

    args = '-u {} -p {} {}'.format(user, password, registry)
    LOG.info("Login to docker registry {}".format(registry))
    code, out = exec_docker_cmd('login', args, timeout=60, fail_ok=fail_ok,
                                con_ssh=con_ssh)
    if code != 0:
        return 1, out

    LOG.info('Logged into docker registry successfully: {}'.format(registry))
    return 0, args


def push_docker_image(name, tag=None, login_registry=None, con_ssh=None,
                      timeout=300, fail_ok=False):
    """
    Push docker image via docker image push.

    Args:
        name:
        tag:
        login_registry (str|None): when set, login to given docker registry
            before push
        con_ssh:
        timeout:
        fail_ok:

    Returns (tuple):
        (0, <args_used>)
        (1, <std_err>)

    """
    args = '{}'.format(name.strip())
    if tag:
        args += ':{}'.format(tag)

    if login_registry:
        login_to_docker(registry=login_registry, con_ssh=con_ssh)

    LOG.info("Push docker image: {}".format(args))
    code, out = exec_docker_cmd('image push', args, timeout=timeout,
                                fail_ok=fail_ok, con_ssh=con_ssh)
    if code != 0:
        return 1, out

    LOG.info('docker image {} successfully pushed.'.format(args))
    return 0, args


def tag_docker_image(source_image, target_name, source_tag=None,
                     target_tag=None, con_ssh=None, timeout=300,
                     fail_ok=False):
    """
    Tag docker image via docker image tag. Verify the image is tagged via
    docker image list.

    Args:
        source_image:
        target_name:
        source_tag:
        target_tag:
        con_ssh:
        timeout:
        fail_ok:

    Returns (tuple):
        (0, <target_args>)
        (1, <std_err>)

    """
    source_args = source_image.strip()
    if source_tag:
        source_args += ':{}'.format(source_tag)

    target_args = target_name.strip()
    if target_tag:
        target_args += ':{}'.format(target_tag)

    LOG.info("Tag docker image {} as {}".format(source_args, target_args))
    args = '{} {}'.format(source_args, target_args)
    code, out = exec_docker_cmd('image tag', args, timeout=timeout,
                                fail_ok=fail_ok, con_ssh=con_ssh)
    if code != 0:
        return 1, out

    if not get_docker_images(repo=target_name, tag=target_tag,
                             con_ssh=con_ssh, fail_ok=False):
        raise exceptions.ContainerError(
            "Docker image {} is not listed after tagging {}".format(
                target_name, source_image))

    LOG.info('docker image {} successfully tagged as {}.'.format(
        source_args, target_args))
    return 0, target_args


def remove_docker_images(images, force=False, con_ssh=None, timeout=300,
                         fail_ok=False):
    """
    Remove docker image(s) via docker image rm

    Args:
        images (str|tuple|list):
        force (bool):
        con_ssh:
        timeout:
        fail_ok:

    Returns (tuple):
        (0, <std_out>)
        (1, <std_err>)

    """
    if isinstance(images, str):
        images = (images,)

    LOG.info("Remove docker images: {}".format(images))
    args = ' '.join(images)
    if force:
        args = '--force {}'.format(args)

    code, out = exec_docker_cmd('image rm', args, timeout=timeout,
                                fail_ok=fail_ok, con_ssh=con_ssh)
    return code, out
|
||||
|
||||
|
||||
def get_docker_images(repo=None, tag=None, field='IMAGE ID', con_ssh=None,
                      fail_ok=False):
    """
    Get values for given docker image via 'docker image ls <repo>'
    Args:
        repo (str):
        tag (str|None):
        field (str|tuple|list):
        con_ssh:
        fail_ok:

    Returns (list|None): return None if no docker images returned at all due
        to cmd failure

    """
    args = None
    if repo:
        args = repo
        if tag:
            args += ':{}'.format(tag)
    code, output = exec_docker_cmd(sub_cmd='image ls', args=args,
                                   fail_ok=fail_ok, con_ssh=con_ssh)
    if code != 0:
        return None

    table_ = table_parser.table_kube(output)
    if not table_['values']:
        if fail_ok:
            return None
        else:
            raise exceptions.ContainerError(
                "docker image {} does not exist".format(args))

    values = table_parser.get_multi_values(table_, fields=field,
                                           zip_values=True)

    return values

def get_helm_overrides(field='overrides namespaces', app_name='stx-openstack',
                       charts=None, auth_info=Tenant.get('admin_platform'),
                       con_ssh=None):
    """
    Get helm overrides values via system helm-override-list
    Args:
        field (str):
        app_name:
        charts (None|str|list|tuple):
        auth_info:
        con_ssh:

    Returns (list):

    """
    table_ = table_parser.table(
        cli.system('helm-override-list', app_name, ssh_client=con_ssh,
                   auth_info=auth_info)[1])

    if charts:
        table_ = table_parser.filter_table(table_, **{'chart name': charts})

    vals = table_parser.get_multi_values(table_, fields=field, evaluate=True)

    return vals

def get_helm_override_values(chart, namespace, app_name='stx-openstack',
                             fields=('combined_overrides',),
                             auth_info=Tenant.get('admin_platform'),
                             con_ssh=None):
    """
    Get helm-override values for given chart via system helm-override-show
    Args:
        chart (str):
        namespace (str):
        app_name (str):
        fields (str|tuple|list):
        auth_info:
        con_ssh:

    Returns (list): list of parsed yaml formatted output. e.g., list of dict,
        list of list, list of str

    """
    args = '{} {} {}'.format(app_name, chart, namespace)
    table_ = table_parser.table(
        cli.system('helm-override-show', args, ssh_client=con_ssh,
                   auth_info=auth_info)[1],
        rstrip_value=True)

    if isinstance(fields, str):
        fields = (fields,)

    values = []
    for field in fields:
        value = table_parser.get_value_two_col_table(table_, field=field,
                                                     merge_lines=False)
        values.append(yaml.load('\n'.join(value)))

    return values

def __convert_kv(k, v):
    if '.' not in k:
        return {k: v}
    new_key, new_val = k.rsplit('.', maxsplit=1)
    return __convert_kv(new_key, {new_val: v})

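The recursion above expands a dotted helm-override key into nested dicts, one level per dot, working from the right. A self-contained copy demonstrating the effect (the key and value are illustrative):

```python
def convert_kv(k, v):
    # Same recursion as __convert_kv: peel the rightmost dotted segment
    # and wrap the value one dict deeper until no dots remain.
    if '.' not in k:
        return {k: v}
    new_key, new_val = k.rsplit('.', maxsplit=1)
    return convert_kv(new_key, {new_val: v})

nested = convert_kv('conf.nova.DEFAULT.workers', 2)
```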
def update_helm_override(chart, namespace, app_name='stx-openstack',
                         yaml_file=None, kv_pairs=None,
                         reset_vals=False, reuse_vals=False,
                         auth_info=Tenant.get('admin_platform'),
                         con_ssh=None, fail_ok=False):
    """
    Update helm_override values for given chart
    Args:
        chart:
        namespace:
        app_name:
        yaml_file:
        kv_pairs:
        reset_vals:
        reuse_vals:
        fail_ok:
        con_ssh:
        auth_info:

    Returns (tuple):
        (0, <overrides>(str|list|dict))  # cmd accepted.
        (1, <std_err>)  # system helm-override-update cmd rejected

    """
    args = '{} {} {}'.format(app_name, chart, namespace)
    if reset_vals:
        args = '--reset-values {}'.format(args)
    if reuse_vals:
        args = '--reuse-values {}'.format(args)
    if yaml_file:
        args = '--values {} {}'.format(yaml_file, args)
    if kv_pairs:
        cmd_overrides = ','.join(
            ['{}={}'.format(k, v) for k, v in kv_pairs.items()])
        args = '--set {} {}'.format(cmd_overrides, args)

    code, output = cli.system('helm-override-update', args, ssh_client=con_ssh,
                              fail_ok=fail_ok, auth_info=auth_info)
    if code != 0:
        return 1, output

    table_ = table_parser.table(output, rstrip_value=True)
    overrides = table_parser.get_value_two_col_table(table_, 'user_overrides')
    overrides = yaml.load('\n'.join(overrides))
    # yaml.load converts str to bool, int, float; but does not convert
    # None type. Updates are not verified here since it is rather complicated
    # to verify properly.
    LOG.info("Helm-override updated : {}".format(overrides))

    return 0, overrides

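The `--set` assembly in update_helm_override is a straight join of `key=value` pairs. A small sketch of that step in isolation (the chart/namespace positional args are example values):

```python
# Flatten kv_pairs into the '--set k1=v1,k2=v2 ...' form used by
# 'system helm-override-update', exactly as in update_helm_override.
kv_pairs = {'conf.nova.DEFAULT.workers': 2, 'replicas': 1}
cmd_overrides = ','.join(['{}={}'.format(k, v) for k, v in kv_pairs.items()])
args = '--set {} {}'.format(cmd_overrides, 'stx-openstack nova openstack')
```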
def is_stx_openstack_deployed(applied_only=False, con_ssh=None,
                              auth_info=Tenant.get('admin_platform'),
                              force_check=False):
    """
    Whether stx-openstack application is deployed.
    Args:
        applied_only (bool): if True, then only return True when application
            is in applied state
        con_ssh:
        auth_info:
        force_check:

    Returns (bool):

    """
    openstack_deployed = ProjVar.get_var('OPENSTACK_DEPLOYED')
    if not applied_only and not force_check and openstack_deployed is not None:
        return openstack_deployed

    openstack_status = get_apps(application='stx-openstack', field='status',
                                con_ssh=con_ssh, auth_info=auth_info)

    LOG.info("{}".format(openstack_status))

    res = False
    if openstack_status and 'appl' in openstack_status[0].lower():
        res = True
        if applied_only and openstack_status[0] != AppStatus.APPLIED:
            res = False

    return res
@ -0,0 +1,165 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from utils import cli
from utils import table_parser
from utils.tis_log import LOG

from consts.auth import Tenant
from keywords import common


def get_aggregated_measures(field='value', resource_type=None, metrics=None,
                            start=None, stop=None, overlap=None,
                            refresh=None, resource_ids=None, extra_query=None,
                            fail_ok=False, auth_info=Tenant.get('admin'),
                            con_ssh=None):
    """
    Get measurements via 'openstack metric measures aggregation'
    Args:
        field (str): header of a column
        resource_type (str|None): used in --resource-type <resource_type>
        metrics (str|list|tuple|None): used in --metric <metric1> [metric2 ...]
        start (str|None): used in --start <start>
        stop (str|None): used in --stop <stop>
        refresh (bool): used in --refresh
        overlap (str|None): overlap percentage. used in
            --needed-overlap <overlap>
        resource_ids (str|list|tuple|None): used in --query "id=<resource_id1>[
            or id=<resource_id2> ...]"
        extra_query (str|None): used in --query <extra_query>
        fail_ok:
        auth_info:
        con_ssh:

    Returns (tuple): (0, <list of values>) on success, or (1, <std_err>)
        when the cmd is rejected with fail_ok=True

    """
    LOG.info("Getting aggregated measurements...")
    args_dict = {
        'resource-type': resource_type,
        'metric': metrics,
        'start': start,
        'stop': stop,
        'needed-overlap': overlap,
        'refresh': refresh,
    }

    args = common.parse_args(args_dict, vals_sep=' ')
    query_str = ''
    if resource_ids:
        if isinstance(resource_ids, str):
            resource_ids = [resource_ids]
        resource_ids = ['id={}'.format(val) for val in resource_ids]
        query_str = ' or '.join(resource_ids)

    if extra_query:
        if resource_ids:
            query_str += ' and '
        query_str += '{}'.format(extra_query)

    if query_str:
        args += ' --query "{}"'.format(query_str)

    code, out = cli.openstack('metric measures aggregation', args,
                              ssh_client=con_ssh, fail_ok=fail_ok,
                              auth_info=auth_info)
    if code > 0:
        return 1, out

    table_ = table_parser.table(out)
    return 0, table_parser.get_values(table_, field)

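The `--query` string above is assembled purely from string joins: resource ids are OR-ed together, then any extra query is AND-ed on. A standalone sketch of that assembly (the ids and extra filter are made-up):

```python
# Build the gnocchi aggregation query string the same way
# get_aggregated_measures does: 'id=a or id=b and <extra>'.
resource_ids = ['uuid-1', 'uuid-2']          # made-up resource IDs
resource_ids = ['id={}'.format(val) for val in resource_ids]
query_str = ' or '.join(resource_ids)

extra_query = 'flavor_id=3'                   # hypothetical extra filter
if extra_query:
    if resource_ids:
        query_str += ' and '
    query_str += extra_query

arg = '--query "{}"'.format(query_str)
```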
def get_metric_values(metric_id=None, metric_name=None, resource_id=None,
                      fields='id', fail_ok=False,
                      auth_info=Tenant.get('admin'), con_ssh=None):
    """
    Get metric info via 'openstack metric show'
    Args:
        metric_id (str|None):
        metric_name (str|None): Only used if metric_id is not provided
        resource_id (str|None): Only used if metric_id is not provided
        fields (str|list|tuple): field name
        fail_ok (bool):
        auth_info:
        con_ssh:

    Returns (list):

    """
    if metric_id is None and metric_name is None:
        raise ValueError("metric_id or metric_name has to be provided.")

    if metric_id:
        arg = metric_id
    else:
        if resource_id:
            arg = '--resource-id {} "{}"'.format(resource_id, metric_name)
        else:
            if not fail_ok:
                raise ValueError("resource_id needs to be provided when "
                                 "using metric_name")
            arg = '"{}"'.format(metric_name)

    code, output = cli.openstack('metric show', arg, ssh_client=con_ssh,
                                 fail_ok=fail_ok, auth_info=auth_info)
    if code > 0:
        return output

    table_ = table_parser.table(output)
    return table_parser.get_multi_values_two_col_table(table_, fields)

def get_metrics(field='id', metric_name=None, resource_id=None, fail_ok=True,
                auth_info=Tenant.get('admin'), con_ssh=None):
    """
    Get metrics values via 'openstack metric list'
    Args:
        field (str|list|tuple): header of the metric list table
        metric_name (str|None):
        resource_id (str|None):
        fail_ok (bool):
        auth_info:
        con_ssh:

    Returns (list): list of strings

    """
    columns = ['id', 'archive_policy/name', 'name', 'unit', 'resource_id']
    arg = '-f value '
    arg += ' '.join(['-c {}'.format(column) for column in columns])

    grep_str = ''
    if resource_id:
        grep_str += ' | grep --color=never -E -i {}'.format(resource_id)
    if metric_name:
        grep_str += ' | grep --color=never -E -i {}'.format(metric_name)

    arg += grep_str

    code, output = cli.openstack('metric list', arg, ssh_client=con_ssh,
                                 fail_ok=fail_ok, auth_info=auth_info)
    if code > 0:
        return []

    values = []
    convert = False
    if isinstance(field, str):
        field = (field, )
        convert = True

    for header in field:
        lines = output.splitlines()
        index = columns.index(header.lower())
        vals = [line.split(sep=' ')[index] for line in lines]
        values.append(vals)

    if convert:
        values = values[0]
    return values
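get_metrics parses the `-f value` output by splitting each row on spaces and indexing by the requested column's position. A toy reproduction of that parsing step (the two output rows are invented):

```python
# Columns requested via '-c <col>' define the positional order of each
# space-separated row in 'openstack metric list -f value' output.
columns = ['id', 'archive_policy/name', 'name', 'unit', 'resource_id']
output = 'm-1 low cpu ns r-1\nm-2 low memory B r-2'  # made-up two-row output

index = columns.index('name')
vals = [line.split(sep=' ')[index] for line in output.splitlines()]
```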
@ -0,0 +1,398 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import time

from utils import table_parser, cli, exceptions
from utils.tis_log import LOG
from utils.clients.ssh import get_cli_client
from consts.stx import GuestImages, HeatStackStatus, HEAT_CUSTOM_TEMPLATES
from consts.filepaths import TestServerPath
from keywords import network_helper, common
from testfixtures.fixture_resources import ResourceCleanup

def _wait_for_heat_stack_deleted(stack_name=None, timeout=120,
                                 check_interval=3, con_ssh=None,
                                 auth_info=None):
    """
    Wait for the given heat stack to be deleted
    Args:
        stack_name (str): Heat stack name to check for state
        timeout (int)
        check_interval (int)
        con_ssh (SSHClient): If None, active controller ssh will be used.
        auth_info (dict): Tenant dict. If None, primary tenant will be used.

    Returns (bool): True if the stack is gone within timeout, else False

    """
    LOG.info("Waiting for {} to be deleted...".format(stack_name))
    end_time = time.time() + timeout
    while time.time() < end_time:
        stack_status = get_stack_status(stack=stack_name, auth_info=auth_info,
                                        con_ssh=con_ssh, fail_ok=True)
        if not stack_status:
            return True
        elif stack_status == HeatStackStatus.DELETE_FAILED:
            LOG.warning('Heat stack in DELETE_FAILED state')
            return False

        time.sleep(check_interval)

    msg = "Heat stack {} did not get deleted within timeout".format(stack_name)

    LOG.warning(msg)
    return False

def wait_for_heat_status(stack_name=None,
                         status=HeatStackStatus.CREATE_COMPLETE,
                         timeout=300, check_interval=5,
                         fail_ok=False, con_ssh=None, auth_info=None):
    """
    Wait for the heat stack to reach the desired status, or time out
    Args:
        stack_name (str): Heat stack name to check for state
        status (str): Status to check for
        timeout (int)
        check_interval (int)
        fail_ok (bool)
        con_ssh (SSHClient): If None, active controller ssh will be used.
        auth_info (dict): Tenant dict. If None, primary tenant will be used.

    Returns (tuple): <res_bool>, <msg>

    """
    LOG.info("Waiting for {} to be shown in {} ...".format(stack_name, status))
    end_time = time.time() + timeout

    fail_status = current_status = None
    if status == HeatStackStatus.CREATE_COMPLETE:
        fail_status = HeatStackStatus.CREATE_FAILED
    elif status == HeatStackStatus.UPDATE_COMPLETE:
        fail_status = HeatStackStatus.UPDATE_FAILED

    while time.time() < end_time:
        current_status = get_stack_status(stack=stack_name,
                                          auth_info=auth_info,
                                          con_ssh=con_ssh)
        if status == current_status:
            return True, 'Heat stack {} has reached {} status'.format(
                stack_name, status)
        elif fail_status == current_status:
            stack_id = get_stack_values(stack=stack_name, fields='id',
                                        auth_info=auth_info,
                                        con_ssh=con_ssh)[0]
            get_stack_resources(stack=stack_id, auth_info=auth_info,
                                con_ssh=con_ssh)

            err = "Heat stack {} failed to reach {}, actual status: {}".format(
                stack_name, status, fail_status)
            if fail_ok:
                LOG.warning(err)
                return False, err
            raise exceptions.HeatError(err)

        time.sleep(check_interval)

    stack_id = get_stack_values(stack=stack_name, fields='id',
                                auth_info=auth_info, con_ssh=con_ssh)[0]
    get_stack_resources(stack=stack_id, auth_info=auth_info, con_ssh=con_ssh)
    err_msg = "Heat stack {} did not reach {} within {}s. Actual " \
              "status: {}".format(stack_name, status, timeout, current_status)
    if fail_ok:
        LOG.warning(err_msg)
        return False, err_msg
    raise exceptions.HeatError(err_msg)

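Both wait helpers above follow the same poll-until-deadline shape: compute an end time, check a condition, sleep a fixed interval, and report failure on timeout. A generic, runnable reduction of that pattern (the stack-status predicate is simulated with a counter):

```python
import time


def wait_for(predicate, timeout=2, check_interval=0.01):
    # Generic form of the heat wait loops: poll until success or deadline.
    end_time = time.time() + timeout
    while time.time() < end_time:
        if predicate():
            return True
        time.sleep(check_interval)
    return False


state = {'calls': 0}


def stack_gone():
    # Stand-in for get_stack_status returning empty after a few polls.
    state['calls'] += 1
    return state['calls'] >= 3


result = wait_for(stack_gone)
```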
def get_stack_values(stack, fields='stack_status_reason', con_ssh=None,
                     auth_info=None, fail_ok=False):
    code, out = cli.openstack('stack show', stack, ssh_client=con_ssh,
                              auth_info=auth_info, fail_ok=fail_ok)
    if code > 0:
        return None

    table_ = table_parser.table(out)
    return table_parser.get_multi_values_two_col_table(table_=table_,
                                                       fields=fields)

def get_stacks(name=None, field='id', con_ssh=None, auth_info=None, all_=True):
    """
    Get the list of stacks, optionally filtered by name, for a given tenant.

    Args:
        con_ssh (SSHClient): If None, active controller ssh will be used.
        auth_info (dict): Tenant dict. If None, primary tenant will be used.
        all_ (bool): whether to display all stacks for admin user
        name (str): Given name for the heat stack
        field (str|list|tuple)

    Returns (list): list of heat stacks.

    """
    args = ''
    if auth_info is not None:
        if auth_info['user'] == 'admin' and all_:
            args = '--a'
    table_ = table_parser.table(
        cli.openstack('stack list', positional_args=args, ssh_client=con_ssh,
                      auth_info=auth_info)[1])

    kwargs = {'Stack Name': name} if name else {}
    return table_parser.get_multi_values(table_, field, **kwargs)

def get_stack_status(stack, con_ssh=None, auth_info=None, fail_ok=False):
    """
    Get the stack status for a given tenant.

    Args:
        con_ssh (SSHClient): If None, active controller ssh will be used.
        auth_info (dict): Tenant dict. If None, primary tenant will be used.
        stack (str): Given name for the heat stack
        fail_ok (bool):

    Returns (str|None): Heat stack status, or None if the stack is not found.

    """
    status = get_stack_values(stack, fields='stack_status', con_ssh=con_ssh,
                              auth_info=auth_info, fail_ok=fail_ok)
    status = status[0] if status else None
    return status

def get_stack_resources(stack, field='resource_name', auth_info=None,
                        con_ssh=None, **kwargs):
    """

    Args:
        stack (str): id (or name) for the heat stack. ID is required if admin
            user is used to display tenant resource.
        field: values to return
        auth_info:
        con_ssh:
        kwargs: key/value pairs to filter the values to return

    Returns (list):

    """
    table_ = table_parser.table(
        cli.openstack('stack resource list --long', stack, ssh_client=con_ssh,
                      auth_info=auth_info)[1])
    return table_parser.get_values(table_, target_header=field, **kwargs)

def delete_stack(stack, fail_ok=False, check_first=False, con_ssh=None,
                 auth_info=None):
    """
    Delete the given heat stack for a given tenant.

    Args:
        con_ssh (SSHClient): If None, active controller ssh will be used.
        fail_ok (bool):
        check_first (bool): whether or not to check the stack existence
            before attempting to delete
        auth_info (dict): Tenant dict. If None, primary tenant will be used.
        stack (str): Given name for the heat stack

    Returns (tuple): Status and msg of the heat stack deletion.

    """

    if not stack:
        raise ValueError("stack_name is not provided.")

    if check_first:
        if not get_stack_status(stack, con_ssh=con_ssh, auth_info=auth_info,
                                fail_ok=True):
            msg = "Heat stack {} doesn't exist on the system. Do " \
                  "nothing.".format(stack)
            LOG.info(msg)
            return -1, msg

    LOG.info("Deleting Heat Stack %s", stack)
    exitcode, output = cli.openstack('stack delete -y', stack,
                                     ssh_client=con_ssh, fail_ok=fail_ok,
                                     auth_info=auth_info)
    if exitcode > 0:
        LOG.warning("Delete heat stack request rejected.")
        return 1, output

    if not _wait_for_heat_stack_deleted(stack_name=stack, auth_info=auth_info):
        stack_id = get_stack_values(stack=stack, fields='id',
                                    auth_info=auth_info, con_ssh=con_ssh)[0]
        get_stack_resources(stack=stack_id, auth_info=auth_info,
                            con_ssh=con_ssh)

        msg = "heat stack {} is not removed after stack-delete.".format(stack)
        if fail_ok:
            LOG.warning(msg)
            return 2, msg
        raise exceptions.HeatError(msg)

    succ_msg = "Heat stack {} is successfully deleted.".format(stack)
    LOG.info(succ_msg)
    return 0, succ_msg

def get_heat_params(param_name=None):
    """
    Generate parameters for heat based on keywords

    Args:
        param_name (str): template to be used to create heat stack.

    Returns (str|None): None on failure, or the value for the given param

    """
    if param_name == 'NETWORK':
        net_id = network_helper.get_mgmt_net_id()
        return network_helper.get_net_name_from_id(net_id=net_id)
    elif param_name == 'FLAVOR':
        return 'small_ded'
    elif param_name == 'IMAGE':
        return GuestImages.DEFAULT['guest']
    else:
        return None

def create_stack(stack_name, template, pre_creates=None, environments=None,
                 stack_timeout=None, parameters=None, param_files=None,
                 enable_rollback=None, dry_run=None, wait=None, tags=None,
                 fail_ok=False, con_ssh=None, auth_info=None,
                 cleanup='function', timeout=300):
    """
    Create the given heat stack for a given tenant.

    Args:
        stack_name (str): Given name for the heat stack
        template (str): path of heat template
        pre_creates (str|list|None)
        environments (str|list|None)
        stack_timeout (int|str|None): stack creating timeout in minutes
        parameters (str|dict|None)
        param_files (str|dict|None)
        enable_rollback (bool|None)
        dry_run (bool|None)
        wait (bool|None)
        tags (str|list|None)
        auth_info (dict): Tenant dict. If None, primary tenant will be used.
        con_ssh (SSHClient): If None, active controller ssh will be used.
        timeout (int): automation timeout in seconds
        fail_ok (bool):
        cleanup (str|None)

    Returns (tuple): Status and msg of the heat stack creation.
    """

    args_dict = {
        '--template': template,
        '--environment': environments,
        '--timeout': stack_timeout,
        '--pre-create': pre_creates,
        '--enable-rollback': enable_rollback,
        '--parameter': parameters,
        '--parameter-file': param_files,
        '--wait': wait,
        '--tags': ','.join(tags) if isinstance(tags, (list, tuple)) else tags,
        '--dry-run': dry_run,
    }
    args = common.parse_args(args_dict, repeat_arg=True)
    LOG.info("Create Heat Stack {} with args: {}".format(stack_name, args))
    exitcode, output = cli.openstack('stack create',
                                     '{} {}'.format(args, stack_name),
                                     ssh_client=con_ssh, fail_ok=fail_ok,
                                     auth_info=auth_info, timeout=timeout)
    if exitcode > 0:
        return 1, output

    if cleanup:
        ResourceCleanup.add('heat_stack', resource_id=stack_name,
                            scope=cleanup)

    LOG.info("Wait for Heat Stack Status to reach CREATE_COMPLETE for "
             "stack %s", stack_name)
    res, msg = wait_for_heat_status(stack_name=stack_name,
                                    status=HeatStackStatus.CREATE_COMPLETE,
                                    auth_info=auth_info, fail_ok=fail_ok)
    if not res:
        return 2, msg

    LOG.info("Stack {} created successfully".format(stack_name))
    return 0, stack_name

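create_stack builds its CLI string by passing args_dict through common.parse_args. The helper's exact behavior is not shown in this diff, so the sketch below is a hypothetical mirror of what the call sites imply: None values are dropped, True becomes a bare flag, and with repeat_arg list values repeat the flag:

```python
def parse_args_sketch(args_dict, repeat_arg=False):
    # Hypothetical mirror of common.parse_args (assumed behavior, not the
    # framework's actual implementation): skip None, emit bare flag for
    # True, '--key value' otherwise; repeat the flag for list values.
    parts = []
    for key, val in args_dict.items():
        if val is None:
            continue
        if val is True:
            parts.append(key)
        elif repeat_arg and isinstance(val, (list, tuple)):
            parts.extend('{} {}'.format(key, v) for v in val)
        else:
            parts.append('{} {}'.format(key, val))
    return ' '.join(parts)


args = parse_args_sketch({'--template': 'stack.yaml', '--dry-run': True,
                          '--environment': ['a.yaml', 'b.yaml'],
                          '--timeout': None}, repeat_arg=True)
```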
def update_stack(stack_name, params_string, fail_ok=False, con_ssh=None,
                 auth_info=None, timeout=300):
    """
    Update the given heat stack for a given tenant.

    Args:
        con_ssh (SSHClient): If None, active controller ssh will be used.
        fail_ok (bool):
        params_string: Parameters to pass to the heat stack-update cmd.
            ex: -f <stack.yaml> -P IMAGE=tis <stack_name>
        auth_info (dict): Tenant dict. If None, primary tenant will be used.
        stack_name (str): Given name for the heat stack
        timeout (int)

    Returns (tuple): Status and msg of the heat stack update.
    """

    if not params_string:
        raise ValueError("Parameters not provided.")

    LOG.info("Update Heat Stack %s", params_string)
    exitcode, output = cli.heat('stack-update', params_string,
                                ssh_client=con_ssh, fail_ok=fail_ok,
                                auth_info=auth_info)
    if exitcode == 1:
        LOG.warning("Update heat stack request rejected.")
        return 1, output

    LOG.info("Wait for Heat Stack Status to reach UPDATE_COMPLETE for "
             "stack %s", stack_name)
    res, msg = wait_for_heat_status(stack_name=stack_name,
                                    status=HeatStackStatus.UPDATE_COMPLETE,
                                    auth_info=auth_info, fail_ok=fail_ok,
                                    timeout=timeout)
    if not res:
        return 2, msg

    LOG.info("Stack {} updated successfully".format(stack_name))
    return 0, stack_name

def get_custom_heat_files(file_name, file_dir=HEAT_CUSTOM_TEMPLATES,
                          cli_client=None):
    """

    Args:
        file_name:
        file_dir:
        cli_client:

    Returns:

    """
    file_path = '{}/{}'.format(file_dir, file_name)

    if cli_client is None:
        cli_client = get_cli_client()

    if not cli_client.file_exists(file_path=file_path):
        LOG.debug('Create userdata directory if it does not already exist')
        cmd = 'mkdir -p {}'.format(file_dir)
        cli_client.exec_cmd(cmd, fail_ok=False)
        source_file = TestServerPath.CUSTOM_HEAT_TEMPLATES + file_name
        dest_path = common.scp_from_test_server_to_user_file_dir(
            source_path=source_file, dest_dir=file_dir,
            dest_name=file_name, timeout=300, con_ssh=cli_client)
        if dest_path is None:
            raise exceptions.CommonError(
                "Heat template file {} does not exist after download".format(
                    file_path))

    return file_path
@ -0,0 +1,45 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import os

from utils.tis_log import LOG
from utils.horizon.helper import HorizonDriver
from consts.auth import Tenant
from consts.proj_vars import ProjVar


def download_openrc_files(quit_driver=True):
    """
    Download openrc files from Horizon to <LOG_DIR>/horizon/.

    """
    LOG.info("Download openrc files from horizon")
    local_dir = os.path.join(ProjVar.get_var('LOG_DIR'), 'horizon')

    from utils.horizon.pages import loginpage
    rc_files = []
    login_pg = loginpage.LoginPage()
    login_pg.go_to_target_page()
    try:
        for auth_info in (Tenant.get('admin'), Tenant.get('tenant1'),
                          Tenant.get('tenant2')):
            user = auth_info['user']
            password = auth_info['password']
            openrc_file = '{}-openrc.sh'.format(user)
            home_pg = login_pg.login(user, password=password)
            home_pg.download_rc_v3()
            home_pg.log_out()
            openrc_path = os.path.join(local_dir, openrc_file)
            assert os.path.exists(openrc_path), \
                "{} not found after download".format(openrc_file)
            rc_files.append(openrc_path)

    finally:
        if quit_driver:
            HorizonDriver.quit_driver()

    LOG.info("openrc files are successfully downloaded to: "
             "{}".format(local_dir))
    return rc_files
@ -0,0 +1,198 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import json
import requests

from consts.auth import Tenant
from utils import table_parser, cli
from utils.tis_log import LOG
from consts.proj_vars import ProjVar
from keywords import keystone_helper


def get_ip_addr():
    return ProjVar.get_var('lab')['floating ip']

def create_url(ip=None, port=None, version=None, extension=None):
    """
    Creates a url with the given parameters in the form:
    http(s)://<ip address>:<port>/<version>/<extension>
    Args:
        ip (str): the main ip address. If None, defaults to the lab's
            floating ip address.
        port (int): the port number to connect to.
        version (str): for REST API. version number, e.g. "v1", "v2.0"
        extension (str): extensions to add to the url

    Returns (str): a url created with the given parameters

    """
    if keystone_helper.is_https_enabled():
        url = 'https://'
    else:
        url = 'http://'
    if ip:
        url += ip
    else:
        url += get_ip_addr()

    if port:
        url += ':{}'.format(port)

    if version:
        url += '/{}'.format(version)

    if extension:
        url += '/{}'.format(extension)

    return url

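Aside from the https lookup and the floating-ip default, create_url is pure string assembly. A standalone sketch of that assembly with the scheme decision passed in (the IP and port below are example values, not a real lab):

```python
def build_url(ip, port=None, version=None, extension=None, https=False):
    # Same assembly order as create_url: scheme, host, :port, /version,
    # /extension, each appended only when provided.
    url = ('https://' if https else 'http://') + ip
    if port:
        url += ':{}'.format(port)
    if version:
        url += '/{}'.format(version)
    if extension:
        url += '/{}'.format(extension)
    return url


url = build_url('10.10.10.2', port=8041, version='v1', extension='metric')
```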
def get_user_token(field='id', con_ssh=None, auth_info=Tenant.get('admin')):
    """
    Return an authentication token for the admin.

    Args:
        field (str):
        con_ssh (SSHClient):
        auth_info:

    Returns (str): the authentication token

    """
    table_ = table_parser.table(cli.openstack('token issue',
                                              ssh_client=con_ssh,
                                              auth_info=auth_info)[1])
    token = table_parser.get_value_two_col_table(table_, field)
    return token

def get_request(url, headers, verify=True):
    """
    Sends a GET request to the url
    Args:
        url (str): url to send request to
        headers (dict): header to add to the request
        verify: Verify SSL certificate

    Returns (dict): The response for the request

    """
    LOG.info("Sending GET request to {}. Headers: {}".format(url, headers))
    resp = requests.get(url, headers=headers, verify=verify)

    if resp.status_code == requests.codes.ok:
        data = json.loads(resp.text)
        LOG.info("The returned data is: {}".format(data))
        return data

    LOG.info("Error {}".format(resp.status_code))
    return None

def post_request(url, data, headers, verify=True):
    """
    Sends a POST request to the url
    Args:
        url (str): url to send request to
        data (dict): data to be sent in the request body
        headers (dict): header to add to the request
        verify: Verify SSL certificate

    Returns (dict): The response for the request

    """
    if not isinstance(data, str):
        data = json.dumps(data)
    LOG.info("Sending POST request to {}. Headers: {}. Data: "
             "{}".format(url, headers, data))
    resp = requests.post(url, headers=headers, data=data, verify=verify)

    if resp.status_code == requests.codes.ok:
        data = json.loads(resp.text)
        LOG.info("The returned data is: {}".format(data))
        return data

    LOG.info("Error {}".format(resp.status_code))
    return None

def put_request(url, data, headers, verify=True):
    """
    Sends a PUT request to the url
    Args:
        url (str): url to send request to
        data (dict): data to be sent in the request body
        headers (dict): header to add to the request
        verify: Verify SSL certificate

    Returns (dict): The response for the request

    """
    if not isinstance(data, str):
        data = json.dumps(data)
    LOG.info("Sending PUT request to {}. Headers: {}. Data: "
             "{}".format(url, headers, data))
    resp = requests.put(url, headers=headers, data=data, verify=verify)

    if resp.status_code == requests.codes.ok:
        data = json.loads(resp.text)
        LOG.info("The returned data is: {}".format(data))
        return data

    LOG.info("Error {}".format(resp.status_code))
    return None

def delete_request(url, headers, verify=True):
|
||||
"""
|
||||
Sends a GET request to the url
|
||||
Args:
|
||||
url (str): url to send request to
|
||||
headers (dict): header to add to the request
|
||||
verify: Verify SSL certificate
|
||||
|
||||
Returns (dict): The response for the request
|
||||
|
||||
"""
|
||||
LOG.info("Sending DELETE request to {}. Headers: {}".format(url, headers))
|
||||
resp = requests.delete(url, headers=headers, verify=verify)
|
||||
|
||||
if resp.status_code == requests.codes.ok:
|
||||
data = json.loads(resp.text)
|
||||
LOG.info("The returned data is: {}".format(data))
|
||||
return data
|
||||
|
||||
LOG.info("Error {}".format(resp.status_code))
|
||||
return None
|
||||
|
||||
|
||||
def patch_request(url, data, headers, verify=True):
|
||||
"""
|
||||
Sends a PATCH request to the url
|
||||
Args:
|
||||
url (str): url to send request to
|
||||
data (dict|str|list): data to be sent in the request body
|
||||
headers (dict): header to add to the request
|
||||
verify: Verify SSL certificate
|
||||
|
||||
Returns (dict): The response for the request
|
||||
|
||||
"""
|
||||
if not isinstance(data, str):
|
||||
data = json.dumps(data)
|
||||
LOG.info("Sending PATCH request to {}. Headers: {}. Data: "
|
||||
"{}".format(url, headers, data))
|
||||
resp = requests.patch(url, headers=headers, data=data, verify=verify)
|
||||
|
||||
if resp.status_code == requests.codes.ok:
|
||||
data = json.loads(resp.text)
|
||||
LOG.info("The returned data is: {}".format(data))
|
||||
return data
|
||||
|
||||
LOG.info("Error {}".format(resp.status_code))
|
||||
return None

@@ -0,0 +1,540 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import re

from consts.auth import Tenant, HostLinuxUser
from consts.proj_vars import ProjVar
from utils import cli, exceptions, table_parser
from utils.clients.ssh import ControllerClient
from utils.tis_log import LOG
from keywords import common


def get_roles(field='ID', con_ssh=None, auth_info=Tenant.get('admin'),
              **kwargs):
    table_ = table_parser.table(cli.openstack('role list', ssh_client=con_ssh,
                                              auth_info=auth_info)[1])
    return table_parser.get_multi_values(table_, field, **kwargs)


def get_users(field='ID', con_ssh=None, auth_info=Tenant.get('admin'),
              **kwargs):
    """
    Return a list of user value(s) for the given field, filtered by kwargs.

    Args:
        field (str|list|tuple):
        con_ssh (SSHClient):
        auth_info

    Returns (list): list of user value(s)

    """
    table_ = table_parser.table(cli.openstack('user list', ssh_client=con_ssh,
                                              auth_info=auth_info)[1])
    return table_parser.get_multi_values(table_, field, **kwargs)


def add_or_remove_role(add_=True, role='admin', project=None, user=None,
                       domain=None, group=None, group_domain=None,
                       project_domain=None, user_domain=None, inherited=None,
                       check_first=True, fail_ok=False,
                       con_ssh=None, auth_info=Tenant.get('admin')):
    """
    Add or remove given role for specified user and tenant. e.g., add admin
    role to tenant2 user on tenant2 project

    Args:
        add_ (bool): whether to add or remove
        role (str): an existing role from openstack role list
        project (str): tenant name. When unset, the primary tenant name
            will be used
        user (str): an existing user that belongs to given tenant
        domain (str): Include <domain> (name or ID)
        group (str): Include <group> (name or ID)
        group_domain (str): Domain the group belongs to (name or ID).
            This can be used in case collisions between group names exist.
        project_domain (str): Domain the project belongs to (name or ID).
            This can be used in case collisions between project names exist.
        user_domain (str): Domain the user belongs to (name or ID).
            This can be used in case collisions between user names exist.
        inherited (bool): Specifies if the role grant is inheritable to the
            sub projects
        check_first (bool): whether to check if role already exists for given
            user and tenant
        fail_ok (bool): whether to throw exception on failure
        con_ssh (SSHClient): active controller ssh session
        auth_info (dict): auth info to use when executing the role cli

    Returns (tuple):

    """
    tenant_dict = {}

    if project is None:
        tenant_dict = Tenant.get_primary()
        project = tenant_dict['tenant']

    if user is None:
        user = tenant_dict.get('user', project)

    if check_first:
        existing_roles = get_role_assignments(role=role, project=project,
                                              user=user,
                                              user_domain=user_domain,
                                              group=group,
                                              group_domain=group_domain,
                                              domain=domain,
                                              project_domain=project_domain,
                                              inherited=inherited,
                                              effective_only=False,
                                              con_ssh=con_ssh,
                                              auth_info=auth_info)
        if existing_roles:
            if add_:
                msg = "Role already exists with given criteria: {}".format(
                    existing_roles)
                LOG.info(msg)
                return -1, msg
        else:
            if not add_:
                msg = "Role with given criteria does not exist. Do nothing."
                LOG.info(msg)
                return -1, msg

    msg_str = 'Add' if add_ else 'Remov'
    LOG.info(
        "{}ing {} role to {} user under {} project".format(msg_str, role, user,
                                                           project))

    sub_cmd = "--user {} --project {}".format(user, project)
    if inherited is True:
        sub_cmd += ' --inherited'

    optional_args = {
        'domain': domain,
        'group': group,
        'group-domain': group_domain,
        'project-domain': project_domain,
        'user-domain': user_domain,
    }

    for key, val in optional_args.items():
        if val is not None:
            sub_cmd += ' --{} {}'.format(key, val)

    sub_cmd += ' {}'.format(role)

    cmd = 'role add' if add_ else 'role remove'
    res, out = cli.openstack(cmd, sub_cmd, ssh_client=con_ssh, fail_ok=fail_ok,
                             auth_info=auth_info)

    if res == 1:
        return 1, out

    LOG.info("{} cli accepted. Check role is {}ed "
             "successfully".format(cmd, msg_str))
    post_roles = get_role_assignments(role=role, project=project, user=user,
                                      user_domain=user_domain, group=group,
                                      group_domain=group_domain, domain=domain,
                                      project_domain=project_domain,
                                      inherited=inherited, effective_only=True,
                                      con_ssh=con_ssh, auth_info=auth_info)

    err_msg = ''
    if add_ and not post_roles:
        err_msg = "No role is added with given criteria"
    elif post_roles and not add_:
        err_msg = "Role is not removed"
    if err_msg:
        if fail_ok:
            LOG.warning(err_msg)
            return 2, err_msg
        else:
            raise exceptions.KeystoneError(err_msg)

    succ_msg = "Role is successfully {}ed".format(msg_str)
    LOG.info(succ_msg)
    return 0, succ_msg
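The CLI argument assembly in add_or_remove_role can be exercised on its own. This standalone sketch (the `build_role_args` name is hypothetical) mirrors the pattern: required flags first, then optional `--key value` pairs appended only when set, then the positional role:

```python
# Hypothetical standalone sketch of how add_or_remove_role assembles its
# 'openstack role add/remove' arguments: required --user/--project flags,
# then optional flags only for values that are not None, role name last.
def build_role_args(role, user, project, inherited=False, **optional):
    sub_cmd = "--user {} --project {}".format(user, project)
    if inherited:
        sub_cmd += ' --inherited'
    for key, val in sorted(optional.items()):
        if val is not None:
            sub_cmd += ' --{} {}'.format(key.replace('_', '-'), val)
    return sub_cmd + ' {}'.format(role)

print(build_role_args('admin', 'tenant2', 'tenant2', user_domain='Default'))
```

Skipping `None` values lets one signature cover the many optional domain/group qualifiers without emitting empty flags.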


def get_role_assignments(field='Role', names=True, role=None, user=None,
                         project=None, user_domain=None, group=None,
                         group_domain=None, domain=None, project_domain=None,
                         inherited=None, effective_only=None,
                         con_ssh=None, auth_info=Tenant.get('admin')):
    """
    Get values from 'openstack role assignment list' table

    Args:
        field (str|list|tuple): role assignment table header to determine
            which values to return
        names (bool): whether to display role assignment with name
            (default is ID)
        role (str): an existing role from openstack role list
        project (str): tenant name. When unset, the primary tenant name
            will be used
        user (str): an existing user that belongs to given tenant
        domain (str): Include <domain> (name or ID)
        group (str): Include <group> (name or ID)
        group_domain (str): Domain the group belongs to (name or ID). This can
            be used in case collisions between group names exist.
        project_domain (str): Domain the project belongs to (name or ID). This
            can be used in case collisions between project names exist.
        user_domain (str): Domain the user belongs to (name or ID). This can
            be used in case collisions between user names exist.
        inherited (bool): Specifies if the role grant is inheritable to the
            sub projects
        effective_only (bool): Whether to show effective roles only
        con_ssh (SSHClient): active controller ssh session
        auth_info (dict): auth info to use when executing the role cli

    Returns (list): list of values

    """
    optional_args = {
        'role': role,
        'user': user,
        'project': project,
        'domain': domain,
        'group': group,
        'group-domain': group_domain,
        'project-domain': project_domain,
        'user-domain': user_domain,
        'names': True if names else None,
        'effective': True if effective_only else None,
        'inherited': True if inherited else None
    }
    args = common.parse_args(optional_args)

    role_assignment_tab = table_parser.table(
        cli.openstack('role assignment list', args, ssh_client=con_ssh,
                      auth_info=auth_info)[1])

    if not role_assignment_tab['headers']:
        LOG.info("No role assignment is found with criteria: {}".format(args))
        return []

    return table_parser.get_multi_values(role_assignment_tab, field)


def set_user(user, name=None, project=None, password=None, project_domain=None,
             email=None, description=None,
             enable=None, fail_ok=False, auth_info=Tenant.get('admin'),
             con_ssh=None):
    LOG.info("Updating {}...".format(user))
    arg = ''
    optional_args = {
        'name': name,
        'project': project,
        'password': password,
        'project-domain': project_domain,
        'email': email,
        'description': description,
    }
    for key, val in optional_args.items():
        if val is not None:
            arg += "--{} '{}' ".format(key, val)

    if enable is not None:
        arg += '--{} '.format('enable' if enable else 'disable')

    if not arg.strip():
        raise ValueError(
            "Please specify the param(s) and value(s) to change to")

    arg += user

    code, output = cli.openstack('user set', arg, ssh_client=con_ssh,
                                 fail_ok=fail_ok, auth_info=auth_info)

    if code > 0:
        return 1, output

    if name or project or password:
        tenant_dictname = user.upper()
        Tenant.update(tenant_dictname, username=name, password=password,
                      tenant=project)

    if password and user == 'admin':
        if ProjVar.get_var('REGION') != 'RegionOne':
            LOG.info(
                "Run openstack_update_admin_password on secondary region "
                "after admin password change")
            if not con_ssh:
                con_ssh = ControllerClient.get_active_controller()
            with con_ssh.login_as_root(timeout=30) as con_ssh:
                con_ssh.exec_cmd(
                    "echo 'y' | openstack_update_admin_password '{}'".format(
                        password))

    msg = 'User {} updated successfully'.format(user)
    LOG.info(msg)
    return 0, output


def get_endpoints(field='ID', endpoint_id=None, service_name=None,
                  service_type=None, enabled=None, interface="admin",
                  region=None, url=None, strict=False,
                  auth_info=Tenant.get('admin'), con_ssh=None, cli_filter=True):
    """
    Get a list of endpoints with given arguments
    Args:
        field (str|list|tuple): valid header of openstack endpoint list
            table, e.g., 'ID'
        endpoint_id (str): id of the endpoint
        service_name (str): service name of endpoint, such as nova, neutron,
            keystone, vim, heat, swift, etc
        service_type (str): service type
        enabled (str): True/False
        interface (str): interface of endpoints. valid entries: admin,
            internal, public
        region (str): RegionOne or RegionTwo
        url (str): url of endpoint
        strict (bool):
        auth_info (dict):
        con_ssh (SSHClient):
        cli_filter (bool): whether to filter using the cli. e.g., openstack
            endpoint list --service xxx

    Returns (list):

    """
    pre_args_str = ''
    if cli_filter:
        pre_args_dict = {
            '--service': service_name,
            '--interface': interface,
            '--region': region,
        }

        pre_args = []
        for key, val in pre_args_dict.items():
            if val:
                pre_args.append('{}={}'.format(key, val))
        pre_args_str = ' '.join(pre_args)

    output = cli.openstack('endpoint list', positional_args=pre_args_str,
                           ssh_client=con_ssh, auth_info=auth_info)[1]
    if not output.strip():
        LOG.warning("No endpoints returned with param: {}".format(pre_args_str))
        return []

    table_ = table_parser.table(output)

    kwargs = {
        'ID': endpoint_id,
        'Service Name': service_name,
        'Service Type': service_type,
        'Enabled': enabled,
        'Interface': interface,
        'URL': url,
        'Region': region,
    }
    kwargs = {k: v for k, v in kwargs.items() if v}
    return table_parser.get_multi_values(table_, field, strict=strict,
                                         regex=True, merge_lines=True, **kwargs)


def get_endpoints_values(endpoint_id, fields, con_ssh=None,
                         auth_info=Tenant.get('admin')):
    """
    Get the values of the target fields for the given endpoint id
    Args:
        endpoint_id: the endpoint id to get the values of
        fields: the target field name(s) to retrieve values of
        con_ssh:
        auth_info

    Returns (list): list of endpoint values

    """
    table_ = table_parser.table(
        cli.openstack('endpoint show', endpoint_id, ssh_client=con_ssh,
                      auth_info=auth_info)[1])
    return table_parser.get_multi_values_two_col_table(table_, fields)


def is_https_enabled(con_ssh=None, source_openrc=True,
                     auth_info=Tenant.get('admin_platform')):
    if not con_ssh:
        con_name = auth_info.get('region') if (
                auth_info and ProjVar.get_var('IS_DC')) else None
        con_ssh = ControllerClient.get_active_controller(name=con_name)

    table_ = table_parser.table(
        cli.openstack('endpoint list', ssh_client=con_ssh, auth_info=auth_info,
                      source_openrc=source_openrc)[1])
    con_ssh.exec_cmd('unset OS_REGION_NAME')    # Workaround
    filters = {'Service Name': 'keystone', 'Service Type': 'identity',
               'Interface': 'public'}
    keystone_pub = table_parser.get_values(table_=table_, target_header='URL',
                                           **filters)[0]
    return 'https' in keystone_pub
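is_https_enabled decides by substring matching on the keystone public URL. A stricter variant would parse the URL and compare the scheme exactly; a standalone sketch (the `https_enabled` name and sample addresses are illustrative only):

```python
from urllib.parse import urlsplit

# Hypothetical stricter variant of the `'https' in url` check used by
# is_https_enabled: parse the URL and compare the scheme exactly, so a
# URL that merely contains 'https' somewhere in its path cannot match.
def https_enabled(keystone_public_url):
    return urlsplit(keystone_public_url).scheme == 'https'

print(https_enabled('https://10.10.10.2:5000/v3'))   # True
print(https_enabled('http://10.10.10.2:5000/v3'))    # False
```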


def delete_users(user, fail_ok=False, auth_info=Tenant.get('admin'),
                 con_ssh=None):
    """
    Delete the given openstack user
    Args:
        user: user name to delete
        fail_ok: whether deletion failure is acceptable
        auth_info
        con_ssh

    Returns (tuple): (code, msg)
    """
    return cli.openstack('user delete', user, ssh_client=con_ssh,
                         fail_ok=fail_ok, auth_info=auth_info)


def get_projects(field='ID', auth_info=Tenant.get('admin'), con_ssh=None,
                 strict=False, **filters):
    """
    Get list of project names or IDs
    Args:
        field (str|list|tuple):
        auth_info:
        con_ssh:
        strict (bool): used for filters
        filters

    Returns (list):

    """
    table_ = table_parser.table(
        cli.openstack('project list', ssh_client=con_ssh,
                      auth_info=auth_info)[1])
    return table_parser.get_multi_values(table_, field, strict=strict,
                                         **filters)


def create_project(name=None, field='ID', domain=None, parent=None,
                   description=None, enable=None, con_ssh=None,
                   rtn_exist=None, fail_ok=False, auth_info=Tenant.get('admin'),
                   **properties):
    """
    Create an openstack project
    Args:
        name (str|None):
        field (str): ID or Name. Whether to return project id or name if
            created successfully
        domain (str|None):
        parent (str|None):
        description (str|None):
        enable (bool|None):
        con_ssh:
        rtn_exist
        fail_ok:
        auth_info:
        **properties:

    Returns (tuple):
        (0, <project>)
        (1, <std_err>)

    """
    if not name:
        existing_names = get_projects(field='Name',
                                      auth_info=Tenant.get('admin'),
                                      con_ssh=con_ssh)
        max_count = 0
        end_str = ''
        for name in existing_names:
            match = re.match(r'tenant(\d+)(.*)', name)
            if match:
                count, end_str = match.groups()
                max_count = max(int(count), max_count)
        name = 'tenant{}{}'.format(max_count + 1, end_str)

    LOG.info("Create/Show openstack project {}".format(name))

    arg_dict = {
        'domain': domain,
        'parent': parent,
        'description': description,
        'enable': True if enable is True else None,
        'disable': True if enable is False else None,
        'or-show': rtn_exist,
        'property': properties,
    }

    arg_str = common.parse_args(args_dict=arg_dict, repeat_arg=True)
    arg_str += ' {}'.format(name)

    code, output = cli.openstack('project create', arg_str, ssh_client=con_ssh,
                                 fail_ok=fail_ok, auth_info=auth_info)
    if code > 0:
        return 1, output

    project_ = table_parser.get_value_two_col_table(table_parser.table(output),
                                                    field=field)
    LOG.info("Project {} successfully created/showed.".format(project_))

    return 0, project_
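The auto-naming step in create_project scans existing names with the `tenant(\d+)(.*)` regex and picks the next index. Isolated as a standalone sketch (the `next_tenant_name` name is hypothetical):

```python
import re

# Hypothetical standalone sketch of create_project's auto-naming: find the
# highest existing tenantN index (keeping any suffix of the last match) and
# return the name with that index plus one.
def next_tenant_name(existing_names):
    max_count = 0
    end_str = ''
    for name in existing_names:
        match = re.match(r'tenant(\d+)(.*)', name)
        if match:
            count, end_str = match.groups()
            max_count = max(int(count), max_count)
    return 'tenant{}{}'.format(max_count + 1, end_str)

print(next_tenant_name(['tenant1', 'tenant2-lab', 'admin']))
```

Note the suffix kept is whichever matched last, not necessarily the one from the highest index; that quirk is inherited from the original loop.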


def create_user(name=None, field='name', domain=None, project=None,
                project_domain=None, rtn_exist=None,
                password=HostLinuxUser.get_password(), email=None,
                description=None, enable=None,
                auth_info=Tenant.get('admin'), fail_ok=False, con_ssh=None):
    """
    Create an openstack user
    Args:
        name (str|None):
        field: name or id
        domain:
        project (str|None): default project
        project_domain:
        rtn_exist (bool)
        password:
        email:
        description:
        enable:
        auth_info:
        fail_ok:
        con_ssh:

    Returns (tuple):
        (0, <user>)
        (1, <std_err>)

    """

    if not name:
        name = 'user'
        name = common.get_unique_name(name_str=name)

    LOG.info("Create/Show openstack user {}".format(name))
    arg_dict = {
        'domain': domain,
        'project': project,
        'project-domain': project_domain,
        'password': password,
        'email': email,
        'description': description,
        'enable': True if enable is True else None,
        'disable': True if enable is False else None,
        'or-show': rtn_exist,
    }

    arg_str = '{} {}'.format(common.parse_args(args_dict=arg_dict), name)

    code, output = cli.openstack('user create', arg_str, ssh_client=con_ssh,
                                 fail_ok=fail_ok, auth_info=auth_info)
    if code > 0:
        return 1, output

    user = table_parser.get_value_two_col_table(table_parser.table(output),
                                                field=field)
    LOG.info("Openstack user {} successfully created/showed".format(user))

    return 0, user
File diff suppressed because it is too large

@@ -0,0 +1,21 @@
[pytest]
addopts = -s -rxs -v
testpaths = testcases/functional
log_print = False
markers =
    sanity: mark test for sanity run
    cpe_sanity: mark tests for cpe sanity
    storage_sanity: mark tests for storage sanity
    sx_sanity: mark tests for simplex sanity
    nightly: nightly regression
    sx_nightly: mark tests for simplex nightly regression
    platform: mark tests for container platform tests that don't require openstack services
    p1: mark test priority as p1
    p2: mark test priority as p2
    p3: mark test priority as p3
    domain_sanity: mark test as domain sanity
    nics: networking testcases for nic testing
    dc: distributed cloud test cases
    # features(feature1, feature2, ...): mark impacted feature(s) for a test case.
    slow: slow test that possibly involves reboot or lock/unlock host(s)
    abslast: test case that absolutely should be run last
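The markers registered in the pytest.ini above are applied to tests with `pytest.mark` decorators and selected at run time with `-m`. An illustrative sketch of a tagged test (the test name and body are hypothetical, not part of the submission):

```python
import pytest

# Illustrative only: a test tagged with two markers registered in
# pytest.ini. It would be selected by `pytest -m sanity` or
# `pytest -m "sanity and p1"`, and excluded by `pytest -m "not slow"`.
@pytest.mark.sanity
@pytest.mark.p1
def test_marker_example():
    assert True
```

Registering the markers in `markers =` keeps `pytest --strict-markers` runs from rejecting them as unknown.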
@@ -0,0 +1,6 @@
pytest>=3.1.0,<4.0
pexpect
requests
selenium
pyvirtualdisplay
PyYAML
@@ -0,0 +1,750 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import os
import re
import time
import ipaddress
import configparser

from consts.auth import Tenant, HostLinuxUser, CliAuth, Guest
from consts.stx import Prompt, SUBCLOUD_PATTERN, SysType, GuestImages, Networks
from consts.lab import Labs, add_lab_entry, NatBoxes
from consts.proj_vars import ProjVar
from keywords import host_helper, nova_helper, system_helper, keystone_helper, \
    common, container_helper
from utils import exceptions
from utils.clients.ssh import SSHClient, CONTROLLER_PROMPT, ControllerClient, \
    NATBoxClient, PASSWORD_PROMPT
from utils.tis_log import LOG


def less_than_two_controllers(con_ssh=None,
                              auth_info=Tenant.get('admin_platform')):
    return len(
        system_helper.get_controllers(con_ssh=con_ssh, auth_info=auth_info)) < 2


def setup_tis_ssh(lab):
    con_ssh = ControllerClient.get_active_controller(fail_ok=True)

    if con_ssh is None:
        con_ssh = SSHClient(lab['floating ip'], HostLinuxUser.get_user(),
                            HostLinuxUser.get_password(),
                            CONTROLLER_PROMPT)
        con_ssh.connect(retry=True, retry_timeout=30)
        ControllerClient.set_active_controller(con_ssh)

    return con_ssh


def setup_vbox_tis_ssh(lab):
    if 'external_ip' in lab.keys():

        con_ssh = ControllerClient.get_active_controller(fail_ok=True)
        if con_ssh:
            con_ssh.disconnect()

        con_ssh = SSHClient(lab['external_ip'], HostLinuxUser.get_user(),
                            HostLinuxUser.get_password(),
                            CONTROLLER_PROMPT, port=lab['external_port'])
        con_ssh.connect(retry=True, retry_timeout=30)
        ControllerClient.set_active_controller(con_ssh)

    else:
        con_ssh = setup_tis_ssh(lab)

    return con_ssh


def setup_primary_tenant(tenant):
    Tenant.set_primary(tenant)
    LOG.info("Primary Tenant for test session is set to {}".format(
        Tenant.get(tenant)['tenant']))


def setup_natbox_ssh(natbox, con_ssh):
    natbox_ip = natbox['ip'] if natbox else None
    if not natbox_ip and not container_helper.is_stx_openstack_deployed(
            con_ssh=con_ssh):
        LOG.info(
            "stx-openstack is not applied and natbox is unspecified. Skip "
            "natbox config.")
        return None

    NATBoxClient.set_natbox_client(natbox_ip)
    nat_ssh = NATBoxClient.get_natbox_client()
    ProjVar.set_var(natbox_ssh=nat_ssh)

    setup_keypair(con_ssh=con_ssh, natbox_client=nat_ssh)

    return nat_ssh


def setup_keypair(con_ssh, natbox_client=None):
    """
    copy private keyfile from controller-0:/opt/platform to natbox: priv_keys/
    Args:
        natbox_client (SSHClient): NATBox client
        con_ssh (SSHClient)
    """
    if not container_helper.is_stx_openstack_deployed(con_ssh=con_ssh):
        LOG.info("stx-openstack is not applied. Skip nova keypair config.")
        return

    # ssh private key should now exist under keyfile_path
    if not natbox_client:
        natbox_client = NATBoxClient.get_natbox_client()

    LOG.info("scp key file from controller to NATBox")
    # keyfile path that can be specified in testcase config
    keyfile_stx_origin = os.path.normpath(ProjVar.get_var('STX_KEYFILE_PATH'))

    # keyfile will always be copied to sysadmin home dir first and update file
    # permission
    keyfile_stx_final = os.path.normpath(
        ProjVar.get_var('STX_KEYFILE_SYS_HOME'))
    public_key_stx = '{}.pub'.format(keyfile_stx_final)

    # keyfile will also be saved to /opt/platform as well, so it won't be
    # lost during system upgrade.
    keyfile_opt_pform = '/opt/platform/{}'.format(
        os.path.basename(keyfile_stx_final))

    # copy keyfile to following NatBox location. This can be specified in
    # testcase config
    keyfile_path_natbox = os.path.normpath(
        ProjVar.get_var('NATBOX_KEYFILE_PATH'))

    auth_info = Tenant.get_primary()
    keypair_name = auth_info.get('nova_keypair',
                                 'keypair-{}'.format(auth_info['user']))
    nova_keypair = nova_helper.get_keypairs(name=keypair_name,
                                            auth_info=auth_info)

    linux_user = HostLinuxUser.get_user()
    nonroot_group = _get_nonroot_group(con_ssh=con_ssh, user=linux_user)
    if not con_ssh.file_exists(keyfile_stx_final):
        with host_helper.ssh_to_host('controller-0',
                                     con_ssh=con_ssh) as con_0_ssh:
            if not con_0_ssh.file_exists(keyfile_opt_pform):
                if con_0_ssh.file_exists(keyfile_stx_origin):
                    # Given private key file exists. Need to ensure public
                    # key exists in same dir.
                    if not con_0_ssh.file_exists('{}.pub'.format(
                            keyfile_stx_origin)) and not nova_keypair:
                        raise FileNotFoundError(
                            '{}.pub is not found'.format(keyfile_stx_origin))
                else:
                    # Need to generate ssh key
                    if nova_keypair:
                        raise FileNotFoundError(
                            "Cannot find private key for existing nova "
                            "keypair {}".format(nova_keypair))

                    con_0_ssh.exec_cmd("ssh-keygen -f '{}' -t rsa -N ''".format(
                        keyfile_stx_origin), fail_ok=False)
                    if not con_0_ssh.file_exists(keyfile_stx_origin):
                        raise FileNotFoundError(
                            "{} not found after ssh-keygen".format(
                                keyfile_stx_origin))

                # keyfile_stx_origin and matching public key should now exist
                # on controller-0
                # copy keyfiles to home dir and opt platform dir
                con_0_ssh.exec_cmd(
                    'cp {} {}'.format(keyfile_stx_origin, keyfile_stx_final),
                    fail_ok=False)
                con_0_ssh.exec_cmd(
                    'cp {}.pub {}'.format(keyfile_stx_origin, public_key_stx),
                    fail_ok=False)
                con_0_ssh.exec_sudo_cmd(
                    'cp {} {}'.format(keyfile_stx_final, keyfile_opt_pform),
                    fail_ok=False)

            # Make sure owner is sysadmin
            # If private key exists in opt platform, then it must also exist
            # in home dir
            con_0_ssh.exec_sudo_cmd(
                'chown {}:{} {}'.format(linux_user, nonroot_group,
                                        keyfile_stx_final),
                fail_ok=False)

        # ssh private key should now exist under home dir and opt platform
        # on controller-0
        if con_ssh.get_hostname() != 'controller-0':
            # copy file from controller-0 home dir to controller-1
            con_ssh.scp_on_dest(source_user=HostLinuxUser.get_user(),
                                source_ip='controller-0',
                                source_path=keyfile_stx_final,
                                source_pswd=HostLinuxUser.get_password(),
                                dest_path=keyfile_stx_final, timeout=60)

    if not nova_keypair:
        LOG.info("Create nova keypair {} using public key {}".
                 format(keypair_name, public_key_stx))
        if not con_ssh.file_exists(public_key_stx):
            con_ssh.scp_on_dest(source_user=HostLinuxUser.get_user(),
                                source_ip='controller-0',
                                source_path=public_key_stx,
                                source_pswd=HostLinuxUser.get_password(),
                                dest_path=public_key_stx, timeout=60)
            con_ssh.exec_sudo_cmd('chown {}:{} {}'.format(
                linux_user, nonroot_group, public_key_stx),
                fail_ok=False)

        if ProjVar.get_var('REMOTE_CLI'):
            dest_path = os.path.join(ProjVar.get_var('TEMP_DIR'),
                                     os.path.basename(public_key_stx))
            common.scp_from_active_controller_to_localhost(
                source_path=public_key_stx, dest_path=dest_path, timeout=60)
            public_key_stx = dest_path
            LOG.info("Public key file copied to localhost: {}".format(
                public_key_stx))

        nova_helper.create_keypair(keypair_name, public_key=public_key_stx,
                                   auth_info=auth_info)

    natbox_client.exec_cmd(
        'mkdir -p {}'.format(os.path.dirname(keyfile_path_natbox)))
    tis_ip = ProjVar.get_var('LAB').get('floating ip')
    for i in range(10):
        try:
            natbox_client.scp_on_dest(source_ip=tis_ip,
                                      source_user=HostLinuxUser.get_user(),
                                      source_pswd=HostLinuxUser.get_password(),
                                      source_path=keyfile_stx_final,
                                      dest_path=keyfile_path_natbox,
                                      timeout=120)
            LOG.info("private key is copied to NatBox: {}".format(
                keyfile_path_natbox))
            break
        except exceptions.SSHException as e:
            if i == 9:
                raise

            LOG.info(e.__str__())
            time.sleep(10)
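The scp loop at the end of setup_keypair is a bounded-retry pattern: try up to ten times, log and sleep between failures, re-raise the last error if every attempt fails. Isolated as a standalone sketch (the `retry`/`flaky` names are hypothetical, and OSError stands in for the framework's SSHException):

```python
import time

# Hypothetical standalone sketch of the bounded retry used for the final
# scp to the NatBox: attempt the action, sleep between failures, and
# re-raise only when the last attempt also fails.
def retry(action, attempts=10, delay=0):
    for i in range(attempts):
        try:
            return action()
        except OSError:
            if i == attempts - 1:
                raise
            time.sleep(delay)

calls = []
def flaky():
    # Fails twice, then succeeds, to exercise the retry path.
    calls.append(1)
    if len(calls) < 3:
        raise OSError("transient failure")
    return "copied"

print(retry(flaky))      # succeeds on the third attempt
print(len(calls))
```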


def _get_nonroot_group(con_ssh, user=None):
    if not user:
        user = HostLinuxUser.get_user()
    groups = con_ssh.exec_cmd('groups {}'.format(user), fail_ok=False)[1]
    err = 'Please ensure linux_user {} belongs to both root and non-root ' \
          'groups'.format(user)
    if 'root' not in groups:
        raise ValueError(err)

    groups = groups.split(': ')[-1].split()
    for group in groups:
        if group.strip() != 'root':
            return group

    raise ValueError('Please ensure linux_user {} belongs to both root '
                     'and at least one non-root group'.format(user))
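The parsing done by _get_nonroot_group can be shown on its own: split the `groups <user>` output on `': '`, verify root membership, and return the first non-root group. A standalone sketch (the `first_nonroot_group` name and the sample `groups` output format are assumptions for illustration):

```python
# Hypothetical standalone sketch of _get_nonroot_group's parsing, fed a
# sample `groups <user>` output of the form 'user : group1 group2'.
def first_nonroot_group(groups_output):
    if 'root' not in groups_output:
        raise ValueError('user is not in the root group')
    groups = groups_output.split(': ')[-1].split()
    for group in groups:
        if group != 'root':
            return group
    raise ValueError('user has no non-root group')

print(first_nonroot_group('sysadmin : root sys_protected'))
```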


def get_lab_dict(labname):
    labname = labname.strip().lower().replace('-', '_')
    labs = get_labs_list()

    for lab in labs:
        if labname in lab.get('name').replace('-', '_').lower().strip() \
                or labname == lab.get('short_name').replace('-', '_').\
                lower().strip() or labname == lab.get('floating ip'):
            return lab
    else:
        return add_lab_entry(labname)


def get_labs_list():
    labs = [getattr(Labs, item) for item in dir(Labs) if
            not item.startswith('__')]
    labs = [lab_ for lab_ in labs if isinstance(lab_, dict)]
    return labs


def get_natbox_dict(natboxname, user=None, password=None, prompt=None):
    natboxname = natboxname.lower().strip()
    natboxes = [getattr(NatBoxes, item) for item in dir(NatBoxes) if
                item.startswith('NAT_')]

    for natbox in natboxes:
        if natboxname.replace('-', '_') in natbox.get('name').\
                replace('-', '_') or natboxname == natbox.get('ip'):
            return natbox
    else:
        if __get_ip_version(natboxname) == 6:
            raise ValueError('Only IPv4 address is supported for now')

        return NatBoxes.add_natbox(ip=natboxname, user=user,
                                   password=password, prompt=prompt)
|
||||
|
||||
|
||||
def get_tenant_dict(tenantname):
|
||||
# tenantname = tenantname.lower().strip().replace('_', '').replace('-', '')
|
||||
tenants = [getattr(Tenant, item) for item in dir(Tenant) if
|
||||
not item.startswith('_') and item.isupper()]
|
||||
|
||||
for tenant in tenants:
|
||||
if tenantname == tenant.get('tenant').replace('_', '').replace('-', ''):
|
||||
return tenant
|
||||
else:
|
||||
raise ValueError("{} is not a valid input".format(tenantname))
|
||||
|
||||
|
||||
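The lab/natbox lookup helpers above all match a user-supplied name against known entries by normalizing case and treating dashes and underscores as equivalent, falling back to creating a new entry (here simplified to `None`). A minimal standalone sketch of that lookup, with hypothetical lab entries for illustration only:

```python
def normalize(name):
    # Same normalization as get_lab_dict: strip, lowercase,
    # and treat dashes and underscores as equivalent.
    return name.strip().lower().replace('-', '_')


# Hypothetical lab entries; real ones come from the Labs constants.
labs = [
    {'name': 'yow-cgcs-wildcat-1_2', 'short_name': 'wcp_1_2',
     'floating ip': '10.10.10.2'},
]


def find_lab(labname):
    labname = normalize(labname)
    for lab in labs:
        # Match on substring of full name, exact short name, or floating ip.
        if labname in normalize(lab['name']) \
                or labname == normalize(lab['short_name']) \
                or labname == lab['floating ip']:
            return lab
    # The real code calls add_lab_entry() here instead.
    return None


print(find_lab('WCP-1-2')['short_name'])  # matched via normalized short_name
```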
def collect_tis_logs(con_ssh):
    common.collect_software_logs(con_ssh=con_ssh)


def get_tis_timestamp(con_ssh):
    return con_ssh.exec_cmd('date +"%T"')[1]


def set_build_info(con_ssh):
    system_helper.get_build_info(con_ssh=con_ssh)


def _rsync_files_to_con1(con_ssh=None, central_region=False,
                         file_to_check=None):
    region = 'RegionOne' if central_region else None
    auth_info = Tenant.get('admin_platform', dc_region=region)
    if less_than_two_controllers(auth_info=auth_info, con_ssh=con_ssh):
        LOG.info("Less than two controllers on system. Skip copying file to "
                 "controller-1.")
        return

    LOG.info("rsync test files from controller-0 to controller-1 if not "
             "already done")
    stx_home = HostLinuxUser.get_home()
    if not file_to_check:
        file_to_check = '{}/images/tis-centos-guest.img'.format(stx_home)
    try:
        with host_helper.ssh_to_host("controller-1",
                                     con_ssh=con_ssh) as con_1_ssh:
            if con_1_ssh.file_exists(file_to_check):
                LOG.info(
                    "Test files already exist on controller-1. Skip rsync.")
                return

    except Exception as e:
        LOG.error(
            "Cannot ssh to controller-1. Skip rsync. "
            "\nException caught: {}".format(e.__str__()))
        return

    cmd = "rsync -avr -e 'ssh -o UserKnownHostsFile=/dev/null -o " \
          "StrictHostKeyChecking=no ' " \
          "{}/* controller-1:{}".format(stx_home, stx_home)

    timeout = 1800
    with host_helper.ssh_to_host("controller-0", con_ssh=con_ssh) as con_0_ssh:
        LOG.info("rsync files from controller-0 to controller-1...")
        con_0_ssh.send(cmd)

        end_time = time.time() + timeout
        while time.time() < end_time:
            index = con_0_ssh.expect(
                [con_0_ssh.prompt, PASSWORD_PROMPT, Prompt.ADD_HOST],
                timeout=timeout,
                searchwindowsize=100)
            if index == 2:
                con_0_ssh.send('yes')

            if index == 1:
                con_0_ssh.send(HostLinuxUser.get_password())

            if index == 0:
                output = int(con_0_ssh.exec_cmd('echo $?')[1])
                if output in [0, 23]:
                    LOG.info(
                        "Test files are successfully copied to controller-1 "
                        "from controller-0")
                    break
                else:
                    raise exceptions.SSHExecCommandFailed(
                        "Failed to rsync files from controller-0 to "
                        "controller-1")

        else:
            raise exceptions.TimeoutException(
                "Timed out rsync files to controller-1")


def copy_test_files():
    con_ssh = None
    central_region = False
    if ProjVar.get_var('IS_DC'):
        _rsync_files_to_con1(
            con_ssh=ControllerClient.get_active_controller(
                name=ProjVar.get_var('PRIMARY_SUBCLOUD')),
            file_to_check='~/heat/README',
            central_region=central_region)
        con_ssh = ControllerClient.get_active_controller(name='RegionOne')
        central_region = True

    _rsync_files_to_con1(con_ssh=con_ssh, central_region=central_region)


def get_auth_via_openrc(con_ssh, use_telnet=False, con_telnet=None):
    valid_keys = ['OS_AUTH_URL',
                  'OS_ENDPOINT_TYPE',
                  'CINDER_ENDPOINT_TYPE',
                  'OS_USER_DOMAIN_NAME',
                  'OS_PROJECT_DOMAIN_NAME',
                  'OS_IDENTITY_API_VERSION',
                  'OS_REGION_NAME',
                  'OS_INTERFACE',
                  'OS_KEYSTONE_REGION_NAME']

    client = con_telnet if use_telnet and con_telnet else con_ssh
    code, output = client.exec_cmd('cat /etc/platform/openrc')
    if code != 0:
        return None

    lines = output.splitlines()
    auth_dict = {}
    for line in lines:
        if 'export' in line:
            if line.split('export ')[1].split(sep='=')[0] in valid_keys:
                key, value = line.split(sep='export ')[1].split(sep='=')
                auth_dict[key.strip().upper()] = value.strip()

    return auth_dict


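The function above reduces `/etc/platform/openrc` to a dict of whitelisted `export KEY=value` lines. The parsing step can be exercised in isolation; this sketch uses a made-up openrc snippet and a trimmed key whitelist, and uses `str.partition` so values containing `=` are kept intact:

```python
VALID_KEYS = ['OS_AUTH_URL', 'OS_REGION_NAME', 'OS_IDENTITY_API_VERSION']


def parse_openrc(output):
    auth_dict = {}
    for line in output.splitlines():
        if 'export' in line:
            # Keep only whitelisted keys; everything after the first '='
            # is treated as the value.
            key, _, value = line.split('export ', 1)[1].partition('=')
            if key in VALID_KEYS:
                auth_dict[key.strip().upper()] = value.strip()
    return auth_dict


# Sample content for illustration; real values come from the controller.
sample = """#!/bin/bash
export OS_AUTH_URL=http://192.168.204.2:5000/v3
export OS_REGION_NAME=RegionOne
export OS_PASSWORD=secret"""

print(parse_openrc(sample))  # OS_PASSWORD is filtered out by the whitelist
```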
def is_https(con_ssh):
    return keystone_helper.is_https_enabled(con_ssh=con_ssh, source_openrc=True,
                                            auth_info=Tenant.get(
                                                'admin_platform'))


def get_version_and_patch_info():
    version = ProjVar.get_var('SW_VERSION')[0]
    info = 'Software Version: {}\n'.format(version)

    patches = ProjVar.get_var('PATCH')
    if patches:
        info += 'Patches:\n{}\n'.format('\n'.join(patches))

    return info


def get_system_mode_from_lab_info(lab, multi_region_lab=False,
                                  dist_cloud_lab=False):
    """
    Determine the system type of a lab based on its lab info dict.

    Args:
        lab (dict): lab info dictionary
        multi_region_lab (bool): whether the lab is a multi-region lab
        dist_cloud_lab (bool): whether the lab is a distributed cloud lab

    Returns:
        str|None: a SysType value, or None if the type cannot be determined

    """

    if multi_region_lab:
        return SysType.MULTI_REGION
    elif dist_cloud_lab:
        return SysType.DISTRIBUTED_CLOUD

    elif 'system_mode' not in lab:
        if 'storage_nodes' in lab:
            return SysType.STORAGE
        elif 'compute_nodes' in lab:
            return SysType.REGULAR

        elif len(lab['controller_nodes']) > 1:
            return SysType.AIO_DX
        else:
            return SysType.AIO_SX

    elif 'system_mode' in lab:
        if "simplex" in lab['system_mode']:
            return SysType.AIO_SX
        else:
            return SysType.AIO_DX
    else:
        LOG.warning(
            "Cannot determine the system type of the lab to install based "
            "on the provided information. Lab info: {}".format(lab))
        return None


def add_ping_failure(test_name):
    file_path = '{}{}'.format(ProjVar.get_var('PING_FAILURE_DIR'),
                              'ping_failures.txt')
    with open(file_path, mode='a') as f:
        f.write(test_name + '\n')


def set_region(region=None):
    """
    Set the global region variable.

    This needs to be called after CliAuth.set_vars, since a custom region
    value needs to override what is specified in the openrc file.

    The local region and auth url are saved in CliAuth, while the remote
    region and auth url are saved in Tenant.

    Args:
        region: region to set

    """
    local_region = CliAuth.get_var('OS_REGION_NAME')
    if not region:
        if ProjVar.get_var('IS_DC'):
            region = 'SystemController'
        else:
            region = local_region
    Tenant.set_region(region=region)
    ProjVar.set_var(REGION=region)
    if re.search(SUBCLOUD_PATTERN, region):
        # Distributed cloud, lab specified is a subcloud.
        urls = keystone_helper.get_endpoints(region=region, field='URL',
                                             interface='internal',
                                             service_name='keystone')
        if not urls:
            raise ValueError(
                "No internal endpoint found for region {}. Invalid value "
                "for --region with the specified lab. Note that sub-cloud "
                "tests can be run on the central controller, but not the "
                "other way round.".format(region))
        Tenant.set_platform_url(urls[0])


def set_sys_type(con_ssh):
    sys_type = system_helper.get_sys_type(con_ssh=con_ssh)
    ProjVar.set_var(SYS_TYPE=sys_type)


def arp_for_fip(lab, con_ssh):
    fip = lab['floating ip']
    code, output = con_ssh.exec_cmd(
        'ip addr | grep -B 4 {} | grep --color=never BROADCAST'.format(fip))
    if output:
        target_str = output.splitlines()[-1]
        dev = target_str.split(sep=': ')[1].split('@')[0]
        con_ssh.exec_cmd('arping -c 3 -A -q -I {} {}'.format(dev, fip))


def __get_ip_version(ip_addr):
    try:
        ip_version = ipaddress.ip_address(ip_addr).version
    except ValueError:
        ip_version = None

    return ip_version


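The IP-version helper defers to the stdlib `ipaddress` module and swallows `ValueError` for anything that is not a valid address; its behavior can be checked standalone:

```python
import ipaddress


def get_ip_version(ip_addr):
    # Returns 4 or 6 for valid addresses, None for any other input.
    try:
        return ipaddress.ip_address(ip_addr).version
    except ValueError:
        return None


print(get_ip_version('10.10.10.3'))  # 4
print(get_ip_version('fd00::1'))     # 6
print(get_ip_version('not-an-ip'))   # None
```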
def setup_testcase_config(testcase_config, lab=None, natbox=None):
    fip_error = 'A valid IPv4 OAM floating IP has to be specified via ' \
                'cmdline option --lab=<oam_floating_ip>, ' \
                'or testcase config file has to be provided via ' \
                '--testcase-config with oam_floating_ip ' \
                'specified under auth_platform section.'
    if not testcase_config:
        if not lab:
            raise ValueError(fip_error)
        return lab, natbox

    testcase_config = os.path.expanduser(testcase_config)
    auth_section = 'auth'
    guest_image_section = 'guest_image'
    guest_networks_section = 'guest_networks'
    guest_keypair_section = 'guest_keypair'
    natbox_section = 'natbox'

    config = configparser.ConfigParser()
    config.read(testcase_config)

    #
    # Update global variables for auth section
    #
    # Update OAM floating IP
    if lab:
        fip = lab.get('floating ip')
        config.set(auth_section, 'oam_floating_ip', fip)
    else:
        fip = config.get(auth_section, 'oam_floating_ip', fallback='').strip()
        lab = get_lab_dict(fip)

    if __get_ip_version(fip) != 4:
        raise ValueError(fip_error)

    # controller-0 oam ip is updated with best effort if a valid IPv4 IP is
    # provided
    if not lab.get('controller-0 ip') and config.get(auth_section,
                                                     'controller0_oam_ip',
                                                     fallback='').strip():
        con0_ip = config.get(auth_section, 'controller0_oam_ip').strip()
        if __get_ip_version(con0_ip) == 4:
            lab['controller-0 ip'] = con0_ip
        else:
            LOG.info(
                "controller0_oam_ip specified in testcase config file is not "
                "a valid IPv4 address. Ignore.")

    # Update linux user credentials
    if config.get(auth_section, 'linux_username', fallback='').strip():
        HostLinuxUser.set_user(
            config.get(auth_section, 'linux_username').strip())
    if config.get(auth_section, 'linux_user_password', fallback='').strip():
        HostLinuxUser.set_password(
            config.get(auth_section, 'linux_user_password').strip())

    # Update openstack keystone user credentials
    auth_dict_map = {
        'platform_admin': 'admin_platform',
        'admin': 'admin',
        'test1': 'tenant1',
        'test2': 'tenant2',
    }
    for conf_prefix, dict_name in auth_dict_map.items():
        kwargs = {}
        default_auth = Tenant.get(dict_name)
        conf_user = config.get(auth_section, '{}_username'.format(conf_prefix),
                               fallback='').strip()
        conf_password = config.get(auth_section,
                                   '{}_password'.format(conf_prefix),
                                   fallback='').strip()
        conf_project = config.get(auth_section,
                                  '{}_project_name'.format(conf_prefix),
                                  fallback='').strip()
        conf_domain = config.get(auth_section,
                                 '{}_domain_name'.format(conf_prefix),
                                 fallback='').strip()
        conf_keypair = config.get(auth_section,
                                  '{}_nova_keypair'.format(conf_prefix),
                                  fallback='').strip()
        if conf_user and conf_user != default_auth.get('user'):
            kwargs['username'] = conf_user
        if conf_password and conf_password != default_auth.get('password'):
            kwargs['password'] = conf_password
        if conf_project and conf_project != default_auth.get('tenant'):
            kwargs['tenant'] = conf_project
        if conf_domain and conf_domain != default_auth.get('domain'):
            kwargs['domain'] = conf_domain
        if conf_keypair and conf_keypair != default_auth.get('nova_keypair'):
            kwargs['nova_keypair'] = conf_keypair

        if kwargs:
            Tenant.update(dict_name, **kwargs)

    #
    # Update global variables for natbox section
    #
    natbox_host = config.get(natbox_section, 'natbox_host', fallback='').strip()
    natbox_user = config.get(natbox_section, 'natbox_user', fallback='').strip()
    natbox_password = config.get(natbox_section, 'natbox_password',
                                 fallback='').strip()
    natbox_prompt = config.get(natbox_section, 'natbox_prompt',
                               fallback='').strip()
    if natbox_host and (not natbox or natbox_host != natbox['ip']):
        natbox = get_natbox_dict(natbox_host, user=natbox_user,
                                 password=natbox_password, prompt=natbox_prompt)
    #
    # Update global variables for guest_image section
    #
    img_file_dir = config.get(guest_image_section, 'img_file_dir',
                              fallback='').strip()
    glance_image_name = config.get(guest_image_section, 'glance_image_name',
                                   fallback='').strip()
    img_file_name = config.get(guest_image_section, 'img_file_name',
                               fallback='').strip()
    img_disk_format = config.get(guest_image_section, 'img_disk_format',
                                 fallback='').strip()
    min_disk_size = config.get(guest_image_section, 'min_disk_size',
                               fallback='').strip()
    img_container_format = config.get(guest_image_section,
                                      'img_container_format',
                                      fallback='').strip()
    image_ssh_user = config.get(guest_image_section, 'image_ssh_user',
                                fallback='').strip()
    image_ssh_password = config.get(guest_image_section, 'image_ssh_password',
                                    fallback='').strip()

    if img_file_dir and img_file_dir != GuestImages.DEFAULT['image_dir']:
        # Update default image file directory
        img_file_dir = os.path.expanduser(img_file_dir)
        if not os.path.isabs(img_file_dir):
            raise ValueError(
                "Please provide a valid absolute path for img_file_dir "
                "under guest_image section in testcase config file")
        GuestImages.DEFAULT['image_dir'] = img_file_dir

    if glance_image_name and glance_image_name != GuestImages.DEFAULT['guest']:
        # Update default glance image name
        GuestImages.DEFAULT['guest'] = glance_image_name
        if glance_image_name not in GuestImages.IMAGE_FILES:
            # Add guest image info to consts.stx.GuestImages
            if not (img_file_name and img_disk_format and min_disk_size):
                raise ValueError(
                    "img_file_name, img_disk_format and min_disk_size under "
                    "guest_image section have to be specified in testcase "
                    "config file")

            img_container_format = img_container_format if \
                img_container_format else 'bare'
            GuestImages.IMAGE_FILES[glance_image_name] = \
                (None, min_disk_size, img_file_name, img_disk_format,
                 img_container_format)

            # Add guest login credentials
            Guest.CREDS[glance_image_name] = {
                'user': image_ssh_user if image_ssh_user else 'root',
                'password': image_ssh_password if image_ssh_password else None,
            }

    #
    # Update global variables for guest_keypair section
    #
    natbox_keypair_dir = config.get(guest_keypair_section, 'natbox_keypair_dir',
                                    fallback='').strip()
    private_key_path = config.get(guest_keypair_section, 'private_key_path',
                                  fallback='').strip()

    if natbox_keypair_dir:
        natbox_keypair_path = os.path.join(natbox_keypair_dir,
                                           'keyfile_{}.pem'.format(
                                               lab['short_name']))
        ProjVar.set_var(NATBOX_KEYFILE_PATH=natbox_keypair_path)
    if private_key_path:
        ProjVar.set_var(STX_KEYFILE_PATH=private_key_path)

    #
    # Update global variables for guest_networks section
    #
    net_name_patterns = {
        'mgmt': config.get(guest_networks_section, 'mgmt_net_name_pattern',
                           fallback='').strip(),
        'data': config.get(guest_networks_section, 'data_net_name_pattern',
                           fallback='').strip(),
        'internal': config.get(guest_networks_section,
                               'internal_net_name_pattern',
                               fallback='').strip(),
        'external': config.get(guest_networks_section,
                               'external_net_name_pattern', fallback='').strip()
    }

    for net_type, net_name_pattern in net_name_patterns.items():
        if net_name_pattern:
            Networks.set_neutron_net_patterns(net_type=net_type,
                                              net_name_pattern=net_name_pattern)

    return lab, natbox

@ -0,0 +1,137 @@
[auth]
#
# Auth info to ssh to active controller and run platform commands
#

# Linux user info for ssh to StarlingX controller node
# controllers' OAM network floating ip and unit ip if applicable.
# oam_floating_ip is mandatory unless --lab=<oam_floating_ip> is provided
# via cmdline. Only IPv4 is supported by the test framework for now.
# Required by all configurations.

oam_floating_ip =
controller0_oam_ip =
controller1_oam_ip =
linux_username = sysadmin
linux_user_password = Li69nux*

# Platform keystone admin user and project info
platform_admin_username = admin
platform_admin_project_name = admin
platform_admin_password = Li69nux*
platform_admin_domain_name = Default


# Non-platform keystone info
# Required if stx-openstack is deployed

# non-platform keystone: admin user and project info
admin_username = admin
admin_project_name = admin
admin_password = Li69nux*
admin_domain_name = Default

# non-platform keystone: first test user and tenant. Will be used for most of
# the openstack related test cases.
test1_username = tenant1
test1_project_name = tenant1
test1_password = Li69nux*
test1_domain_name = Default
# nova keypair to use when creating a VM
test1_nova_keypair = keypair-tenant1

# non-platform keystone: second test user and tenant. Should be in the same
# domain as the first test user and tenant.
test2_username = tenant2
test2_project_name = tenant2
test2_password = Li69nux*
test2_domain_name = Default
test2_nova_keypair = keypair-tenant2


[natbox]
#
# NATBox will be used to ping/ssh to a guest
# Required if stx-openstack is deployed
#

# Info to ssh to a NATBox. If the NATBox is the localhost from which the
# tests are executed, set: natbox_host = localhost
natbox_host = <server name or ip used in ssh>
natbox_user = <ssh_user>
natbox_password = <ssh login password>

# python regex pattern for natbox prompt;
# default prompt is natbox_user@.*[$#] when unspecified
natbox_prompt =


[guest_image]
#
# Glance image info
# Required if stx-openstack is deployed
#

# Image file path on active controller. Will be used to create a glance image
# in some test cases.
img_file_dir = /home/sysadmin/images
img_file_name = tis-centos-guest.img
# minimum root disk size in GiB if this image is used to launch a VM
min_disk_size = 2
img_disk_format = raw
img_container_format = bare

# Full name of an existing glance image that will be used as the default image
# to create cinder volumes, VMs, etc. If glance_image_name is not provided,
# a glance image will be created from the above image file at the beginning
# of the test session.
glance_image_name = tis-centos-guest

# username and password that will be used to ssh to a VM that is created
# from the above glance image
image_ssh_user = root
image_ssh_password = root


[guest_keypair]
#
# Nova keypair to ssh to a VM from the NATBox without a password in some tests
# Required if stx-openstack is deployed
#

# Directory to store the private keyfile on the natbox.
natbox_keypair_dir = ~/priv_keys/

# private key path on controller-0 that was used to create the above nova
# keypair. If not provided or it does not exist, a nova keypair will be
# created using a key from ssh-keygen on controller-0.
private_key_path = /home/sysadmin/.ssh/id_rsa


[guest_networks]
#
# Neutron networks for openstack VMs
# Required if stx-openstack is deployed
#

# Python regex pattern for each type of neutron network -
# used in re.search(<pattern>, <full_network_name>)
# Pattern needs to be unique for each network type

# mgmt networks - need to be reachable from the above NATBox. Will always be
# used to create the first nic of the vm, so that the VM can be ping'd or
# ssh'd from the NATBox.
mgmt_net_name_pattern = tenant\d-mgmt-net

# data networks - usually un-shared. Will be used in some test cases
# that require communication between two VMs
data_net_name_pattern = tenant\d-net

# internal network - needs to be shared among tenants. Will be used in a few
# test cases to route data network traffic via an internal interface between
# two VMs that belong to different tenants
internal_net_name_pattern = internal

# external network - neutron floating ips will be created off this network.
# Needs to be reachable from the NATBox.
external_net_name_pattern = external
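The `*_net_name_pattern` values above are applied with `re.search` against full neutron network names. A quick check of how the default patterns classify names (the network names here are examples only, and the framework's actual matching lives in `Networks.set_neutron_net_patterns`):

```python
import re

# Default patterns from the config template above.
patterns = {
    'mgmt': r'tenant\d-mgmt-net',
    'data': r'tenant\d-net',
    'internal': r'internal',
    'external': r'external',
}


def classify(net_name):
    # Return the first network type whose pattern matches the full name.
    for net_type, pattern in patterns.items():
        if re.search(pattern, net_name):
            return net_type
    return None


print(classify('tenant1-mgmt-net'))  # mgmt
print(classify('tenant2-net'))       # data
print(classify('external-net0'))     # external
```

Note that `tenant\d-mgmt-net` does not also match `tenant\d-net` (the literal `-net` must follow the digit), which is why the template can require each pattern to be unique per network type.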
@ -0,0 +1,72 @@
import pytest

import setups
from consts.auth import CliAuth, Tenant
from consts.proj_vars import ProjVar
from utils.tis_log import LOG
from utils.clients.ssh import ControllerClient

con_ssh = None
natbox_ssh = None
initialized = False


@pytest.fixture(scope='session', autouse=True)
def setup_test_session(global_setup):
    """
    Set up the primary tenant and NatBox ssh before the first test executes.
    STX ssh was already set up at the collecting phase.
    """
    LOG.fixture_step("(session) Setting up test session...")
    setups.setup_primary_tenant(ProjVar.get_var('PRIMARY_TENANT'))

    global con_ssh
    if not con_ssh:
        con_ssh = ControllerClient.get_active_controller()
    # set build id to be used to upload/write test results
    setups.set_build_info(con_ssh)

    # Ensure tis and natbox (if applicable) ssh are connected
    con_ssh.connect(retry=True, retry_interval=3, retry_timeout=300)

    # set up natbox connection and copy keyfile
    natbox_dict = ProjVar.get_var('NATBOX')
    global natbox_ssh
    natbox_ssh = setups.setup_natbox_ssh(natbox_dict, con_ssh=con_ssh)

    # set global var for sys_type
    setups.set_sys_type(con_ssh=con_ssh)

    # rsync files between controllers
    setups.copy_test_files()


def pytest_collectstart():
    """
    Set up the ssh session at collectstart, because skipif conditions are
    evaluated during the test collection phase.
    """
    global initialized
    if not initialized:
        global con_ssh
        con_ssh = setups.setup_tis_ssh(ProjVar.get_var("LAB"))
        ProjVar.set_var(con_ssh=con_ssh)
        CliAuth.set_vars(**setups.get_auth_via_openrc(con_ssh))
        if setups.is_https(con_ssh):
            CliAuth.set_vars(HTTPS=True)

        auth_url = CliAuth.get_var('OS_AUTH_URL')
        Tenant.set_platform_url(auth_url)
        setups.set_region(region=None)
        if ProjVar.get_var('IS_DC'):
            Tenant.set_platform_url(url=auth_url, central_region=True)
        initialized = True


def pytest_runtest_teardown():
    for con_ssh_ in ControllerClient.get_active_controllers(
            current_thread_only=True):
        con_ssh_.flush()
        con_ssh_.connect(retry=True, retry_interval=3, retry_timeout=300)
    if natbox_ssh:
        natbox_ssh.flush()
        natbox_ssh.connect(retry=False)
@ -0,0 +1,3 @@
from testfixtures.resource_mgmt import *
from testfixtures.resource_create import *
from testfixtures.config_host import *
@ -0,0 +1,102 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import time
from datetime import datetime, timedelta
from pytest import mark

from utils.tis_log import LOG

from consts.stx import GuestImages
from consts.auth import Tenant
from keywords import ceilometer_helper, gnocchi_helper


def _wait_for_measurements(meter, resource_type, extra_query, start_time,
                           overlap=None, timeout=1860,
                           check_interval=60):
    end_time = time.time() + timeout

    while time.time() < end_time:
        values = gnocchi_helper.get_aggregated_measures(
            metrics=meter, resource_type=resource_type, start=start_time,
            overlap=overlap, extra_query=extra_query)[1]
        if values:
            return values

        time.sleep(check_interval)


@mark.cpe_sanity
@mark.sanity
@mark.sx_nightly
@mark.parametrize('meter', [
    'image.size'
])
def test_measurements_for_metric(meter):
    """
    Validate statistics for one meter

    """
    LOG.tc_step('Get ceilometer statistics table for image.size meter')

    now = datetime.utcnow()
    start = (now - timedelta(minutes=10))
    start = start.strftime("%Y-%m-%dT%H:%M:%S")
    image_name = GuestImages.DEFAULT['guest']
    resource_type = 'image'
    extra_query = "name='{}'".format(image_name)
    overlap = None

    code, output = gnocchi_helper.get_aggregated_measures(
        metrics=meter, resource_type=resource_type, start=start,
        extra_query=extra_query, fail_ok=True)
    if code > 0:
        if "Metrics can't being aggregated" in output:
            # there was another glance image that has the same
            # string in its name
            overlap = '0'
        else:
            assert False, output

    values = output
    if code == 0 and values:
        assert len(values) <= 4, "Incorrect count for {} {} metric via " \
                                 "'openstack metric measures aggregation'". \
            format(image_name, meter)
    else:
        values = _wait_for_measurements(meter=meter,
                                        resource_type=resource_type,
                                        extra_query=extra_query,
                                        start_time=start, overlap=overlap)
        assert values, "No measurements for image.size for 30+ minutes"

    LOG.tc_step('Check that values are no less than zero')
    for val in values:
        assert 0 <= float(val), "{} {} value in metric measurements " \
                                "table is less than zero".format(
            image_name, meter)


def check_event_in_tenant_or_admin(resource_id, event_type):
    for auth_ in (None, Tenant.get('admin')):
        traits = ceilometer_helper.get_events(event_type=event_type,
                                              header='traits:value',
                                              auth_info=auth_)
        for trait in traits:
            if resource_id in trait:
                LOG.info("Resource found in ceilometer events using "
                         "auth: {}".format(auth_))
                break
        else:
            continue
        break
    else:
        assert False, "{} event for resource {} was not found under admin or " \
                      "tenant".format(event_type, resource_id)
@ -0,0 +1,3 @@
from testfixtures.resource_mgmt import *
from testfixtures.config_host import *
from testfixtures.resource_create import *
@ -0,0 +1,66 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

from pytest import mark

from consts.stx import HostAvailState
from keywords import system_helper, network_helper, host_helper
from utils.clients.ssh import ControllerClient
from utils.tis_log import LOG


@mark.p3
def test_ping_hosts():
    con_ssh = ControllerClient.get_active_controller()

    ping_failed_list = []
    for hostname in system_helper.get_hosts():
        LOG.tc_step(
            "Send 100 pings to {} from Active Controller".format(hostname))
        ploss_rate, untran_p = network_helper.ping_server(hostname, con_ssh,
                                                         num_pings=100,
                                                         timeout=300,
                                                         fail_ok=True)
        if ploss_rate > 0:
            if ploss_rate == 100:
                ping_failed_list.append(
                    "{}: All packets dropped.\n".format(hostname))
            else:
                ping_failed_list.append(
                    "{}: Packet loss rate: {}/100\n".format(hostname,
                                                            ploss_rate))
        if untran_p > 0:
            ping_failed_list.append(
                "{}: {}/100 pings are untransmitted within 300 seconds".format(
                    hostname, untran_p))

    LOG.tc_step("Ensure all packets are received.")
    assert not ping_failed_list, "Dropped/Un-transmitted packets detected " \
                                 "when pinging hosts. " \
                                 "Details:\n{}".format(ping_failed_list)


@mark.sanity
@mark.cpe_sanity
@mark.sx_sanity
def test_ssh_to_hosts():
    """
    Test ssh to every host on the system from the active controller

    """
    hosts_to_ssh = system_helper.get_hosts(
        availability=[HostAvailState.AVAILABLE, HostAvailState.ONLINE])
    failed_list = []
    for hostname in hosts_to_ssh:
        LOG.tc_step("Attempt SSH to {}".format(hostname))
        try:
            with host_helper.ssh_to_host(hostname):
                pass
        except Exception as e:
            failed_list.append("\n{}: {}".format(hostname, e.__str__()))

    assert not failed_list, "SSH to host(s) failed: {}".format(failed_list)
@ -0,0 +1,58 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

from pytest import mark, fixture

from utils.tis_log import LOG
from keywords import host_helper, check_helper


# Do not check alarms for tests in this module, which are read-only tests.
@fixture()
def check_alarms():
    pass


class TestCoreDumpsAndCrashes:
    @fixture(scope='class')
    def post_coredumps_and_crash_reports(self):
        LOG.fixture_step(
            "Gather core dumps and crash reports info for all hosts")
        return host_helper.get_coredumps_and_crashreports()

    @mark.abslast
    @mark.sanity
    @mark.cpe_sanity
    @mark.sx_sanity
    @mark.parametrize('report_type', [
        'core_dumps',
        'crash_reports',
    ])
    def test_system_coredumps_and_crashes(self, report_type,
                                          post_coredumps_and_crash_reports):

        LOG.tc_step("Check {} does not exist on any host".format(report_type))
        existing_files = {}
        for host in post_coredumps_and_crash_reports:
            core_dumps, crash_reports = post_coredumps_and_crash_reports[host]
            failures = {'core_dumps': core_dumps,
                        'crash_reports': crash_reports}

            if failures[report_type]:
                existing_files[host] = failures[report_type]

        assert not existing_files, "{} exist on {}".format(report_type, list(
            existing_files.keys()))


@mark.abslast
@mark.sanity
@mark.cpe_sanity
@mark.sx_sanity
def test_system_alarms(pre_alarms_session):
    LOG.tc_step("Gathering system alarms at the end of test session")
    check_helper.check_alarms(before_alarms=pre_alarms_session)
    LOG.info("No new alarms found after test session.")
@@ -0,0 +1,5 @@
# Do NOT remove the following imports. Needed for test fixture discovery purposes.
from testfixtures.resource_mgmt import delete_resources_func, delete_resources_class, delete_resources_module
from testfixtures.recover_hosts import hosts_recover_func, hosts_recover_class, hosts_recover_module
from testfixtures.verify_fixtures import *
from testfixtures.pre_checks_and_configs import *
@@ -0,0 +1,3 @@
from testfixtures.resource_mgmt import *
from testfixtures.resource_create import *
from testfixtures.config_host import *
@@ -0,0 +1,110 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import mark, skip

from utils import table_parser, cli
from utils.tis_log import LOG

from consts.stx import EventLogID
from keywords import system_helper, host_helper, common

from testfixtures.recover_hosts import HostsToRecover


@mark.sanity
def test_system_alarms_and_events_on_lock_unlock_compute(no_simplex):
    """
    Verify fm alarm-show command

    Test Steps:
    - Delete active alarms
    - Lock a host
    - Check active alarm generated for host lock
    - Check relative values are the same in fm alarm-list and fm alarm-show
      <uuid>
    - Check host lock 'set' event logged via fm event-list
    - Unlock host
    - Check active alarms cleared via fm alarm-list
    - Check host lock 'clear' event logged via fm event-list
    """

    # Remove following step because it's unnecessary and fails the test when
    # alarm is re-generated
    # # Clear the alarms currently present
    # LOG.tc_step("Clear the alarms table")
    # system_helper.delete_alarms()

    # Raise a new alarm by locking a compute node
    # Get the compute
    compute_host = host_helper.get_up_hypervisors()[0]
    if compute_host == system_helper.get_active_controller_name():
        compute_host = system_helper.get_standby_controller_name()
        if not compute_host:
            skip('Standby controller unavailable')

    LOG.tc_step("Lock a nova hypervisor host {}".format(compute_host))
    pre_lock_time = common.get_date_in_format()
    HostsToRecover.add(compute_host)
    host_helper.lock_host(compute_host)

    LOG.tc_step("Check host lock alarm is generated")
    post_lock_alarms = \
        system_helper.wait_for_alarm(field='UUID', entity_id=compute_host,
                                     reason=compute_host,
                                     alarm_id=EventLogID.HOST_LOCK,
                                     strict=False,
                                     fail_ok=False)[1]

    LOG.tc_step(
        "Check related fields in fm alarm-list and fm alarm-show are of the "
        "same values")
    post_lock_alarms_tab = system_helper.get_alarms_table(uuid=True)

    alarms_l = ['Alarm ID', 'Entity ID', 'Severity', 'Reason Text']
    alarms_s = ['alarm_id', 'entity_instance_id', 'severity', 'reason_text']

    # Only 1 alarm since we are now checking the specific alarm ID
    for post_alarm in post_lock_alarms:
        LOG.tc_step(
            "Verify {} for alarm {} in alarm-list are in sync with "
            "alarm-show".format(alarms_l, post_alarm))

        alarm_show_tab = table_parser.table(cli.fm('alarm-show', post_alarm)[1])
        alarm_list_tab = table_parser.filter_table(post_lock_alarms_tab,
                                                   UUID=post_alarm)

        for i in range(len(alarms_l)):
            alarm_l_val = table_parser.get_column(alarm_list_tab,
                                                  alarms_l[i])[0]
            alarm_s_val = table_parser.get_value_two_col_table(alarm_show_tab,
                                                               alarms_s[i])

            assert alarm_l_val == alarm_s_val, \
                "{} value in alarm-list: {} is different than alarm-show: " \
                "{}".format(alarms_l[i], alarm_l_val, alarm_s_val)

    LOG.tc_step("Check host lock is logged via fm event-list")
    system_helper.wait_for_events(entity_instance_id=compute_host,
                                  start=pre_lock_time, timeout=60,
                                  event_log_id=EventLogID.HOST_LOCK,
                                  fail_ok=False, **{'state': 'set'})

    pre_unlock_time = common.get_date_in_format()
    LOG.tc_step("Unlock {}".format(compute_host))
    host_helper.unlock_host(compute_host)

    LOG.tc_step("Check host lock active alarm cleared")
    alarm_sets = [(EventLogID.HOST_LOCK, compute_host)]
    system_helper.wait_for_alarms_gone(alarm_sets, fail_ok=False)

    LOG.tc_step("Check host lock clear event logged")
    system_helper.wait_for_events(event_log_id=EventLogID.HOST_LOCK,
                                  start=pre_unlock_time,
                                  entity_instance_id=compute_host,
                                  fail_ok=False, **{'state': 'clear'})
@@ -0,0 +1 @@
from testfixtures.horizon import *
@@ -0,0 +1,322 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import re

from pytest import fixture, mark

from consts import horizon
from utils import table_parser, cli
from utils.tis_log import LOG
from utils.horizon.pages.admin.platform import hostinventorypage
from keywords import system_helper


@fixture(scope='function')
def host_inventory_pg(admin_home_pg, request):
    LOG.fixture_step('Go to Admin > Platform > Host Inventory')
    host_inventory_pg = hostinventorypage.HostInventoryPage(
        admin_home_pg.driver)
    host_inventory_pg.go_to_target_page()

    def teardown():
        LOG.fixture_step('Back to Host Inventory page')
        host_inventory_pg.go_to_target_page()

    request.addfinalizer(teardown)
    return host_inventory_pg


def format_uptime(uptime):
    """
    Uptime displayed in horizon may look like:
        2 weeks, 10 hours
        2 hours, 2 minutes
        45 minutes
        ...
    """
    uptime = int(uptime)
    min_ = 60
    hour = min_ * 60
    day = hour * 24
    week = day * 7
    month = week * 4

    uptime_months = uptime // month
    uptime_weeks = uptime % month // week
    uptime_days = uptime % month % week // day
    uptime_hours = uptime % month % week % day // hour
    uptime_mins = uptime % month % week % day % hour // min_

    if uptime < min_:
        return '0 minutes'
    elif uptime < hour:
        return '{} minute'.format(uptime_mins)
    elif uptime < day:
        return '{} hour, {} minute'.format(uptime_hours, uptime_mins)
    elif uptime < week:
        return '{} day, {} hour'.format(uptime_days, uptime_hours)
    elif uptime < month:
        return '{} week, {} day'.format(uptime_weeks, uptime_days)
    else:
        return '{} month, {} week'.format(uptime_months, uptime_weeks)


@mark.platform_sanity
def test_horizon_host_inventory_display(host_inventory_pg):
    """
    Test the hosts inventory display:

    Setups:
        - Login as Admin
        - Go to Admin > Platform > Host Inventory

    Test Steps:
        - Test host tables display

    Teardown:
        - Back to Host Inventory page
        - Logout

    """
    LOG.tc_step('Test host inventory display')
    host_inventory_pg.go_to_hosts_tab()
    host_list = system_helper.get_hosts()
    for host_name in host_list:
        LOG.info("Checking {}...".format(host_name))
        headers_map = host_inventory_pg.hosts_table(
            host_name).get_cli_horizon_mapping()
        fields = list(headers_map.keys())
        cli_values = system_helper.get_host_values(host_name, fields,
                                                   rtn_dict=True)
        cli_values['uptime'] = format_uptime(cli_values['uptime'])
        if cli_values.get('peers'):
            cli_values['peers'] = cli_values.get('peers').get('name')

        horizon_vals = host_inventory_pg.horizon_vals(host_name)
        for cli_field in fields:
            cli_val = cli_values[cli_field]
            horizon_field = headers_map[cli_field]
            horizon_val = horizon_vals[horizon_field]
            if cli_field == 'uptime':
                assert re.match(r'\d+ [dhm]', horizon_val)
            else:
                assert str(cli_val).lower() in horizon_val.lower(), \
                    '{} {} display incorrectly, expect: {} actual: {}'. \
                    format(host_name, horizon_field, cli_val, horizon_val)

    horizon.test_result = True


@mark.parametrize('host_name', [
    'controller-0'
])
def test_horizon_host_details_display(host_inventory_pg, host_name):
    """
    Test the host details display:

    Setups:
        - Login as Admin
        - Go to Admin > Platform > Host Inventory > Controller-0

    Test Steps:
        - Test host controller-0 overview display
        - Test host controller-0 processor display
        - Test host controller-0 memory display
        - Test host controller-0 storage display
        - Test host controller-0 ports display
        - Test host controller-0 lldp display

    Teardown:
        - Logout
    """
    host_table = host_inventory_pg.hosts_table(host_name)
    host_details_pg = host_inventory_pg.go_to_host_detail_page(host_name)

    # OVERVIEW TAB
    LOG.tc_step('Test host: {} overview display'.format(host_name))
    host_details_pg.go_to_overview_tab()
    horizon_vals = host_details_pg.host_detail_overview(
        host_table.driver).get_content()
    fields_map = host_details_pg.host_detail_overview(
        host_table.driver).OVERVIEW_INFO_HEADERS_MAP
    cli_host_vals = system_helper.get_host_values(host_name, fields_map.keys(),
                                                  rtn_dict=True)
    for field in fields_map:
        horizon_header = fields_map[field]
        cli_host_val = cli_host_vals[field]
        horizon_val = horizon_vals.get(horizon_header)
        if horizon_val is None:
            horizon_val = 'None'
            assert cli_host_val == horizon_val, '{} display incorrectly'.\
                format(horizon_header)
        else:
            assert cli_host_val.upper() in horizon_val.upper(), \
                '{} display incorrectly'.format(horizon_header)
    LOG.info('Host: {} overview display correct'.format(host_name))

    # PROCESSOR TAB
    LOG.tc_step('Test host {} processor display'.format(host_name))
    host_details_pg.go_to_processor_tab()
    cpu_table = table_parser.table(
        cli.system('host-cpu-list {}'.format(host_name))[1])
    expt_cpu_info = {
        'Processor Model:':
            table_parser.get_values(cpu_table, 'processor_model')[0],
        'Processors:': str(
            len(set(table_parser.get_values(cpu_table, 'processor'))))}

    horizon_cpu_info = host_details_pg.inventory_details_processor_info\
        .get_content()
    assert horizon_cpu_info['Processor Model:'] == expt_cpu_info[
        'Processor Model:']
    assert horizon_cpu_info['Processors:'] == expt_cpu_info['Processors:']

    # MEMORY TABLE
    LOG.tc_step('Test host {} memory display'.format(host_name))
    checking_list = ['mem_total(MiB)', 'mem_avail(MiB)']

    host_details_pg.go_to_memory_tab()
    memory_table = table_parser.table(
        cli.system('host-memory-list {}'.format(host_name))[1])
    column_names = host_details_pg.memory_table.column_names
    processor_list = table_parser.get_values(memory_table, column_names[0])
    cli_memory_table_dict = table_parser.row_dict_table(memory_table,
                                                        column_names[0],
                                                        lower_case=False)

    for processor in processor_list:
        horizon_vm_pages_val = \
            host_details_pg.get_memory_table_info(processor, column_names[2])
        horizon_memory_val = \
            host_details_pg.get_memory_table_info(processor, 'Memory')
        if cli_memory_table_dict[processor]['hugepages(hp)_configured'] == \
                'False':
            assert horizon_vm_pages_val is None, \
                'Horizon {} display incorrectly'.format(column_names[2])
        else:
            for field in checking_list:
                assert cli_memory_table_dict[processor][field] in \
                    horizon_memory_val, \
                    'Memory {} display incorrectly'.format(field)

    # STORAGE TABLE
    # This test will loop each table and test their display
    # Test may fail in following case:
    # 1. disk table's Size header eg. Size(GiB) used different unit such as
    #    Size (MiB), Size (TiB)
    # 2. lvg table may display different:
    #    Case 1: Name | State | Access | Size (GiB) | Avail Size(GiB) |
    #            Current Physical Volume - Current Logical Volumes
    #    Case 2: Name | State | Access | Size |
    #            Current Physical Volume - Current Logical Volumes
    #    Case 2 Size values in horizon are rounded by 2 digits but in CLI not
    #    rounded

    LOG.tc_step('Test host {} storage display'.format(host_name))
    host_details_pg.go_to_storage_tab()

    cmd_list = ['host-disk-list {}'.format(host_name),
                'host-disk-partition-list {}'.format(host_name),
                'host-lvg-list {}'.format(host_name),
                'host-pv-list {}'.format(host_name)]
    table_names = ['disk table', 'disk partition table',
                   'local volume groups table', 'physical volumes table']

    horizon_storage_tables = [host_details_pg.storage_disks_table,
                              host_details_pg.storage_partitions_table,
                              host_details_pg.storage_lvg_table,
                              host_details_pg.storage_pv_table]
    cli_storage_tables = []
    for cmd in cmd_list:
        cli_storage_tables.append(table_parser.table(cli.system(cmd)[1]))

    for i in range(len(horizon_storage_tables)):
        horizon_table = horizon_storage_tables[i]
        unique_key = horizon_table.column_names[0]
        horizon_row_dict_table = host_details_pg.get_horizon_row_dict(
            horizon_table, key_header_index=0)
        cli_table = cli_storage_tables[i]
        table_dict_unique_key = list(horizon_table.HEADERS_MAP.keys())[
            list(horizon_table.HEADERS_MAP.values()).index(unique_key)]

        cli_row_dict_storage_table = \
            table_parser.row_dict_table(cli_table,
                                        table_dict_unique_key,
                                        lower_case=False)
        for key_header in horizon_row_dict_table:
            for cli_header in horizon_table.HEADERS_MAP:
                horizon_header = horizon_table.HEADERS_MAP[cli_header]
                horizon_row_dict = horizon_row_dict_table[key_header]
                cli_row_dict = cli_row_dict_storage_table[key_header]
                # Solve parser issue: e.g. Size (GiB) should be '558.029'
                # not ['5589.', '029']
                cli_val = cli_row_dict[cli_header]
                if isinstance(cli_val, list):
                    cli_row_dict[cli_header] = ''.join(cli_val)
                assert horizon_row_dict[horizon_header] == cli_row_dict[
                    cli_header], \
                    'In {}: disk: {} {} display incorrectly'.format(
                        table_names[i], key_header, horizon_header)
        LOG.info('{} display correct'.format(table_names[i]))

    # PORT TABLE
    LOG.tc_step('Test host {} port display'.format(host_name))
    host_details_pg.go_to_ports_tab()
    horizon_port_table = host_details_pg.ports_table()
    cli_port_table = table_parser.table(
        cli.system('host-ethernet-port-list {}'.format(host_name))[1])
    horizon_row_dict_port_table = host_details_pg.get_horizon_row_dict(
        horizon_port_table, key_header_index=0)

    cli_row_dict_port_table = table_parser.row_dict_table(cli_port_table,
                                                          'name',
                                                          lower_case=False)
    for ethernet_name in cli_row_dict_port_table:
        for cli_header in horizon_port_table.HEADERS_MAP:
            horizon_header = horizon_port_table.HEADERS_MAP[cli_header]
            horizon_row_dict = horizon_row_dict_port_table[ethernet_name]
            cli_row_dict = cli_row_dict_port_table[ethernet_name]
            if cli_header not in cli_row_dict and cli_header == 'mac address':
                cli_val = cli_row_dict['macaddress']
            else:
                cli_val = cli_row_dict[cli_header]
            horizon_val = horizon_row_dict[horizon_header]
            # Solve table parser issue: MAC Address returns list eg:
            # ['a4:bf:01:35:4a:', '32']
            if isinstance(cli_val, list):
                cli_val = ''.join(cli_val)
            assert cli_val in horizon_val, '{} display incorrectly'.format(
                horizon_header)

    # LLDP TABLE
    LOG.tc_step('Test host {} lldp display'.format(host_name))
    host_details_pg.go_to_lldp_tab()
    lldp_list_table = table_parser.table(
        cli.system('host-lldp-neighbor-list {}'.format(host_name))[1])
    lldp_uuid_list = table_parser.get_values(lldp_list_table, 'uuid')
    horizon_lldp_table = host_details_pg.lldp_table()
    cli_row_dict_lldp_table = {}
    horizon_row_dict_lldp_table = host_details_pg.get_horizon_row_dict(
        horizon_lldp_table, key_header_index=1)
    for uuid in lldp_uuid_list:
        cli_row_dict = {}
        lldp_show_table = table_parser.table(
            cli.system('lldp-neighbor-show {}'.format(uuid))[1])
        row_dict_key = table_parser.get_value_two_col_table(lldp_show_table,
                                                            'port_identifier')
        for cli_header in horizon_lldp_table.HEADERS_MAP:
            horizon_header = horizon_lldp_table.HEADERS_MAP[cli_header]
            horizon_row_dict = horizon_row_dict_lldp_table[row_dict_key]
            cli_row_dict[cli_header] = table_parser.get_value_two_col_table(
                lldp_show_table, cli_header)
            cli_row_dict_lldp_table[row_dict_key] = cli_row_dict
            assert cli_row_dict[cli_header] == \
                horizon_row_dict[horizon_header], \
                'lldp neighbor:{} {} display incorrectly'.\
                format(row_dict_key, horizon_header)

    horizon.test_result = True
@@ -0,0 +1,86 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import fixture, mark

from consts import horizon
from consts.auth import Tenant
from consts.stx import GuestImages
from keywords import nova_helper
from utils.tis_log import LOG
from utils.horizon import helper
from utils.horizon.regions import messages
from utils.horizon.pages.project.compute import instancespage


@fixture(scope='function')
def instances_pg(tenant_home_pg_container, request):
    LOG.fixture_step('Go to Project > Compute > Instance')
    instance_name = helper.gen_resource_name('instance')
    instances_pg = instancespage.InstancesPage(
        tenant_home_pg_container.driver, port=tenant_home_pg_container.port)
    instances_pg.go_to_target_page()

    def teardown():
        LOG.fixture_step('Back to instance page')
        if instances_pg.is_instance_present(instance_name):
            instances_pg.delete_instance_by_row(instance_name)
        instances_pg.go_to_target_page()

    request.addfinalizer(teardown)

    return instances_pg, instance_name


@mark.sanity
@mark.cpe_sanity
@mark.sx_sanity
def test_horizon_create_delete_instance(instances_pg):
    """
    Test the instance creation and deletion functionality:

    Setups:
        - Login as Tenant
        - Go to Project > Compute > Instance

    Teardown:
        - Back to Instances page
        - Logout

    Test Steps:
        - Create a new instance
        - Verify the instance appears in the instances table as active
        - Delete the newly launched instance
        - Verify the instance does not appear in the table after deletion
    """
    instances_pg, instance_name = instances_pg

    mgmt_net_name = '-'.join([Tenant.get_primary()['tenant'], 'mgmt', 'net'])
    flavor_name = nova_helper.get_basic_flavor(rtn_id=False)
    guest_img = GuestImages.DEFAULT['guest']

    LOG.tc_step('Create new instance {}'.format(instance_name))
    instances_pg.create_instance(instance_name,
                                 boot_source_type='Image',
                                 source_name=guest_img,
                                 flavor_name=flavor_name,
                                 network_names=mgmt_net_name,
                                 create_new_volume=False)
    assert not instances_pg.find_message_and_dismiss(messages.ERROR)

    LOG.tc_step('Verify the instance appears in the instances table as active')
    assert instances_pg.is_instance_active(instance_name)

    LOG.tc_step('Delete instance {}'.format(instance_name))
    instances_pg.delete_instance_by_row(instance_name)
    assert instances_pg.find_message_and_dismiss(messages.INFO)
    assert not instances_pg.find_message_and_dismiss(messages.ERROR)

    LOG.tc_step(
        'Verify the instance does not appear in the table after deletion')
    assert instances_pg.is_instance_deleted(instance_name)
    horizon.test_result = True
@@ -0,0 +1,3 @@
from testfixtures.resource_mgmt import *
from testfixtures.resource_create import *
from testfixtures.config_host import *
@@ -0,0 +1,146 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import fixture, skip, mark

from utils.tis_log import LOG
from consts.reasons import SkipHypervisor

from keywords import vm_helper, host_helper, nova_helper, system_helper, \
    network_helper
from testfixtures.fixture_resources import ResourceCleanup


@fixture(scope='module', autouse=True)
def skip_test_if_less_than_two_hosts(no_simplex):
    hypervisors = host_helper.get_up_hypervisors()
    if len(hypervisors) < 2:
        skip(SkipHypervisor.LESS_THAN_TWO_HYPERVISORS)

    LOG.fixture_step(
        "Update instance and volume quota to at least 10 and 20 respectively")
    vm_helper.ensure_vms_quotas(vms_num=10)

    return len(hypervisors)


class TestDefaultGuest:

    @fixture(scope='class')
    def vms_(self, add_admin_role_class):
        LOG.fixture_step("Create a flavor without ephemeral or swap disks")
        flavor_1 = nova_helper.create_flavor('flv_nolocaldisk')[1]
        ResourceCleanup.add('flavor', flavor_1, scope='class')

        LOG.fixture_step("Create a flavor with ephemeral and swap disks")
        flavor_2 = \
            nova_helper.create_flavor('flv_localdisk', ephemeral=1, swap=512)[1]
        ResourceCleanup.add('flavor', flavor_2, scope='class')

        LOG.fixture_step(
            "Boot vm1 from volume with flavor flv_nolocaldisk and wait for it "
            "pingable from NatBox")
        vm1_name = "vol_nolocal"
        vm1 = vm_helper.boot_vm(vm1_name, flavor=flavor_1, source='volume',
                                cleanup='class')[1]
        vm_helper.wait_for_vm_pingable_from_natbox(vm1)

        vm_host = vm_helper.get_vm_host(vm_id=vm1)

        LOG.fixture_step(
            "Boot vm2 from volume with flavor flv_localdisk and wait for it "
            "pingable from NatBox")
        vm2_name = "vol_local"
        vm2 = vm_helper.boot_vm(vm2_name, flavor=flavor_2, source='volume',
                                cleanup='class', avail_zone='nova',
                                vm_host=vm_host)[1]
        vm_helper.wait_for_vm_pingable_from_natbox(vm2)

        LOG.fixture_step(
            "Boot vm3 from image with flavor flv_nolocaldisk and wait for it "
            "pingable from NatBox")
        vm3_name = "image_novol"
        vm3 = vm_helper.boot_vm(vm3_name, flavor=flavor_1, source='image',
                                cleanup='class', avail_zone='nova',
                                vm_host=vm_host)[1]
        vm_helper.wait_for_vm_pingable_from_natbox(vm3)

        LOG.fixture_step(
            "Boot vm4 from image with flavor flv_nolocaldisk and wait for it "
            "pingable from NatBox")
        vm4_name = 'image_vol'
        vm4 = vm_helper.boot_vm(vm4_name, flavor_1, source='image',
                                cleanup='class', avail_zone='nova',
                                vm_host=vm_host)[1]
        vm_helper.wait_for_vm_pingable_from_natbox(vm4)

        LOG.fixture_step(
            "Attach volume to vm4 which was booted from image: {}.".format(vm4))
        vm_helper.attach_vol_to_vm(vm4)

        return [vm1, vm2, vm3, vm4], vm_host

    @mark.trylast
    @mark.sanity
    @mark.cpe_sanity
    def test_evacuate_vms(self, vms_):
        """
        Test evacuated vms
        Args:
            vms_: (fixture to create vms)

        Pre-requisites:
            - At least two up hypervisors on system

        Test Steps:
            - Create vms with various options:
                - vm booted from cinder volume,
                - vm booted from glance image,
                - vm booted from glance image, and have an extra cinder
                  volume attached after launch,
                - vm booted from cinder volume with ephemeral and swap disks
            - Move vms onto same hypervisor
            - sudo reboot -f on the host
            - Ensure vms are successfully evacuated to other host
            - Live migrate vms back to original host
            - Check vms can move back, and vms are still reachable from natbox
            - Check system services are enabled and neutron agents are alive

        """
        vms, target_host = vms_

        pre_res_sys, pre_msg_sys = system_helper.wait_for_services_enable(
            timeout=20, fail_ok=True)
        up_hypervisors = host_helper.get_up_hypervisors()
        pre_res_neutron, pre_msg_neutron = \
            network_helper.wait_for_agents_healthy(
                up_hypervisors, timeout=20, fail_ok=True)

        LOG.tc_step(
            "reboot -f on vms host, ensure vms are successfully evacuated and "
            "host is recovered after reboot")
        vm_helper.evacuate_vms(host=target_host, vms_to_check=vms,
                               wait_for_host_up=True, ping_vms=True)

        LOG.tc_step("Check rebooted host can still host vm")
        vm_helper.live_migrate_vm(vms[0], destination_host=target_host)
        vm_helper.wait_for_vm_pingable_from_natbox(vms[0])

        LOG.tc_step("Check system services and neutron agents after {} "
                    "reboot".format(target_host))
        post_res_sys, post_msg_sys = system_helper.wait_for_services_enable(
            fail_ok=True)
        post_res_neutron, post_msg_neutron = \
            network_helper.wait_for_agents_healthy(hosts=up_hypervisors,
                                                   fail_ok=True)

        assert post_res_sys, "\nPost-evac system services stats: {}" \
                             "\nPre-evac system services stats: {}". \
            format(post_msg_sys, pre_msg_sys)
        assert post_res_neutron, "\nPost-evac neutron agents stats: {}" \
                                 "\nPre-evac neutron agents stats: {}". \
            format(post_msg_neutron, pre_msg_neutron)
@@ -0,0 +1,31 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import mark

from utils import cli
from utils.tis_log import LOG


@mark.sx_sanity
def test_add_host_simplex_negative(simplex_only):
    """
    Test adding a second controller is rejected on simplex system
    Args:
        simplex_only: skip if non-sx system detected

    Test Steps:
        - On simplex system, check 'system host-add -n controller-1' is
          rejected

    """
    LOG.tc_step("Check adding second controller is rejected on simplex system")
    code, out = cli.system('host-add', '-n controller-1', fail_ok=True)

    assert 1 == code, "Unexpected exitcode for 'system host-add " \
                      "controller-1': {}".format(code)
    assert 'Adding a host on a simplex system is not allowed' in out, \
        "Unexpected error message: {}".format(out)
@@ -0,0 +1,93 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import time
from pytest import mark, skip, param

from utils.tis_log import LOG
from consts.stx import HostOperState, HostAvailState
from testfixtures.recover_hosts import HostsToRecover
from keywords import host_helper, system_helper


@mark.platform_sanity
def test_lock_active_controller_reject(no_simplex):
    """
    Verify locking the active controller is rejected

    Test Steps:
        - Get active controller
        - Attempt to lock active controller and ensure it's rejected

    """
    LOG.tc_step('Retrieve the active controller from the lab')
    active_controller = system_helper.get_active_controller_name()
    assert active_controller, "No active controller available"

    # Attempt to lock the active controller and verify it is rejected
    LOG.tc_step("Lock active controller and ensure it fails to lock")
    exit_code, cmd_output = host_helper.lock_host(active_controller,
                                                  fail_ok=True, swact=False,
                                                  check_first=False)
    assert exit_code == 1, 'Expect locking active controller to ' \
                           'be rejected. Actual: {}'.format(cmd_output)
    status = system_helper.get_host_values(active_controller,
                                           'administrative')[0]
    assert status == 'unlocked', "Fail: The active controller was locked."


@mark.parametrize('host_type', [
    param('controller', marks=mark.priorities('platform_sanity',
                                              'sanity', 'cpe_sanity')),
    param('compute', marks=mark.priorities('platform_sanity')),
    param('storage', marks=mark.priorities('platform_sanity')),
])
def test_lock_unlock_host(host_type):
    """
    Verify lock unlock host

    Test Steps:
        - Select a host per given type. If type is controller, select
          standby controller.
        - Lock selected host and ensure it is successfully locked
        - Unlock selected host and ensure it is successfully unlocked

    """
    LOG.tc_step("Select a {} node from system if any".format(host_type))
    if host_type == 'controller':
        if system_helper.is_aio_simplex():
            host = 'controller-0'
        else:
            host = system_helper.get_standby_controller_name()
            assert host, "No standby controller available"

    else:
        if host_type == 'compute' and system_helper.is_aio_system():
            skip("No compute host on AIO system")
        elif host_type == 'storage' and not system_helper.is_storage_system():
            skip("System does not have storage nodes")

        hosts = system_helper.get_hosts(personality=host_type,
                                        availability=HostAvailState.AVAILABLE,
                                        operational=HostOperState.ENABLED)

        assert hosts, "No good {} host on system".format(host_type)
        host = hosts[0]

    LOG.tc_step("Lock {} host - {} and ensure it is successfully "
                "locked".format(host_type, host))
    HostsToRecover.add(host)
    host_helper.lock_host(host, swact=False)

    # wait for services to stabilize before unlocking
    time.sleep(20)

    # unlock the host and verify it is successfully unlocked
    LOG.tc_step("Unlock {} host - {} and ensure it is successfully "
                "unlocked".format(host_type, host))
    host_helper.unlock_host(host)
@@ -0,0 +1,85 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import time
from pytest import mark, skip, param

from utils.tis_log import LOG

from consts.stx import VMStatus
from consts.timeout import VMTimeout
from keywords import host_helper, system_helper, vm_helper, network_helper
from testfixtures.recover_hosts import HostsToRecover


@mark.usefixtures('check_alarms')
@mark.parametrize('host_type', [
    param('controller', marks=mark.sanity),
    'compute',
    # 'storage'
])
def test_system_persist_over_host_reboot(host_type):
    """
    Validate Inventory summary over reboot of one of the controllers and see
    if data persists over reboot

    Test Steps:
        - capture Inventory summary for list of hosts on system service-list
          and neutron agent-list
        - reboot the current Controller-Active
        - Wait for reboot to complete
        - Validate key items from inventory persist over reboot

    """
    if host_type == 'controller':
        host = system_helper.get_active_controller_name()
    elif host_type == 'compute':
        if system_helper.is_aio_system():
            skip("No compute host for AIO system")

        host = None
    else:
        hosts = system_helper.get_hosts(personality='storage')
        if not hosts:
            skip(msg="Lab has no storage nodes. Skip rebooting storage node.")

        host = hosts[0]

    LOG.tc_step("Pre-check for system status")
    system_helper.wait_for_services_enable()
    up_hypervisors = host_helper.get_up_hypervisors()
    network_helper.wait_for_agents_healthy(hosts=up_hypervisors)

    LOG.tc_step("Launch a vm")
    vm_id = vm_helper.boot_vm(cleanup='function')[1]
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)

    if host is None:
        host = vm_helper.get_vm_host(vm_id)

    LOG.tc_step("Reboot a {} node and wait for reboot completes: "
                "{}".format(host_type, host))
    HostsToRecover.add(host)
    host_helper.reboot_hosts(host)
    host_helper.wait_for_hosts_ready(host)

    LOG.tc_step("Check vm is still active and pingable after {} "
                "reboot".format(host))
    vm_helper.wait_for_vm_status(vm_id, status=VMStatus.ACTIVE, fail_ok=False)
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id=vm_id,
timeout=VMTimeout.DHCP_RETRY)
|
||||
|
||||
LOG.tc_step("Check neutron agents and system services are in good state "
|
||||
"after {} reboot".format(host))
|
||||
network_helper.wait_for_agents_healthy(up_hypervisors)
|
||||
system_helper.wait_for_services_enable()
|
||||
|
||||
if host in up_hypervisors:
|
||||
LOG.tc_step("Check {} can still host vm after reboot".format(host))
|
||||
if not vm_helper.get_vm_host(vm_id) == host:
|
||||
time.sleep(30)
|
||||
vm_helper.live_migrate_vm(vm_id, destination_host=host)

@@ -0,0 +1,123 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import mark, skip

from utils.tis_log import LOG
from consts.reasons import SkipSysType
from keywords import host_helper, system_helper, vm_helper, network_helper, \
    kube_helper


@mark.sanity
@mark.cpe_sanity
def test_swact_controllers(wait_for_con_drbd_sync_complete):
    """
    Verify swact active controller

    Test Steps:
        - Boot a vm on system and check ping works
        - Swact active controller
        - Verify standby controller and active controller are swapped
        - Verify vm is still pingable

    """
    if system_helper.is_aio_simplex():
        skip("Simplex system detected")

    if not wait_for_con_drbd_sync_complete:
        skip(SkipSysType.LESS_THAN_TWO_CONTROLLERS)

    LOG.tc_step('Retrieve active and available controllers')
    pre_active_controller, pre_standby_controller = \
        system_helper.get_active_standby_controllers()
    assert pre_standby_controller, "No standby controller available"

    pre_res_sys, pre_msg_sys = system_helper.wait_for_services_enable(
        timeout=20, fail_ok=True)
    up_hypervisors = host_helper.get_up_hypervisors()
    pre_res_neutron, pre_msg_neutron = network_helper.wait_for_agents_healthy(
        up_hypervisors, timeout=20, fail_ok=True)

    LOG.tc_step("Boot a vm from image and ping it")
    vm_id_img = vm_helper.boot_vm(name='swact_img', source='image',
                                  cleanup='function')[1]
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id_img)

    LOG.tc_step("Boot a vm from volume and ping it")
    vm_id_vol = vm_helper.boot_vm(name='swact', cleanup='function')[1]
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id_vol)

    LOG.tc_step("Swact active controller and ensure active controller is "
                "changed")
    host_helper.swact_host(hostname=pre_active_controller)

    LOG.tc_step("Verify standby controller and active controller are swapped")
    post_active_controller = system_helper.get_active_controller_name()
    post_standby_controller = system_helper.get_standby_controller_name()

    assert pre_standby_controller == post_active_controller, \
        "Prev standby: {}; Post active: {}".format(
            pre_standby_controller, post_active_controller)
    assert pre_active_controller == post_standby_controller, \
        "Prev active: {}; Post standby: {}".format(
            pre_active_controller, post_standby_controller)

    LOG.tc_step("Check boot-from-image vm still pingable after swact")
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id_img, timeout=30)
    LOG.tc_step("Check boot-from-volume vm still pingable after swact")
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id_vol, timeout=30)

    LOG.tc_step("Check system services and neutron agents after swact "
                "from {}".format(pre_active_controller))
    post_res_sys, post_msg_sys = \
        system_helper.wait_for_services_enable(fail_ok=True)
    post_res_neutron, post_msg_neutron = \
        network_helper.wait_for_agents_healthy(hosts=up_hypervisors,
                                               fail_ok=True)

    assert post_res_sys, "\nPost-swact system services stats: {}" \
                         "\nPre-swact system services stats: {}".format(
                             post_msg_sys, pre_msg_sys)
    assert post_res_neutron, "\nPost-swact neutron agents stats: {}" \
                             "\nPre-swact neutron agents stats: {}".format(
                                 post_msg_neutron, pre_msg_neutron)

    LOG.tc_step("Check hosts are Ready in kubectl get nodes after swact")
    kube_helper.wait_for_nodes_ready(hosts=(pre_active_controller,
                                            pre_standby_controller),
                                     timeout=30)


@mark.platform_sanity
def test_swact_controller_platform(wait_for_con_drbd_sync_complete):
    """
    Verify swact active controller

    Test Steps:
        - Swact active controller
        - Verify standby controller and active controller are swapped
        - Verify nodes are ready in kubectl get nodes

    """
    if system_helper.is_aio_simplex():
        skip("Simplex system detected")

    if not wait_for_con_drbd_sync_complete:
        skip(SkipSysType.LESS_THAN_TWO_CONTROLLERS)

    LOG.tc_step('Retrieve active and available controllers')
    pre_active_controller, pre_standby_controller = \
        system_helper.get_active_standby_controllers()
    assert pre_standby_controller, "No standby controller available"

    LOG.tc_step("Swact active controller and ensure active controller "
                "is changed")
    host_helper.swact_host(hostname=pre_active_controller)

    LOG.tc_step("Check hosts are Ready in kubectl get nodes after swact")
    kube_helper.wait_for_nodes_ready(hosts=(pre_active_controller,
                                            pre_standby_controller),
                                     timeout=30)

@@ -0,0 +1,45 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import mark, skip, param

from utils.tis_log import LOG
from consts.stx import HostAvailState
from testfixtures.recover_hosts import HostsToRecover
from keywords import host_helper, system_helper


@mark.parametrize('host_type', [
    param('controller', marks=mark.platform),
    param('compute', marks=mark.platform),
    param('storage', marks=mark.platform),
])
def test_force_reboot_host(host_type):
    """
    Verify force reboot host

    Test Steps:
        - Select an available or degraded host of the given type
        - Force reboot the selected host
        - Wait for the host to become ready after reboot

    """

    LOG.tc_step("Select a {} node from system if any".format(host_type))
    hosts = system_helper.get_hosts(availability=(HostAvailState.AVAILABLE,
                                                  HostAvailState.DEGRADED),
                                    personality=host_type)
    if not hosts:
        skip("No available or degraded {} host found on system".format(
            host_type))

    host = hosts[0]
    LOG.tc_step("Force reboot {} host: {}".format(host_type, host))
    HostsToRecover.add(host)
    host_helper.reboot_hosts(hostnames=host)
    host_helper.wait_for_hosts_ready(host)

@@ -0,0 +1,3 @@
from testfixtures.resource_mgmt import *
from testfixtures.resource_create import *
from testfixtures.config_host import *

@@ -0,0 +1,203 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import time

from pytest import mark, fixture, skip, param

from utils.tis_log import LOG

from consts.auth import Tenant
from consts.stx import RouterStatus
from keywords import network_helper, vm_helper, system_helper, host_helper, \
    cinder_helper
from testfixtures.fixture_resources import ResourceCleanup

result_ = None


@fixture(scope='module')
def router_info(request):
    global result_
    result_ = False

    LOG.fixture_step(
        "Disable SNAT and update router to DVR if not already done.")

    router_id = network_helper.get_tenant_router()
    network_helper.set_router_gateway(router_id, enable_snat=False)
    is_dvr = network_helper.get_router_values(router_id, fields='distributed',
                                              auth_info=Tenant.get('admin'))[0]

    def teardown():
        post_dvr = \
            network_helper.get_router_values(router_id, fields='distributed',
                                             auth_info=Tenant.get('admin'))[0]
        if post_dvr != is_dvr:
            network_helper.set_router_mode(router_id, distributed=is_dvr)

    request.addfinalizer(teardown)

    if not is_dvr:
        network_helper.set_router_mode(router_id, distributed=True,
                                       enable_on_failure=False)

    result_ = True
    return router_id


@fixture()
def _bring_up_router(request):
    def _router_up():
        if result_ is False:
            router_id = network_helper.get_tenant_router()
            network_helper.set_router(router=router_id, fail_ok=False,
                                      enable=True)

    request.addfinalizer(_router_up)


@mark.domain_sanity
def test_dvr_update_router(router_info, _bring_up_router):
    """
    Test update router to distributed and non-distributed

    Args:
        router_info (str): router_id

    Setups:
        - Get the router id and original distributed setting

    Test Steps:
        - Boot a vm before updating router and ping vm from NatBox
        - Change the distributed value of the router and verify it's updated
          successfully
        - Verify router is in ACTIVE state
        - Verify vm can still be ping'd from NatBox
        - Repeat the three steps above with the distributed value reverted
          to its original value

    Teardown:
        - Delete vm
        - Revert router to its original distributed setting if not already
          done so

    """
    global result_
    result_ = False
    router_id = router_info

    LOG.tc_step("Boot a vm before updating router and ping vm from NatBox")
    vm_id = vm_helper.boot_vm(name='dvr_update', reuse_vol=False,
                              cleanup='function')[1]
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id, fail_ok=False)

    for update_to_val in [False, True]:
        LOG.tc_step("Update router distributed to {}".format(update_to_val))
        network_helper.set_router_mode(router_id, distributed=update_to_val,
                                       enable_on_failure=False)

        # Wait for 30 seconds to allow the router update to complete
        time.sleep(30)
        LOG.tc_step(
            "Verify router is in active state and vm can be ping'd from "
            "NatBox")
        assert RouterStatus.ACTIVE == \
            network_helper.get_router_values(router_id,
                                             fields='status')[0], \
            "Router is not in active state after updating distributed to " \
            "{}.".format(update_to_val)
        vm_helper.wait_for_vm_pingable_from_natbox(vm_id, fail_ok=False)

    result_ = True


@mark.parametrize(('vms_num', 'srv_grp_policy'), [
    param(2, 'affinity', marks=mark.p2),
    param(2, 'anti-affinity', marks=mark.nightly),
    param(3, 'affinity', marks=mark.p2),
    param(3, 'anti-affinity', marks=mark.p2),
])
def test_dvr_vms_network_connection(vms_num, srv_grp_policy, server_groups,
                                    router_info):
    """
    Test vms east-west connection by pinging vms' data network from vm

    Args:
        vms_num (int): number of vms to boot
        srv_grp_policy (str): affinity to boot vms on same host,
            anti-affinity to boot vms on different hosts
        server_groups: test fixture to return affinity and anti-affinity
            server groups
        router_info (str): id of tenant router

    Skip Conditions:
        - Only one nova host on the system

    Setups:
        - Enable DVR (module)

    Test Steps:
        - Update router to distributed if not already done
        - Boot given number of vms with specific server group policy to
          schedule vms on same or different host(s)
        - Ping vms over data and management networks from one vm to test NS
          and EW traffic

    Teardown:
        - Delete vms
        - Revert router to original distributed setting

    """
    # Increase instance quota count if needed
    current_vms = len(vm_helper.get_vms(strict=False))
    quota_needed = current_vms + vms_num
    vm_helper.ensure_vms_quotas(quota_needed)

    if srv_grp_policy == 'anti-affinity' and len(
            host_helper.get_up_hypervisors()) == 1:
        skip("Only one nova host on the system.")

    LOG.tc_step("Update router to distributed if not already done")
    router_id = router_info
    is_dvr = network_helper.get_router_values(router_id, fields='distributed',
                                              auth_info=Tenant.get('admin'))[0]
    if not is_dvr:
        network_helper.set_router_mode(router_id, distributed=True)

    LOG.tc_step("Boot {} vms with server group policy {}".format(
        vms_num, srv_grp_policy))
    affinity_grp, anti_affinity_grp = server_groups(soft=True)
    srv_grp_id = affinity_grp if srv_grp_policy == 'affinity' else \
        anti_affinity_grp

    vms = []
    tenant_net_id = network_helper.get_tenant_net_id()
    mgmt_net_id = network_helper.get_mgmt_net_id()
    internal_net_id = network_helper.get_internal_net_id()

    internal_vif = {'net-id': internal_net_id}
    if system_helper.is_avs():
        internal_vif['vif-model'] = 'avp'

    nics = [{'net-id': mgmt_net_id}, {'net-id': tenant_net_id}, internal_vif]
    for i in range(vms_num):
        vol = cinder_helper.create_volume()[1]
        ResourceCleanup.add(resource_type='volume', resource_id=vol)
        vm_id = \
            vm_helper.boot_vm('dvr_ew_traffic', source='volume', source_id=vol,
                              nics=nics, cleanup='function',
                              hint={'group': srv_grp_id})[1]
        vms.append(vm_id)
        LOG.tc_step("Wait for vm {} pingable from NatBox".format(vm_id))
        vm_helper.wait_for_vm_pingable_from_natbox(vm_id, fail_ok=False)

    from_vm = vms[0]
    LOG.tc_step(
        "Ping vms over management and data networks from vm {}, and "
        "verify ping successful.".format(from_vm))
    vm_helper.ping_vms_from_vm(to_vms=vms, from_vm=from_vm, fail_ok=False,
                               net_types=['data', 'mgmt', 'internal'])

@@ -0,0 +1,538 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import copy

from pytest import fixture, mark, skip, param

from utils.tis_log import LOG

from consts.stx import FlavorSpec, VMStatus
from consts.reasons import SkipHostIf
from keywords import vm_helper, nova_helper, network_helper, glance_helper, \
    system_helper
from testfixtures.fixture_resources import ResourceCleanup


def id_params(val):
    if not isinstance(val, str):
        new_val = []
        for val_1 in val:
            if isinstance(val_1, (tuple, list)):
                val_1 = '_'.join([str(val_2).lower() for val_2 in val_1])
            new_val.append(val_1)
    else:
        new_val = val

    return '_'.join(new_val)


def _append_nics_for_net(vifs, net_id, nics):
    glance_vif = None
    nics = copy.deepcopy(nics)
    for vif in vifs:
        vif_ = vif.split(sep='_x')
        vif_model = vif_[0]
        if vif_model in ('e1000', 'rt18139'):
            glance_vif = vif_model
        iter_ = int(vif_[1]) if len(vif_) > 1 else 1
        for i in range(iter_):
            nic = {'net-id': net_id, 'vif-model': vif_model}
            nics.append(nic)

    return nics, glance_vif


def _boot_multiports_vm(flavor, mgmt_net_id, vifs, net_id, net_type, base_vm,
                        pcipt_seg_id=None):
    nics = [{'net-id': mgmt_net_id}]

    nics, glance_vif = _append_nics_for_net(vifs, net_id=net_id, nics=nics)
    img_id = None
    if glance_vif:
        img_id = glance_helper.create_image(name=glance_vif,
                                            hw_vif_model=glance_vif,
                                            cleanup='function')[1]

    LOG.tc_step("Boot a test_vm with following nics on same networks as "
                "base_vm: {}".format(nics))
    vm_under_test = \
        vm_helper.boot_vm(name='multiports', nics=nics, flavor=flavor,
                          cleanup='function',
                          image_id=img_id)[1]
    vm_helper.wait_for_vm_pingable_from_natbox(vm_under_test, fail_ok=False)

    if pcipt_seg_id:
        LOG.tc_step("Add vlan to pci-passthrough interface for VM.")
        vm_helper.add_vlan_for_vm_pcipt_interfaces(vm_id=vm_under_test,
                                                   net_seg_id=pcipt_seg_id,
                                                   init_conf=True)

    LOG.tc_step("Ping test_vm's own {} network ips".format(net_type))
    vm_helper.ping_vms_from_vm(to_vms=vm_under_test, from_vm=vm_under_test,
                               net_types=net_type)

    vm_helper.configure_vm_vifs_on_same_net(vm_id=vm_under_test)

    LOG.tc_step(
        "Ping test_vm from base_vm to verify management and data networks "
        "connection")
    vm_helper.ping_vms_from_vm(to_vms=vm_under_test, from_vm=base_vm,
                               net_types=['mgmt', net_type])

    return vm_under_test, nics


class TestMutiPortsBasic:
    @fixture(scope='class')
    def base_setup(self):

        flavor_id = nova_helper.create_flavor(name='dedicated')[1]
        ResourceCleanup.add('flavor', flavor_id, scope='class')

        extra_specs = {FlavorSpec.CPU_POLICY: 'dedicated'}
        nova_helper.set_flavor(flavor=flavor_id, **extra_specs)

        mgmt_net_id = network_helper.get_mgmt_net_id()
        tenant_net_id = network_helper.get_tenant_net_id()
        internal_net_id = network_helper.get_internal_net_id()

        nics = [{'net-id': mgmt_net_id},
                {'net-id': tenant_net_id},
                {'net-id': internal_net_id}]

        LOG.fixture_step(
            "(class) Boot a base vm with following nics: {}".format(nics))
        base_vm = vm_helper.boot_vm(name='multiports_base',
                                    flavor=flavor_id, nics=nics,
                                    cleanup='class',
                                    reuse_vol=False)[1]

        vm_helper.wait_for_vm_pingable_from_natbox(base_vm)
        vm_helper.ping_vms_from_vm(base_vm, base_vm, net_types='data')

        return base_vm, flavor_id, mgmt_net_id, tenant_net_id, internal_net_id

    @mark.parametrize('vifs', [
        param(('virtio_x4',), marks=mark.priorities('nightly', 'sx_nightly'))
    ], ids=id_params)
    def test_multiports_on_same_network_vm_actions(self, vifs, base_setup):
        """
        Test vm actions on vm with multiple ports with given vif models on
        the same tenant network

        Args:
            vifs (tuple): each item in the tuple is 1 nic to be added to vm
                with specified (vif_model, pci_address)
            base_setup (list): test fixture to boot base vm

        Setups:
            - Create a flavor with dedicated cpu policy (class)
            - Choose one tenant network and one internal network to be used
              by test (class)
            - Boot a base vm - vm1 with above flavor and networks, and ping
              it from NatBox (class)
            - Boot a vm under test - vm2 with above flavor and with multiple
              ports on same tenant network with base vm,
              and ping it from NatBox (class)
            - Ping vm2's own data network ips (class)
            - Ping vm2 from vm1 to verify management and data networks
              connection (class)

        Test Steps:
            - Perform given actions on vm2 (migrate, start/stop, etc)
            - Verify pci_address preserves
            - Verify ping from vm1 to vm2 over management and data networks
              still works

        Teardown:
            - Delete created vms and flavor
        """
        base_vm, flavor, mgmt_net_id, tenant_net_id, internal_net_id = \
            base_setup

        vm_under_test, nics = _boot_multiports_vm(flavor=flavor,
                                                  mgmt_net_id=mgmt_net_id,
                                                  vifs=vifs,
                                                  net_id=tenant_net_id,
                                                  net_type='data',
                                                  base_vm=base_vm)

        for vm_actions in [['auto_recover'],
                           ['cold_migrate'],
                           ['pause', 'unpause'],
                           ['suspend', 'resume'],
                           ['hard_reboot']]:
            if vm_actions[0] == 'auto_recover':
                LOG.tc_step(
                    "Set vm to error state and wait for auto recovery "
                    "complete, then verify ping from "
                    "base vm over management and data networks")
                vm_helper.set_vm_state(vm_id=vm_under_test, error_state=True,
                                       fail_ok=False)
                vm_helper.wait_for_vm_values(vm_id=vm_under_test,
                                             status=VMStatus.ACTIVE,
                                             fail_ok=True, timeout=600)
            else:
                LOG.tc_step("Perform following action(s) on vm {}: {}".format(
                    vm_under_test, vm_actions))
                for action in vm_actions:
                    if 'migrate' in action and system_helper.is_aio_simplex():
                        continue

                    kwargs = {}
                    if action == 'hard_reboot':
                        action = 'reboot'
                        kwargs['hard'] = True
                    kwargs['action'] = action

                    vm_helper.perform_action_on_vm(vm_under_test, **kwargs)

            vm_helper.wait_for_vm_pingable_from_natbox(vm_under_test)

            # LOG.tc_step("Verify vm pci address preserved after {}".format(
            #     vm_actions))
            # check_helper.check_vm_pci_addr(vm_under_test, nics)

            LOG.tc_step(
                "Verify ping from base_vm to vm_under_test over management "
                "and data networks still works "
                "after {}".format(vm_actions))
            vm_helper.ping_vms_from_vm(to_vms=vm_under_test, from_vm=base_vm,
                                       net_types=['mgmt', 'data'])


class TestMutiPortsPCI:

    @fixture(scope='class')
    def base_setup_pci(self):
        LOG.fixture_step(
            "(class) Get an internal network that supports both pci-sriov "
            "and pcipt vif to boot vm")
        avail_pcipt_nets, is_cx4 = network_helper.get_pci_vm_network(
            pci_type='pci-passthrough',
            net_name='internal0-net', rtn_all=True)
        avail_sriov_nets, _ = network_helper.get_pci_vm_network(
            pci_type='pci-sriov',
            net_name='internal0-net', rtn_all=True)

        if not avail_pcipt_nets and not avail_sriov_nets:
            skip(SkipHostIf.PCI_IF_UNAVAIL)

        avail_nets = list(set(avail_pcipt_nets) & set(avail_sriov_nets))
        extra_pcipt_net = avail_pcipt_net = avail_sriov_net = None
        pcipt_seg_ids = {}
        if avail_nets:
            avail_net_name = avail_nets[-1]
            avail_net, segment_id = network_helper.get_network_values(
                network=avail_net_name,
                fields=('id', 'provider:segmentation_id'))
            internal_nets = [avail_net]
            pcipt_seg_ids[avail_net_name] = segment_id
            avail_pcipt_net = avail_sriov_net = avail_net
            LOG.info(
                "Internal network(s) selected for pcipt and sriov: {}".format(
                    avail_net_name))
        else:
            LOG.info("No internal network supports both sriov and pcipt")
            internal_nets = []
            if avail_pcipt_nets:
                avail_pcipt_net_name = avail_pcipt_nets[-1]
                avail_pcipt_net, segment_id = \
                    network_helper.get_network_values(
                        network=avail_pcipt_net_name,
                        fields=('id', 'provider:segmentation_id'))
                internal_nets.append(avail_pcipt_net)
                pcipt_seg_ids[avail_pcipt_net_name] = segment_id
                LOG.info(
                    "pci-passthrough net: {}".format(avail_pcipt_net_name))
            if avail_sriov_nets:
                avail_sriov_net_name = avail_sriov_nets[-1]
                avail_sriov_net = network_helper.get_net_id_from_name(
                    avail_sriov_net_name)
                internal_nets.append(avail_sriov_net)
                LOG.info("pci-sriov net: {}".format(avail_sriov_net_name))

        mgmt_net_id = network_helper.get_mgmt_net_id()
        tenant_net_id = network_helper.get_tenant_net_id()
        base_nics = [{'net-id': mgmt_net_id}, {'net-id': tenant_net_id}]
        nics = base_nics + [{'net-id': net_id} for net_id in internal_nets]

        if avail_pcipt_nets and is_cx4:
            extra_pcipt_net_name = avail_nets[0] if avail_nets else \
                avail_pcipt_nets[0]
            extra_pcipt_net, seg_id = network_helper.get_network_values(
                network=extra_pcipt_net_name,
                fields=('id', 'provider:segmentation_id'))
            if extra_pcipt_net not in internal_nets:
                nics.append({'net-id': extra_pcipt_net})
            pcipt_seg_ids[extra_pcipt_net_name] = seg_id

        LOG.fixture_step("(class) Create a flavor with dedicated cpu policy.")
        flavor_id = \
            nova_helper.create_flavor(name='dedicated', vcpus=2, ram=2048,
                                      cleanup='class')[1]
        extra_specs = {FlavorSpec.CPU_POLICY: 'dedicated',
                       FlavorSpec.PCI_NUMA_AFFINITY: 'preferred'}
        nova_helper.set_flavor(flavor=flavor_id, **extra_specs)

        LOG.fixture_step(
            "(class) Boot a base pci vm with following nics: {}".format(nics))
        base_vm_pci = \
            vm_helper.boot_vm(name='multiports_pci_base', flavor=flavor_id,
                              nics=nics, cleanup='class')[1]

        LOG.fixture_step("(class) Ping base PCI vm interfaces")
        vm_helper.wait_for_vm_pingable_from_natbox(base_vm_pci)
        vm_helper.ping_vms_from_vm(to_vms=base_vm_pci, from_vm=base_vm_pci,
                                   net_types=['data', 'internal'])

        return base_vm_pci, flavor_id, base_nics, avail_sriov_net, \
            avail_pcipt_net, pcipt_seg_ids, extra_pcipt_net

    @mark.parametrize('vifs', [
        param(('virtio', 'pci-sriov', 'pci-passthrough'), marks=mark.p3),
        param(('pci-passthrough',), marks=mark.nightly),
        param(('pci-sriov',), marks=mark.nightly),
    ], ids=id_params)
    def test_multiports_on_same_network_pci_vm_actions(self, base_setup_pci,
                                                       vifs):
        """
        Test vm actions on vm with multiple ports with given vif models on
        the same tenant network

        Args:
            base_setup_pci (tuple): base_vm_pci, flavor, mgmt_net_id,
                tenant_net_id, internal_net_id, seg_id
            vifs (list): list of vifs to add to same internal net

        Setups:
            - Create a flavor with dedicated cpu policy (class)
            - Choose management net, one tenant net, and internal0-net1 to be
              used by test (class)
            - Boot a base pci-sriov vm - vm1 with above flavor and networks,
              ping it from NatBox (class)
            - Ping vm1 from itself over data, and internal networks

        Test Steps:
            - Boot a vm under test - vm2 with above flavor and with multiple
              ports on same tenant network with vm1,
              and ping it from NatBox
            - Ping vm2's own data and internal network ips
            - Ping vm2 from vm1 to verify management and data networks
              connection
            - Perform one of the following actions on vm2
                - set to error/ wait for auto recovery
                - suspend/resume
                - cold migration
                - pause/unpause
            - Update vlan interface to proper eth if pci-passthrough device
              moves to different eth
            - Verify ping from vm1 to vm2 over management and data networks
              still works
            - Repeat last 3 steps with different vm actions

        Teardown:
            - Delete created vms and flavor
        """

        base_vm_pci, flavor, base_nics, avail_sriov_net, avail_pcipt_net, \
            pcipt_seg_ids, extra_pcipt_net = base_setup_pci

        pcipt_included = False
        internal_net_id = None
        for vif in vifs:
            if not isinstance(vif, str):
                vif = vif[0]
            if 'pci-passthrough' in vif:
                if not avail_pcipt_net:
                    skip(SkipHostIf.PCIPT_IF_UNAVAIL)
                internal_net_id = avail_pcipt_net
                pcipt_included = True
                continue
            elif 'pci-sriov' in vif:
                if not avail_sriov_net:
                    skip(SkipHostIf.SRIOV_IF_UNAVAIL)
                internal_net_id = avail_sriov_net

        assert internal_net_id, "Test script error. Internal net should " \
                                "have been determined."

        nics, glance_vif = _append_nics_for_net(vifs, net_id=internal_net_id,
                                                nics=base_nics)
        if pcipt_included and extra_pcipt_net:
            nics.append(
                {'net-id': extra_pcipt_net, 'vif-model': 'pci-passthrough'})

        img_id = None
        if glance_vif:
            img_id = glance_helper.create_image(name=glance_vif,
                                                hw_vif_model=glance_vif,
                                                cleanup='function')[1]

        LOG.tc_step("Boot a vm with following vifs on same internal net: "
                    "{}".format(vifs))
        vm_under_test = vm_helper.boot_vm(name='multiports_pci',
                                          nics=nics, flavor=flavor,
                                          cleanup='function',
                                          reuse_vol=False, image_id=img_id)[1]
        vm_helper.wait_for_vm_pingable_from_natbox(vm_under_test,
                                                   fail_ok=False)

        if pcipt_included:
            LOG.tc_step("Add vlan to pci-passthrough interface for VM.")
            vm_helper.add_vlan_for_vm_pcipt_interfaces(
                vm_id=vm_under_test, net_seg_id=pcipt_seg_ids, init_conf=True)

        LOG.tc_step("Ping vm's own data and internal network ips")
        vm_helper.ping_vms_from_vm(to_vms=vm_under_test,
                                   from_vm=vm_under_test,
                                   net_types=['data', 'internal'])

        LOG.tc_step(
            "Ping vm_under_test from base_vm over management, data, "
            "and internal networks")
        vm_helper.ping_vms_from_vm(to_vms=vm_under_test, from_vm=base_vm_pci,
                                   net_types=['mgmt', 'data', 'internal'])

        for vm_actions in [['auto_recover'], ['cold_migrate'],
                           ['pause', 'unpause'], ['suspend', 'resume']]:
            if 'auto_recover' in vm_actions:
                LOG.tc_step(
                    "Set vm to error state and wait for auto recovery "
                    "complete, then verify ping from base vm over "
                    "management and internal networks")
                vm_helper.set_vm_state(vm_id=vm_under_test, error_state=True,
                                       fail_ok=False)
                vm_helper.wait_for_vm_values(vm_id=vm_under_test,
                                             status=VMStatus.ACTIVE,
                                             fail_ok=False, timeout=600)
            else:
                LOG.tc_step("Perform following action(s) on vm {}: {}".format(
                    vm_under_test, vm_actions))
                for action in vm_actions:
                    vm_helper.perform_action_on_vm(vm_under_test,
                                                   action=action)

            vm_helper.wait_for_vm_pingable_from_natbox(vm_id=vm_under_test)
            if pcipt_included:
                LOG.tc_step(
                    "Bring up vlan interface for pci-passthrough vm "
                    "{}.".format(vm_under_test))
                vm_helper.add_vlan_for_vm_pcipt_interfaces(
                    vm_id=vm_under_test, net_seg_id=pcipt_seg_ids)

            LOG.tc_step(
                "Verify ping from base_vm to vm_under_test over management "
                "and internal networks still works "
                "after {}".format(vm_actions))
            vm_helper.ping_vms_from_vm(to_vms=vm_under_test,
                                       from_vm=base_vm_pci,
                                       net_types=['mgmt', 'internal'])

    @mark.parametrize('vifs', [
        ('pci-sriov',),
        ('pci-passthrough',),
    ], ids=id_params)
    def test_multiports_on_same_network_pci_evacuate_vm(self, base_setup_pci,
                                                        vifs):
        """
        Test evacuate vm with multiple ports on same network

        Args:
            base_setup_pci (tuple): base vm id, vm under test id, segment id
                for internal0-net1
            vifs (list): list of vifs to add to same internal net

        Setups:
            - Create a flavor with dedicated cpu policy (module)
            - Choose one tenant network and one internal network to be used
              by test (module)
            - Boot a base vm - vm1 with above flavor and networks, and ping
              it from NatBox (module)
            - Boot a vm under test - vm2 with above flavor and with multiple
              ports on same tenant network with base vm,
              and ping it from NatBox (class)
            - Ping vm2's own data network ips (class)
            - Ping vm2 from vm1 to verify management and internal networks
              connection (class)
|
||||
|
||||
Test Steps:
|
||||
- Reboot vm2 host
|
||||
- Wait for vm2 to be evacuated to other host
|
||||
- Wait for vm2 pingable from NatBox
|
||||
- Verify ping from vm1 to vm2 over management and internal
|
||||
networks still works
|
||||
|
||||
Teardown:
|
||||
- Delete created vms and flavor
|
||||
"""
|
||||
base_vm_pci, flavor, base_nics, avail_sriov_net, avail_pcipt_net, \
|
||||
pcipt_seg_ids, extra_pcipt_net = base_setup_pci
|
||||
|
||||
internal_net_id = None
|
||||
pcipt_included = False
|
||||
nics = copy.deepcopy(base_nics)
|
||||
if 'pci-passthrough' in vifs:
|
||||
if not avail_pcipt_net:
|
||||
skip(SkipHostIf.PCIPT_IF_UNAVAIL)
|
||||
pcipt_included = True
|
||||
internal_net_id = avail_pcipt_net
|
||||
if extra_pcipt_net:
|
||||
nics.append(
|
||||
{'net-id': extra_pcipt_net, 'vif-model': 'pci-passthrough'})
|
||||
if 'pci-sriov' in vifs:
|
||||
if not avail_sriov_net:
|
||||
skip(SkipHostIf.SRIOV_IF_UNAVAIL)
|
||||
internal_net_id = avail_sriov_net
|
||||
assert internal_net_id, "test script error. sriov or pcipt has to be " \
|
||||
"included."
|
||||
|
||||
for vif in vifs:
|
||||
nics.append({'net-id': internal_net_id, 'vif-model': vif})
|
||||
|
||||
LOG.tc_step(
|
||||
"Boot a vm with following vifs on same network internal0-net1: "
|
||||
"{}".format(vifs))
|
||||
vm_under_test = vm_helper.boot_vm(name='multiports_pci_evac',
|
||||
nics=nics, flavor=flavor,
|
||||
cleanup='function',
|
||||
reuse_vol=False)[1]
|
||||
vm_helper.wait_for_vm_pingable_from_natbox(vm_under_test, fail_ok=False)
|
||||
|
||||
if pcipt_included:
|
||||
LOG.tc_step("Add vlan to pci-passthrough interface.")
|
||||
vm_helper.add_vlan_for_vm_pcipt_interfaces(vm_id=vm_under_test,
|
||||
net_seg_id=pcipt_seg_ids,
|
||||
init_conf=True)
|
||||
|
||||
LOG.tc_step("Ping vm's own data and internal network ips")
|
||||
vm_helper.ping_vms_from_vm(to_vms=vm_under_test, from_vm=vm_under_test,
|
||||
net_types=['data', 'internal'])
|
||||
vm_helper.configure_vm_vifs_on_same_net(vm_id=vm_under_test)
|
||||
|
||||
LOG.tc_step(
|
||||
"Ping vm_under_test from base_vm over management, data, and "
|
||||
"internal networks")
|
||||
vm_helper.ping_vms_from_vm(to_vms=vm_under_test, from_vm=base_vm_pci,
|
||||
net_types=['mgmt', 'data', 'internal'])
|
||||
|
||||
host = vm_helper.get_vm_host(vm_under_test)
|
||||
|
||||
LOG.tc_step("Reboot vm host {}".format(host))
|
||||
vm_helper.evacuate_vms(host=host, vms_to_check=vm_under_test,
|
||||
ping_vms=True)
|
||||
|
||||
if pcipt_included:
|
||||
LOG.tc_step(
|
||||
"Add/Check vlan interface is added to pci-passthrough device "
|
||||
"for vm {}.".format(vm_under_test))
|
||||
vm_helper.add_vlan_for_vm_pcipt_interfaces(vm_id=vm_under_test,
|
||||
net_seg_id=pcipt_seg_ids)
|
||||
|
||||
LOG.tc_step(
|
||||
"Verify ping from base_vm to vm_under_test over management and "
|
||||
"internal networks still works after evacuation.")
|
||||
vm_helper.ping_vms_from_vm(to_vms=vm_under_test, from_vm=base_vm_pci,
|
||||
net_types=['mgmt', 'internal'])
|
|
@@ -0,0 +1,117 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import mark, param

from utils.tis_log import LOG
from consts.stx import FlavorSpec, GuestImages
from keywords import vm_helper, glance_helper, nova_helper, network_helper, \
    cinder_helper


def id_gen(val):
    if not isinstance(val, str):
        new_val = []
        for val_1 in val:
            if not isinstance(val_1, str):
                val_1 = '_'.join([str(val_2).lower() for val_2 in val_1])
            new_val.append(val_1)
        new_val = '_'.join(new_val)
    else:
        new_val = val

    return new_val

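A quick standalone sketch of what the `id_gen` helper above produces for pytest parametrize ids: plain strings pass through unchanged, and nested tuples are lower-cased and joined with underscores.

```python
def id_gen(val):
    # Same logic as the parametrize id generator above.
    if not isinstance(val, str):
        new_val = []
        for val_1 in val:
            if not isinstance(val_1, str):
                val_1 = '_'.join([str(val_2).lower() for val_2 in val_1])
            new_val.append(val_1)
        new_val = '_'.join(new_val)
    else:
        new_val = val
    return new_val

# Sample ids as they would appear in pytest output:
assert id_gen('virtio') == 'virtio'
assert id_gen(('ubuntu_14', 'virtio', 'virtio')) == 'ubuntu_14_virtio_virtio'
assert id_gen((('e1000', 'VIRTIO'),)) == 'e1000_virtio'
```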
def _compose_nics(vifs, net_ids, image_id, guest_os):
    nics = []
    glance_vif = None
    if isinstance(vifs, str):
        vifs = (vifs,)
    for i in range(len(vifs)):
        vif_model = vifs[i]
        nic = {'net-id': net_ids[i]}
        if vif_model in ('e1000', 'rtl8139'):
            glance_vif = vif_model
        elif vif_model != 'virtio':
            nic['vif-model'] = vif_model
        nics.append(nic)

    if glance_vif:
        glance_helper.set_image(image=image_id, hw_vif_model=glance_vif,
                                new_name='{}_{}'.format(guest_os, glance_vif))

    return nics

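A minimal standalone sketch of the nic-composition rule above, with the glance side effect stubbed out: 'virtio' is the nova default so no 'vif-model' is set on the port, legacy emulated models (e1000/rtl8139) are applied via image metadata instead of the port, and any other model becomes a port-level vif-model. Names here are illustrative, not the test framework's API.

```python
def compose_nics(vifs, net_ids):
    # Returns (nics, glance_vif): glance_vif is the model to push to image
    # metadata (hw_vif_model) rather than onto the neutron port.
    nics, glance_vif = [], None
    for vif_model, net_id in zip(vifs, net_ids):
        nic = {'net-id': net_id}
        if vif_model in ('e1000', 'rtl8139'):
            glance_vif = vif_model  # set via image property instead of port
        elif vif_model != 'virtio':
            nic['vif-model'] = vif_model
        nics.append(nic)
    return nics, glance_vif

nics, glance_vif = compose_nics(('virtio', 'avp', 'e1000'),
                                ('mgmt-net', 'tenant-net', 'internal-net'))
```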
@mark.parametrize(('guest_os', 'vm1_vifs', 'vm2_vifs'), [
    param('default', 'virtio', 'virtio',
          marks=mark.priorities('cpe_sanity', 'sanity', 'sx_sanity')),
    ('ubuntu_14', 'virtio', 'virtio'),
], ids=id_gen)
def test_ping_between_two_vms(guest_os, vm1_vifs, vm2_vifs):
    """
    Ping between two vms with given vif models

    Test Steps:
        - Create a flavor with dedicated cpu policy and proper root disk size
        - Create a volume from guest image under test with proper size
        - Boot two vms with given vif models from above volume and flavor
        - Ping VMs from NatBox and between two vms

    Test Teardown:
        - Delete vms, volumes, flavor, glance image created

    """
    if guest_os == 'default':
        guest_os = GuestImages.DEFAULT['guest']

    reuse = False if 'e1000' in vm1_vifs or 'e1000' in vm2_vifs else True
    cleanup = 'function' if not reuse or 'ubuntu' in guest_os else None
    image_id = glance_helper.get_guest_image(guest_os, cleanup=cleanup,
                                             use_existing=reuse)

    LOG.tc_step("Create a flavor with dedicated cpu policy")
    flavor_id = nova_helper.create_flavor(name='dedicated', guest_os=guest_os,
                                          cleanup='function')[1]
    nova_helper.set_flavor(flavor_id, **{FlavorSpec.CPU_POLICY: 'dedicated'})

    mgmt_net_id = network_helper.get_mgmt_net_id()
    tenant_net_id = network_helper.get_tenant_net_id()
    internal_net_id = network_helper.get_internal_net_id()
    net_ids = (mgmt_net_id, tenant_net_id, internal_net_id)
    vms = []
    for vifs_for_vm in (vm1_vifs, vm2_vifs):
        # compose vm nics
        nics = _compose_nics(vifs_for_vm, net_ids=net_ids, image_id=image_id,
                             guest_os=guest_os)
        net_types = ['mgmt', 'data', 'internal'][:len(nics)]
        LOG.tc_step("Create a volume from {} image".format(guest_os))
        vol_id = cinder_helper.create_volume(name='vol-{}'.format(guest_os),
                                             source_id=image_id,
                                             guest_image=guest_os,
                                             cleanup='function')[1]

        LOG.tc_step(
            "Boot a {} vm with {} vifs from above flavor and volume".format(
                guest_os, vifs_for_vm))
        vm_id = vm_helper.boot_vm('{}_vifs'.format(guest_os), flavor=flavor_id,
                                  cleanup='function',
                                  source='volume', source_id=vol_id, nics=nics,
                                  guest_os=guest_os)[1]

        LOG.tc_step("Ping VM {} from NatBox(external network)".format(vm_id))
        vm_helper.wait_for_vm_pingable_from_natbox(vm_id, fail_ok=False)

        vms.append(vm_id)

    LOG.tc_step(
        "Ping between two vms over management, data, and internal networks")
    vm_helper.ping_vms_from_vm(to_vms=vms[0], from_vm=vms[1],
                               net_types=net_types)
    vm_helper.ping_vms_from_vm(to_vms=vms[1], from_vm=vms[0],
                               net_types=net_types)
@@ -0,0 +1,45 @@
import json

from pytest import mark

from utils.tis_log import LOG
from keywords import vm_helper
from consts.stx import METADATA_SERVER


@mark.sanity
def test_vm_meta_data_retrieval():
    """
    VM meta-data retrieval

    Test Steps:
        - Launch a boot-from-image vm
        - Retrieve vm meta_data within vm from metadata server
        - Ensure vm uuid from metadata server is the same as nova show

    Test Teardown:
        - Delete created vm and flavor
    """
    LOG.tc_step("Launch a boot-from-image vm")
    vm_id = vm_helper.boot_vm(source='image', cleanup='function')[1]
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id, fail_ok=False)

    LOG.tc_step('Retrieve vm meta_data within vm from metadata server')
    # retrieve meta instance id by ssh to VM from natbox and wget to remote
    # server
    _access_metadata_server_from_vm(vm_id=vm_id)


def _access_metadata_server_from_vm(vm_id):
    with vm_helper.ssh_to_vm_from_natbox(vm_id) as vm_ssh:
        vm_ssh.exec_cmd('ip route')
        command = 'wget http://{}/openstack/latest/meta_data.json'.format(
            METADATA_SERVER)
        vm_ssh.exec_cmd(command, fail_ok=False)
        metadata = vm_ssh.exec_cmd('more meta_data.json', fail_ok=False)[1]

    LOG.tc_step("Ensure vm uuid from metadata server is the same as nova show")
    metadata = metadata.replace('\n', '')
    LOG.info(metadata)
    metadata_uuid = json.loads(metadata)['uuid']

    assert vm_id == metadata_uuid, "VM UUID retrieved from metadata server " \
                                   "is not the same as nova show"
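The uuid comparison above parses the fetched document with `json.loads` after stripping newlines. A minimal sketch with a made-up payload (the keys and uuid below are illustrative; a real `meta_data.json` carries more fields):

```python
import json

# Hypothetical meta_data.json body as returned by the metadata service.
raw = ('{"uuid": "11111111-2222-3333-4444-555555555555",\n'
       ' "name": "vm1", "launch_index": 0}')

# Same normalization as the test: flatten newlines, then parse as JSON.
doc = json.loads(raw.replace('\n', ''))
```

Using `json.loads` (rather than `eval`) handles JSON literals such as `true`, `false` and `null` that are not valid Python expressions.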
@@ -0,0 +1,3 @@
from testfixtures.resource_mgmt import *
from testfixtures.resource_create import *
from testfixtures.config_host import *
@@ -0,0 +1,131 @@
from pytest import fixture, skip, mark

from consts.timeout import VMTimeout
from keywords import vm_helper, host_helper, cinder_helper, glance_helper, \
    system_helper
from testfixtures.fixture_resources import ResourceCleanup
from testfixtures.recover_hosts import HostsToRecover
from utils.tis_log import LOG

TEST_STRING = 'Config-drive test file content'


@fixture(scope='module')
def hosts_per_stor_backing():
    hosts_per_backing = host_helper.get_hosts_per_storage_backing()
    LOG.fixture_step("Hosts per storage backing: {}".format(hosts_per_backing))

    return hosts_per_backing


@mark.nightly
@mark.sx_nightly
def test_vm_with_config_drive(hosts_per_stor_backing):
    """
    Skip Condition:
        - no host with local_image backend

    Test Steps:
        - Launch a vm using config drive
        - Add test data to config drive on vm
        - Do some operations (reboot vm for simplex, cold migrate and lock
          host for non-simplex) and
          check test data persisted in config drive after each operation
    Teardown:
        - Delete created vm, volume, flavor

    """
    guest_os = 'cgcs-guest'
    img_id = glance_helper.get_guest_image(guest_os)
    hosts_num = len(hosts_per_stor_backing.get('local_image', []))
    if hosts_num < 1:
        skip("No host with local_image storage backing")

    volume_id = cinder_helper.create_volume(name='vol_inst1', source_id=img_id,
                                            guest_image=guest_os)[1]
    ResourceCleanup.add('volume', volume_id, scope='function')

    block_device = {'source': 'volume', 'dest': 'volume', 'id': volume_id,
                    'device': 'vda'}
    vm_id = vm_helper.boot_vm(name='config_drive', config_drive=True,
                              block_device=block_device,
                              cleanup='function', guest_os=guest_os,
                              meta={'foo': 'bar'})[1]

    LOG.tc_step("Confirming the config drive is set to True in vm ...")
    assert str(vm_helper.get_vm_values(vm_id, "config_drive")[0]) == 'True', \
        "vm config-drive not true"

    LOG.tc_step("Add data to config drive ...")
    check_vm_config_drive_data(vm_id)

    vm_host = vm_helper.get_vm_host(vm_id)
    instance_name = vm_helper.get_vm_instance_name(vm_id)
    LOG.tc_step("Check config_drive vm files on hypervisor after vm launch")
    check_vm_files_on_hypervisor(vm_id, vm_host=vm_host,
                                 instance_name=instance_name)

    if not system_helper.is_aio_simplex():
        LOG.tc_step("Cold migrate VM")
        vm_helper.cold_migrate_vm(vm_id)

        LOG.tc_step("Check config drive after cold migrate VM...")
        check_vm_config_drive_data(vm_id)

        LOG.tc_step("Lock the compute host")
        compute_host = vm_helper.get_vm_host(vm_id)
        HostsToRecover.add(compute_host)
        host_helper.lock_host(compute_host, swact=True)

        LOG.tc_step("Check config drive after locking VM host")
        check_vm_config_drive_data(vm_id, ping_timeout=VMTimeout.DHCP_RETRY)
        vm_host = vm_helper.get_vm_host(vm_id)

    else:
        LOG.tc_step("Reboot vm")
        vm_helper.reboot_vm(vm_id)

        LOG.tc_step("Check config drive after vm rebooted")
        check_vm_config_drive_data(vm_id)

    LOG.tc_step("Check vm files exist after nova operations")
    check_vm_files_on_hypervisor(vm_id, vm_host=vm_host,
                                 instance_name=instance_name)


def check_vm_config_drive_data(vm_id, ping_timeout=VMTimeout.PING_VM):
    """
    Check the expected metadata is present on the vm's config drive

    Args:
        vm_id: vm to check
        ping_timeout: timeout to wait for vm pingable from natbox

    """
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id, timeout=ping_timeout)
    dev = '/dev/hd'
    with vm_helper.ssh_to_vm_from_natbox(vm_id) as vm_ssh:
        # Run mount command to determine where /dev/hdX is mounted
        cmd = """mount | grep "{}" | awk '{{print $3}} '""".format(dev)
        mount = vm_ssh.exec_cmd(cmd)[1]
        assert mount, "{} is not mounted".format(dev)

        file_path = '{}/openstack/latest/meta_data.json'.format(mount)
        content = vm_ssh.exec_cmd('python -m json.tool {} | grep '
                                  'foo'.format(file_path), fail_ok=False)[1]
        assert '"foo": "bar"' in content


def check_vm_files_on_hypervisor(vm_id, vm_host, instance_name):
    with host_helper.ssh_to_host(vm_host) as host_ssh:
        cmd = " ls /var/lib/nova/instances/{}".format(vm_id)
        cmd_output = host_ssh.exec_cmd(cmd)[1]
        for expt_file in ('console.log', 'disk.config'):
            assert expt_file in cmd_output, \
                "{} is not found for config drive vm {} on " \
                "{}".format(expt_file, vm_id, vm_host)

        output = host_ssh.exec_cmd('ls /run/libvirt/qemu')[1]
        libvirt = "{}.xml".format(instance_name)
        assert libvirt in output, "{} is not found in /run/libvirt/qemu on " \
                                  "{}".format(libvirt, vm_host)
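A standalone sketch of what the `mount | grep | awk '{print $3}'` pipeline above extracts: mount lines have the shape `<device> on <mountpoint> type <fstype> (...)`, so field 3 is the mount point of the config-drive device. The sample output below is illustrative.

```python
# Hypothetical `mount` output containing a config-drive ISO on /dev/hdd.
sample = (
    "/dev/vda1 on / type ext4 (rw,relatime)\n"
    "/dev/hdd on /media/configdrive type iso9660 (ro,relatime)\n"
)

def mount_point_of(dev_prefix, mount_output):
    # Equivalent of: mount | grep "<dev_prefix>" | awk '{print $3}'
    for line in mount_output.splitlines():
        fields = line.split()
        if fields and fields[0].startswith(dev_prefix):
            return fields[2]
    return None
```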
@@ -0,0 +1,185 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import mark, param

from utils.tis_log import LOG

from consts.stx import FlavorSpec, ImageMetadata, GuestImages
from consts.cli_errs import CPUPolicyErr  # used by eval

from keywords import nova_helper, vm_helper, glance_helper, cinder_helper, \
    check_helper, host_helper
from testfixtures.fixture_resources import ResourceCleanup


@mark.parametrize(
    ('flv_vcpus', 'flv_pol', 'img_pol', 'boot_source', 'expt_err'), [
        param(3, None, 'shared', 'image', None, marks=mark.p3),
        param(4, 'dedicated', 'dedicated', 'volume', None, marks=mark.p3),
        param(1, 'dedicated', None, 'image', None, marks=mark.p3),
        param(1, 'shared', 'shared', 'volume', None, marks=mark.p3),
        param(2, 'shared', None, 'image', None, marks=mark.p3),
        param(3, 'dedicated', 'shared', 'volume', None,
              marks=mark.domain_sanity),
        param(1, 'shared', 'dedicated', 'image',
              'CPUPolicyErr.CONFLICT_FLV_IMG', marks=mark.p3),
    ])
def test_boot_vm_cpu_policy_image(flv_vcpus, flv_pol, img_pol, boot_source,
                                  expt_err):
    LOG.tc_step("Create flavor with {} vcpus".format(flv_vcpus))
    flavor_id = nova_helper.create_flavor(name='cpu_pol_{}'.format(flv_pol),
                                          vcpus=flv_vcpus)[1]
    ResourceCleanup.add('flavor', flavor_id)

    if flv_pol is not None:
        specs = {FlavorSpec.CPU_POLICY: flv_pol}

        LOG.tc_step("Set following extra specs: {}".format(specs))
        nova_helper.set_flavor(flavor_id, **specs)

    if img_pol is not None:
        image_meta = {ImageMetadata.CPU_POLICY: img_pol}
        LOG.tc_step(
            "Create image with following metadata: {}".format(image_meta))
        image_id = glance_helper.create_image(
            name='cpu_pol_{}'.format(img_pol), cleanup='function',
            **image_meta)[1]
    else:
        image_id = glance_helper.get_image_id_from_name(
            GuestImages.DEFAULT['guest'], strict=True)

    if boot_source == 'volume':
        LOG.tc_step("Create a volume from image")
        source_id = cinder_helper.create_volume(name='cpu_pol_img',
                                                source_id=image_id)[1]
        ResourceCleanup.add('volume', source_id)
    else:
        source_id = image_id

    prev_cpus = host_helper.get_vcpus_for_computes(field='used_now')

    LOG.tc_step("Attempt to boot a vm from above {} with above flavor".format(
        boot_source))
    code, vm_id, msg = vm_helper.boot_vm(name='cpu_pol', flavor=flavor_id,
                                         source=boot_source,
                                         source_id=source_id, fail_ok=True,
                                         cleanup='function')

    # check for negative tests
    if expt_err is not None:
        LOG.tc_step(
            "Check VM failed to boot due to conflict in flavor and image.")
        assert 4 == code, "Expect boot vm cli reject and no vm booted. " \
                          "Actual: {}".format(msg)
        assert eval(expt_err) in msg, \
            "Expected error message is not found in cli return."
        return  # end the test for negative cases

    # Check for positive tests
    LOG.tc_step("Check vm is successfully booted.")
    assert 0 == code, "Expect vm boot successfully. Actual: {}".format(msg)

    # Calculate expected policy:
    expt_cpu_pol = flv_pol if flv_pol else img_pol
    expt_cpu_pol = expt_cpu_pol if expt_cpu_pol else 'shared'

    vm_host = vm_helper.get_vm_host(vm_id)
    check_helper.check_topology_of_vm(vm_id, vcpus=flv_vcpus,
                                      cpu_pol=expt_cpu_pol, vm_host=vm_host,
                                      prev_total_cpus=prev_cpus[vm_host])
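The expected-policy calculation above can be read as a simple precedence rule: the flavor's cpu policy wins over the image's, and 'shared' is the default when neither is set. A standalone sketch (the helper name is illustrative, not part of the framework; the conflicting shared-flavor/dedicated-image case is rejected earlier in the test and never reaches this calculation):

```python
def expected_cpu_policy(flavor_policy, image_policy):
    # Flavor extra spec takes precedence over image metadata;
    # nova's default policy is 'shared'.
    return flavor_policy or image_policy or 'shared'
```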
@mark.parametrize(('flv_vcpus', 'cpu_pol', 'pol_source', 'boot_source'), [
    param(4, None, 'flavor', 'image', marks=mark.p2),
    param(2, 'dedicated', 'flavor', 'volume', marks=mark.domain_sanity),
    param(3, 'shared', 'flavor', 'volume', marks=mark.p2),
    param(1, 'dedicated', 'flavor', 'image', marks=mark.p2),
    param(2, 'dedicated', 'image', 'volume', marks=mark.nightly),
    param(3, 'shared', 'image', 'volume', marks=mark.p2),
    param(1, 'dedicated', 'image', 'image', marks=mark.domain_sanity),
])
def test_cpu_pol_vm_actions(flv_vcpus, cpu_pol, pol_source, boot_source):
    LOG.tc_step("Create flavor with {} vcpus".format(flv_vcpus))
    flavor_id = nova_helper.create_flavor(name='cpu_pol', vcpus=flv_vcpus)[1]
    ResourceCleanup.add('flavor', flavor_id)

    image_id = glance_helper.get_image_id_from_name(
        GuestImages.DEFAULT['guest'], strict=True)
    if cpu_pol is not None:
        if pol_source == 'flavor':
            specs = {FlavorSpec.CPU_POLICY: cpu_pol}

            LOG.tc_step("Set following extra specs: {}".format(specs))
            nova_helper.set_flavor(flavor_id, **specs)
        else:
            image_meta = {ImageMetadata.CPU_POLICY: cpu_pol}
            LOG.tc_step(
                "Create image with following metadata: {}".format(image_meta))
            image_id = glance_helper.create_image(
                name='cpu_pol_{}'.format(cpu_pol), cleanup='function',
                **image_meta)[1]
    if boot_source == 'volume':
        LOG.tc_step("Create a volume from image")
        source_id = cinder_helper.create_volume(
            name='cpu_pol_{}'.format(cpu_pol), source_id=image_id)[1]
        ResourceCleanup.add('volume', source_id)
    else:
        source_id = image_id

    prev_cpus = host_helper.get_vcpus_for_computes(field='used_now')

    LOG.tc_step(
        "Boot a vm from {} with above flavor and check vm topology is as "
        "expected".format(boot_source))
    vm_id = vm_helper.boot_vm(name='cpu_pol_{}_{}'.format(cpu_pol, flv_vcpus),
                              flavor=flavor_id, source=boot_source,
                              source_id=source_id, cleanup='function')[1]

    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
    vm_host = vm_helper.get_vm_host(vm_id)
    check_helper.check_topology_of_vm(vm_id, vcpus=flv_vcpus, cpu_pol=cpu_pol,
                                      vm_host=vm_host,
                                      prev_total_cpus=prev_cpus[vm_host])

    LOG.tc_step("Suspend/Resume vm and check vm topology stays the same")
    vm_helper.suspend_vm(vm_id)
    vm_helper.resume_vm(vm_id)

    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
    check_helper.check_topology_of_vm(vm_id, vcpus=flv_vcpus, cpu_pol=cpu_pol,
                                      vm_host=vm_host,
                                      prev_total_cpus=prev_cpus[vm_host])

    LOG.tc_step("Stop/Start vm and check vm topology stays the same")
    vm_helper.stop_vms(vm_id)
    vm_helper.start_vms(vm_id)

    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
    prev_siblings = check_helper.check_topology_of_vm(
        vm_id, vcpus=flv_vcpus, cpu_pol=cpu_pol, vm_host=vm_host,
        prev_total_cpus=prev_cpus[vm_host])[1]

    LOG.tc_step("Live migrate vm and check vm topology stays the same")
    vm_helper.live_migrate_vm(vm_id=vm_id)

    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
    vm_host = vm_helper.get_vm_host(vm_id)
    prev_siblings = prev_siblings if cpu_pol == 'dedicated' else None
    check_helper.check_topology_of_vm(vm_id, vcpus=flv_vcpus, cpu_pol=cpu_pol,
                                      vm_host=vm_host,
                                      prev_total_cpus=prev_cpus[vm_host],
                                      prev_siblings=prev_siblings)

    LOG.tc_step("Cold migrate vm and check vm topology stays the same")
    vm_helper.cold_migrate_vm(vm_id=vm_id)

    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
    vm_host = vm_helper.get_vm_host(vm_id)
    check_helper.check_topology_of_vm(vm_id, vcpus=flv_vcpus, cpu_pol=cpu_pol,
                                      vm_host=vm_host,
                                      prev_total_cpus=prev_cpus[vm_host])
@@ -0,0 +1,437 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import mark, fixture, skip, param

from utils.tis_log import LOG

from consts.reasons import SkipHypervisor, SkipHyperthreading
from consts.stx import FlavorSpec, ImageMetadata
# Do not remove used imports below as they are used in eval()
from consts.cli_errs import CPUThreadErr

from keywords import nova_helper, vm_helper, host_helper, glance_helper, \
    check_helper
from testfixtures.fixture_resources import ResourceCleanup
from testfixtures.recover_hosts import HostsToRecover


def id_gen(val):
    if isinstance(val, list):
        return '-'.join(val)


@fixture(scope='module')
def ht_and_nonht_hosts():
    LOG.fixture_step(
        "(Module) Get hyper-threading enabled and disabled hypervisors")
    nova_hosts = host_helper.get_up_hypervisors()
    ht_hosts = []
    non_ht_hosts = []
    for host in nova_hosts:
        if host_helper.is_host_hyperthreaded(host):
            ht_hosts.append(host)
        else:
            non_ht_hosts.append(host)

    LOG.info(
        '-- Hyper-threading enabled hosts: {}; Hyper-threading disabled '
        'hosts: {}'.format(ht_hosts, non_ht_hosts))
    return ht_hosts, non_ht_hosts


class TestHTEnabled:

    @fixture(scope='class', autouse=True)
    def ht_hosts_(self, ht_and_nonht_hosts):
        ht_hosts, non_ht_hosts = ht_and_nonht_hosts

        if not ht_hosts:
            skip("No up hypervisor found with Hyper-threading enabled.")

        return ht_hosts, non_ht_hosts

    def test_isolate_vm_on_ht_host(self, ht_hosts_, add_admin_role_func):
        """
        Test isolate vms take the host log_core sibling pair for each vcpu
        when HT is enabled.
        Args:
            ht_hosts_:
            add_admin_role_func:

        Pre-conditions: At least one hypervisor has HT enabled

        Test Steps:
            - Launch VM with isolate thread policy and 4 vcpus, until all
              Application cores on thread-0 are taken
            - Attempt to launch another vm on same host, and ensure it fails

        """
        ht_hosts, non_ht_hosts = ht_hosts_
        vcpu_count = 4
        cpu_thread_policy = 'isolate'
        LOG.tc_step("Create flavor with {} vcpus and {} thread policy".format(
            vcpu_count, cpu_thread_policy))
        flavor_id = nova_helper.create_flavor(
            name='cpu_thread_{}'.format(cpu_thread_policy), vcpus=vcpu_count,
            cleanup='function')[1]
        specs = {FlavorSpec.CPU_POLICY: 'dedicated',
                 FlavorSpec.CPU_THREAD_POLICY: cpu_thread_policy}
        nova_helper.set_flavor(flavor_id, **specs)

        LOG.tc_step(
            "Get used vcpus for vm host before booting vm, and ensure "
            "sufficient instance and core quotas")
        host = ht_hosts[0]
        vms = vm_helper.get_vms_on_host(hostname=host)
        vm_helper.delete_vms(vms=vms)
        log_core_counts = host_helper.get_logcores_counts(
            host, thread='0', functions='Applications')
        max_vm_count = int(log_core_counts[0] / vcpu_count) + int(
            log_core_counts[1] / vcpu_count)
        vm_helper.ensure_vms_quotas(vms_num=max_vm_count + 10,
                                    cores_num=4 * (max_vm_count + 2) + 10)

        LOG.tc_step(
            "Boot {} isolate 4vcpu vms on a HT enabled host, and check "
            "topology of vm on host and vms".format(max_vm_count))
        for i in range(max_vm_count):
            name = '4vcpu_isolate-{}'.format(i)
            LOG.info(
                "Launch VM {} on {} and check its topology".format(name, host))
            prev_cpus = host_helper.get_vcpus_for_computes(
                hosts=[host], field='used_now')[host]
            vm_id = vm_helper.boot_vm(name=name, flavor=flavor_id, vm_host=host,
                                      cleanup='function')[1]

            check_helper.check_topology_of_vm(vm_id, vcpus=vcpu_count,
                                              prev_total_cpus=prev_cpus,
                                              cpu_pol='dedicated',
                                              cpu_thr_pol=cpu_thread_policy,
                                              vm_host=host)

        LOG.tc_step(
            "Attempt to boot another vm on {}, and ensure it fails due to no "
            "free sibling pairs".format(host))
        code = vm_helper.boot_vm(name='cpu_thread_{}'.format(cpu_thread_policy),
                                 flavor=flavor_id, vm_host=host,
                                 fail_ok=True, cleanup='function')[0]
        assert code > 0, "VM is still scheduled even though all sibling " \
                         "pairs should have been occupied"
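The `max_vm_count` arithmetic above can be read in isolation: with the 'isolate' thread policy each vcpu consumes a full sibling pair, so each NUMA node fits floor(thread-0 application cores / vcpus per vm) VMs, summed over nodes. A standalone sketch (core counts below are illustrative):

```python
def max_isolate_vms(per_numa_app_cores, vcpus_per_vm):
    # Same per-NUMA-node integer division as the test, summed across nodes.
    return sum(cores // vcpus_per_vm for cores in per_numa_app_cores)

# e.g. 10 and 9 thread-0 application cores on two NUMA nodes, 4 vcpus per vm:
# floor(10/4) + floor(9/4) = 2 + 2 = 4 schedulable isolate VMs
count = max_isolate_vms([10, 9], 4)
```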
    @mark.parametrize(('vcpus', 'cpu_thread_policy', 'min_vcpus'), [
        param(4, 'require', None),
        param(3, 'require', None),
        param(3, 'prefer', None),
    ])
    def test_boot_vm_cpu_thread_positive(self, vcpus, cpu_thread_policy,
                                         min_vcpus, ht_hosts_):
        """
        Test boot vm with specific cpu thread policy requirement

        Args:
            vcpus (int): number of vcpus to set when creating flavor
            cpu_thread_policy (str): cpu thread policy to set in flavor
            min_vcpus (int): min_vcpus extra spec to set
            ht_hosts_ (tuple): (ht_hosts, non-ht_hosts)

        Skip condition:
            - no host is hyperthreading enabled on system

        Setups:
            - Find out HT hosts and non-HT_hosts on system (module)

        Test Steps:
            - Create a flavor with given number of vcpus
            - Set cpu policy to dedicated and extra specs as per test params
            - Get the host vcpu usage before booting vm
            - Boot a vm with above flavor
            - Ensure vm is booted on HT host for 'require' vm
            - Check vm-topology, host side vcpu usage, topology from within
              the guest to ensure vm is properly booted

        Teardown:
            - Delete created vm, volume, flavor

        """
        ht_hosts, non_ht_hosts = ht_hosts_
        LOG.tc_step("Create flavor with {} vcpus".format(vcpus))
        flavor_id = nova_helper.create_flavor(
            name='cpu_thread_{}'.format(cpu_thread_policy), vcpus=vcpus)[1]
        ResourceCleanup.add('flavor', flavor_id)

        specs = {FlavorSpec.CPU_POLICY: 'dedicated'}
        if cpu_thread_policy is not None:
            specs[FlavorSpec.CPU_THREAD_POLICY] = cpu_thread_policy

        if min_vcpus is not None:
            specs[FlavorSpec.MIN_VCPUS] = min_vcpus

        LOG.tc_step("Set following extra specs: {}".format(specs))
        nova_helper.set_flavor(flavor_id, **specs)

        LOG.tc_step("Get used cpus for all hosts before booting vm")
        hosts_to_check = ht_hosts if cpu_thread_policy == 'require' else \
            ht_hosts + non_ht_hosts
        pre_hosts_cpus = host_helper.get_vcpus_for_computes(
            hosts=hosts_to_check, field='used_now')

        LOG.tc_step(
            "Boot a vm with above flavor and ensure it's booted on a HT "
            "enabled host.")
        vm_id = vm_helper.boot_vm(
            name='cpu_thread_{}'.format(cpu_thread_policy),
            flavor=flavor_id,
            cleanup='function')[1]

        vm_host = vm_helper.get_vm_host(vm_id)
        if cpu_thread_policy == 'require':
            assert vm_host in ht_hosts, "VM host {} is not hyper-threading " \
                                        "enabled.".format(vm_host)

        LOG.tc_step("Check topology of the {}vcpu {} vm on hypervisor and "
                    "on vm".format(vcpus, cpu_thread_policy))
        prev_cpus = pre_hosts_cpus[vm_host]
        check_helper.check_topology_of_vm(vm_id, vcpus=vcpus,
                                          prev_total_cpus=prev_cpus,
                                          cpu_pol='dedicated',
                                          cpu_thr_pol=cpu_thread_policy,
                                          min_vcpus=min_vcpus, vm_host=vm_host)

    @mark.parametrize(('vcpus', 'cpu_pol', 'cpu_thr_pol', 'flv_or_img',
                       'vs_numa_affinity', 'boot_source', 'nova_actions'), [
        param(2, 'dedicated', 'isolate', 'image', None, 'volume',
              'live_migrate', marks=mark.priorities('domain_sanity',
                                                    'nightly')),
        param(3, 'dedicated', 'require', 'image', None, 'volume',
              'live_migrate', marks=mark.domain_sanity),
        param(3, 'dedicated', 'prefer', 'flavor', None, 'volume',
              'live_migrate', marks=mark.p2),
        param(3, 'dedicated', 'require', 'flavor', None, 'volume',
              'live_migrate', marks=mark.p2),
        param(3, 'dedicated', 'isolate', 'flavor', None, 'volume',
              'cold_migrate', marks=mark.domain_sanity),
        param(2, 'dedicated', 'require', 'image', None, 'image',
              'cold_migrate', marks=mark.domain_sanity),
        param(2, 'dedicated', 'require', 'flavor', None, 'volume',
              'cold_mig_revert', marks=mark.p2),
        param(5, 'dedicated', 'prefer', 'image', None, 'volume',
              'cold_mig_revert'),
        param(4, 'dedicated', 'isolate', 'image', None, 'volume',
              ['suspend', 'resume', 'rebuild'], marks=mark.p2),
        param(6, 'dedicated', 'require', 'image', None, 'image',
              ['suspend', 'resume', 'rebuild'], marks=mark.p2),
    ], ids=id_gen)
    def test_cpu_thread_vm_topology_nova_actions(self, vcpus, cpu_pol,
                                                 cpu_thr_pol, flv_or_img,
                                                 vs_numa_affinity,
                                                 boot_source, nova_actions,
                                                 ht_hosts_):
        ht_hosts, non_ht_hosts = ht_hosts_
        if 'mig' in nova_actions:
            if len(ht_hosts) + len(non_ht_hosts) < 2:
                skip(SkipHypervisor.LESS_THAN_TWO_HYPERVISORS)
            if cpu_thr_pol in ['require', 'isolate'] and len(ht_hosts) < 2:
                skip(SkipHyperthreading.LESS_THAN_TWO_HT_HOSTS)

        name_str = 'cpu_thr_{}_in_img'.format(cpu_pol)

        LOG.tc_step("Create flavor with {} vcpus".format(vcpus))
        flavor_id = nova_helper.create_flavor(name='vcpus{}'.format(vcpus),
                                              vcpus=vcpus)[1]
        ResourceCleanup.add('flavor', flavor_id)

        specs = {}
        if vs_numa_affinity:
            specs[FlavorSpec.VSWITCH_NUMA_AFFINITY] = vs_numa_affinity

        if flv_or_img == 'flavor':
|
||||
specs[FlavorSpec.CPU_POLICY] = cpu_pol
|
||||
specs[FlavorSpec.CPU_THREAD_POLICY] = cpu_thr_pol
|
||||
|
||||
if specs:
|
||||
LOG.tc_step("Set following extra specs: {}".format(specs))
|
||||
nova_helper.set_flavor(flavor_id, **specs)
|
||||
|
||||
image_id = None
|
||||
if flv_or_img == 'image':
|
||||
image_meta = {ImageMetadata.CPU_POLICY: cpu_pol,
|
||||
ImageMetadata.CPU_THREAD_POLICY: cpu_thr_pol}
|
||||
LOG.tc_step(
|
||||
"Create image with following metadata: {}".format(image_meta))
|
||||
image_id = glance_helper.create_image(name=name_str,
|
||||
cleanup='function',
|
||||
**image_meta)[1]
|
||||
|
||||
LOG.tc_step("Get used cpus for all hosts before booting vm")
|
||||
hosts_to_check = ht_hosts if cpu_thr_pol == 'require' else \
|
||||
ht_hosts + non_ht_hosts
|
||||
pre_hosts_cpus = host_helper.get_vcpus_for_computes(
|
||||
hosts=hosts_to_check, field='used_now')
|
||||
|
||||
LOG.tc_step("Boot a vm from {} with above flavor".format(boot_source))
|
||||
vm_id = vm_helper.boot_vm(name=name_str, flavor=flavor_id,
|
||||
source=boot_source, image_id=image_id,
|
||||
cleanup='function')[1]
|
||||
|
||||
vm_host = vm_helper.get_vm_host(vm_id)
|
||||
|
||||
if cpu_thr_pol == 'require':
|
||||
LOG.tc_step("Check vm is booted on a HT host")
|
||||
assert vm_host in ht_hosts, "VM host {} is not hyper-threading " \
|
||||
"enabled.".format(vm_host)
|
||||
|
||||
prev_cpus = pre_hosts_cpus[vm_host]
|
||||
prev_siblings = check_helper.check_topology_of_vm(
|
||||
vm_id, vcpus=vcpus, prev_total_cpus=prev_cpus, cpu_pol=cpu_pol,
|
||||
cpu_thr_pol=cpu_thr_pol, vm_host=vm_host)[1]
|
||||
|
||||
LOG.tc_step("Perform following nova action(s) on vm {}: "
|
||||
"{}".format(vm_id, nova_actions))
|
||||
if isinstance(nova_actions, str):
|
||||
nova_actions = [nova_actions]
|
||||
|
||||
check_prev_siblings = False
|
||||
for action in nova_actions:
|
||||
kwargs = {}
|
||||
if action == 'rebuild':
|
||||
kwargs['image_id'] = image_id
|
||||
elif action == 'live_migrate':
|
||||
check_prev_siblings = True
|
||||
vm_helper.perform_action_on_vm(vm_id, action=action, **kwargs)
|
||||
|
||||
post_vm_host = vm_helper.get_vm_host(vm_id)
|
||||
pre_action_cpus = pre_hosts_cpus[post_vm_host]
|
||||
|
||||
if cpu_thr_pol == 'require':
|
||||
LOG.tc_step("Check vm is still on HT host")
|
||||
assert post_vm_host in ht_hosts, "VM host {} is not " \
|
||||
"hyper-threading " \
|
||||
"enabled.".format(vm_host)
|
||||
|
||||
LOG.tc_step(
|
||||
"Check VM topology is still correct after {}".format(nova_actions))
|
||||
if cpu_pol != 'dedicated' or not check_prev_siblings:
|
||||
# Allow prev_siblings in live migration case
|
||||
prev_siblings = None
|
||||
check_helper.check_topology_of_vm(vm_id, vcpus=vcpus,
|
||||
prev_total_cpus=pre_action_cpus,
|
||||
cpu_pol=cpu_pol,
|
||||
cpu_thr_pol=cpu_thr_pol,
|
||||
vm_host=post_vm_host,
|
||||
prev_siblings=prev_siblings)
|
||||
|
||||
@fixture(scope='class')
|
||||
def _add_hosts_to_stxauto(self, request, ht_hosts_, add_stxauto_zone):
|
||||
ht_hosts, non_ht_hosts = ht_hosts_
|
||||
|
||||
if not non_ht_hosts:
|
||||
skip("No non-HT host available")
|
||||
|
||||
LOG.fixture_step("Add one HT host and nonHT hosts to stxauto zone")
|
||||
|
||||
if len(ht_hosts) > 1:
|
||||
ht_hosts = [ht_hosts[0]]
|
||||
|
||||
host_in_stxauto = ht_hosts + non_ht_hosts
|
||||
|
||||
def _revert():
|
||||
nova_helper.remove_hosts_from_aggregate(aggregate='stxauto',
|
||||
hosts=host_in_stxauto)
|
||||
|
||||
request.addfinalizer(_revert)
|
||||
|
||||
nova_helper.add_hosts_to_aggregate('stxauto', ht_hosts + non_ht_hosts)
|
||||
|
||||
LOG.info(
|
||||
"stxauto zone: HT: {}; non-HT: {}".format(ht_hosts, non_ht_hosts))
|
||||
return ht_hosts, non_ht_hosts
|
||||
|
||||
|
||||
class TestHTDisabled:
|
||||
|
||||
@fixture(scope='class', autouse=True)
|
||||
def ensure_nonht(self, ht_and_nonht_hosts):
|
||||
ht_hosts, non_ht_hosts = ht_and_nonht_hosts
|
||||
if not non_ht_hosts:
|
||||
skip("No host with HT disabled")
|
||||
|
||||
if ht_hosts:
|
||||
LOG.fixture_step(
|
||||
"Locking HT hosts to ensure only non-HT hypervisors available")
|
||||
HostsToRecover.add(ht_hosts, scope='class')
|
||||
for host_ in ht_hosts:
|
||||
host_helper.lock_host(host_, swact=True)
|
||||
|
||||
@mark.parametrize(('vcpus', 'cpu_thread_policy', 'min_vcpus', 'expt_err'), [
|
||||
param(2, 'require', None, 'CPUThreadErr.HT_HOST_UNAVAIL'),
|
||||
param(3, 'require', None, 'CPUThreadErr.HT_HOST_UNAVAIL'),
|
||||
param(3, 'isolate', None, None),
|
||||
param(2, 'prefer', None, None),
|
||||
])
|
||||
def test_boot_vm_cpu_thread_ht_disabled(self, vcpus, cpu_thread_policy,
|
||||
min_vcpus, expt_err):
|
||||
"""
|
||||
Test boot vm with specified cpu thread policy when no HT host is
|
||||
available on system
|
||||
|
||||
Args:
|
||||
vcpus (int): number of vcpus to set in flavor
|
||||
cpu_thread_policy (str): cpu thread policy in flavor extra spec
|
||||
min_vcpus (int): min_vpus in flavor extra spec
|
||||
expt_err (str|None): expected error message in nova show if any
|
||||
|
||||
Skip condition:
|
||||
- All hosts are hyperthreading enabled on system
|
||||
|
||||
Setups:
|
||||
- Find out HT hosts and non-HT_hosts on system (module)
|
||||
- Enusre no HT hosts on system
|
||||
|
||||
Test Steps:
|
||||
- Create a flavor with given number of vcpus
|
||||
- Set flavor extra specs as per test params
|
||||
- Get the host vcpu usage before booting vm
|
||||
- Attempt to boot a vm with above flavor
|
||||
- if expt_err is None:
|
||||
- Ensure vm is booted on non-HT host for 'isolate'/'prefer'
|
||||
vm
|
||||
- Check vm-topology, host side vcpu usage, topology from
|
||||
within the guest to ensure vm is properly booted
|
||||
- else, ensure expected error message is included in nova
|
||||
show for 'require' vm
|
||||
|
||||
Teardown:
|
||||
- Delete created vm, volume, flavor
|
||||
|
||||
"""
|
||||
|
||||
LOG.tc_step("Create flavor with {} vcpus".format(vcpus))
|
||||
flavor_id = nova_helper.create_flavor(name='cpu_thread', vcpus=vcpus)[1]
|
||||
ResourceCleanup.add('flavor', flavor_id)
|
||||
|
||||
specs = {FlavorSpec.CPU_THREAD_POLICY: cpu_thread_policy,
|
||||
FlavorSpec.CPU_POLICY: 'dedicated'}
|
||||
if min_vcpus is not None:
|
||||
specs[FlavorSpec.MIN_VCPUS] = min_vcpus
|
||||
|
||||
LOG.tc_step("Set following extra specs: {}".format(specs))
|
||||
nova_helper.set_flavor(flavor_id, **specs)
|
||||
|
||||
LOG.tc_step("Attempt to boot a vm with the above flavor.")
|
||||
code, vm_id, msg = vm_helper.boot_vm(
|
||||
name='cpu_thread_{}'.format(cpu_thread_policy),
|
||||
flavor=flavor_id, fail_ok=True, cleanup='function')
|
||||
|
||||
if expt_err:
|
||||
assert 1 == code, "Boot vm cli is not rejected. Details: " \
|
||||
"{}".format(msg)
|
||||
else:
|
||||
assert 0 == code, "Boot vm with isolate policy was unsuccessful. " \
|
||||
"Details: {}".format(msg)
|
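The parametrization above expects a boot request to be rejected only when the flavor carries a 'require' thread policy and no hyper-threaded hypervisor exists; 'isolate' and 'prefer' can fall back to non-HT hosts. A minimal standalone sketch of that expectation (hypothetical helper, not part of the framework):

```python
# Hypothetical helper mirroring the pass/fail expectation encoded in the
# TestHTDisabled parametrization: only 'require' is rejected when no
# hyper-threading-enabled host is available.
def boot_should_be_rejected(cpu_thread_policy, ht_hosts_available):
    """Return True if the scheduler is expected to reject the boot request."""
    if cpu_thread_policy == 'require' and not ht_hosts_available:
        return True
    # 'isolate', 'prefer', or no policy can schedule onto non-HT hosts
    return False

print(boot_should_be_rejected('require', False))  # True
print(boot_should_be_rejected('isolate', False))  # False
print(boot_should_be_rejected('prefer', False))   # False
```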


#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import fixture, skip, mark

import keywords.host_helper
from utils.tis_log import LOG
from consts.timeout import VMTimeout
from consts.stx import VMStatus
from consts.reasons import SkipStorageBacking, SkipHypervisor

from keywords import vm_helper, host_helper, nova_helper, cinder_helper, \
    system_helper, check_helper
from testfixtures.fixture_resources import ResourceCleanup

from testfixtures.recover_hosts import HostsToRecover


@fixture(scope='module', autouse=True)
def update_quotas(add_admin_role_module):
    LOG.fixture_step("Update instance and volume quota to at least 10 and "
                     "20 respectively")
    vm_helper.ensure_vms_quotas()


@fixture(scope='module')
def hosts_per_backing():
    hosts_per_backend = host_helper.get_hosts_per_storage_backing()
    return hosts_per_backend


def touch_files_under_vm_disks(vm_id, ephemeral, swap, vm_type, disks):
    expt_len = 1 + int(bool(ephemeral)) + int(bool(swap)) + \
               (1 if 'with_vol' in vm_type else 0)

    LOG.info("\n--------------------------Auto mount non-root disks if any")
    mounts = vm_helper.auto_mount_vm_disks(vm_id=vm_id, disks=disks)
    assert expt_len == len(mounts)

    if bool(swap):
        mounts.remove('none')

    LOG.info("\n--------------------------Create files under vm disks: "
             "{}".format(mounts))
    file_paths, content = vm_helper.touch_files(vm_id=vm_id, file_dirs=mounts)
    return file_paths, content
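The expected mount count in touch_files_under_vm_disks is simple arithmetic: one root disk, plus one for a non-zero ephemeral size, plus one for a non-zero swap size, plus one attached volume for 'with_vol' vm types. A standalone sketch of that calculation (illustrative only, not a framework helper):

```python
# Illustrative re-statement of the expt_len arithmetic used above.
def expected_mount_count(ephemeral, swap, vm_type):
    return (1 + int(bool(ephemeral)) + int(bool(swap)) +
            (1 if 'with_vol' in vm_type else 0))

print(expected_mount_count(0, 0, 'volume'))          # 1: root disk only
print(expected_mount_count(1, 512, 'image'))         # 3: root + ephemeral + swap
print(expected_mount_count(0, 0, 'image_with_vol'))  # 2: root + attached volume
```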
class TestDefaultGuest:

    @fixture(scope='class', autouse=True)
    def skip_test_if_less_than_two_hosts(self):
        if len(host_helper.get_up_hypervisors()) < 2:
            skip(SkipHypervisor.LESS_THAN_TWO_HYPERVISORS)

    @mark.parametrize('storage_backing', [
        'local_image',
        'remote',
    ])
    def test_evacuate_vms_with_inst_backing(self, hosts_per_backing,
                                            storage_backing):
        """
        Test evacuate vms with various vm storage configs and host instance
        backing configs

        Args:
            storage_backing: storage backing under test

        Skip conditions:
            - Less than two hosts configured with storage backing under test

        Setups:
            - Add admin role to primary tenant (module)

        Test Steps:
            - Create flv_rootdisk without ephemeral or swap disks, and set
              storage backing extra spec
            - Create flv_ephemswap with ephemeral AND swap disks, and set
              storage backing extra spec
            - Boot following vms on same host and wait for them to be
              pingable from NatBox:
                - Boot vm1 from volume with flavor flv_rootdisk
                - Boot vm2 from volume with flavor flv_ephemswap
                - Boot vm3 from image with flavor flv_rootdisk
                - Boot vm4 from image with flavor flv_rootdisk, and attach a
                  volume to it
                - Boot vm5 from image with flavor flv_ephemswap
            - sudo reboot -f on vms host
            - Ensure evacuation for all 5 vms is successful (vm host
              changed, active state, pingable from NatBox)

        Teardown:
            - Delete created vms, volumes, flavors
            - Remove admin role from primary tenant (module)

        """
        hosts = hosts_per_backing.get(storage_backing, [])
        if len(hosts) < 2:
            skip(SkipStorageBacking.LESS_THAN_TWO_HOSTS_WITH_BACKING.format(
                storage_backing))

        target_host = hosts[0]

        LOG.tc_step("Create a flavor without ephemeral or swap disks")
        flavor_1 = nova_helper.create_flavor(
            'flv_rootdisk', storage_backing=storage_backing)[1]
        ResourceCleanup.add('flavor', flavor_1, scope='function')

        LOG.tc_step("Create another flavor with ephemeral and swap disks")
        flavor_2 = nova_helper.create_flavor(
            'flv_ephemswap', ephemeral=1, swap=512,
            storage_backing=storage_backing)[1]
        ResourceCleanup.add('flavor', flavor_2, scope='function')

        LOG.tc_step("Boot vm1 from volume with flavor flv_rootdisk and wait "
                    "for it pingable from NatBox")
        vm1_name = "vol_root"
        vm1 = vm_helper.boot_vm(vm1_name, flavor=flavor_1, source='volume',
                                avail_zone='nova', vm_host=target_host,
                                cleanup='function')[1]

        vms_info = {vm1: {'ephemeral': 0,
                          'swap': 0,
                          'vm_type': 'volume',
                          'disks': vm_helper.get_vm_devices_via_virsh(vm1)}}
        vm_helper.wait_for_vm_pingable_from_natbox(vm1)

        LOG.tc_step("Boot vm2 from volume with flavor flv_ephemswap and wait "
                    "for it pingable from NatBox")
        vm2_name = "vol_ephemswap"
        vm2 = vm_helper.boot_vm(vm2_name, flavor=flavor_2, source='volume',
                                avail_zone='nova', vm_host=target_host,
                                cleanup='function')[1]

        vm_helper.wait_for_vm_pingable_from_natbox(vm2)
        vms_info[vm2] = {'ephemeral': 1,
                         'swap': 512,
                         'vm_type': 'volume',
                         'disks': vm_helper.get_vm_devices_via_virsh(vm2)}

        LOG.tc_step("Boot vm3 from image with flavor flv_rootdisk and wait for "
                    "it pingable from NatBox")
        vm3_name = "image_root"
        vm3 = vm_helper.boot_vm(vm3_name, flavor=flavor_1, source='image',
                                avail_zone='nova', vm_host=target_host,
                                cleanup='function')[1]

        vm_helper.wait_for_vm_pingable_from_natbox(vm3)
        vms_info[vm3] = {'ephemeral': 0,
                         'swap': 0,
                         'vm_type': 'image',
                         'disks': vm_helper.get_vm_devices_via_virsh(vm3)}

        LOG.tc_step("Boot vm4 from image with flavor flv_rootdisk, attach a "
                    "volume to it and wait for it "
                    "pingable from NatBox")
        vm4_name = 'image_root_attachvol'
        vm4 = vm_helper.boot_vm(vm4_name, flavor_1, source='image',
                                avail_zone='nova',
                                vm_host=target_host,
                                cleanup='function')[1]

        vol = cinder_helper.create_volume(bootable=False)[1]
        ResourceCleanup.add('volume', vol, scope='function')
        vm_helper.attach_vol_to_vm(vm4, vol_id=vol, mount=False)

        vm_helper.wait_for_vm_pingable_from_natbox(vm4)
        vms_info[vm4] = {'ephemeral': 0,
                         'swap': 0,
                         'vm_type': 'image_with_vol',
                         'disks': vm_helper.get_vm_devices_via_virsh(vm4)}

        LOG.tc_step("Boot vm5 from image with flavor flv_ephemswap and wait "
                    "for it pingable from NatBox")
        vm5_name = 'image_ephemswap'
        vm5 = vm_helper.boot_vm(vm5_name, flavor_2, source='image',
                                avail_zone='nova', vm_host=target_host,
                                cleanup='function')[1]
        vm_helper.wait_for_vm_pingable_from_natbox(vm5)
        vms_info[vm5] = {'ephemeral': 1,
                         'swap': 512,
                         'vm_type': 'image',
                         'disks': vm_helper.get_vm_devices_via_virsh(vm5)}

        LOG.tc_step("Check all VMs are booted on {}".format(target_host))
        vms_on_host = vm_helper.get_vms_on_host(hostname=target_host)
        vms = [vm1, vm2, vm3, vm4, vm5]
        assert set(vms) <= set(vms_on_host), "VMs booted on host: {}. " \
                                             "Current vms on host: {}". \
            format(vms, vms_on_host)

        for vm_ in vms:
            LOG.tc_step("Touch files under vm disks {}: "
                        "{}".format(vm_, vms_info[vm_]))
            file_paths, content = touch_files_under_vm_disks(vm_,
                                                             **vms_info[vm_])
            vms_info[vm_]['file_paths'] = file_paths
            vms_info[vm_]['content'] = content

        LOG.tc_step("Reboot target host {}".format(target_host))
        vm_helper.evacuate_vms(host=target_host, vms_to_check=vms,
                               ping_vms=True)

        LOG.tc_step("Check files after evacuation")
        for vm_ in vms:
            LOG.info("--------------------Check files for vm {}".format(vm_))
            check_helper.check_vm_files(vm_id=vm_, vm_action='evacuate',
                                        storage_backing=storage_backing,
                                        prev_host=target_host, **vms_info[vm_])
        vm_helper.ping_vms_from_natbox(vms)

    @fixture(scope='function')
    def check_hosts(self):
        storage_backing, hosts = \
            keywords.host_helper.get_storage_backing_with_max_hosts()
        if len(hosts) < 2:
            skip("At least two hosts with the same storage backing are "
                 "required")

        acceptable_hosts = []
        for host in hosts:
            numa_num = len(host_helper.get_host_procs(host))
            if numa_num > 1:
                acceptable_hosts.append(host)
            if len(acceptable_hosts) == 2:
                break
        else:
            skip("At least two hosts with multiple numa nodes are required")

        target_host = acceptable_hosts[0]
        return target_host
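The check_hosts fixture relies on Python's for/else: the else branch (the skip) only runs when the loop finishes without hitting break, i.e. when fewer than two multi-NUMA hosts were found. A standalone sketch of that selection pattern (hypothetical helper, not part of the framework):

```python
# Sketch of the for/else selection used in check_hosts: collect the first
# `wanted` hosts with more than one NUMA node, or signal the caller to skip.
def pick_multi_numa_hosts(numa_counts, wanted=2):
    """numa_counts: mapping of host name -> number of NUMA nodes."""
    acceptable = []
    for host, numa_num in numa_counts.items():
        if numa_num > 1:
            acceptable.append(host)
        if len(acceptable) == wanted:
            break
    else:
        # Loop completed without break: not enough suitable hosts.
        return None
    return acceptable

print(pick_multi_numa_hosts({'compute-0': 2, 'compute-1': 1, 'compute-2': 2}))
```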
class TestOneHostAvail:
    @fixture(scope='class')
    def get_zone(self, request, add_stxauto_zone):
        if system_helper.is_aio_simplex():
            zone = 'nova'
            return zone

        zone = 'stxauto'
        storage_backing, hosts = \
            keywords.host_helper.get_storage_backing_with_max_hosts()
        host = hosts[0]
        LOG.fixture_step('Select host {} with backing '
                         '{}'.format(host, storage_backing))
        nova_helper.add_hosts_to_aggregate(aggregate='stxauto', hosts=[host])

        def remove_hosts_from_zone():
            nova_helper.remove_hosts_from_aggregate(aggregate='stxauto',
                                                    check_first=False)

        request.addfinalizer(remove_hosts_from_zone)
        return zone

    @mark.sx_sanity
    def test_reboot_only_host(self, get_zone):
        """
        Test reboot the only hypervisor on the system

        Args:
            get_zone: fixture to create stxauto aggregate, to ensure vms can
                only be booted on one host

        Setups:
            - If more than 1 hypervisor: Create stxauto aggregate and add
              one host to the aggregate

        Test Steps:
            - Launch various vms on target host
                - vm booted from cinder volume,
                - vm booted from glance image,
                - vm booted from glance image, and have an extra cinder
                  volume attached after launch,
                - vm booted from cinder volume with ephemeral and swap disks
            - sudo reboot -f only host
            - Check host is recovered
            - Check vms are recovered and reachable from NatBox

        """
        zone = get_zone

        LOG.tc_step("Launch 5 vms in {} zone".format(zone))
        vms = vm_helper.boot_vms_various_types(avail_zone=zone,
                                               cleanup='function')
        target_host = vm_helper.get_vm_host(vm_id=vms[0])
        for vm in vms[1:]:
            vm_host = vm_helper.get_vm_host(vm)
            assert target_host == vm_host, "VMs are not booted on same host"

        LOG.tc_step("Reboot -f from target host {}".format(target_host))
        HostsToRecover.add(target_host)
        host_helper.reboot_hosts(target_host)

        LOG.tc_step("Check vms are in Active state after host comes back up")
        res, active_vms, inactive_vms = vm_helper.wait_for_vms_values(
            vms=vms, value=VMStatus.ACTIVE, timeout=600)

        vms_host_err = []
        for vm in vms:
            if vm_helper.get_vm_host(vm) != target_host:
                vms_host_err.append(vm)

        assert not vms_host_err, "Following VMs are not on the same host {}: " \
                                 "{}\nVMs did not reach Active state: {}". \
            format(target_host, vms_host_err, inactive_vms)

        assert not inactive_vms, "VMs did not reach Active state after " \
                                 "host came back up: " \
                                 "{}".format(inactive_vms)

        LOG.tc_step("Check VMs are pingable from NatBox after host recovery")
        vm_helper.wait_for_vm_pingable_from_natbox(vms,
                                                   timeout=VMTimeout.DHCP_RETRY)


#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import random

from pytest import fixture, mark, skip

import keywords.host_helper
from utils.tis_log import LOG
from consts.reasons import SkipStorageBacking
from consts.stx import VMStatus, SysType
from consts.timeout import VMTimeout
from testfixtures.recover_hosts import HostsToRecover
from keywords import vm_helper, nova_helper, host_helper, system_helper


@fixture(scope='module', autouse=True)
def update_instances_quota():
    vm_helper.ensure_vms_quotas()


def _boot_migrable_vms(storage_backing):
    """
    Create vms with specific storage backing that can be live migrated

    Args:
        storage_backing: 'local_image' or 'remote'

    Returns: (vms_info (list), flavors_created (list))
        vms_info: [(vm_id1, block_mig1), (vm_id2, block_mig2), ...]

    """
    vms_to_test = []
    flavors_created = []
    flavor_no_localdisk = nova_helper.create_flavor(
        ephemeral=0, swap=0, storage_backing=storage_backing)[1]
    flavors_created.append(flavor_no_localdisk)

    vm_1 = vm_helper.boot_vm(flavor=flavor_no_localdisk, source='volume')[1]

    block_mig_1 = False
    vms_to_test.append((vm_1, block_mig_1))

    LOG.info("Boot a VM from image if host storage backing is local_image or "
             "remote...")
    vm_2 = vm_helper.boot_vm(flavor=flavor_no_localdisk, source='image')[1]
    block_mig_2 = True
    vms_to_test.append((vm_2, block_mig_2))
    if storage_backing == 'remote':
        LOG.info("Boot a VM from volume with local disks if storage backing "
                 "is remote...")
        ephemeral_swap = random.choice([[0, 512], [1, 512], [1, 0]])
        flavor_with_localdisk = nova_helper.create_flavor(
            ephemeral=ephemeral_swap[0], swap=ephemeral_swap[1])[1]
        flavors_created.append(flavor_with_localdisk)
        vm_3 = vm_helper.boot_vm(flavor=flavor_with_localdisk,
                                 source='volume')[1]
        block_mig_3 = False
        vms_to_test.append((vm_3, block_mig_3))
        LOG.info("Boot a VM from image with volume attached if "
                 "storage backing is remote...")
        vm_4 = vm_helper.boot_vm(flavor=flavor_no_localdisk, source='image')[1]
        vm_helper.attach_vol_to_vm(vm_id=vm_4)
        block_mig_4 = False
        vms_to_test.append((vm_4, block_mig_4))

    return vms_to_test, flavors_created
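The block_mig flags assigned in _boot_migrable_vms follow a simple rule: only a vm booted from image with no attached volume gets block migration (its root disk lives on the hypervisor); boot-from-volume vms, and image vms with a volume attached, live-migrate without it. A standalone sketch of that rule (an assumption drawn from the flag assignments above, not a framework helper):

```python
# Sketch of the block-migration flag logic used in _boot_migrable_vms:
# vm_1 (volume)            -> False
# vm_2 (image)             -> True
# vm_3 (volume, localdisk) -> False
# vm_4 (image + vol)       -> False
def use_block_migration(boot_source, has_attached_volume=False):
    return boot_source == 'image' and not has_attached_volume

print(use_block_migration('volume'))       # False
print(use_block_migration('image'))        # True
print(use_block_migration('image', True))  # False
```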
class TestLockWithVMs:
    @fixture()
    def target_hosts(self):
        """
        Test fixture for test_lock_with_vms().
        Calculate target host(s) to perform lock based on storage backing of
        vms_to_test, and live migrate suitable vms
        to target host before test start.
        """

        storage_backing, target_hosts = \
            keywords.host_helper.get_storage_backing_with_max_hosts()
        if len(target_hosts) < 2:
            skip(SkipStorageBacking.LESS_THAN_TWO_HOSTS_WITH_BACKING.
                 format(storage_backing))

        target_host = target_hosts[0]
        if SysType.AIO_DX == system_helper.get_sys_type():
            target_host = system_helper.get_standby_controller_name()

        return storage_backing, target_host

    @mark.nightly
    def test_lock_with_vms(self, target_hosts, no_simplex, add_admin_role_func):
        """
        Test lock host with vms on it.

        Args:
            target_hosts (tuple): storage backing and targeted host to lock,
                prepared by the target_hosts test fixture.

        Skip Conditions:
            - Less than 2 hypervisor hosts on the system

        Prerequisites:
            - Host storage backings are pre-configured to the storage backing
              under test,
              i.e., 2 or more hosts should support the storage backing under
              test.
        Test Setups:
            - Set instances quota to 10 if it was less than 8
            - Determine storage backing(s) under test, i.e., storage backings
              supported by at least 2 hosts on the system
            - Create flavors with storage extra specs set based on storage
              backings under test
            - Create vms_to_test that can be live migrated using created flavors
            - Determine target host(s) to perform lock based on which host(s)
              have the most vms_to_test
            - Live migrate vms to target host(s)
        Test Steps:
            - Lock target host
            - Verify lock succeeded and vms status unchanged
            - Repeat above steps if more than one target host
        Test Teardown:
            - Delete created vms and volumes
            - Delete created flavors
            - Unlock locked target host(s)

        """
        storage_backing, host = target_hosts
        vms_num = 5
        vm_helper.ensure_vms_quotas(vms_num=vms_num)

        LOG.tc_step("Boot {} vms with various storage settings".format(vms_num))
        vms = vm_helper.boot_vms_various_types(cleanup='function',
                                               vms_num=vms_num,
                                               storage_backing=storage_backing,
                                               target_host=host)

        LOG.tc_step("Attempt to lock target host {}...".format(host))
        HostsToRecover.add(host)
        host_helper.lock_host(host=host, check_first=False, fail_ok=False,
                              swact=True)

        LOG.tc_step("Verify lock succeeded and vms still in good state")
        vm_helper.wait_for_vms_values(vms=vms, fail_ok=False)
        for vm in vms:
            vm_host = vm_helper.get_vm_host(vm_id=vm)
            assert vm_host != host, "VM is still on {} after lock".format(host)

            vm_helper.wait_for_vm_pingable_from_natbox(
                vm_id=vm, timeout=VMTimeout.DHCP_RETRY)

    @mark.sx_nightly
    def test_lock_with_max_vms_simplex(self, simplex_only):
        vms_num = host_helper.get_max_vms_supported(host='controller-0')
        vm_helper.ensure_vms_quotas(vms_num=vms_num)

        LOG.tc_step("Boot {} vms with various storage settings".format(vms_num))
        vms = vm_helper.boot_vms_various_types(cleanup='function',
                                               vms_num=vms_num)

        LOG.tc_step("Lock vm host on simplex system")
        HostsToRecover.add('controller-0')
        host_helper.lock_host('controller-0')

        LOG.tc_step("Ensure vms are in {} state after locked host comes "
                    "online".format(VMStatus.STOPPED))
        vm_helper.wait_for_vms_values(vms, value=VMStatus.STOPPED,
                                      fail_ok=False)

        LOG.tc_step("Unlock host on simplex system")
        host_helper.unlock_host(host='controller-0')

        LOG.tc_step("Ensure vms are Active and Pingable from NatBox")
        vm_helper.wait_for_vms_values(vms, value=VMStatus.ACTIVE,
                                      fail_ok=False, timeout=600)
        for vm in vms:
            vm_helper.wait_for_vm_pingable_from_natbox(
                vm, timeout=VMTimeout.DHCP_RETRY)


#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import re
import random

from pytest import fixture, mark, skip, param

import keywords.host_helper
from utils.tis_log import LOG
from consts.stx import FlavorSpec, ImageMetadata, NovaCLIOutput
from keywords import nova_helper, vm_helper, system_helper, cinder_helper, \
    host_helper, glance_helper

MEMPAGE_HEADERS = ('app_total_4K', 'app_hp_avail_2M', 'app_hp_avail_1G')


def skip_4k_for_ovs(mempage_size):
    if mempage_size in (None, 'any', 'small') and not system_helper.is_avs():
        skip("4K VM is unsupported by OVS by default")


@fixture(scope='module')
def prepare_resource(add_admin_role_module):
    hypervisor = random.choice(host_helper.get_up_hypervisors())
    flavor = nova_helper.create_flavor(name='flavor-1g', ram=1024,
                                       cleanup='module')[1]
    vol_id = cinder_helper.create_volume('vol-mem_page_size',
                                         cleanup='module')[1]
    return hypervisor, flavor, vol_id


def _get_expt_indices(mempage_size):
    if mempage_size in ('small', None):
        expt_mempage_indices = (0,)
    elif str(mempage_size) == '2048':
        expt_mempage_indices = (1,)
    elif str(mempage_size) == '1048576':
        expt_mempage_indices = (2,)
    elif mempage_size == 'large':
        expt_mempage_indices = (1, 2)
    else:
        expt_mempage_indices = (0, 1, 2)
    return expt_mempage_indices
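_get_expt_indices maps a flavor mem_page_size value onto the MEMPAGE_HEADERS columns a vm may draw memory from: index 0 is the 4K pool, 1 the 2M hugepage pool, 2 the 1G hugepage pool, with 'large' spanning both hugepage pools and anything else (e.g. 'any') spanning all three. A standalone restatement of that mapping for illustration:

```python
# Illustrative copy of the _get_expt_indices mapping:
# 0 -> app_total_4K, 1 -> app_hp_avail_2M, 2 -> app_hp_avail_1G
def expt_indices(mempage_size):
    if mempage_size in ('small', None):
        return (0,)
    if str(mempage_size) == '2048':
        return (1,)
    if str(mempage_size) == '1048576':
        return (2,)
    if mempage_size == 'large':
        return (1, 2)
    return (0, 1, 2)  # 'any' or unrecognized: every pool is acceptable

print(expt_indices('2048'))   # (1,)
print(expt_indices('large'))  # (1, 2)
print(expt_indices('any'))    # (0, 1, 2)
```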
def is_host_mem_sufficient(host, mempage_size=None, mem_gib=1):
    host_mems_per_proc = host_helper.get_host_memories(host,
                                                       headers=MEMPAGE_HEADERS)
    mempage_size = 'small' if not mempage_size else mempage_size
    expt_mempage_indices = _get_expt_indices(mempage_size)

    for proc, mems_for_proc in host_mems_per_proc.items():
        pages_4k, pages_2m, pages_1g = mems_for_proc
        mems_for_proc = (int(pages_4k * 4 / 1048576),
                         int(pages_2m * 2 / 1024), int(pages_1g))
        for index in expt_mempage_indices:
            avail_g_for_memsize = mems_for_proc[index]
            if avail_g_for_memsize >= mem_gib:
                LOG.info("{} has sufficient {} mempages to launch {}G "
                         "vm".format(host, mempage_size, mem_gib))
                return True, host_mems_per_proc

    LOG.info("{} does not have sufficient {} mempages to launch {}G "
             "vm".format(host, mempage_size, mem_gib))
    return False, host_mems_per_proc
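The conversion inside is_host_mem_sufficient turns per-pool page counts into whole GiB: 4K pages contribute `pages * 4 / 1048576` (4 KiB each, 1048576 KiB per GiB), 2M pages contribute `pages * 2 / 1024` (2 MiB each, 1024 MiB per GiB), and 1G pages are already whole GiB. Extracted as a standalone function for illustration:

```python
# Page-count -> whole-GiB conversion used in is_host_mem_sufficient.
def pages_to_gib(pages_4k, pages_2m, pages_1g):
    return (int(pages_4k * 4 / 1048576),   # 4 KiB pages -> GiB
            int(pages_2m * 2 / 1024),      # 2 MiB pages -> GiB
            int(pages_1g))                 # 1 GiB pages are GiB already

# 262144 x 4K = 1 GiB; 1024 x 2M = 2 GiB; 3 x 1G = 3 GiB
print(pages_to_gib(262144, 1024, 3))  # (1, 2, 3)
```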
def check_mempage_change(vm, host, prev_host_mems, mempage_size=None,
                         mem_gib=1, numa_node=None):
    expt_mempage_indices = _get_expt_indices(mempage_size)
    if numa_node is None:
        numa_node = vm_helper.get_vm_numa_nodes_via_ps(vm_id=vm, host=host)[0]

    prev_host_mems = prev_host_mems[numa_node]
    current_host_mems = host_helper.get_host_memories(
        host, headers=MEMPAGE_HEADERS)[numa_node]

    if 0 in expt_mempage_indices:
        if current_host_mems[1:] == prev_host_mems[1:] and \
                abs(prev_host_mems[0] -
                    current_host_mems[0]) <= mem_gib * 512 * 1024 / 4:
            return

    for i in expt_mempage_indices:
        if i == 0:
            continue

        expt_pagecount = 1 if i == 2 else 1024
        if prev_host_mems[i] - expt_pagecount == current_host_mems[i]:
            LOG.info("{} {} memory page reduced by {}GiB as "
                     "expected".format(host, MEMPAGE_HEADERS[i], mem_gib))
            return

        LOG.info("{} {} memory pages - Previous: {}, current: "
                 "{}".format(host, MEMPAGE_HEADERS[i],
                             prev_host_mems[i], current_host_mems[i]))

    assert 0, "{} available vm {} memory page count did not change as " \
              "expected".format(host, mempage_size)


@mark.parametrize('mem_page_size', [
    param('2048', marks=mark.domain_sanity),
    param('large', marks=mark.p1),
    param('small', marks=mark.domain_sanity),
    param('1048576', marks=mark.p3),
])
def test_vm_mem_pool_default_config(prepare_resource, mem_page_size):
    """
    Test memory used by vm is taken from the expected memory pool

    Args:
        prepare_resource (tuple): test fixture
        mem_page_size (str): mem page size setting in flavor

    Setup:
        - Create a flavor with 1G RAM (module)
        - Create a volume with default values (module)
        - Select a hypervisor to launch vm on

    Test Steps:
        - Set memory page size flavor spec to given value
        - Attempt to boot a vm with above flavor and a basic volume
        - Verify the system is taking memory from the expected memory pool:
            - If boot vm succeeded:
                - Calculate the available/used memory change on the vm host
                - Verify the memory is taken from memory pool specified via
                  mem_page_size
            - If boot vm failed:
                - Verify system attempted to take memory from expected pool,
                  but insufficient memory is available

    Teardown:
        - Delete created vm
        - Delete created volume and flavor (module)

    """
    hypervisor, flavor_1g, volume_ = prepare_resource

    LOG.tc_step("Set memory page size extra spec in flavor")
    nova_helper.set_flavor(flavor_1g,
                           **{FlavorSpec.CPU_POLICY: 'dedicated',
                              FlavorSpec.MEM_PAGE_SIZE: mem_page_size})

    LOG.tc_step("Check system host-memory-list before launching vm")
    is_sufficient, prev_host_mems = is_host_mem_sufficient(
        host=hypervisor, mempage_size=mem_page_size)

    LOG.tc_step("Boot a vm with mem page size spec - {}".format(mem_page_size))
    code, vm_id, msg = vm_helper.boot_vm('mempool_' + mem_page_size, flavor_1g,
                                         source='volume', fail_ok=True,
                                         vm_host=hypervisor, source_id=volume_,
                                         cleanup='function')

    if not is_sufficient:
        LOG.tc_step("Check boot vm rejected due to insufficient memory from "
                    "{} pool".format(mem_page_size))
        assert 1 == code, "{} vm launched successfully when insufficient " \
"mempage configured on {}". \
|
||||
format(mem_page_size, hypervisor)
|
||||
else:
|
||||
LOG.tc_step("Check vm launches successfully and {} available mempages "
|
||||
"change accordingly".format(hypervisor))
|
||||
assert 0 == code, "VM failed to launch with '{}' " \
|
||||
"mempages".format(mem_page_size)
|
||||
check_mempage_change(vm_id, host=hypervisor,
|
||||
prev_host_mems=prev_host_mems,
|
||||
mempage_size=mem_page_size)
|
||||
|
||||
|
||||
def get_hosts_to_configure(candidates):
|
||||
hosts_selected = [None, None]
|
||||
hosts_to_configure = [None, None]
|
||||
max_4k, expt_p1_4k, max_1g, expt_p1_1g = \
|
||||
1.5 * 1048576 / 4, 2.5 * 1048576 / 4, 1, 2
|
||||
for host in candidates:
|
||||
host_mems = host_helper.get_host_memories(host, headers=MEMPAGE_HEADERS)
|
||||
if 1 not in host_mems:
|
||||
LOG.info("{} has only 1 processor".format(host))
|
||||
continue
|
||||
|
||||
proc0_mems, proc1_mems = host_mems[0], host_mems[1]
|
||||
p0_4k, p1_4k, p0_1g, p1_1g = \
|
||||
proc0_mems[0], proc1_mems[0], proc0_mems[2], proc1_mems[2]
|
||||
|
||||
if p0_4k <= max_4k and p0_1g <= max_1g:
|
||||
if not hosts_selected[1] and p1_4k >= expt_p1_4k and \
|
||||
p1_1g <= max_1g:
|
||||
hosts_selected[1] = host
|
||||
elif not hosts_selected[0] and p1_4k <= max_4k and \
|
||||
p1_1g >= expt_p1_1g:
|
||||
hosts_selected[0] = host
|
||||
|
||||
if None not in hosts_selected:
|
||||
LOG.info("1G and 4k hosts already configured and selected: "
|
||||
"{}".format(hosts_selected))
|
||||
break
|
||||
else:
|
||||
for i in range(len(hosts_selected)):
|
||||
if hosts_selected[i] is None:
|
||||
hosts_selected[i] = hosts_to_configure[i] = \
|
||||
list(set(candidates) - set(hosts_selected))[0]
|
||||
LOG.info("Hosts selected: {}; To be configured: "
|
||||
"{}".format(hosts_selected, hosts_to_configure))
|
||||
|
||||
return hosts_selected, hosts_to_configure
|
||||
|
||||
|
||||
class TestConfigMempage:
|
||||
MEM_CONFIGS = [None, 'any', 'large', 'small', '2048', '1048576']
|
||||
|
||||
@fixture(scope='class')
|
||||
def add_1g_and_4k_pages(self, request, config_host_class,
|
||||
skip_for_one_proc, add_stxauto_zone,
|
||||
add_admin_role_module):
|
||||
storage_backing, candidate_hosts = \
|
||||
keywords.host_helper.get_storage_backing_with_max_hosts()
|
||||
|
||||
if len(candidate_hosts) < 2:
|
||||
skip("Less than two up hosts have same storage backing")
|
||||
|
||||
LOG.fixture_step("Check mempage configs for hypervisors and select "
|
||||
"host to use or configure")
|
||||
hosts_selected, hosts_to_configure = get_hosts_to_configure(
|
||||
candidate_hosts)
|
||||
|
||||
if set(hosts_to_configure) != {None}:
|
||||
def _modify(host):
|
||||
is_1g = True if hosts_selected.index(host) == 0 else False
|
||||
proc1_kwargs = {'gib_1g': 2, 'gib_4k_range': (None, 2)} if \
|
||||
is_1g else {'gib_1g': 0, 'gib_4k_range': (2, None)}
|
||||
kwargs = {'gib_1g': 0, 'gib_4k_range': (None, 2)}, proc1_kwargs
|
||||
|
||||
actual_mems = host_helper._get_actual_mems(host=host)
|
||||
LOG.fixture_step("Modify {} proc0 to have 0 of 1G pages and "
|
||||
"<2GiB of 4K pages".format(host))
|
||||
host_helper.modify_host_memory(host, proc=0,
|
||||
actual_mems=actual_mems,
|
||||
**kwargs[0])
|
||||
LOG.fixture_step("Modify {} proc1 to have >=2GiB of {} "
|
||||
"pages".format(host, '1G' if is_1g else '4k'))
|
||||
host_helper.modify_host_memory(host, proc=1,
|
||||
actual_mems=actual_mems,
|
||||
**kwargs[1])
|
||||
|
||||
for host_to_config in hosts_to_configure:
|
||||
if host_to_config:
|
||||
config_host_class(host=host_to_config, modify_func=_modify)
|
||||
LOG.fixture_step("Check mem pages for {} are modified "
|
||||
"and updated successfully".
|
||||
format(host_to_config))
|
||||
host_helper.wait_for_memory_update(host=host_to_config)
|
||||
|
||||
LOG.fixture_step("Check host memories for {} after mem config "
|
||||
"completed".format(hosts_selected))
|
||||
_, hosts_unconfigured = get_hosts_to_configure(hosts_selected)
|
||||
assert not hosts_unconfigured[0], \
|
||||
"Failed to configure {}. Expt: proc0:1g<2,4k<2gib;" \
|
||||
"proc1:1g>=2,4k<2gib".format(hosts_unconfigured[0])
|
||||
assert not hosts_unconfigured[1], \
|
||||
"Failed to configure {}. Expt: proc0:1g<2,4k<2gib;" \
|
||||
"proc1:1g<2,4k>=2gib".format(hosts_unconfigured[1])
|
||||
|
||||
LOG.fixture_step('(class) Add hosts to stxauto aggregate: '
|
||||
'{}'.format(hosts_selected))
|
||||
nova_helper.add_hosts_to_aggregate(aggregate='stxauto',
|
||||
hosts=hosts_selected)
|
||||
|
||||
def remove_host_from_zone():
|
||||
LOG.fixture_step('(class) Remove hosts from stxauto aggregate: '
|
||||
'{}'.format(hosts_selected))
|
||||
nova_helper.remove_hosts_from_aggregate(aggregate='stxauto',
|
||||
check_first=False)
|
||||
|
||||
request.addfinalizer(remove_host_from_zone)
|
||||
|
||||
return hosts_selected, storage_backing
|
||||
|
||||
@fixture(scope='class')
|
||||
def flavor_2g(self, add_1g_and_4k_pages):
|
||||
hosts, storage_backing = add_1g_and_4k_pages
|
||||
LOG.fixture_step("Create a 2G memory flavor to be used by mempage "
|
||||
"testcases")
|
||||
flavor = nova_helper.create_flavor(name='flavor-2g', ram=2048,
|
||||
storage_backing=storage_backing,
|
||||
cleanup='class')[1]
|
||||
return flavor, hosts, storage_backing
|
||||
|
||||
@fixture(scope='class')
|
||||
def image_mempage(self):
|
||||
LOG.fixture_step("(class) Create a glance image for mempage testcases")
|
||||
image_id = glance_helper.create_image(name='mempage',
|
||||
cleanup='class')[1]
|
||||
return image_id
|
||||
|
||||
@fixture()
|
||||
def check_alarms(self, add_1g_and_4k_pages):
|
||||
hosts, storage_backing = add_1g_and_4k_pages
|
||||
host_helper.get_hypervisor_info(hosts=hosts)
|
||||
for host in hosts:
|
||||
host_helper.get_host_memories(host, wait_for_update=False)
|
||||
|
||||
@fixture(params=MEM_CONFIGS)
|
||||
def flavor_mem_page_size(self, request, flavor_2g):
|
||||
flavor_id = flavor_2g[0]
|
||||
mem_page_size = request.param
|
||||
skip_4k_for_ovs(mem_page_size)
|
||||
|
||||
if mem_page_size is None:
|
||||
nova_helper.unset_flavor(flavor_id, FlavorSpec.MEM_PAGE_SIZE)
|
||||
else:
|
||||
nova_helper.set_flavor(flavor_id,
|
||||
**{FlavorSpec.MEM_PAGE_SIZE: mem_page_size})
|
||||
|
||||
return mem_page_size
|
||||
|
||||
@mark.parametrize('image_mem_page_size', MEM_CONFIGS)
|
||||
def test_boot_vm_mem_page_size(self, flavor_2g, flavor_mem_page_size,
|
||||
image_mempage, image_mem_page_size):
|
||||
"""
|
||||
Test boot vm with various memory page size setting in flavor and image.
|
||||
|
||||
Args:
|
||||
flavor_2g (tuple): flavor id of a flavor with ram set to 2G,
|
||||
hosts configured and storage_backing
|
||||
flavor_mem_page_size (str): memory page size extra spec value to
|
||||
set in flavor
|
||||
image_mempage (str): image id for tis image
|
||||
image_mem_page_size (str): memory page metadata value to set in
|
||||
image
|
||||
|
||||
Setup:
|
||||
- Create a flavor with 2G RAM (module)
|
||||
- Get image id of tis image (module)
|
||||
|
||||
Test Steps:
|
||||
- Set/Unset flavor memory page size extra spec with given value (
|
||||
unset if None is given)
|
||||
- Set/Unset image memory page size metadata with given value (
|
||||
unset if None if given)
|
||||
- Attempt to boot a vm with above flavor and image
|
||||
- Verify boot result based on the mem page size values in the
|
||||
flavor and image
|
||||
|
||||
Teardown:
|
||||
- Delete vm if booted
|
||||
- Delete created flavor (module)
|
||||
|
||||
"""
|
||||
skip_4k_for_ovs(image_mem_page_size)
|
||||
|
||||
flavor_id, hosts, storage_backing = flavor_2g
|
||||
|
||||
if image_mem_page_size is None:
|
||||
glance_helper.unset_image(image_mempage,
|
||||
properties=ImageMetadata.MEM_PAGE_SIZE)
|
||||
expt_code = 0
|
||||
else:
|
||||
glance_helper.set_image(image=image_mempage,
|
||||
properties={ImageMetadata.MEM_PAGE_SIZE:
|
||||
image_mem_page_size})
|
||||
if flavor_mem_page_size is None:
|
||||
expt_code = 4
|
||||
elif flavor_mem_page_size.lower() in ['any', 'large']:
|
||||
expt_code = 0
|
||||
else:
|
||||
expt_code = 0 if flavor_mem_page_size.lower() == \
|
||||
image_mem_page_size.lower() else 4
|
||||
|
||||
LOG.tc_step("Attempt to boot a vm with flavor_mem_page_size: {}, and "
|
||||
"image_mem_page_size: {}. And check return "
|
||||
"code is {}.".format(flavor_mem_page_size,
|
||||
image_mem_page_size, expt_code))
|
||||
|
||||
actual_code, vm_id, msg = vm_helper.boot_vm(name='mem_page_size',
|
||||
flavor=flavor_id,
|
||||
source='image',
|
||||
source_id=image_mempage,
|
||||
fail_ok=True,
|
||||
avail_zone='stxauto',
|
||||
cleanup='function')
|
||||
|
||||
assert expt_code == actual_code, "Expect boot vm to return {}; " \
|
||||
"Actual result: {} with msg: " \
|
||||
"{}".format(expt_code, actual_code,
|
||||
msg)
|
||||
|
||||
if expt_code != 0:
|
||||
assert re.search(
|
||||
NovaCLIOutput.VM_BOOT_REJECT_MEM_PAGE_SIZE_FORBIDDEN, msg)
|
||||
else:
|
||||
assert vm_helper.get_vm_host(vm_id) in hosts, \
|
||||
"VM is not booted on hosts in stxauto zone"
|
||||
LOG.tc_step("Ensure VM is pingable from NatBox")
|
||||
vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
|
||||
|
||||
@mark.parametrize('mem_page_size', [
|
||||
param('1048576', marks=mark.priorities('domain_sanity', 'nightly')),
|
||||
param('large'),
|
||||
param('small', marks=mark.nightly),
|
||||
])
|
||||
def test_schedule_vm_mempage_config(self, flavor_2g, mem_page_size):
|
||||
"""
|
||||
Test memory used by vm is taken from the expected memory pool and the
|
||||
vm was scheduled on the correct
|
||||
host/processor
|
||||
|
||||
Args:
|
||||
flavor_2g (tuple): flavor id of a flavor with ram set to 2G,
|
||||
hosts, storage_backing
|
||||
mem_page_size (str): mem page size setting in flavor
|
||||
|
||||
Setup:
|
||||
- Create host aggregate
|
||||
- Add two hypervisors to the host aggregate
|
||||
- Host-0 configuration:
|
||||
- Processor-0:
|
||||
- Insufficient 1g pages to boot vm that requires 2g
|
||||
- Insufficient 4k pages to boot vm that requires 2g
|
||||
- Processor-1:
|
||||
- Sufficient 1g pages to boot vm that requires 2g
|
||||
- Insufficient 4k pages to boot vm that requires 2g
|
||||
- Host-1 configuration:
|
||||
- Processor-0:
|
||||
- Insufficient 1g pages to boot vm that requires 2g
|
||||
- Insufficient 4k pages to boot vm that requires 2g
|
||||
- Processor-1:
|
||||
- Insufficient 1g pages to boot vm that requires 2g
|
||||
- Sufficient 4k pages to boot vm that requires 2g
|
||||
- Configure a compute to have 4 1G hugepages (module)
|
||||
- Create a flavor with 2G RAM (module)
|
||||
- Create a volume with default values (module)
|
||||
|
||||
Test Steps:
|
||||
- Set memory page size flavor spec to given value
|
||||
- Boot a vm with above flavor and a basic volume
|
||||
- Calculate the available/used memory change on the vm host
|
||||
- Verify the memory is taken from 1G hugepage memory pool
|
||||
- Verify the vm was booted on a supporting host
|
||||
|
||||
Teardown:
|
||||
- Delete created vm
|
||||
- Delete created volume and flavor (module)
|
||||
- Re-Configure the compute to have 0 hugepages (module)
|
||||
- Revert host mem pages back to original
|
||||
"""
|
||||
skip_4k_for_ovs(mem_page_size)
|
||||
|
||||
flavor_id, hosts_configured, storage_backing = flavor_2g
|
||||
LOG.tc_step("Set memory page size extra spec in flavor")
|
||||
nova_helper.set_flavor(flavor_id,
|
||||
**{FlavorSpec.CPU_POLICY: 'dedicated',
|
||||
FlavorSpec.MEM_PAGE_SIZE: mem_page_size})
|
||||
|
||||
host_helper.wait_for_hypervisors_up(hosts_configured)
|
||||
prev_computes_mems = {}
|
||||
for host in hosts_configured:
|
||||
prev_computes_mems[host] = host_helper.get_host_memories(
|
||||
host=host, headers=MEMPAGE_HEADERS)
|
||||
|
||||
LOG.tc_step(
|
||||
"Boot a vm with mem page size spec - {}".format(mem_page_size))
|
||||
|
||||
host_1g, host_4k = hosts_configured
|
||||
code, vm_id, msg = vm_helper.boot_vm('mempool_configured', flavor_id,
|
||||
fail_ok=True,
|
||||
avail_zone='stxauto',
|
||||
cleanup='function')
|
||||
assert 0 == code, "VM is not successfully booted."
|
||||
|
||||
instance_name, vm_host = vm_helper.get_vm_values(
|
||||
vm_id, fields=[":instance_name", ":host"], strict=False)
|
||||
vm_node = vm_helper.get_vm_numa_nodes_via_ps(
|
||||
vm_id=vm_id, instance_name=instance_name, host=vm_host)
|
||||
if mem_page_size == '1048576':
|
||||
assert host_1g == vm_host, \
|
||||
"VM is not created on the configured host " \
|
||||
"{}".format(hosts_configured[0])
|
||||
assert vm_node == [1], "VM (huge) did not boot on the correct " \
|
||||
"processor"
|
||||
elif mem_page_size == 'small':
|
||||
assert host_4k == vm_host, "VM is not created on the configured " \
|
||||
"host {}".format(hosts_configured[1])
|
||||
assert vm_node == [1], "VM (small) did not boot on the correct " \
|
||||
"processor"
|
||||
else:
|
||||
assert vm_host in hosts_configured
|
||||
|
||||
LOG.tc_step("Calculate memory change on vm host - {}".format(vm_host))
|
||||
check_mempage_change(vm_id, vm_host,
|
||||
prev_host_mems=prev_computes_mems[vm_host],
|
||||
mempage_size=mem_page_size, mem_gib=2,
|
||||
numa_node=vm_node[0])
|
||||
|
||||
LOG.tc_step("Ensure vm is pingable from NatBox")
|
||||
vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
|
|
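The mempage checks above reduce to one piece of arithmetic: a guest of `mem_gib` GiB should drain `mem_gib * GiB / page_size` pages from the matching hugepage pool on its NUMA node. A minimal standalone sketch of that conversion (names here are illustrative, not part of the framework):

```python
# Illustrative sketch only; not part of the test framework above.
GIB = 1024 ** 3

# Page sizes in bytes, keyed by the flavor spec values used in the tests.
PAGE_SIZES = {
    'small': 4 * 1024,          # 4K pages
    '2048': 2048 * 1024,        # 2M hugepages
    '1048576': 1048576 * 1024,  # 1G hugepages
}


def expected_page_delta(mem_gib, mempage_size):
    """Pages a vm of mem_gib GiB should consume from the given pool."""
    return mem_gib * GIB // PAGE_SIZES[mempage_size]
```

For the 2G flavor used by `TestConfigMempage` this gives 1024 2M pages or two 1G pages, which is the kind of before/after delta `check_mempage_change` verifies against the host-memory listings.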
@ -0,0 +1,412 @@
|
|||
#
|
||||
# Copyright (c) 2019 Wind River Systems, Inc.
|
||||
#
|
||||
# SPDX-License-Identifier: Apache-2.0
|
||||
#
|
||||
|
||||
|
||||
from pytest import fixture, mark, skip, param
|
||||
|
||||
from utils.tis_log import LOG
|
||||
from consts.stx import FlavorSpec, EventLogID
|
||||
# Don't remove this import, used by eval()
|
||||
from consts.cli_errs import LiveMigErr
|
||||
from keywords import vm_helper, nova_helper, host_helper, cinder_helper, \
|
||||
glance_helper, check_helper, system_helper
|
||||
from testfixtures.fixture_resources import ResourceCleanup
|
||||
|
||||
|
||||
@fixture(scope='module')
|
||||
def check_system():
|
||||
up_hypervisors = host_helper.get_up_hypervisors()
|
||||
if len(up_hypervisors) < 2:
|
||||
skip("Less than two up hypervisors")
|
||||
|
||||
|
||||
@fixture(scope='module')
|
||||
def hosts_per_stor_backing(check_system):
|
||||
hosts_per_backing = host_helper.get_hosts_per_storage_backing()
|
||||
LOG.fixture_step("Hosts per storage backing: {}".format(hosts_per_backing))
|
||||
|
||||
return hosts_per_backing
|
||||
|
||||
|
||||
def touch_files_under_vm_disks(vm_id, ephemeral=0, swap=0, vm_type='volume',
|
||||
disks=None):
|
||||
expt_len = 1 + int(bool(ephemeral)) + int(bool(swap)) + (
|
||||
1 if 'with_vol' in vm_type else 0)
|
||||
|
||||
LOG.tc_step("Auto mount ephemeral, swap, and attached volume if any")
|
||||
mounts = vm_helper.auto_mount_vm_disks(vm_id=vm_id, disks=disks)
|
||||
assert expt_len == len(mounts)
|
||||
|
||||
LOG.tc_step("Create files under vm disks: {}".format(mounts))
|
||||
file_paths, content = vm_helper.touch_files(vm_id=vm_id, file_dirs=mounts)
|
||||
return file_paths, content
|
||||
|
||||
|
||||
@mark.parametrize(('storage_backing', 'ephemeral', 'swap', 'cpu_pol', 'vcpus',
|
||||
'vm_type', 'block_mig'), [
|
||||
param('local_image', 0, 0, None, 1, 'volume', False,
|
||||
marks=mark.p1),
|
||||
param('local_image', 0, 0, 'dedicated', 2, 'volume',
|
||||
False, marks=mark.p1),
|
||||
('local_image', 1, 0, 'dedicated', 2, 'volume', False),
|
||||
('local_image', 0, 512, 'shared', 1, 'volume', False),
|
||||
('local_image', 1, 512, 'dedicated', 2, 'volume', True),
|
||||
# Supported from Newton
|
||||
param('local_image', 0, 0, 'shared', 2, 'image', True,
|
||||
marks=mark.domain_sanity),
|
||||
param('local_image', 1, 512, 'dedicated', 1, 'image',
|
||||
False, marks=mark.domain_sanity),
|
||||
('local_image', 0, 0, None, 2, 'image_with_vol', False),
|
||||
('local_image', 0, 0, 'dedicated', 1, 'image_with_vol',
|
||||
True),
|
||||
('local_image', 1, 512, 'dedicated', 2, 'image_with_vol',
|
||||
True),
|
||||
('local_image', 1, 512, 'dedicated', 1, 'image_with_vol',
|
||||
False),
|
||||
param('remote', 0, 0, None, 2, 'volume', False,
|
||||
marks=mark.p1),
|
||||
param('remote', 1, 0, 'dedicated', 1, 'volume', False,
|
||||
marks=mark.p1),
|
||||
param('remote', 1, 512, None, 1, 'image', False,
|
||||
marks=mark.domain_sanity),
|
||||
param('remote', 0, 512, 'dedicated', 2, 'image_with_vol',
|
||||
False, marks=mark.domain_sanity),
|
||||
])
|
||||
def test_live_migrate_vm_positive(hosts_per_stor_backing, storage_backing,
|
||||
ephemeral, swap, cpu_pol, vcpus, vm_type,
|
||||
block_mig):
|
||||
"""
|
||||
Skip Condition:
|
||||
- Less than two hosts have specified storage backing
|
||||
|
||||
Test Steps:
|
||||
- create flavor with specified vcpus, cpu_policy, ephemeral, swap,
|
||||
storage_backing
|
||||
- boot vm from specified boot source with above flavor
|
||||
- (attach volume to vm if 'image_with_vol', specified in vm_type)
|
||||
- Live migrate the vm with specified block_migration flag
|
||||
- Verify VM is successfully live migrated to different host
|
||||
|
||||
Teardown:
|
||||
- Delete created vm, volume, flavor
|
||||
|
||||
"""
|
||||
if len(hosts_per_stor_backing.get(storage_backing, [])) < 2:
|
||||
skip("Less than two hosts have {} storage backing".format(
|
||||
storage_backing))
|
||||
|
||||
vm_id = _boot_vm_under_test(storage_backing, ephemeral, swap, cpu_pol,
|
||||
vcpus, vm_type)
|
||||
|
||||
prev_vm_host = vm_helper.get_vm_host(vm_id)
|
||||
vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
|
||||
|
||||
vm_disks = vm_helper.get_vm_devices_via_virsh(vm_id)
|
||||
file_paths, content = touch_files_under_vm_disks(vm_id=vm_id,
|
||||
ephemeral=ephemeral,
|
||||
swap=swap, vm_type=vm_type,
|
||||
disks=vm_disks)
|
||||
|
||||
LOG.tc_step("Live migrate VM and ensure it succeeded")
|
||||
# block_mig = True if boot_source == 'image' else False
|
||||
code, output = vm_helper.live_migrate_vm(vm_id, block_migrate=block_mig)
|
||||
assert 0 == code, "Live migrate is not successful. Details: {}".format(
|
||||
output)
|
||||
|
||||
post_vm_host = vm_helper.get_vm_host(vm_id)
|
||||
assert prev_vm_host != post_vm_host
|
||||
|
||||
LOG.tc_step("Ensure vm is pingable from NatBox after live migration")
|
||||
vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
|
||||
|
||||
LOG.tc_step("Check files after live migrate")
|
||||
check_helper.check_vm_files(vm_id=vm_id, storage_backing=storage_backing,
|
||||
ephemeral=ephemeral, swap=swap,
|
||||
vm_type=vm_type, vm_action='live_migrate',
|
||||
file_paths=file_paths, content=content,
|
||||
disks=vm_disks, prev_host=prev_vm_host,
|
||||
post_host=post_vm_host)
|
||||
|
||||
|
||||
@mark.parametrize(('storage_backing', 'ephemeral', 'swap', 'vm_type',
|
||||
'block_mig', 'expt_err'), [
|
||||
param('local_image', 0, 0, 'volume', True,
|
||||
'LiveMigErr.BLOCK_MIG_UNSUPPORTED'),
|
||||
param('remote', 0, 0, 'volume', True,
|
||||
'LiveMigErr.BLOCK_MIG_UNSUPPORTED'),
|
||||
param('remote', 1, 0, 'volume', True,
|
||||
'LiveMigErr.BLOCK_MIG_UNSUPPORTED'),
|
||||
param('remote', 0, 512, 'volume', True,
|
||||
'LiveMigErr.BLOCK_MIG_UNSUPPORTED'),
|
||||
param('remote', 0, 512, 'image', True,
|
||||
'LiveMigErr.BLOCK_MIG_UNSUPPORTED'),
|
||||
param('remote', 0, 0, 'image_with_vol', True,
|
||||
'LiveMigErr.BLOCK_MIG_UNSUPPORTED'),
|
||||
])
|
||||
def test_live_migrate_vm_negative(storage_backing, ephemeral, swap, vm_type,
|
||||
block_mig, expt_err,
|
||||
hosts_per_stor_backing, no_simplex):
|
||||
"""
|
||||
Skip Condition:
|
||||
- Less than two hosts have specified storage backing
|
||||
|
||||
Test Steps:
|
||||
- create flavor with specified vcpus, cpu_policy, ephemeral, swap,
|
||||
storage_backing
|
||||
- boot vm from specified boot source with above flavor
|
||||
- (attach volume to vm if 'image_with_vol', specified in vm_type)
|
||||
- Live migrate the vm with specified block_migration flag
|
||||
- Verify VM is successfully live migrated to different host
|
||||
|
||||
Teardown:
|
||||
- Delete created vm, volume, flavor
|
||||
|
||||
"""
|
||||
if len(hosts_per_stor_backing.get(storage_backing, [])) < 2:
|
||||
skip("Less than two hosts have {} storage backing".format(
|
||||
storage_backing))
|
||||
|
||||
vm_id = _boot_vm_under_test(storage_backing, ephemeral, swap, None, 1,
|
||||
vm_type)
|
||||
vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
|
||||
|
||||
prev_vm_host = vm_helper.get_vm_host(vm_id)
|
||||
vm_disks = vm_helper.get_vm_devices_via_virsh(vm_id)
|
||||
file_paths, content = touch_files_under_vm_disks(vm_id=vm_id,
|
||||
ephemeral=ephemeral,
|
||||
swap=swap, vm_type=vm_type,
|
||||
disks=vm_disks)
|
||||
|
||||
LOG.tc_step(
|
||||
"Live migrate VM and ensure it's rejected with proper error message")
|
||||
# block_mig = True if boot_source == 'image' else False
|
||||
code, output = vm_helper.live_migrate_vm(vm_id, block_migrate=block_mig)
|
||||
assert 2 == code, "Expect live migration to have expected fail. Actual: " \
|
||||
"{}".format(output)
|
||||
|
||||
# Remove below code due to live-migration is async in newton
|
||||
# assert 'Unexpected API Error'.lower() not in output.lower(),
|
||||
# "'Unexpected API Error' returned."
|
||||
#
|
||||
# # remove extra spaces in error message
|
||||
# output = re.sub(r'\s\s+', " ", output)
|
||||
# assert eval(expt_err) in output, "Expected error message {} is not in
|
||||
# actual error message: {}".\
|
||||
# format(eval(expt_err), output)
|
||||
|
||||
post_vm_host = vm_helper.get_vm_host(vm_id)
|
||||
assert prev_vm_host == post_vm_host, "VM host changed even though live " \
|
||||
"migration request rejected."
|
||||
|
||||
LOG.tc_step(
|
||||
"Ensure vm is pingable from NatBox after live migration rejected")
|
||||
vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
|
||||
|
||||
LOG.tc_step("Check files after live migrate attempt")
|
||||
check_helper.check_vm_files(vm_id=vm_id, storage_backing=storage_backing,
|
||||
ephemeral=ephemeral, swap=swap,
|
||||
vm_type=vm_type, vm_action='live_migrate',
|
||||
file_paths=file_paths, content=content,
|
||||
disks=vm_disks, prev_host=prev_vm_host,
|
||||
post_host=post_vm_host)
|
||||
|
||||
|
||||
@mark.parametrize(('storage_backing', 'ephemeral', 'swap', 'cpu_pol',
|
||||
'vcpus', 'vm_type', 'resize'), [
|
||||
param('local_image', 0, 0, None, 1, 'volume', 'confirm'),
|
||||
param('local_image', 0, 0, 'dedicated', 2, 'volume', 'confirm'),
|
||||
param('local_image', 1, 0, 'shared', 2, 'image', 'confirm'),
|
||||
param('local_image', 0, 512, 'dedicated', 1, 'image', 'confirm'),
|
||||
param('local_image', 0, 0, None, 1, 'image_with_vol', 'confirm'),
|
||||
param('remote', 0, 0, None, 2, 'volume', 'confirm'),
|
||||
param('remote', 1, 0, None, 1, 'volume', 'confirm'),
|
||||
param('remote', 1, 512, None, 1, 'image', 'confirm'),
|
||||
param('remote', 0, 0, None, 2, 'image_with_vol', 'confirm'),
|
||||
param('local_image', 0, 0, None, 2, 'volume', 'revert'),
|
||||
param('local_image', 0, 0, 'dedicated', 1, 'volume', 'revert'),
|
||||
param('local_image', 1, 0, 'shared', 2, 'image', 'revert'),
|
||||
param('local_image', 0, 512, 'dedicated', 1, 'image', 'revert'),
|
||||
param('local_image', 0, 0, 'dedicated', 2, 'image_with_vol', 'revert'),
|
||||
param('remote', 0, 0, None, 2, 'volume', 'revert'),
|
||||
param('remote', 1, 512, None, 1, 'volume', 'revert'),
|
||||
param('remote', 0, 0, None, 1, 'image', 'revert'),
|
||||
param('remote', 1, 0, None, 2, 'image_with_vol', 'revert'),
|
||||
])
|
||||
def test_cold_migrate_vm(storage_backing, ephemeral, swap, cpu_pol, vcpus,
|
||||
vm_type, resize, hosts_per_stor_backing,
|
||||
no_simplex):
|
||||
"""
|
||||
Skip Condition:
|
||||
- Less than two hosts have specified storage backing
|
||||
|
||||
Test Steps:
|
||||
- create flavor with specified vcpus, cpu_policy, ephemeral, swap,
|
||||
storage_backing
|
||||
- boot vm from specified boot source with above flavor
|
||||
- (attach volume to vm if 'image_with_vol', specified in vm_type)
|
||||
- Cold migrate vm
|
||||
- Confirm/Revert resize as specified
|
||||
- Verify VM is successfully cold migrated and confirmed/reverted resize
|
||||
- Verify that instance files are not found on original host. (TC6621)
|
||||
|
||||
Teardown:
|
||||
- Delete created vm, volume, flavor
|
||||
|
||||
"""
|
||||
if len(hosts_per_stor_backing.get(storage_backing, [])) < 2:
|
||||
skip("Less than two hosts have {} storage backing".format(
|
||||
storage_backing))
|
||||
|
||||
vm_id = _boot_vm_under_test(storage_backing, ephemeral, swap, cpu_pol,
|
||||
vcpus, vm_type)
|
||||
prev_vm_host = vm_helper.get_vm_host(vm_id)
|
||||
vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
|
||||
|
||||
vm_disks = vm_helper.get_vm_devices_via_virsh(vm_id)
|
||||
file_paths, content = touch_files_under_vm_disks(vm_id=vm_id,
|
||||
ephemeral=ephemeral,
|
||||
swap=swap, vm_type=vm_type,
|
||||
disks=vm_disks)
|
||||
|
||||
LOG.tc_step("Cold migrate VM and {} resize".format(resize))
|
||||
revert = True if resize == 'revert' else False
|
||||
code, output = vm_helper.cold_migrate_vm(vm_id, revert=revert)
|
||||
assert 0 == code, "Cold migrate {} is not successful. Details: {}".format(
|
||||
resize, output)
|
||||
|
||||
# Below steps are unnecessary as host is already checked in
|
||||
# cold_migrate_vm keyword. Add steps below just in case.
|
||||
LOG.tc_step(
|
||||
"Check VM host is as expected after cold migrate {}".format(resize))
|
||||
post_vm_host = vm_helper.get_vm_host(vm_id)
|
||||
if revert:
|
||||
assert prev_vm_host == post_vm_host, "vm host changed after cold " \
|
||||
"migrate revert"
|
||||
else:
|
||||
assert prev_vm_host != post_vm_host, "vm host did not change after " \
|
||||
"cold migrate"
|
||||
LOG.tc_step("Check that source host no longer has instance files")
|
||||
with host_helper.ssh_to_host(prev_vm_host) as prev_ssh:
|
||||
assert not prev_ssh.file_exists(
|
||||
'/var/lib/nova/instances/{}'.format(vm_id)), \
|
||||
"Instance files found on previous host {} after cold migrate " \
|
||||
"to {}".format(prev_vm_host, post_vm_host)
|
||||
|
||||
LOG.tc_step("Ensure vm is pingable from NatBox after cold migration "
|
||||
"{}".format(resize))
|
||||
vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
|
||||
|
||||
LOG.tc_step("Check files after cold migrate {}".format(resize))
|
||||
action = None if revert else 'cold_migrate'
|
||||
check_helper.check_vm_files(vm_id=vm_id, storage_backing=storage_backing,
|
||||
ephemeral=ephemeral, swap=swap,
|
||||
vm_type=vm_type, vm_action=action,
|
||||
file_paths=file_paths, content=content,
|
||||
disks=vm_disks, prev_host=prev_vm_host,
|
||||
post_host=post_vm_host)
|
||||
|
||||
|
||||
def _boot_vm_under_test(storage_backing, ephemeral, swap, cpu_pol, vcpus,
|
||||
vm_type):
|
||||
LOG.tc_step(
|
||||
"Create a flavor with {} vcpus, {}G ephemera disk, {}M swap "
|
||||
"disk".format(vcpus, ephemeral, swap))
|
||||
flavor_id = nova_helper.create_flavor(
|
||||
name='migration_test', ephemeral=ephemeral, swap=swap, vcpus=vcpus,
|
||||
storage_backing=storage_backing, cleanup='function')[1]
|
||||
|
||||
if cpu_pol is not None:
|
||||
specs = {FlavorSpec.CPU_POLICY: cpu_pol}
|
||||
|
||||
LOG.tc_step("Add following extra specs: {}".format(specs))
|
||||
nova_helper.set_flavor(flavor=flavor_id, **specs)
|
||||
|
||||
boot_source = 'volume' if vm_type == 'volume' else 'image'
|
||||
LOG.tc_step("Boot a vm from {}".format(boot_source))
|
||||
vm_id = vm_helper.boot_vm('migration_test',
|
||||
flavor=flavor_id, source=boot_source,
|
||||
reuse_vol=False,
|
||||
cleanup='function')[1]
|
||||
|
||||
if vm_type == 'image_with_vol':
|
||||
LOG.tc_step("Attach volume to vm")
|
||||
vm_helper.attach_vol_to_vm(vm_id=vm_id, mount=False)
|
||||
|
||||
return vm_id
|
||||
|
||||
|
||||
@mark.parametrize(('guest_os', 'mig_type', 'cpu_pol'), [
|
||||
('ubuntu_14', 'live', 'dedicated'),
|
||||
# Live migration with pinned VM may not be unsupported
|
||||
param('ubuntu_14', 'cold', 'dedicated',
|
||||
          marks=mark.priorities('sanity', 'cpe_sanity')),
    param('tis-centos-guest', 'live', None,
          marks=mark.priorities('sanity', 'cpe_sanity')),
    ('tis-centos-guest', 'cold', None),
])
def test_migrate_vm(check_system, guest_os, mig_type, cpu_pol):
    """
    Test migrate vms for given guest type
    Args:
        check_system:
        guest_os:
        mig_type:
        cpu_pol:

    Test Steps:
        - Create a glance image from given guest type
        - Create a vm from cinder volume using above image
        - Live/cold migrate the vm
        - Ensure vm moved to other host and in good state (active and
          reachable from NatBox)

    """
    LOG.tc_step("Create a flavor with 1 vcpu")
    flavor_id = \
        nova_helper.create_flavor(name='{}-mig'.format(mig_type), vcpus=1,
                                  root_disk=9, cleanup='function')[1]

    if cpu_pol is not None:
        specs = {FlavorSpec.CPU_POLICY: cpu_pol}
        LOG.tc_step("Add following extra specs: {}".format(specs))
        nova_helper.set_flavor(flavor=flavor_id, **specs)

    LOG.tc_step("Create a volume from {} image".format(guest_os))
    image_id = glance_helper.get_guest_image(guest_os=guest_os)

    vol_id = cinder_helper.create_volume(source_id=image_id, size=9,
                                         guest_image=guest_os)[1]
    ResourceCleanup.add('volume', vol_id)

    LOG.tc_step("Boot a vm from above flavor and volume")
    vm_id = vm_helper.boot_vm(guest_os, flavor=flavor_id, source='volume',
                              source_id=vol_id, cleanup='function')[1]
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)

    if guest_os == 'ubuntu_14':
        system_helper.wait_for_alarm_gone(alarm_id=EventLogID.CINDER_IO_CONGEST,
                                          entity_id='cinder_io_monitor',
                                          strict=False, timeout=300,
                                          fail_ok=False)

    LOG.tc_step("{} migrate vm and check vm is moved to different host".format(
        mig_type))
    prev_vm_host = vm_helper.get_vm_host(vm_id)

    if mig_type == 'live':
        code, output = vm_helper.live_migrate_vm(vm_id)
        if code == 1:
            assert False, "No host to live migrate to. System may not be in " \
                          "good state."
    else:
        vm_helper.cold_migrate_vm(vm_id)

    vm_host = vm_helper.get_vm_host(vm_id)
    assert prev_vm_host != vm_host, "vm host did not change after {} " \
                                    "migration".format(mig_type)

    LOG.tc_step("Ping vm from NatBox after {} migration".format(mig_type))
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
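The helpers used above follow a (return_code, output) convention: callers that pass `fail_ok=True` branch on the code instead of letting the helper raise. A minimal illustration with a stubbed helper (the stub and its messages are illustrative, not the framework's actual API):

```python
def live_migrate_stub(has_dest_host):
    # Mimics the framework's (code, output) convention:
    # 0 on success, 1 when no destination host is available.
    if not has_dest_host:
        return 1, "No valid host found"
    return 0, "Migration complete"


code, output = live_migrate_stub(has_dest_host=True)
assert code == 0, output
print(output)  # Migration complete
```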

@@ -0,0 +1,91 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import mark, skip, param

from utils.tis_log import LOG
from consts.stx import FlavorSpec, VMStatus
from consts.reasons import SkipStorageSpace

from keywords import vm_helper, nova_helper, glance_helper, cinder_helper
from testfixtures.fixture_resources import ResourceCleanup


def id_gen(val):
    if isinstance(val, list):
        return '-'.join(val)


@mark.parametrize(('guest_os', 'cpu_pol', 'actions'), [
    param('tis-centos-guest', 'dedicated', ['pause', 'unpause'],
          marks=mark.priorities('sanity', 'cpe_sanity', 'sx_sanity')),
    param('ubuntu_14', 'shared', ['stop', 'start'], marks=mark.sanity),
    param('ubuntu_14', 'dedicated', ['auto_recover'], marks=mark.sanity),
    param('tis-centos-guest', 'dedicated', ['suspend', 'resume'],
          marks=mark.priorities('sanity', 'cpe_sanity', 'sx_sanity')),
], ids=id_gen)
def test_nova_actions(guest_os, cpu_pol, actions):
    """

    Args:
        guest_os:
        cpu_pol:
        actions:

    Test Steps:
        - Create a glance image from given guest type
        - Create a vm from cinder volume using above image with specified cpu
          policy
        - Perform given nova actions on vm
        - Ensure nova operation succeeded and vm still in good state (active
          and reachable from NatBox)

    """
    if guest_os == 'opensuse_12':
        if not cinder_helper.is_volumes_pool_sufficient(min_size=40):
            skip(SkipStorageSpace.SMALL_CINDER_VOLUMES_POOL)

    img_id = glance_helper.get_guest_image(guest_os=guest_os)

    LOG.tc_step("Create a flavor with 1 vcpu")
    flavor_id = nova_helper.create_flavor(name=cpu_pol, vcpus=1, root_disk=9)[1]
    ResourceCleanup.add('flavor', flavor_id)

    if cpu_pol is not None:
        specs = {FlavorSpec.CPU_POLICY: cpu_pol}
        LOG.tc_step("Add following extra specs: {}".format(specs))
        nova_helper.set_flavor(flavor=flavor_id, **specs)

    LOG.tc_step("Create a volume from {} image".format(guest_os))
    vol_id = \
        cinder_helper.create_volume(name='vol-' + guest_os, source_id=img_id,
                                    guest_image=guest_os)[1]
    ResourceCleanup.add('volume', vol_id)

    LOG.tc_step("Boot a vm from above flavor and volume")
    vm_id = vm_helper.boot_vm('nova_actions', flavor=flavor_id, source='volume',
                              source_id=vol_id,
                              cleanup='function')[1]

    LOG.tc_step("Wait for VM pingable from NATBOX")
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)

    for action in actions:
        if action == 'auto_recover':
            LOG.tc_step(
                "Set vm to error state and wait for auto recovery complete, "
                "then verify ping from base vm over "
                "management and data networks")
            vm_helper.set_vm_state(vm_id=vm_id, error_state=True, fail_ok=False)
            vm_helper.wait_for_vm_values(vm_id=vm_id, status=VMStatus.ACTIVE,
                                         fail_ok=True, timeout=600)
        else:
            LOG.tc_step(
                "Perform following action on vm {}: {}".format(vm_id, action))
            vm_helper.perform_action_on_vm(vm_id, action=action)

        vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
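test_nova_actions drives each lifecycle action through a single helper call and re-verifies reachability after every step. A minimal, self-contained sketch of that dispatch shape (the VM stub and its transition table are illustrative, not the framework's implementation):

```python
class FakeVM:
    # Illustrative stand-in that only tracks nova state transitions.
    def __init__(self):
        self.state = 'ACTIVE'

    def perform(self, action):
        transitions = {'pause': 'PAUSED', 'unpause': 'ACTIVE',
                       'suspend': 'SUSPENDED', 'resume': 'ACTIVE',
                       'stop': 'SHUTOFF', 'start': 'ACTIVE'}
        self.state = transitions[action]
        return self.state


vm = FakeVM()
for action in ('pause', 'unpause'):
    vm.perform(action)
print(vm.state)  # ACTIVE
```

Each action pair in the parametrization ends in ACTIVE, which is why a single pingability check after each action suffices.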

@@ -0,0 +1,508 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import time
import math

from pytest import fixture, mark, skip, param

from utils.tis_log import LOG

from keywords import vm_helper, nova_helper, host_helper, check_helper, \
    glance_helper
from testfixtures.fixture_resources import ResourceCleanup
from consts.stx import FlavorSpec, GuestImages
from consts.reasons import SkipStorageBacking


def id_gen(val):
    if isinstance(val, (tuple, list)):
        val = '_'.join([str(val_) for val_ in val])
    return val


def touch_files_under_vm_disks(vm_id, ephemeral=0, swap=0, vm_type='volume',
                               disks=None):
    expt_len = 1 + int(bool(ephemeral)) + int(bool(swap)) + (
        1 if 'with_vol' in vm_type else 0)

    LOG.tc_step("Auto mount non-root disk(s)")
    mounts = vm_helper.auto_mount_vm_disks(vm_id=vm_id, disks=disks)
    assert expt_len == len(mounts)

    if bool(swap):
        mounts.remove('none')

    LOG.tc_step("Create files under vm disks: {}".format(mounts))
    file_paths, content = vm_helper.touch_files(vm_id=vm_id, file_dirs=mounts)
    return file_paths, content


def get_expt_disk_increase(origin_flavor, dest_flavor, boot_source,
                           storage_backing):
    root_diff = dest_flavor[0] - origin_flavor[0]
    ephemeral_diff = dest_flavor[1] - origin_flavor[1]
    swap_diff = (dest_flavor[2] - origin_flavor[2]) / 1024

    if storage_backing == 'remote':
        expected_increase = 0
        expect_to_check = True
    else:
        if boot_source == 'volume':
            expected_increase = ephemeral_diff + swap_diff
            expect_to_check = False
        else:
            expected_increase = root_diff + ephemeral_diff + swap_diff
            expect_to_check = expected_increase >= 2

    return expected_increase, expect_to_check


def get_disk_avail_least(host):
    return \
        host_helper.get_hypervisor_info(hosts=host,
                                        field='disk_available_least')[host]


def check_correct_post_resize_value(original_disk_value, expected_increase,
                                    host, sleep=True):
    if sleep:
        time.sleep(65)

    post_resize_value = get_disk_avail_least(host)
    LOG.info(
        "{} original_disk_value: {}. post_resize_value: {}. "
        "expected_increase: {}".format(
            host, original_disk_value, post_resize_value, expected_increase))
    expt_post = original_disk_value + expected_increase

    if expected_increase < 0:
        # vm is on this host, backup image files may be created if not
        # already existed
        backup_val = math.ceil(
            glance_helper.get_image_size(guest_os=GuestImages.DEFAULT['guest'],
                                         virtual_size=False))
        assert expt_post - backup_val <= post_resize_value <= expt_post
    elif expected_increase > 0:
        # vm moved away from this host, or resized to smaller disk on same
        # host, backup files will stay
        assert expt_post - 1 <= post_resize_value <= expt_post + 1, \
            "disk_available_least on {} expected: {}+-1, actual: {}".format(
                host, expt_post, post_resize_value)
    else:
        assert expt_post == post_resize_value, \
            "{} disk_available_least value changed to {} unexpectedly".format(
                host, post_resize_value)

    return post_resize_value


@fixture(scope='module')
def get_hosts_per_backing(add_admin_role_module):
    return host_helper.get_hosts_per_storage_backing()


class TestResizeSameHost:
    @fixture(scope='class')
    def add_hosts_to_zone(self, request, add_stxauto_zone,
                          get_hosts_per_backing):
        hosts_per_backing = get_hosts_per_backing
        avail_hosts = {key: vals[0] for key, vals in hosts_per_backing.items()
                       if vals}

        if not avail_hosts:
            skip("No host in any storage aggregate")

        nova_helper.add_hosts_to_aggregate(aggregate='stxauto',
                                           hosts=list(avail_hosts.values()))

        def remove_hosts_from_zone():
            nova_helper.remove_hosts_from_aggregate(aggregate='stxauto',
                                                    check_first=False)

        request.addfinalizer(remove_hosts_from_zone)
        return avail_hosts

    @mark.parametrize(('storage_backing', 'origin_flavor', 'dest_flavor',
                       'boot_source'), [
        ('remote', (4, 0, 0), (5, 1, 512), 'image'),
        ('remote', (4, 1, 512), (5, 2, 1024), 'image'),
        ('remote', (4, 1, 512), (4, 1, 0), 'image'),  # LP1762423
        param('remote', (4, 0, 0), (1, 1, 512), 'volume',
              marks=mark.priorities('nightly', 'sx_nightly')),
        ('remote', (4, 1, 512), (8, 2, 1024), 'volume'),
        ('remote', (4, 1, 512), (0, 1, 0), 'volume'),
        ('local_image', (4, 0, 0), (5, 1, 512), 'image'),
        param('local_image', (4, 1, 512), (5, 2, 1024), 'image',
              marks=mark.priorities('nightly', 'sx_nightly')),
        ('local_image', (5, 1, 512), (5, 1, 0), 'image'),
        ('local_image', (4, 0, 0), (5, 1, 512), 'volume'),
        ('local_image', (4, 1, 512), (0, 2, 1024), 'volume'),
        ('local_image', (4, 1, 512), (1, 1, 0), 'volume'),  # LP1762423
    ], ids=id_gen)
    def test_resize_vm_positive(self, add_hosts_to_zone, storage_backing,
                                origin_flavor, dest_flavor, boot_source):
        """
        Test resizing disks of a vm
        - Resize root disk is allowed except 0 & boot-from-image
        - Resize to larger or same ephemeral is allowed
        - Resize swap to any size is allowed including removing

        Args:
            storage_backing: The host storage backing required
            origin_flavor: The flavor to boot the vm from, listed by GBs for
                root, ephemeral, and swap disks, i.e. for a
                system with a 2GB root disk, a 1GB ephemeral disk,
                and no swap disk: (2, 1, 0)
            boot_source: Which source to boot the vm from, either 'volume' or
                'image'
            add_hosts_to_zone
            dest_flavor

        Skip Conditions:
            - No hosts exist with required storage backing.
        Test setup:
            - Put a single host of each backing in stxauto zone to prevent
              migration and instead force resize.
            - Create two flavors based on origin_flavor and dest_flavor
            - Create a volume or image to boot from.
            - Boot VM with origin_flavor
        Test Steps:
            - Resize VM to dest_flavor with revert
            - If vm is booted from image and has a non-remote backing,
              check that the amount of disk space post-revert
              is around the same as pre-revert  # TC5155
            - Resize VM to dest_flavor with confirm
            - If vm is booted from image and has a non-remote backing,
              check that the amount of disk space post-confirm
              reflects the increase in disk space taken up  # TC5155
        Test Teardown:
            - Delete created VM
            - Delete created volume or image
            - Delete created flavors
            - Remove hosts from stxauto zone
            - Delete stxauto zone

        """
        vm_host = add_hosts_to_zone.get(storage_backing, None)

        if not vm_host:
            skip(
                SkipStorageBacking.NO_HOST_WITH_BACKING.format(storage_backing))

        expected_increase, expect_to_check = get_expt_disk_increase(
            origin_flavor, dest_flavor,
            boot_source, storage_backing)
        LOG.info("Expected increase of vm compute occupancy is {}".format(
            expected_increase))

        LOG.tc_step('Create origin flavor')
        origin_flavor_id = _create_flavor(origin_flavor, storage_backing)
        vm_id = _boot_vm_to_test(boot_source, vm_host, origin_flavor_id)
        vm_helper.wait_for_vm_pingable_from_natbox(vm_id)

        vm_disks = vm_helper.get_vm_devices_via_virsh(vm_id)
        root, ephemeral, swap = origin_flavor
        if boot_source == 'volume':
            root = GuestImages.IMAGE_FILES[GuestImages.DEFAULT['guest']][1]
        file_paths, content = touch_files_under_vm_disks(vm_id=vm_id,
                                                         ephemeral=ephemeral,
                                                         swap=swap,
                                                         vm_type=boot_source,
                                                         disks=vm_disks)

        if expect_to_check:
            LOG.tc_step('Check initial disk usage')
            original_disk_value = get_disk_avail_least(vm_host)
            LOG.info("{} space left on compute".format(original_disk_value))

        LOG.tc_step('Create destination flavor')
        dest_flavor_id = _create_flavor(dest_flavor, storage_backing)
        LOG.tc_step('Resize vm to dest flavor and revert')
        vm_helper.resize_vm(vm_id, dest_flavor_id, revert=True, fail_ok=False)
        vm_helper.wait_for_vm_pingable_from_natbox(vm_id)

        swap_size = swap
        LOG.tc_step("Check files after resize revert")
        if storage_backing == 'remote' and swap and dest_flavor[2]:
            swap_size = dest_flavor[2]

        time.sleep(30)
        prev_host = vm_helper.get_vm_host(vm_id)
        check_helper.check_vm_files(vm_id=vm_id,
                                    storage_backing=storage_backing, root=root,
                                    ephemeral=ephemeral,
                                    swap=swap_size, vm_type=boot_source,
                                    vm_action=None, file_paths=file_paths,
                                    content=content, disks=vm_disks,
                                    check_volume_root=True)

        LOG.tc_step('Resize vm to dest flavor and confirm')
        vm_helper.resize_vm(vm_id, dest_flavor_id, revert=False, fail_ok=False)
        vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
        post_host = vm_helper.get_vm_host(vm_id)
        post_root, post_ephemeral, post_swap = dest_flavor
        if boot_source == 'volume':
            post_root = GuestImages.IMAGE_FILES[GuestImages.DEFAULT['guest']][1]
        post_ephemeral = ephemeral if ephemeral else post_ephemeral
        LOG.tc_step("Check files after resize attempt")
        check_helper.check_vm_files(
            vm_id=vm_id, storage_backing=storage_backing,
            ephemeral=post_ephemeral,
            swap=post_swap, vm_type=boot_source,
            vm_action='resize', file_paths=file_paths,
            content=content, prev_host=prev_host,
            post_host=post_host, root=post_root,
            disks=vm_disks,
            post_disks=vm_helper.get_vm_devices_via_virsh(vm_id),
            check_volume_root=True)

    @mark.parametrize(
        ('storage_backing', 'origin_flavor', 'dest_flavor', 'boot_source'), [
            # Root disk can be resized, but cannot be 0
            ('remote', (5, 0, 0), (0, 0, 0), 'image'),
            # check ephemeral disk cannot be smaller than origin
            ('remote', (5, 2, 512), (5, 1, 512), 'image'),
            # check ephemeral disk cannot be smaller than origin
            ('remote', (1, 1, 512), (1, 0, 512), 'volume'),
            # Root disk can be resized, but cannot be 0
            ('local_image', (5, 0, 0), (0, 0, 0), 'image'),
            ('local_image', (5, 2, 512), (5, 1, 512), 'image'),
            ('local_image', (5, 1, 512), (4, 1, 512), 'image'),
            ('local_image', (5, 1, 512), (4, 1, 0), 'image'),
            ('local_image', (1, 1, 512), (1, 0, 512), 'volume'),
        ], ids=id_gen)
    def test_resize_vm_negative(self, add_hosts_to_zone, storage_backing,
                                origin_flavor, dest_flavor, boot_source):
        """
        Test resizing disks of a vm not allowed:
        - Resize to smaller ephemeral flavor is not allowed
        - Resize to zero disk flavor is not allowed (boot from image only)

        Args:
            storage_backing: The host storage backing required
            origin_flavor: The flavor to boot the vm from, listed by GBs for
                root, ephemeral, and swap disks, i.e. for a
                system with a 2GB root disk, a 1GB ephemeral disk,
                and no swap disk: (2, 1, 0)
            boot_source: Which source to boot the vm from, either 'volume' or
                'image'
        Skip Conditions:
            - No hosts exist with required storage backing.
        Test setup:
            - Put a single host of each backing in stxauto zone to prevent
              migration and instead force resize.
            - Create two flavors based on origin_flavor and dest_flavor
            - Create a volume or image to boot from.
            - Boot VM with origin_flavor
        Test Steps:
            - Resize VM to dest_flavor with revert
            - Resize VM to dest_flavor with confirm
        Test Teardown:
            - Delete created VM
            - Delete created volume or image
            - Delete created flavors
            - Remove hosts from stxauto zone
            - Delete stxauto zone

        """
        vm_host = add_hosts_to_zone.get(storage_backing, None)

        if not vm_host:
            skip("No available host with {} storage backing".format(
                storage_backing))

        LOG.tc_step('Create origin flavor')
        origin_flavor_id = _create_flavor(origin_flavor, storage_backing)
        LOG.tc_step('Create destination flavor')
        dest_flavor_id = _create_flavor(dest_flavor, storage_backing)
        vm_id = _boot_vm_to_test(boot_source, vm_host, origin_flavor_id)
        vm_helper.wait_for_vm_pingable_from_natbox(vm_id)

        vm_disks = vm_helper.get_vm_devices_via_virsh(vm_id)
        root, ephemeral, swap = origin_flavor
        file_paths, content = touch_files_under_vm_disks(vm_id=vm_id,
                                                         ephemeral=ephemeral,
                                                         swap=swap,
                                                         vm_type=boot_source,
                                                         disks=vm_disks)

        LOG.tc_step('Resize vm to dest flavor')
        code, output = vm_helper.resize_vm(vm_id, dest_flavor_id, fail_ok=True)
        vm_helper.wait_for_vm_pingable_from_natbox(vm_id)

        assert vm_helper.get_vm_flavor(
            vm_id) == origin_flavor_id, 'VM did not keep origin flavor'
        assert code > 0, "Resize VM CLI is not rejected"

        LOG.tc_step("Check files after resize attempt")
        check_helper.check_vm_files(vm_id=vm_id,
                                    storage_backing=storage_backing, root=root,
                                    ephemeral=ephemeral,
                                    swap=swap, vm_type=boot_source,
                                    vm_action=None, file_paths=file_paths,
                                    content=content, disks=vm_disks)


def _create_flavor(flavor_info, storage_backing):
    root_disk = flavor_info[0]
    ephemeral = flavor_info[1]
    swap = flavor_info[2]

    flavor_id = nova_helper.create_flavor(ephemeral=ephemeral, swap=swap,
                                          root_disk=root_disk,
                                          storage_backing=storage_backing)[1]
    ResourceCleanup.add('flavor', flavor_id)
    return flavor_id


def _boot_vm_to_test(boot_source, vm_host, flavor_id):
    LOG.tc_step('Boot a vm with given flavor')
    vm_id = vm_helper.boot_vm(flavor=flavor_id, avail_zone='stxauto',
                              vm_host=vm_host, source=boot_source,
                              cleanup='function')[1]
    return vm_id


def get_cpu_count(hosts_with_backing):
    LOG.fixture_step("Find suitable vm host and cpu count and backing of host")
    compute_space_dict = {}

    vm_host = hosts_with_backing[0]
    numa0_used_cpus, numa0_total_cpus = \
        host_helper.get_vcpus_per_proc(vm_host)[vm_host][0]
    numa0_avail_cpus = len(numa0_total_cpus) - len(numa0_used_cpus)
    for host in hosts_with_backing:
        free_space = get_disk_avail_least(host)
        compute_space_dict[host] = free_space
        LOG.info("{} space on {}".format(free_space, host))

    # increase quota
    LOG.fixture_step("Increase quota of allotted cores")
    vm_helper.ensure_vms_quotas(cores_num=int(numa0_avail_cpus + 30))

    return vm_host, numa0_avail_cpus, compute_space_dict


class TestResizeDiffHost:
    @mark.parametrize('storage_backing', [
        'local_image',
        'remote',
    ])
    def test_resize_different_comp_node(self, storage_backing,
                                        get_hosts_per_backing):
        """
        Test resizing disks of a larger vm onto a different compute node and
        check hypervisor statistics to make sure the difference in disk usage
        of both nodes involved is correctly reflected

        Args:
            storage_backing: The host storage backing required
        Skip Conditions:
            - 2 hosts must exist with required storage backing.
        Test setup:
            - For each of the two backings tested, the setup will return the
              number of nodes for each backing,
              the vm host that the vm will initially be created on and the
              number of hosts for that backing.
        Test Steps:
            - Create a flavor with a root disk size that is slightly larger
              than the default image used to boot up the VM
            - Create a VM with the aforementioned flavor
            - Create a flavor with enough cpus to occupy the rest of the cpus
              on the same host as the first VM
            - Create another VM on the same host as the first VM
            - Create a similar flavor to the first one, except that it has
              one more vcpu
            - Resize the first VM and confirm that it is on a different host
            - Check hypervisor-show on both computes to make sure that disk
              usage goes down on the original host and goes up on the new host
        Test Teardown:
            - Delete created VMs
            - Delete created flavors

        """
        hosts_with_backing = get_hosts_per_backing.get(storage_backing, [])
        if len(hosts_with_backing) < 2:
            skip(SkipStorageBacking.LESS_THAN_TWO_HOSTS_WITH_BACKING.format(
                storage_backing))

        origin_host, cpu_count, compute_space_dict = get_cpu_count(
            hosts_with_backing)

        root_disk_size = \
            GuestImages.IMAGE_FILES[GuestImages.DEFAULT['guest']][1] + 5

        # make vm (1 cpu)
        LOG.tc_step("Create flavor with 1 cpu")
        numa0_specs = {FlavorSpec.CPU_POLICY: 'dedicated', FlavorSpec.NUMA_0: 0}
        flavor_1 = \
            nova_helper.create_flavor(ephemeral=0, swap=0,
                                      root_disk=root_disk_size, vcpus=1,
                                      storage_backing=storage_backing)[1]
        ResourceCleanup.add('flavor', flavor_1)
        nova_helper.set_flavor(flavor_1, **numa0_specs)

        LOG.tc_step("Boot a vm with above flavor")
        vm_to_resize = \
            vm_helper.boot_vm(flavor=flavor_1, source='image',
                              cleanup='function', vm_host=origin_host)[1]
        vm_helper.wait_for_vm_pingable_from_natbox(vm_to_resize)

        # launch another vm
        LOG.tc_step("Create a flavor to occupy vcpus")
        occupy_amount = int(cpu_count) - 1
        second_specs = {FlavorSpec.CPU_POLICY: 'dedicated',
                        FlavorSpec.NUMA_0: 0}
        flavor_2 = nova_helper.create_flavor(vcpus=occupy_amount,
                                             storage_backing=storage_backing)[1]
        ResourceCleanup.add('flavor', flavor_2)
        nova_helper.set_flavor(flavor_2, **second_specs)

        LOG.tc_step("Boot a vm with above flavor to occupy remaining vcpus")
        vm_2 = vm_helper.boot_vm(flavor=flavor_2, source='image',
                                 cleanup='function', vm_host=origin_host)[1]
        vm_helper.wait_for_vm_pingable_from_natbox(vm_2)

        LOG.tc_step('Check disk usage before resize')
        prev_val_origin_host = get_disk_avail_least(origin_host)
        LOG.info("{} space left on compute".format(prev_val_origin_host))

        # create a larger flavor and resize
        LOG.tc_step("Create a flavor that has an extra vcpu to force resize "
                    "to a different node")
        resize_flavor = nova_helper.create_flavor(
            ephemeral=0, swap=0, root_disk=root_disk_size, vcpus=2,
            storage_backing=storage_backing)[1]
        ResourceCleanup.add('flavor', resize_flavor)
        nova_helper.set_flavor(resize_flavor, **numa0_specs)

        LOG.tc_step("Resize the vm and verify if it is on a different host")
        vm_helper.resize_vm(vm_to_resize, resize_flavor)
        new_host = vm_helper.get_vm_host(vm_to_resize)
        assert new_host != origin_host, "vm did not change hosts " \
                                        "following resize"

        LOG.tc_step('Check disk usage on computes after resize')
        if storage_backing == 'remote':
            LOG.info("Compute disk usage change should be minimal for "
                     "remote storage backing")
            root_disk_size = 0

        check_correct_post_resize_value(prev_val_origin_host, root_disk_size,
                                        origin_host)

        prev_val_new_host = compute_space_dict[new_host]
        check_correct_post_resize_value(prev_val_new_host, -root_disk_size,
                                        new_host, sleep=False)
        vm_helper.wait_for_vm_pingable_from_natbox(vm_to_resize)
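The disk-accounting rule implemented by get_expt_disk_increase above can be exercised standalone. This is a sketch of the same logic with illustrative names (flavor tuples are (root_GiB, ephemeral_GiB, swap_MiB), matching the parametrization above):

```python
def expected_disk_increase(origin, dest, boot_source, storage_backing):
    # origin/dest: (root_gib, ephemeral_gib, swap_mib)
    root = dest[0] - origin[0]
    eph = dest[1] - origin[1]
    swap = (dest[2] - origin[2]) / 1024  # swap is specified in MiB

    if storage_backing == 'remote':
        return 0, True            # remote backing: local disk usage unchanged
    if boot_source == 'volume':
        return eph + swap, False  # root disk lives on the volume, not the host
    increase = root + eph + swap
    return increase, increase >= 2  # only verify when the change is noticeable


# e.g. boot-from-image on local_image backing, (4,0,0) -> (5,1,512)
print(expected_disk_increase((4, 0, 0), (5, 1, 512), 'image', 'local_image'))
# (2.5, True)
```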

@@ -0,0 +1,105 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import mark, param

from consts.stx import FlavorSpec, ImageMetadata, VMStatus
from keywords import nova_helper, vm_helper, glance_helper
from utils.tis_log import LOG


# Note: auto recovery metadata in image will not be passed to vm if vm is
# booted from volume


@mark.parametrize(('cpu_policy', 'flavor_auto_recovery', 'image_auto_recovery',
                   'disk_format', 'container_format', 'expt_result'), [
    param(None, None, None, 'raw', 'bare', True, marks=mark.p1),
    param(None, 'false', 'true', 'qcow2', 'bare', False, marks=mark.p3),
    param(None, 'true', 'false', 'raw', 'bare', True, marks=mark.p3),
    param('dedicated', 'false', None, 'raw', 'bare', False, marks=mark.p3),
    param('dedicated', None, 'false', 'qcow2', 'bare', False,
          marks=mark.domain_sanity),
    param('shared', None, 'true', 'raw', 'bare', True, marks=mark.p3),
    param('shared', 'false', None, 'raw', 'bare', False, marks=mark.p3),
])
def test_vm_autorecovery(cpu_policy, flavor_auto_recovery, image_auto_recovery,
                         disk_format, container_format, expt_result):
    """
    Test auto recovery setting in vm with various auto recovery settings in
    flavor and image.

    Args:
        cpu_policy (str|None): cpu policy to set in flavor
        flavor_auto_recovery (str|None): None (unset) or true or false
        image_auto_recovery (str|None): None (unset) or true or false
        disk_format (str):
        container_format (str):
        expt_result (bool): Expected vm auto recovery behavior.
            False: disabled, True: enabled.

    Test Steps:
        - Create a flavor with auto recovery and cpu policy set to given
          values in extra spec
        - Create an image with auto recovery set to given value in metadata
        - Boot a vm with the flavor and from the image
        - Set vm state to error via nova reset-state
        - Verify vm auto recovery behavior is as expected

    Teardown:
        - Delete created vm, volume, image, flavor

    """

    LOG.tc_step("Create a flavor with cpu_policy set to {} and auto_recovery "
                "set to {} in extra spec".format(cpu_policy,
                                                 flavor_auto_recovery))
    flavor_id = nova_helper.create_flavor(
        name='auto_recover_' + str(flavor_auto_recovery),
        cleanup='function')[1]

    # Add extra specs as specified
    extra_specs = {}
    if cpu_policy is not None:
        extra_specs[FlavorSpec.CPU_POLICY] = cpu_policy
    if flavor_auto_recovery is not None:
        extra_specs[FlavorSpec.AUTO_RECOVERY] = flavor_auto_recovery

    if extra_specs:
        nova_helper.set_flavor(flavor=flavor_id, **extra_specs)

    property_key = ImageMetadata.AUTO_RECOVERY
    LOG.tc_step("Create an image with property auto_recovery={}, "
                "disk_format={}, container_format={}".
                format(image_auto_recovery, disk_format, container_format))
    if image_auto_recovery is None:
        image_id = glance_helper.create_image(disk_format=disk_format,
                                              container_format=container_format,
                                              cleanup='function')[1]
    else:
        image_id = glance_helper.create_image(
            disk_format=disk_format, container_format=container_format,
            cleanup='function', **{property_key: image_auto_recovery})[1]

    LOG.tc_step("Boot a vm from image with auto recovery - {} and "
                "using the flavor with auto recovery - "
                "{}".format(image_auto_recovery, flavor_auto_recovery))
    vm_id = vm_helper.boot_vm(name='auto_recov', flavor=flavor_id,
                              source='image', source_id=image_id,
                              cleanup='function')[1]
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)

    LOG.tc_step("Verify vm auto recovery is {} by setting vm to error "
                "state.".format(expt_result))
    vm_helper.set_vm_state(vm_id=vm_id, error_state=True, fail_ok=False)
    res_bool, actual_val = vm_helper.wait_for_vm_values(
        vm_id=vm_id, status=VMStatus.ACTIVE, fail_ok=True, timeout=600)

    assert expt_result == res_bool, "Expected auto_recovery: {}. Actual vm " \
                                    "status: {}".format(expt_result, actual_val)

    LOG.tc_step("Ensure vm is pingable after auto recovery")
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id)
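The image property above is passed via dict expansion because the property name is held in a constant (ImageMetadata.AUTO_RECOVERY) rather than written literally. A minimal illustration of the pattern; the stub function and the literal key name below are illustrative only:

```python
def create_image_stub(**properties):
    # Illustrative stand-in for glance_helper.create_image:
    # simply echoes back the keyword properties it received.
    return properties


# The real key comes from a constants module; this literal is a placeholder.
property_key = 'sw_wrs_auto_recovery'
props = create_image_stub(**{property_key: 'true'})
print(props)  # {'sw_wrs_auto_recovery': 'true'}
```

This keeps the metadata key defined in one place, so tests do not hard-code the property name.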

@@ -0,0 +1,412 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import mark, fixture, skip

from consts.auth import HostLinuxUser
from consts.stx import EventLogID
from keywords import system_helper, common
from utils.clients.ssh import ControllerClient
from utils.tis_log import LOG

files_to_delete = []


@fixture(scope='module', autouse=True)
def ima_precheck():
    """
    This checks whether the system has IMA enabled. If not, all
    IMA-related tests are skipped.
    """

    LOG.info("Checking if IMA is enabled")
    con_ssh = ControllerClient.get_active_controller()

    exitcode, output = con_ssh.exec_cmd("cat /proc/cmdline")
    if "extended" not in output:
        skip("IMA must be enabled in order to run this test")
    else:
        LOG.info("IMA is enabled")


@fixture(autouse=True)
def delete_files(request):
    global files_to_delete
    files_to_delete = []

    def teardown():
        """
        Delete any created files on teardown.
        """
        for filename in files_to_delete:
            delete_file(filename)

    request.addfinalizer(teardown)


def checksum_compare(source_file, dest_file):
    """
    This does a checksum comparison of two files. It returns True if the
    checksums match, and False if they don't.
    """

    con_ssh = ControllerClient.get_active_controller()

    LOG.info("Compare checksums on source file and destination file")
    cmd = "getfattr -m . -d {}"

    exitcode, source_sha = con_ssh.exec_cmd(cmd.format(source_file))
    LOG.info("Raw source file checksum is: {}".format(source_sha))
    source_sha2 = source_sha.split("\n")
    LOG.info("This is source_sha2: {}".format(source_sha2))
    assert source_sha2 != [''], "No signature on source file"

    if source_file.startswith("/"):
        source_sha = source_sha2[2] + " " + source_sha2[3]
    else:
        source_sha = source_sha2[1] + " " + source_sha2[2]

    LOG.info("Extracted source file checksum: {}".format(source_sha))

    exitcode, dest_sha = con_ssh.exec_cmd(cmd.format(dest_file))
    LOG.info("Raw symlink checksum is: {}".format(dest_sha))
    dest_sha2 = dest_sha.split("\n")

    if dest_file.startswith("/"):
        dest_sha = dest_sha2[2] + " " + dest_sha2[3]
    else:
        dest_sha = dest_sha2[1] + " " + dest_sha2[2]

    LOG.info("Extracted destination file checksum: {}".format(dest_sha))

    return source_sha == dest_sha
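The index arithmetic in checksum_compare assumes a fixed getfattr output layout: a `# file:` header line followed by one line per extended attribute, with absolute paths shifting everything down by one extra notice line. A standalone sketch of that parsing on a sample (the sample text and attribute values are illustrative):

```python
def extract_sig_lines(getfattr_output, absolute_path):
    # getfattr -m . -d prints "# file: <name>" and then one line per
    # attribute. For absolute paths an extra notice line appears first,
    # which is why the offsets differ by one in checksum_compare above.
    lines = getfattr_output.split("\n")
    start = 2 if absolute_path else 1
    return lines[start] + " " + lines[start + 1]


# Illustrative output shape only; real signature values will differ.
sample = ("# file: etc/motd\n"
          "security.ima=0sAbCd\n"
          "security.selinux=\"etc_t\"")
print(extract_sig_lines(sample, absolute_path=False))
# security.ima=0sAbCd security.selinux="etc_t"
```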
|
||||
|
||||
def create_symlink(source_file, dest_file, sudo=True):
    """
    This creates a symlink given a source filename and a destination
    filename.
    """
    LOG.info("Creating symlink to {} called {}".format(source_file,
                                                       dest_file))
    cmd = "ln -sf {} {}".format(source_file, dest_file)
    _exec_cmd(cmd=cmd, sudo=sudo, fail_ok=False)


def delete_file(filename, sudo=True):
    """
    This deletes a file.
    """
    LOG.info("Deleting file {}".format(filename))
    cmd = "rm {}".format(filename)
    _exec_cmd(cmd=cmd, sudo=sudo, fail_ok=False)


def chmod_file(filename, permissions, sudo=True):
    """
    This modifies the permissions of a file.
    """
    LOG.info("Changing file permissions for {}".format(filename))
    cmd = "chmod {} {}".format(permissions, filename)
    _exec_cmd(cmd=cmd, sudo=sudo, fail_ok=False)


def chgrp_file(filename, group, sudo=True):
    """
    This modifies the group ownership of a file.
    """
    LOG.info("Changing group ownership of {}".format(filename))
    cmd = "chgrp {} {}".format(group, filename)
    _exec_cmd(cmd=cmd, sudo=sudo, fail_ok=False)


def chown_file(filename, file_owner, sudo=True):
    """
    This modifies the user that owns the file.
    """
    LOG.info("Changing the user that owns {}".format(filename))
    cmd = "chown {} {}".format(file_owner, filename)
    _exec_cmd(cmd=cmd, sudo=sudo, fail_ok=False)


def copy_file(source_file, dest_file, sudo=True, preserve=True, cleanup=None):
    """
    This creates a copy of a file.

    Args:
        source_file:
        dest_file:
        sudo (bool): whether to copy with sudo
        preserve (bool): whether to preserve attributes of source file
        cleanup (None|str): source or dest. Add source or dest file to
            files to delete list

    Returns:

    """
    LOG.info("Copy file {} preserve attributes".format('and' if preserve
                                                       else 'without'))
    preserve_str = '--preserve=all ' if preserve else ''
    cmd = "cp {} {}{}".format(source_file, preserve_str, dest_file)
    _exec_cmd(cmd, sudo=sudo, fail_ok=False)

    if cleanup:
        file_path = source_file if cleanup == 'source' else dest_file
        files_to_delete.append(file_path)


def move_file(source_file, dest_file, sudo=True):
    """
    This moves a file from source to destination.
    """
    LOG.info("Moving file {} to {}".format(source_file, dest_file))
    cmd = "mv {} {}".format(source_file, dest_file)
    _exec_cmd(cmd=cmd, sudo=sudo, fail_ok=False)


def create_and_execute(file_path, sudo=True):
    LOG.tc_step("Create a new {} file and execute it".format(
        'root' if sudo else 'non-root'))
    cmd = "touch {}".format(file_path)
    _exec_cmd(cmd=cmd, sudo=sudo, fail_ok=False)
    files_to_delete.append(file_path)

    LOG.info("Set file to be executable")
    chmod_file(file_path, "755", sudo=sudo)

    LOG.info("Append a command to the created file")
    cmd = 'echo "ls" | {}tee -a {}'.format('sudo -S ' if sudo else '',
                                           file_path)
    _exec_cmd(cmd=cmd, sudo=False, fail_ok=False)

    LOG.info("Execute created file")
    _exec_cmd(file_path, sudo=sudo, fail_ok=False)


@mark.priorities('nightly', 'sx_nightly')
@mark.parametrize(('operation', 'file_path'), [
    ('create_symlink', '/usr/sbin/ntpq'),
    ('copy_and_execute', '/usr/sbin/ntpq'),
    ('change_file_attributes', '/usr/sbin/ntpq'),
    ('create_and_execute', 'new_nonroot_file')
])
def test_ima_no_event(operation, file_path):
    """
    This test validates that the following scenarios do not generate an IMA
    event:
    - create a symlink of a monitored file
    - copy a root file with the proper IMA signature, then execute it
    - make file attribute changes, including: chgrp, chown, chmod
    - create and execute a file as sysadmin

    Test Steps:
    - Perform specified operation on given file
    - Confirm IMA violation event is not triggered

    Teardown:
    - Delete created test file

    Maps to TC_17684/TC_17644/TC_17640/TC_17902 from US105523
    This test also covers TC_17665/T_16397 from US105523 (FM Event Log
    Updates)

    """
    global files_to_delete
    start_time = common.get_date_in_format()
    source_file = file_path
    con_ssh = ControllerClient.get_active_controller()

    LOG.tc_step("{} for {}".format(operation, source_file))
    if operation == 'create_symlink':
        dest_file = "my_symlink"
        create_symlink(source_file, dest_file)
        files_to_delete.append(dest_file)

        checksum_match = checksum_compare(source_file, dest_file)
        assert checksum_match, "SHA256 checksum should match source file " \
                               "and the symlink but didn't"

    elif operation == 'copy_and_execute':
        dest_file = "/usr/sbin/TEMP"
        copy_file(source_file, dest_file)
        files_to_delete.append(dest_file)

        LOG.info("Execute the copied file")
        con_ssh.exec_sudo_cmd("{} -p".format(dest_file))

    elif operation == 'change_file_attributes':
        if HostLinuxUser.get_home() != 'sysadmin':
            skip('sysadmin user is required to run this test')
        dest_file = "/usr/sbin/TEMP"
        copy_file(source_file, dest_file)
        files_to_delete.append(dest_file)

        LOG.info("Change permission of copy")
        chmod_file(dest_file, "777")
        LOG.info("Changing group ownership of file")
        chgrp_file(dest_file, "sys_protected")
        LOG.info("Changing file ownership")
        chown_file(dest_file, "sysadmin:sys_protected")

    elif operation == 'create_and_execute':
        dest_file = "{}/TEMP".format(HostLinuxUser.get_home())
        create_and_execute(file_path=dest_file, sudo=False)

    LOG.tc_step("Ensure no IMA events are raised")
    events_found = system_helper.wait_for_events(start=start_time,
                                                 timeout=60, num=10,
                                                 event_log_id=EventLogID.IMA,
                                                 fail_ok=True, strict=False)

    assert not events_found, "Unexpected IMA events found"


def _exec_cmd(cmd, con_ssh=None, sudo=False, fail_ok=True):
    if not con_ssh:
        con_ssh = ControllerClient.get_active_controller()

    if sudo:
        return con_ssh.exec_sudo_cmd(cmd, fail_ok=fail_ok)
    else:
        return con_ssh.exec_cmd(cmd, fail_ok=fail_ok)


@mark.priorities('nightly', 'sx_nightly')
@mark.parametrize(('operation', 'file_path'), [
    ('edit_and_execute', '/usr/sbin/ntpq'),
    ('append_and_execute', '/usr/sbin/logrotate'),
    ('replace_library', '/lib64/libcrypt.so.1'),
    ('create_and_execute', 'new_root_file')
])
def test_ima_event_generation(operation, file_path):
    """
    The following IMA violation scenarios are covered:
    - append data to/edit a monitored file, resulting in a change to its
      hash
    - dynamic library changes
    - create and execute a file as sysadmin

    Test Steps:
    - Perform specified file operations
    - Check IMA violation event is logged

    """
    global files_to_delete

    con_ssh = ControllerClient.get_active_controller()
    start_time = common.get_date_in_format()

    source_file = file_path
    backup_file = None

    if operation in ('edit_and_execute', 'append_and_execute'):
        dest_file = "/usr/sbin/TEMP"
        copy_file(source_file, dest_file, cleanup='dest')

        if operation == 'edit_and_execute':
            LOG.tc_step("Open copy of monitored file and save")
            cmd = "vim {} '+:wq!'".format(dest_file)
            con_ssh.exec_sudo_cmd(cmd, fail_ok=False)
            execute_cmd = "{} -p".format(dest_file)
        else:
            LOG.tc_step("Append to copy of monitored file")
            cmd = 'echo "output" | sudo -S tee -a {}'.format(dest_file)
            con_ssh.exec_cmd(cmd, fail_ok=False)
            execute_cmd = "{}".format(dest_file)

        LOG.tc_step("Execute modified file")
        con_ssh.exec_sudo_cmd(execute_cmd)

    elif operation == 'replace_library':
        backup_file = "/root/{}".format(source_file.split('/')[-1])
        dest_file_nocsum = "/root/TEMP"

        LOG.info("Backup source file {} to {}".format(source_file,
                                                      backup_file))
        copy_file(source_file, backup_file)
        LOG.info("Copy the library without the checksum")
        copy_file(source_file, dest_file_nocsum, preserve=False)
        LOG.info("Replace the library with the unsigned one")
        move_file(dest_file_nocsum, source_file)

    elif operation == 'create_and_execute':
        dest_file = "{}/TEMP".format(HostLinuxUser.get_home())
        create_and_execute(file_path=dest_file, sudo=True)

    LOG.tc_step("Check for IMA event")
    ima_events = system_helper.wait_for_events(start=start_time,
                                               timeout=60, num=10,
                                               event_log_id=EventLogID.IMA,
                                               state='log', severity='major',
                                               fail_ok=True, strict=False)

    if backup_file:
        LOG.info("Restore backup file {} to {}".format(backup_file,
                                                       source_file))
        move_file(backup_file, source_file)

    assert ima_events, "IMA event is not generated after {} on " \
                       "{}".format(operation, file_path)


# CHECK TEST PROCEDURE - FAILS in the middle


@mark.priorities('nightly', 'sx_nightly')
def test_ima_keyring_protection():
    """
    This test validates that the IMA keyring is safe from user space attacks.

    Test Steps:
    - Extract the IMA key ID and save it
    - Attempt to add new keys to the keyring
    - Attempt to change the key timeout
    - Attempt to change the group and ownership of the key
    - Attempt to delete the key

    This test maps to TC_17667/T_16387 from US105523 (IMA keyring is safe
    from user space attacks)

    """
    con_ssh = ControllerClient.get_active_controller()

    LOG.info("Extract ima key ID")
    exitcode, msg = con_ssh.exec_sudo_cmd("cat /proc/keys | grep _ima")
    raw_key_id = msg.split(" ", maxsplit=1)[0]
    key_id = "0x{}".format(raw_key_id)
    LOG.info("Extracted key is: {}".format(key_id))

    LOG.info("Attempting to add new keys to keyring")
    exitcode, msg = con_ssh.exec_sudo_cmd("keyctl add keyring TEST stuff "
                                          "{}".format(key_id))
    assert exitcode != 0, \
        "Key addition should have failed but instead succeeded"

    LOG.info("Attempt to change the timeout on a key")
    exitcode, msg = con_ssh.exec_sudo_cmd("keyctl timeout {} "
                                          "3600".format(key_id))
    assert exitcode != 0, \
        "Key timeout modification should be rejected but instead succeeded"

    LOG.info("Attempt to change the group of a key")
    exitcode, msg = con_ssh.exec_sudo_cmd("keyctl chgrp {} 0".format(key_id))
    assert exitcode != 0, \
        "Key group modification should be rejected but instead succeeded"

    LOG.info("Attempt to change the ownership of a key")
    exitcode, msg = con_ssh.exec_sudo_cmd("keyctl chown {} 1875".format(
        key_id))
    assert exitcode != 0, \
        "Key ownership modification should be rejected but instead succeeded"

    LOG.info("Attempt to delete a key")
    exitcode, msg = con_ssh.exec_sudo_cmd("keyctl clear {}".format(key_id))
    assert exitcode != 0, \
        "Key deletion should be rejected but instead succeeded"

@@ -0,0 +1,71 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import re
from pytest import mark

from keywords import system_helper, host_helper
from utils.tis_log import LOG


@mark.nightly
def test_kernel_module_signatures():
    """
    Test kernel modules are properly signed on all stx hosts.

    Steps on each host:
        - 'cat /proc/sys/kernel/tainted', ensure value is 4096.
          If not, do following steps:
            - 'grep --color=never -i "module verification failed"
              /var/log/kern.log' to find out failed modules
            - 'modinfo <failed_module> | grep --color=never -E
              "sig|filename"' to display signing info for each module

    """
    hosts = system_helper.get_hosts()
    failed_hosts = {}

    for host in hosts:
        with host_helper.ssh_to_host(host) as host_ssh:
            LOG.tc_step(
                "Check for unsigned kernel modules on {}".format(host))
            output = host_ssh.exec_cmd('cat /proc/sys/kernel/tainted',
                                       fail_ok=False)[1]
            output_binary = '{0:b}'.format(int(output))
            unsigned_module_bit = '0'
            # 14th bit from the right flags an unsigned module
            if len(output_binary) >= 14:
                unsigned_module_bit = output_binary[-14]
            if unsigned_module_bit != '0':
                LOG.error("Kernel module verification(s) failed on {}. "
                          "Collecting more info".format(host))

                LOG.tc_step(
                    "Check kern.log for modules with failed verification")
                failed_modules = []
                err_out = host_ssh.exec_cmd(
                    'grep --color=never -i "module verification failed" '
                    '/var/log/kern.log')[1]
                for line in err_out.splitlines():
                    module = re.findall(
                        r'\] (.*): module verification failed',
                        line)[0].strip()
                    if module not in failed_modules:
                        failed_modules.append(module)

                failed_hosts[host] = failed_modules
                LOG.tc_step("Display signing info for {} failed kernel "
                            "modules: {}".format(host, failed_modules))
                for module in failed_modules:
                    host_ssh.exec_cmd('modinfo {} | grep --color=never -E '
                                      '"sig|filename"'.format(module))

    assert not failed_hosts, "Kernel module signature verification " \
                             "failed on: {}".format(failed_hosts)
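The bit-string indexing in the test above selects the coefficient of 2**13 (value 8192), i.e. the 14th bit from the right of the tainted value. The same check can be written with a bitmask; this is a minimal sketch with an illustrative helper name, not a function from the suite:

```python
TAINT_UNSIGNED_MODULE = 1 << 13  # value 8192; 14th bit from the right


def unsigned_module_taint_set(tainted):
    """Return True if the unsigned-module taint flag is set in the integer
    read from /proc/sys/kernel/tainted. Illustrative helper only."""
    return bool(tainted & TAINT_UNSIGNED_MODULE)
```

For example, a tainted value of 4096 (only the lower taint flag set) leaves the unsigned-module bit clear, while 8192 sets it.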

@@ -0,0 +1,115 @@
"""
This file contains CEPH-related storage test cases.
"""

import time

from pytest import mark, param

from consts.stx import EventLogID
from keywords import host_helper, system_helper, storage_helper
from utils.tis_log import LOG

PROC_RESTART_TIME = 30  # number of seconds between process restarts


# Tested on PV1. Runtime: 278.40 Date: Aug 2nd, 2017. Status: Pass


@mark.parametrize('monitor', [
    param('controller-0', marks=mark.nightly),
    'controller-1',
    'storage-0'])
# Tested on PV0. Runtime: 222.34 seconds. Date: Aug 4, 2017 Status: Pass
@mark.usefixtures('ceph_precheck')
def test_ceph_mon_process_kill(monitor):
    """
    us69932_tc2_ceph_mon_process_kill from us69932_ceph_monitoring.odt

    Verify that ceph mon processes recover when they are killed.

    Args:
        - Nothing

    Setup:
        - Requires system with storage nodes

    Test Steps:
        1. Run CEPH pre-check fixture to check:
            - system has storage nodes
            - health of the ceph cluster is okay
            - that we have OSDs provisioned
        2. Pick one ceph monitor and remove it from the quorum
        3. Kill the monitor process
        4. Check that the appropriate alarms are raised
        5. Restore the monitor to the quorum
        6. Check that the alarms clear
        7. Ensure the ceph monitor is restarted under a different pid

    Potential flaws:
        1. We're not checking if unexpected alarms are raised (TODO)

    Teardown:
        - None

    """
    LOG.tc_step('Get process ID of ceph monitor')
    mon_pid = storage_helper.get_mon_pid(monitor)

    with host_helper.ssh_to_host(monitor) as host_ssh:
        with host_ssh.login_as_root() as root_ssh:
            LOG.tc_step('Remove the monitor')
            cmd = 'ceph mon remove {}'.format(monitor)
            root_ssh.exec_cmd(cmd)

            LOG.tc_step('Stop the ceph monitor')
            cmd = 'service ceph stop mon.{}'.format(monitor)
            root_ssh.exec_cmd(cmd)

    LOG.tc_step('Check that ceph monitor failure alarm is raised')
    system_helper.wait_for_alarm(alarm_id=EventLogID.STORAGE_DEGRADE,
                                 timeout=300)

    with host_helper.ssh_to_host(monitor) as host_ssh:
        with host_ssh.login_as_root() as root_ssh:
            LOG.tc_step('Get cluster fsid')
            cmd = 'ceph fsid'
            fsid = host_ssh.exec_cmd(cmd)[1]
            ceph_conf = '/etc/ceph/ceph.conf'

            LOG.tc_step('Remove old ceph monitor directory')
            cmd = 'rm -rf /var/lib/ceph/mon/ceph-{}'.format(monitor)
            root_ssh.exec_cmd(cmd)

            LOG.tc_step('Re-add the monitor')
            cmd = 'ceph-mon -i {} -c {} --mkfs --fsid {}'.format(
                monitor, ceph_conf, fsid)
            root_ssh.exec_cmd(cmd)

    LOG.tc_step('Check the ceph storage alarm condition clears')
    system_helper.wait_for_alarm_gone(alarm_id=EventLogID.STORAGE_DEGRADE,
                                      timeout=360)

    LOG.tc_step('Check the ceph-mon process is restarted with a different '
                'pid')
    mon_pid2 = None
    for i in range(0, PROC_RESTART_TIME):
        mon_pid2 = storage_helper.get_mon_pid(monitor, fail_ok=True)
        if mon_pid2 and mon_pid2 != mon_pid:
            break
        time.sleep(5)

    LOG.info('Old pid is {} and new pid is {}'.format(mon_pid, mon_pid2))
    msg = 'Process did not restart in time'
    assert mon_pid2 and mon_pid2 != mon_pid, msg


# Tested on PV0. Runtime: 1899.93 seconds. Date: Aug 4, 2017. Status: Pass


# Tested on PV0. Runtime: 2770.23 seconds. Date: Aug 4, 2017. Status: Pass


# Tested on PV1. Runtime: 762.41 secs Date: Aug 2nd, 2017. Status: Pass


# Tested on PV1. Runtime: 1212.55 secs Date: Aug 2nd, 2017. Status: Pass


# Tested on PV0. Runtime: 58.82 seconds. Status: Pass Date: Aug 8, 2017
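The pid-polling loop at the end of test_ceph_mon_process_kill is a general wait-for-change pattern. A standalone sketch of the same idea, with an illustrative helper name that is not part of the suite:

```python
import time


def wait_for_new_pid(get_pid, old_pid, attempts=30, interval=5):
    """Poll get_pid() until it returns a truthy pid different from old_pid.

    Returns the new pid, or None if the process did not restart within
    attempts * interval seconds. Illustrative helper only.
    """
    for _ in range(attempts):
        new_pid = get_pid()
        if new_pid and new_pid != old_pid:
            return new_pid
        time.sleep(interval)
    return None
```

Returning None instead of raising lets the caller phrase the failure as a test assertion, matching how the test above asserts on mon_pid2.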

@@ -0,0 +1,3 @@
from testfixtures.resource_mgmt import *
from testfixtures.resource_create import *
from testfixtures.config_host import *
521
automated-pytest-suite/testcases/functional/storage/test_storage_vm_migration.py
Executable file

@@ -0,0 +1,521 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import time

from pytest import fixture, skip, mark

from consts.stx import VMStatus, GuestImages
from keywords import host_helper, vm_helper, cinder_helper, glance_helper, \
    system_helper, network_helper
from testfixtures.fixture_resources import ResourceCleanup
from utils import table_parser, exceptions
from utils.tis_log import LOG


@fixture(scope='module', autouse=True)
def check_system():
    if not cinder_helper.is_volumes_pool_sufficient(min_size=80):
        skip("Cinder volume pool size is smaller than 80G")

    if len(host_helper.get_up_hypervisors()) < 2:
        skip("at least two computes are required")

    if len(host_helper.get_storage_backing_with_max_hosts()[1]) < 2:
        skip("at least two hosts with the same storage backing are required")


@fixture(scope='function', autouse=True)
def pre_alarm_():
    """
    Test fixture to get the pre-test existing alarm list.

    Returns: list of alarms

    """
    pre_alarms = system_helper.get_alarms_table()
    pre_list = table_parser.get_all_rows(pre_alarms)
    # Time stamps are removed before comparing alarms with post-test alarms.
    # The time stamp is the last item in each alarm row.
    for n in pre_list:
        n.pop()
    return pre_list


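The fixture above strips the trailing timestamp column in place with pop(). A non-mutating sketch of the same comparison-prep step, with an illustrative helper name that is not part of the suite:

```python
def strip_timestamps(alarm_rows):
    """Return alarm rows without their trailing timestamp column, so a
    pre-test alarm list can be compared against a post-test one.

    Illustrative helper only; returns new lists instead of mutating the
    input rows.
    """
    return [row[:-1] for row in alarm_rows]
```

Keeping the original rows intact is useful if the raw table (timestamps included) is also needed for logging.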
@fixture(scope='module')
def image_():
    """
    Test fixture to get the guest image.

    Returns: the guest image id

    """
    return glance_helper.get_image_id_from_name()


@fixture(scope='function')
def volumes_(image_):
    """
    Test fixture to create two large cinder volumes with sizes of 20 and
    40 GB.

    Args:
        image_: the guest image id

    Returns: list of volume dicts as follows:
        {'id': <volume_id>,
         'display_name': <vol_inst1 or vol_inst2>,
         'size': <20 or 40>
        }
    """
    volumes = []
    cinder_params = [{'name': 'vol_inst1',
                      'size': 20},
                     {'name': 'vol_inst2',
                      'size': 40}]

    for param in cinder_params:
        volume_id = cinder_helper.create_volume(name=param['name'],
                                                source_id=image_,
                                                size=param['size'])[1]
        volume = {
            'id': volume_id,
            'display_name': param['name'],
            'size': param['size']
        }
        volumes.append(volume)
        ResourceCleanup.add('volume', volume['id'], scope='function')

    return volumes


@fixture(scope='function')
def vms_(volumes_):
    """
    Test fixture to boot two vms from the volumes created by the volumes_
    fixture.

    Args:
        volumes_: list of two large volume dicts created by volumes_ fixture

    Returns: list of vm dicts as follows:
        {'id': <vm_id>,
         'display_name': <test_inst1 or test_inst2>
        }
    """
    vms = []
    vm_names = ['test_inst1', 'test_inst2']
    index = 0
    for vol_params in volumes_:
        instance_name = vm_names[index]
        # user_data=get_user_data_file() could also be passed here if needed
        vm_id = vm_helper.boot_vm(name=instance_name, source='volume',
                                  source_id=vol_params['id'],
                                  cleanup='function')[1]
        vm = {
            'id': vm_id,
            'display_name': instance_name,
        }
        vms.append(vm)
        index += 1
    return vms


@mark.storage_sanity
def test_vm_with_a_large_volume_live_migrate(vms_, pre_alarm_):
    """
    Test instantiating vms with large volumes (20 GB and 40 GB) and live
    migrating them:

    Args:
        vms_ (list): vms created by vms_ fixture
        pre_alarm_ (list): alarm list obtained by pre_alarm_ fixture

    Test Setups:
        - get tenant1 and management networks which are already created for
          lab setup
        - get or create a "small" flavor
        - get the guest image id
        - create two large volumes (20 GB and 40 GB) in cinder
        - boot two vms (test_inst1, test_inst2) using the 20 GB and 40 GB
          volumes respectively

    Test Steps:
        - Verify VM status is ACTIVE
        - Validate that VMs boot, and that no timeouts or error status occur.
        - Verify the VM can be pinged from NATBOX
        - Verify login to VM and rootfs (dev/vda) filesystem is rw mode
        - Attempt to live migrate the VMs
        - Validate that the VMs migrated and no errors or alarms are present
        - Log into both VMs and validate that file systems are read-write
        - Terminate VMs

    Skip conditions:
        - less than two computes
        - no storage node

    """
    for vm in vms_:
        vm_id = vm['id']

        LOG.tc_step(
            "Checking VM status; VM Instance id is: {}......".format(vm_id))
        vm_state = vm_helper.get_vm_status(vm_id)

        assert vm_state == VMStatus.ACTIVE, \
            'VM {} state is {}; Not in ACTIVE state as expected'.format(
                vm_id, vm_state)

        LOG.tc_step("Verify VM can be pinged from NAT box...")
        rc, boot_time = check_vm_boot_time(vm_id)
        assert rc, "VM is not pingable after {} seconds".format(boot_time)

        LOG.tc_step("Verify login to VM and check filesystem is rw mode....")
        assert is_vm_filesystem_rw(vm_id), \
            'rootfs filesystem is not RW as expected for VM {}'.format(
                vm['display_name'])

        LOG.tc_step(
            "Attempting live migration; vm id = {}; vm_name = {} ....".format(
                vm_id, vm['display_name']))

        code, msg = vm_helper.live_migrate_vm(vm_id=vm_id, fail_ok=False)
        LOG.tc_step("Verify live migration succeeded...")
        assert code == 0, "Expected return code 0. Actual return code: {}; " \
                          "details: {}".format(code, msg)

        LOG.tc_step(
            "Verifying filesystem is rw mode after live migration....")
        assert is_vm_filesystem_rw(vm_id), \
            'After live migration rootfs filesystem is not RW as expected ' \
            'for VM {}'.format(vm['display_name'])


@mark.domain_sanity
def test_vm_with_large_volume_and_evacuation(vms_, pre_alarm_):
    """
    Test instantiating vms with large volumes (20 GB and 40 GB) and
    evacuating them:

    Args:
        vms_ (list): vms created by vms_ fixture
        pre_alarm_ (list): alarm list obtained by pre_alarm_ fixture

    Test Setups:
        - get tenant1 and management networks which are already created for
          lab setup
        - get or create a "small" flavor
        - get the guest image id
        - create two large volumes (20 GB and 40 GB) in cinder
        - boot two vms (test_inst1, test_inst2) using the 20 GB and 40 GB
          volumes respectively

    Test Steps:
        - Verify VM status is ACTIVE
        - Validate that VMs boot, and that no timeouts or error status occur.
        - Verify the VM can be pinged from NATBOX
        - Verify login to VM and rootfs (dev/vda) filesystem is rw mode
        - live migrate, if required, to bring both VMs to the same compute
        - Validate migrated VM and no errors or alarms are present
        - Reboot compute host to initiate evacuation
        - Verify VMs are evacuated
        - Check for any system alarms
        - Verify login to VM and rootfs (dev/vda) filesystem is still rw
          mode after evacuation
        - Terminate VMs

    Skip conditions:
        - less than two computes
        - no storage node

    """
    vm_ids = []
    for vm in vms_:
        vm_id = vm['id']
        vm_ids.append(vm_id)
        LOG.tc_step(
            "Checking VM status; VM Instance id is: {}......".format(vm_id))
        vm_state = vm_helper.get_vm_status(vm_id)
        assert vm_state == VMStatus.ACTIVE, \
            'VM {} state is {}; Not in ACTIVE state as expected'.format(
                vm_id, vm_state)

        LOG.tc_step("Verify VM can be pinged from NAT box...")
        rc, boot_time = check_vm_boot_time(vm_id)
        assert rc, "VM is not pingable after {} seconds".format(boot_time)

        LOG.tc_step("Verify login to VM and check filesystem is rw mode....")
        assert is_vm_filesystem_rw(vm_id), \
            'rootfs filesystem is not RW as expected for VM {}'.format(
                vm['display_name'])

    LOG.tc_step(
        "Checking if live migration is required to put the vms on a single "
        "compute....")
    host_0 = vm_helper.get_vm_host(vm_ids[0])
    host_1 = vm_helper.get_vm_host(vm_ids[1])

    if host_0 != host_1:
        LOG.tc_step("Attempting to live migrate vm {} to host {} ....".format(
            vms_[1]['display_name'], host_0))
        code, msg = vm_helper.live_migrate_vm(vm_ids[1],
                                              destination_host=host_0)
        LOG.tc_step("Verify live migration succeeded...")
        assert code == 0, "Live migration of vm {} to host {} did not " \
                          "succeed".format(vms_[1]['display_name'], host_0)

    LOG.tc_step("Verify both VMs are on the same host....")
    assert host_0 == vm_helper.get_vm_host(vm_ids[1]), \
        "VMs are not on the same compute host"

    LOG.tc_step(
        "Rebooting compute {} to initiate vm evacuation .....".format(host_0))
    vm_helper.evacuate_vms(host=host_0, vms_to_check=vm_ids, ping_vms=True)

    LOG.tc_step("Login to VM and check filesystem is rw mode....")
    assert is_vm_filesystem_rw(vms_[0]['id']), \
        'After evacuation the rootfs filesystem is not RW as expected ' \
        'for VM {}'.format(vms_[0]['display_name'])

    LOG.tc_step("Login to VM and check filesystem is rw mode....")
    assert is_vm_filesystem_rw(vms_[1]['id']), \
        'After evacuation the rootfs filesystem is not RW as expected ' \
        'for VM {}'.format(vms_[1]['display_name'])


@mark.domain_sanity
def test_instantiate_a_vm_with_a_large_volume_and_cold_migrate(vms_,
                                                               pre_alarm_):
    """
    Test instantiating vms with large volumes (20 GB and 40 GB) and cold
    migrating them:

    Args:
        vms_ (list): vms created by vms_ fixture
        pre_alarm_ (list): alarm list obtained by pre_alarm_ fixture

    Test Setups:
        - get tenant1 and management networks which are already created for
          lab setup
        - get or create a "small" flavor
        - get the guest image id
        - create two large volumes (20 GB and 40 GB) in cinder
        - boot two vms (test_inst1, test_inst2) using the 20 GB and 40 GB
          volumes respectively

    Test Steps:
        - Verify VM status is ACTIVE
        - Validate that VMs boot, and that no timeouts or error status occur.
        - Verify the VM can be pinged from NATBOX
        - Verify login to VM and rootfs (dev/vda) filesystem is rw mode
        - Attempt to cold migrate the VMs
        - Validate that the VMs migrated and no errors or alarms are present
        - Log into both VMs and validate that file systems are read-write
        - Terminate VMs

    Skip conditions:
        - less than two hosts with the same storage backing
        - less than two computes
        - no storage node

    """
    LOG.tc_step("Instantiate a vm with a large volume.....")

    vms = vms_

    for vm in vms:
        vm_id = vm['id']

        LOG.tc_step(
            "Checking VM status; VM Instance id is: {}......".format(vm_id))
        vm_state = vm_helper.get_vm_status(vm_id)

        assert vm_state == VMStatus.ACTIVE, \
            'VM {} state is {}; Not in ACTIVE state as expected'.format(
                vm_id, vm_state)

        LOG.tc_step("Verify VM can be pinged from NAT box...")
        rc, boot_time = check_vm_boot_time(vm_id)
        assert rc, "VM is not pingable after {} seconds".format(boot_time)

        LOG.tc_step("Verify login to VM and check filesystem is rw mode....")
        assert is_vm_filesystem_rw(vm_id), \
            'rootfs filesystem is not RW as expected for VM {}'.format(
                vm['display_name'])

        LOG.tc_step(
            "Attempting cold migration; vm id = {}; vm_name = {} ....".format(
                vm_id, vm['display_name']))

        code, msg = vm_helper.cold_migrate_vm(vm_id=vm_id, fail_ok=True)
        LOG.tc_step("Verify cold migration succeeded...")
        assert code == 0, "Expected return code 0. Actual return code: {}; " \
                          "details: {}".format(code, msg)

        LOG.tc_step(
            "Verifying filesystem is rw mode after cold migration....")
        assert is_vm_filesystem_rw(vm_id), \
            'After cold migration rootfs filesystem is not RW as expected ' \
            'for VM {}'.format(vm['display_name'])

        # LOG.tc_step("Checking for any system alarm ....")
        # rc, new_alarm = is_new_alarm_raised(pre_alarms)
        # assert not rc, " alarm(s) found: {}".format(new_alarm)


def test_instantiate_a_vm_with_multiple_volumes_and_migrate():
    """
    Test a vm with multiple volumes for live migration, cold migration and
    evacuation:

    Test Setups:
        - get guest image_id
        - get or create 'small' flavor_id
        - get tenant and management network ids

    Test Steps:
        - create a bootable volume and another extra volume of size 8GB
        - boot a vm from the created volume
        - Validate that the VM boots, and that no timeouts or error status
            occur
        - Verify VM status is ACTIVE
        - Attach the second volume to the VM
        - Attempt to live migrate the VM
        - Log in to the VM and verify the filesystem is rw mode on both
            volumes
        - Attempt to cold migrate the VM
        - Log in to the VM and verify the filesystem is rw mode on both
            volumes
        - Reboot the compute host to initiate evacuation
        - Log in to the VM and verify the filesystem is rw mode on both
            volumes
        - Terminate VMs

    Skip conditions:
        - less than two computes
        - less than one storage node

    """
    # skip("Currently not working. Centos image doesn't see both volumes")
    LOG.tc_step("Creating a volume size=8GB.....")
    vol_id_0 = cinder_helper.create_volume(size=8)[1]
    ResourceCleanup.add('volume', vol_id_0, scope='function')

    LOG.tc_step("Creating a second volume size=8GB.....")
    vol_id_1 = cinder_helper.create_volume(size=8, bootable=False)[1]
    LOG.tc_step("Volume id is: {}".format(vol_id_1))
    ResourceCleanup.add('volume', vol_id_1, scope='function')

    LOG.tc_step("Booting instance vm_0...")

    vm_id = vm_helper.boot_vm(name='vm_0', source='volume', source_id=vol_id_0,
                              cleanup='function')[1]
    time.sleep(5)

    LOG.tc_step("Verify VM can be pinged from NAT box...")
    rc, boot_time = check_vm_boot_time(vm_id)
    assert rc, "VM is not pingable after {} seconds".format(boot_time)

    LOG.tc_step("Log in to VM and check filesystem is in rw mode....")
    assert is_vm_filesystem_rw(vm_id), \
        'vol_0 rootfs filesystem is not RW as expected.'

    LOG.tc_step("Attempting to attach a second volume to VM...")
    vm_helper.attach_vol_to_vm(vm_id, vol_id_1)

    LOG.tc_step(
        "Log in to VM and check filesystem is in rw mode for both volumes....")
    assert is_vm_filesystem_rw(vm_id, rootfs=['vda', 'vdb']), \
        'volumes rootfs filesystem is not RW as expected.'

    LOG.tc_step("Attempting to live migrate VM...")
    vm_helper.live_migrate_vm(vm_id=vm_id)

    LOG.tc_step(
        "Log in to VM and check filesystem is in rw mode after live "
        "migration....")
    assert is_vm_filesystem_rw(vm_id, rootfs=['vda', 'vdb']), \
        'After live migration rootfs filesystem is not RW'

    LOG.tc_step("Attempting to cold migrate VM...")
    vm_helper.cold_migrate_vm(vm_id)

    LOG.tc_step(
        "Log in to VM and check filesystem is in rw mode after cold "
        "migration....")
    assert is_vm_filesystem_rw(vm_id, rootfs=['vda', 'vdb']), \
        'After cold migration rootfs filesystem is not RW'

    LOG.tc_step("Testing VM evacuation.....")
    before_host_0 = vm_helper.get_vm_host(vm_id)

    LOG.tc_step("Rebooting compute {} to initiate vm evacuation .....".format(
        before_host_0))
    vm_helper.evacuate_vms(host=before_host_0, vms_to_check=vm_id,
                           ping_vms=True)

    LOG.tc_step(
        "Log in to VM and check filesystem is in rw mode after "
        "evacuation....")
    assert is_vm_filesystem_rw(vm_id, rootfs=['vda', 'vdb']), \
        'After evacuation filesystem is not RW'


def check_vm_boot_time(vm_id):
    start_time = time.time()
    output = vm_helper.wait_for_vm_pingable_from_natbox(vm_id, fail_ok=False)
    elapsed_time = time.time() - start_time
    return output, elapsed_time


def is_vm_filesystem_rw(vm_id, rootfs='vda', vm_image_name=None):
    """
    Check whether the given filesystem(s) are mounted rw in the VM.

    Args:
        vm_id (str): VM to check
        rootfs (str|list): device name(s) to check, e.g. 'vda' or
            ['vda', 'vdb']
        vm_image_name (None|str): guest image name; defaults to the default
            guest image

    Returns:
        bool: True if all given filesystems are mounted rw, else False

    """
    vm_helper.wait_for_vm_pingable_from_natbox(vm_id, timeout=240)

    if vm_image_name is None:
        vm_image_name = GuestImages.DEFAULT['guest']

    router_host = dhcp_host = None
    try:
        LOG.info(
            "---------Collecting router and dhcp agent host info-----------")
        router_host = network_helper.get_router_host()
        mgmt_net = network_helper.get_mgmt_net_id()
        dhcp_host = network_helper.get_network_agents(field='Host',
                                                      network=mgmt_net)

        with vm_helper.ssh_to_vm_from_natbox(vm_id,
                                             vm_image_name=vm_image_name,
                                             retry_timeout=300) as vm_ssh:
            if isinstance(rootfs, str):
                rootfs = [rootfs]
            for fs in rootfs:
                cmd = "mount | grep {} | grep rw | wc -l".format(fs)
                cmd_output = vm_ssh.exec_sudo_cmd(cmd)[1]
                if cmd_output != '1':
                    LOG.info("Filesystem /dev/{} is not rw for VM: "
                             "{}".format(fs, vm_id))
                    return False
            return True

    except exceptions.SSHRetryTimeout:
        LOG.error("Failed to ssh, collecting vm console log.")
        vm_helper.get_console_logs(vm_ids=vm_id)
        LOG.info("Router host: {}. dhcp agent host: {}".format(router_host,
                                                               dhcp_host))
        raise
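

# A standalone sketch (hypothetical helper, not part of this framework) of the
# rw check performed in is_vm_filesystem_rw() above: like the piped
# "mount | grep <dev> | grep rw | wc -l", it counts mount lines that mention
# the device and the rw flag (substring match, with the same caveats as grep).
def _count_rw_mounts(mount_output, dev):
    return len([line for line in mount_output.splitlines()
                if dev in line and 'rw' in line])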

@@ -0,0 +1,389 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import os
import re
import time

from pytest import fixture, mark, skip

from utils.tis_log import LOG
from utils.clients.ssh import ControllerClient
from utils.clients.local import LocalHostClient

from keywords import common, kube_helper, host_helper, system_helper, \
    container_helper, keystone_helper
from consts.filepaths import TestServerPath, StxPath
from consts.stx import HostAvailState, Container
from consts.proj_vars import ProjVar
from consts.auth import HostLinuxUser
from testfixtures.recover_hosts import HostsToRecover


POD_YAML = 'hellokitty.yaml'
POD_NAME = 'hellokitty'

HELM_TAR = 'hello-kitty.tgz'
HELM_APP_NAME = 'hello-kitty'
HELM_POD_FULL_NAME = 'hk-hello-kitty-hello-kit'
HELM_MSG = '<h3>Hello Kitty World!</h3>'


def controller_precheck(controller):
    host = system_helper.get_active_controller_name()
    if controller == 'standby':
        controllers = system_helper.get_controllers(
            availability=(HostAvailState.AVAILABLE, HostAvailState.DEGRADED,
                          HostAvailState.ONLINE))
        controllers.remove(host)
        if not controllers:
            skip('Standby controller does not exist or not in good state')
        host = controllers[0]

    return host


@fixture(scope='module')
def copy_test_apps():
    skip('Shared Test File Server is not ready')
    stx_home = HostLinuxUser.get_home()
    con_ssh = ControllerClient.get_active_controller()
    app_dir = os.path.join(stx_home, 'custom_apps/')
    if not con_ssh.file_exists(app_dir + POD_YAML):
        common.scp_from_test_server_to_active_controller(
            source_path=TestServerPath.CUSTOM_APPS, con_ssh=con_ssh,
            dest_dir=stx_home, timeout=60, is_dir=True)

    if not system_helper.is_aio_simplex():
        dest_host = 'controller-1' if con_ssh.get_hostname() == \
            'controller-0' else 'controller-0'
        con_ssh.rsync(source=app_dir, dest_server=dest_host, dest=app_dir,
                      timeout=60)

    return app_dir


@fixture()
def delete_test_pod():
    LOG.info("Delete {} pod if exists".format(POD_NAME))
    kube_helper.delete_resources(resource_names=POD_NAME, fail_ok=True)


@mark.platform_sanity
@mark.parametrize('controller', [
    'active',
    'standby'
])
def test_launch_pod_via_kubectl(copy_test_apps, delete_test_pod, controller):
    """
    Test custom pod apply and delete
    Args:
        copy_test_apps (str): module fixture
        delete_test_pod: fixture
        controller: test param

    Setups:
        - Copy test files from test server to stx system (module)
        - Delete test pod if already exists on system

    Test Steps:
        - ssh to given controller
        - kubectl apply custom pod yaml and verify custom pod is added to
            both controllers (if applicable)
        - kubectl delete custom pod and verify it is removed from both
            controllers (if applicable)

    """
    host = controller_precheck(controller)

    with host_helper.ssh_to_host(hostname=host) as con_ssh:
        app_path = os.path.join(copy_test_apps, POD_YAML)
        LOG.tc_step('kubectl apply {}, and check {} pod is created and '
                    'running'.format(POD_YAML, POD_NAME))
        kube_helper.apply_pod(file_path=app_path, pod_name=POD_NAME,
                              check_both_controllers=True, con_ssh=con_ssh)

        LOG.tc_step("Delete {} pod and check it's removed from both "
                    "controllers if applicable".format(POD_NAME))
        kube_helper.delete_resources(resource_names=POD_NAME, con_ssh=con_ssh)


@fixture()
def cleanup_app():
    if container_helper.get_apps(application=HELM_APP_NAME):
        LOG.fixture_step("Remove {} app if applied".format(HELM_APP_NAME))
        container_helper.remove_app(app_name=HELM_APP_NAME)

        LOG.fixture_step("Delete {} app".format(HELM_APP_NAME))
        container_helper.delete_app(app_name=HELM_APP_NAME)


@mark.platform_sanity
def test_launch_app_via_sysinv(copy_test_apps, cleanup_app):
    """
    Test upload, apply, remove, delete custom app via system cmd
    Args:
        copy_test_apps (str): module fixture
        cleanup_app: fixture

    Setups:
        - Copy test files from test server to stx system (module)
        - Remove and delete test app if exists

    Test Steps:
        - system application-upload test app tar file and wait for it to be
            uploaded
        - system application-apply test app and wait for it to be applied
        - wget <oam_ip>:<app_targetPort> from remote host
        - Verify app contains expected content
        - system application-remove test app and wait for it to be
            uninstalled
        - system application-delete test app from system

    """
    app_dir = copy_test_apps
    app_name = HELM_APP_NAME

    LOG.tc_step("Upload {} helm charts".format(app_name))
    container_helper.upload_app(app_name=app_name, app_version='1.0',
                                tar_file=os.path.join(app_dir, HELM_TAR))

    LOG.tc_step("Apply {}".format(app_name))
    container_helper.apply_app(app_name=app_name)

    LOG.tc_step("wget app via <oam_ip>:<targetPort>")
    json_path = '{.spec.ports[0].nodePort}'
    node_port = kube_helper.get_pod_value_jsonpath(
        type_name='service/{}'.format(HELM_POD_FULL_NAME), jsonpath=json_path)
    assert re.match(r'\d+', node_port), "Unable to get nodePort via " \
                                        "jsonpath '{}'".format(json_path)

    localhost = LocalHostClient(connect=True)
    prefix = 'https' if keystone_helper.is_https_enabled() else 'http'
    oam_ip = ProjVar.get_var('LAB')['floating ip']
    output_file = '{}/{}.html'.format(ProjVar.get_var('TEMP_DIR'),
                                      HELM_APP_NAME)
    localhost.exec_cmd('wget {}://{}:{} -O {}'.format(
        prefix, oam_ip, node_port, output_file), fail_ok=False)

    LOG.tc_step("Verify app contains expected content")
    app_content = localhost.exec_cmd('cat {}; echo'.format(output_file),
                                     get_exit_code=False)[1]
    assert app_content.startswith(HELM_MSG), \
        "App does not start with expected message."

    LOG.tc_step("Remove applied app")
    container_helper.remove_app(app_name=app_name)

    LOG.tc_step("Delete uninstalled app")
    container_helper.delete_app(app_name=app_name)

    LOG.tc_step("Wait for pod terminate")
    kube_helper.wait_for_resources_gone(resource_names=HELM_POD_FULL_NAME,
                                        check_interval=10,
                                        namespace='default')


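# Hypothetical helper (illustration only, not a framework API) showing the URL
# assembly used in test_launch_app_via_sysinv() above: the scheme follows the
# https setting, then wget fetches <oam_ip>:<nodePort>.
def _app_url(oam_ip, node_port, https_enabled):
    return '{}://{}:{}'.format('https' if https_enabled else 'http',
                               oam_ip, node_port)

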
def remove_cache_and_pull(con_ssh, name, test_image, fail_ok=False):
    container_helper.remove_docker_images(images=(test_image, name),
                                          con_ssh=con_ssh, fail_ok=fail_ok)
    container_helper.pull_docker_image(name=name, con_ssh=con_ssh)


@mark.platform_sanity
@mark.parametrize('controller', [
    'active',
    'standby'
])
def test_push_docker_image_to_local_registry(controller):
    """
    Test push a docker image to local docker registry
    Args:
        controller: test param

    Setup:
        - Copy test files from test server to stx system (module)

    Test Steps:
        On specified controller (active or standby):
        - Pull test image busybox and get its ID
        - Remove busybox repo from local registry if exists
        - Tag image with local registry
        - Push test image to local registry
        - Remove cached test images
        - Pull test image from local registry
        On the other controller if exists, verify local registry is synced:
        - Remove cached test images
        - Pull test image from local registry

    """
    test_image = 'busybox'
    reg_addr = Container.LOCAL_DOCKER_REG
    host = controller_precheck(controller)
    controllers = system_helper.get_controllers(
        availability=(HostAvailState.AVAILABLE, HostAvailState.DEGRADED,
                      HostAvailState.ONLINE))
    controllers.remove(host)

    with host_helper.ssh_to_host(hostname=host) as con_ssh:

        LOG.tc_step("Pull {} image from external on {} controller "
                    "{}".format(test_image, controller, host))
        image_id = container_helper.pull_docker_image(name=test_image,
                                                      con_ssh=con_ssh)[1]

        LOG.tc_step("Remove {} from local registry if"
                    " exists".format(test_image))
        con_ssh.exec_sudo_cmd('rm -rf {}/{}'.format(StxPath.DOCKER_REPO,
                                                    test_image))

        LOG.tc_step("Tag image with local registry")
        target_name = '{}/{}'.format(reg_addr, test_image)
        container_helper.tag_docker_image(source_image=image_id,
                                          target_name=target_name,
                                          con_ssh=con_ssh)

        LOG.tc_step("Login to local docker registry and push test image from "
                    "{} controller {}".format(controller, host))
        container_helper.login_to_docker(registry=reg_addr, con_ssh=con_ssh)
        container_helper.push_docker_image(target_name, con_ssh=con_ssh)

        LOG.tc_step("Remove cached test images and pull from local "
                    "registry on {}".format(host))
        remove_cache_and_pull(con_ssh=con_ssh, name=target_name,
                              test_image=test_image)
        container_helper.remove_docker_images(images=(target_name, ),
                                              con_ssh=con_ssh)

        if controllers:
            other_host = controllers[0]
            with host_helper.ssh_to_host(other_host, con_ssh=con_ssh) as \
                    other_ssh:
                LOG.tc_step("Remove cached test images on the other "
                            "controller {} if exists and pull from local "
                            "registry".format(other_host))
                container_helper.login_to_docker(registry=reg_addr,
                                                 con_ssh=other_ssh)
                remove_cache_and_pull(con_ssh=other_ssh, name=target_name,
                                      fail_ok=True, test_image=test_image)
                container_helper.remove_docker_images(images=(target_name,),
                                                      con_ssh=other_ssh)

        LOG.tc_step("Cleanup {} from local docker registry after "
                    "test".format(test_image))
        con_ssh.exec_sudo_cmd('rm -rf {}/{}'.format(StxPath.DOCKER_REPO,
                                                    test_image))


# Taking out following test case until a shared file server is available for
# community and test charts are available to public
@mark.platform_sanity
def test_upload_charts_via_helm_upload(copy_test_apps):
    """
    Test upload helm charts via helm-upload cmd directly, i.e., without
    using sysinv cmd.
    Args:
        copy_test_apps: module fixture

    Setups:
        - Copy test files from test server to stx system (module)

    Test Steps:
        - Upload helm charts from given controller via
            'helm-upload <tar_file>'
        - Verify the charts appear at /www/pages/helm_charts/ on both
            controllers (if applicable)

    """
    app_dir = copy_test_apps

    LOG.tc_step("Upload helm charts via helm-upload cmd from active "
                "controller and check charts are in /www/pages/")
    file_path = container_helper.upload_helm_charts(
        tar_file=os.path.join(app_dir, HELM_TAR), delete_first=True)[1]

    if system_helper.get_standby_controller_name():
        LOG.tc_step("Swact active controller and verify uploaded charts "
                    "are synced over")
        host_helper.swact_host()
        con_ssh = ControllerClient.get_active_controller()
        charts_exist = con_ssh.file_exists(file_path)
        assert charts_exist, "{} does not exist after swact to {}".format(
            file_path, con_ssh.get_hostname())
        LOG.info("{} successfully synced after swact".format(file_path))


@fixture()
def deploy_delete_kubectl_app(request):
    app_name = 'resource-consumer'
    app_params = \
        '--image=gcr.io/kubernetes-e2e-test-images/resource-consumer:1.4' \
        ' --expose' \
        ' --service-overrides=\'{ "spec": { "type": "LoadBalancer" } }\'' \
        " --port 8080 --requests='cpu=1000m,memory=1024Mi'"

    LOG.fixture_step("Create {} test app by kubectl run".format(app_name))
    sub_cmd = "run {}".format(app_name)
    kube_helper.exec_kube_cmd(sub_cmd=sub_cmd, args=app_params, fail_ok=False)

    LOG.fixture_step("Check {} test app is created".format(app_name))
    pod_name = kube_helper.get_pods(field='NAME', namespace='default',
                                    name=app_name, strict=False)[0]

    def delete_app():
        LOG.fixture_step("Delete {} pod if exists after test "
                         "run".format(app_name))
        kube_helper.delete_resources(resource_names=app_name,
                                     resource_types=('deployment', 'service'),
                                     namespace='default', post_check=False)
        kube_helper.wait_for_resources_gone(resource_names=pod_name,
                                            namespace='default')
    request.addfinalizer(delete_app)

    kube_helper.wait_for_pods_status(pod_names=pod_name, namespace='default',
                                     fail_ok=False)
    return app_name, pod_name


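# Hypothetical helper (illustration only) equivalent to the option string
# assembled in deploy_delete_kubectl_app() above for 'kubectl run': one flag
# string handed to exec_kube_cmd(), with the LoadBalancer service override
# quoted for the shell.
def _run_args(image, port):
    overrides = '{ "spec": { "type": "LoadBalancer" } }'
    return ("--image={} --expose --service-overrides='{}' --port {} "
            "--requests='cpu=1000m,memory=1024Mi'").format(image, overrides,
                                                           port)

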
@mark.platform_sanity
def test_host_operations_with_custom_kubectl_app(deploy_delete_kubectl_app):
    """
    Test create, delete custom app via kubectl run cmd
    Args:
        deploy_delete_kubectl_app: fixture

    Setups:
        - Create kubectl app via kubectl run

    Test Steps:
        - If duplex: swact and verify pod still Running
        - Lock/unlock controller and verify pod still Running

    Teardown:
        - Delete kubectl deployment and service
        - Verify pod is removed

    """
    app_name, pod_name = deploy_delete_kubectl_app
    active, standby = system_helper.get_active_standby_controllers()

    if standby:
        LOG.tc_step("Swact active controller and verify {} test app is "
                    "running".format(pod_name))
        host_helper.swact_host()
        kube_helper.wait_for_pods_status(pod_names=pod_name,
                                         namespace='default', fail_ok=False)

    LOG.tc_step("Lock/unlock {} and verify {} test app is "
                "running.".format(active, pod_name))
    HostsToRecover.add(active)
    host_helper.lock_host(active, swact=False)

    # wait for services to stabilize before unlocking
    time.sleep(20)

    host_helper.unlock_host(active)
    kube_helper.wait_for_pods_status(pod_names=pod_name, namespace=None,
                                     fail_ok=False)

@@ -0,0 +1,117 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

from pytest import fixture, mark, skip

from keywords import kube_helper, system_helper, host_helper
from consts.stx import PodStatus, HostAvailState
from utils.tis_log import LOG
from utils.clients.ssh import ControllerClient

EDGEX_URL = \
    'https://github.com/rohitsardesai83/edgex-on-kubernetes/archive/master.zip'
EDGEX_ARCHIVE = '~/master.zip'
EDGEX_DIR = '~/edgex-on-kubernetes-master'
EDGEX_START = '{}/hack/edgex-up.sh'.format(EDGEX_DIR)
EDGEX_STOP = '{}/hack/edgex-down.sh'.format(EDGEX_DIR)


@fixture(scope='module')
def deploy_edgex(request):
    con_ssh = ControllerClient.get_active_controller()

    LOG.fixture_step("Downloading EdgeX-on-Kubernetes")
    con_ssh.exec_cmd('wget {}'.format(EDGEX_URL), fail_ok=False)
    charts_exist = con_ssh.file_exists(EDGEX_ARCHIVE)
    assert charts_exist, '{} does not exist'.format(EDGEX_ARCHIVE)

    LOG.fixture_step("Extracting EdgeX-on-Kubernetes")
    con_ssh.exec_cmd('unzip {}'.format(EDGEX_ARCHIVE), fail_ok=False)

    LOG.fixture_step("Deploying EdgeX-on-Kubernetes")
    con_ssh.exec_cmd(EDGEX_START, 300, fail_ok=False)

    def delete_edgex():
        LOG.fixture_step("Destroying EdgeX-on-Kubernetes")
        con_ssh.exec_cmd(EDGEX_STOP, 300, fail_ok=False)

        LOG.fixture_step("Removing EdgeX-on-Kubernetes")
        con_ssh.exec_cmd('rm -rf {} {}'.format(EDGEX_ARCHIVE, EDGEX_DIR))
    request.addfinalizer(delete_edgex)


def check_host(controller):
    host = system_helper.get_active_controller_name()
    if controller == 'standby':
        controllers = system_helper.get_controllers(
            availability=(HostAvailState.AVAILABLE, HostAvailState.DEGRADED,
                          HostAvailState.ONLINE))
        controllers.remove(host)
        if not controllers:
            skip('Standby controller does not exist or not in good state')
        host = controllers[0]
    return host


@mark.platform
@mark.parametrize('controller', [
    'active',
    'standby'
])
def test_kube_edgex_services(deploy_edgex, controller):
    """
    Test edgex pods are deployed and running
    Args:
        deploy_edgex (str): module fixture
        controller: test param
    Test Steps:
        - ssh to given controller
        - Wait for EdgeX pods deployment
        - Check all EdgeX pods are running
        - Check EdgeX services displayed:
            'edgex-core-command', 'edgex-core-consul',
            'edgex-core-data', 'edgex-core-metadata'
        - Check EdgeX deployments displayed:
            'edgex-core-command', 'edgex-core-consul',
            'edgex-core-data', 'edgex-core-metadata'

    """
    pods = ('edgex-core-command', 'edgex-core-consul',
            'edgex-core-data', 'edgex-core-metadata')
    services = ('edgex-core-command', 'edgex-core-consul',
                'edgex-core-data', 'edgex-core-metadata')
    deployments = ('edgex-core-command', 'edgex-core-consul',
                   'edgex-core-data', 'edgex-core-metadata')

    host = check_host(controller=controller)
    with host_helper.ssh_to_host(hostname=host) as con_ssh:
        LOG.tc_step("Check EdgeX pods on {}: {}".format(controller, pods))
        edgex_services = kube_helper.get_resources(resource_type='service',
                                                   namespace='default',
                                                   con_ssh=con_ssh)
        edgex_deployments = kube_helper.get_resources(
            resource_type='deployment.apps', namespace='default',
            con_ssh=con_ssh)

        LOG.tc_step("Wait for EdgeX pods Running")
        kube_helper.wait_for_pods_status(partial_names=pods,
                                         namespace='default',
                                         status=PodStatus.RUNNING,
                                         con_ssh=con_ssh, fail_ok=False)

        LOG.tc_step("Check EdgeX services on {}: {}".format(controller,
                                                            services))
        for service in services:
            assert service in edgex_services, \
                "{} not in service table".format(service)

        LOG.tc_step("Check EdgeX deployments on {}: {}".format(controller,
                                                               deployments))
        for deployment in deployments:
            assert deployment in edgex_deployments, \
                "{} not in deployment.apps table".format(deployment)

@@ -0,0 +1,93 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import re

from pytest import mark, skip

from keywords import kube_helper, system_helper, host_helper
from consts.stx import PodStatus, HostAvailState
from utils.tis_log import LOG


def check_host(controller):
    host = system_helper.get_active_controller_name()
    if controller == 'standby':
        controllers = system_helper.get_controllers(
            availability=(HostAvailState.AVAILABLE, HostAvailState.DEGRADED,
                          HostAvailState.ONLINE))
        controllers.remove(host)
        if not controllers:
            skip('Standby controller does not exist or not in good state')
        host = controllers[0]
    return host


@mark.platform_sanity
@mark.parametrize('controller', [
    'active',
    'standby'
])
def test_kube_system_services(controller):
    """
    Test kube-system pods are deployed and running

    Test Steps:
        - ssh to given controller
        - Check all kube-system pods are running
        - Check kube-system services displayed: 'kube-dns', 'tiller-deploy'
        - Check kube-system deployments displayed:
            'calico-kube-controllers', 'coredns', 'tiller-deploy'

    """
    host = check_host(controller=controller)

    with host_helper.ssh_to_host(hostname=host) as con_ssh:

        kube_sys_pods_values = kube_helper.get_resources(
            field=('NAME', 'STATUS'), resource_type='pod',
            namespace='kube-system', con_ssh=con_ssh)
        kube_sys_services = kube_helper.get_resources(
            resource_type='service', namespace='kube-system', con_ssh=con_ssh)
        kube_sys_deployments = kube_helper.get_resources(
            resource_type='deployment.apps', namespace='kube-system',
            con_ssh=con_ssh)

        LOG.tc_step("Check kube-system pods status on {}".format(controller))
        # allow at most 1 coredns pod in Pending state on aio-sx
        coredns_pending = not system_helper.is_aio_simplex()
        for pod_info in kube_sys_pods_values:
            pod_name, pod_status = pod_info
            if not coredns_pending and 'coredns-' in pod_name and \
                    pod_status == PodStatus.PENDING:
                coredns_pending = True
                continue

            valid_status = PodStatus.RUNNING
            if re.search('audit-|init-', pod_name):
                valid_status = PodStatus.COMPLETED

            if pod_status != valid_status:
                kube_helper.wait_for_pods_status(pod_names=pod_name,
                                                 status=valid_status,
                                                 namespace='kube-system',
                                                 con_ssh=con_ssh, timeout=300)

        services = ('kube-dns', 'tiller-deploy')
        LOG.tc_step("Check kube-system services on {}: {}".format(controller,
                                                                  services))
        for service in services:
            assert service in kube_sys_services, \
                "{} not in kube-system service table".format(service)

        deployments = ('calico-kube-controllers', 'coredns', 'tiller-deploy')
        LOG.tc_step("Check kube-system deployments on {}: "
                    "{}".format(controller, deployments))
        for deployment in deployments:
            assert deployment in kube_sys_deployments, \
                "{} not in kube-system deployment.apps table".format(
                    deployment)
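

# Standalone sketch (hypothetical helper) of the status classification in
# test_kube_system_services() above: audit-/init- pods are expected to reach
# Completed, every other kube-system pod should be Running.
def _expected_pod_status(pod_name):
    return 'Completed' if re.search('audit-|init-', pod_name) else 'Running'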

@@ -0,0 +1,292 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import time

from pytest import skip, mark, fixture

from keywords import container_helper, system_helper, host_helper, kube_helper
from consts.stx import HostAvailState, PodStatus, AppStatus
from utils.tis_log import LOG


def get_valid_controllers():
    controllers = system_helper.get_controllers(
        availability=(HostAvailState.AVAILABLE, HostAvailState.DEGRADED,
                      HostAvailState.ONLINE))
    return controllers


def check_openstack_pods_healthy(host, timeout):
    with host_helper.ssh_to_host(hostname=host) as con_ssh:
        kube_helper.wait_for_pods_healthy(namespace='stx-openstack',
                                          con_ssh=con_ssh, timeout=timeout)


@mark.sanity
@mark.sx_sanity
@mark.cpe_sanity
def test_openstack_services_healthy():
    """
    Pre-requisite:
        - stx-openstack application exists

    Test steps:
        - Check stx-openstack application in applied state via system
            application-list
        - Check all openstack pods in running or completed state via
            kubectl get

    """
    LOG.tc_step("Check stx-openstack application is applied")
    statuses = container_helper.get_apps(application='stx-openstack')
    if not statuses:
        skip('Openstack application is not uploaded.')
    status = statuses[0]
    assert status == AppStatus.APPLIED, \
        "stx-openstack is in {} status instead of applied".format(status)

    LOG.tc_step("Check openstack pods are in running or completed status via "
                "kubectl get on all controllers")
    controllers = get_valid_controllers()
    for host in controllers:
        check_openstack_pods_healthy(host=host, timeout=60)


@mark.trylast
@mark.sanity
@mark.sx_sanity
@mark.cpe_sanity
@mark.parametrize('controller', [
    'controller-0',
    'controller-1'
])
def test_reapply_stx_openstack_no_change(stx_openstack_required, controller):
    """
    Args:
        stx_openstack_required: fixture
        controller: test param

    Pre-requisite:
        - stx-openstack application in applied state

    Test Steps:
        - Re-apply stx-openstack application
        - Check openstack pods healthy

    """
    if system_helper.is_aio_simplex() and controller != 'controller-0':
        skip('Simplex system only has controller-0')

    active, standby = system_helper.get_active_standby_controllers()
    if active != controller:
        if not standby:
            skip('{} is not ready to take over'.format(controller))

        LOG.tc_step("Swact active controller to test reapply from "
                    "{}".format(controller))
        host_helper.swact_host()
        time.sleep(60)

    LOG.info("helm list before reapply after swact")
    from utils.clients.ssh import ControllerClient
    con_ssh = ControllerClient.get_active_controller()
    end_time = time.time() + 180
    while time.time() < end_time:
        code = con_ssh.exec_cmd('helm list', expect_timeout=60)[0]
        if code == 0:
            break
        time.sleep(30)

    LOG.tc_step("Re-apply stx-openstack application")
    container_helper.apply_app(app_name='stx-openstack')

    LOG.tc_step("Check openstack pods in good state on all controllers "
                "after stx-openstack re-applied")
    for host in get_valid_controllers():
        check_openstack_pods_healthy(host=host, timeout=120)


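# Hypothetical sketch (not a framework API) of the bounded retry loop used in
# test_reapply_stx_openstack_no_change() above: poll a command until it
# returns exit code 0 or the deadline passes.
def _wait_for_zero_rc(run_cmd, timeout=180, interval=30):
    end_time = time.time() + timeout
    while time.time() < end_time:
        if run_cmd() == 0:
            return True
        time.sleep(interval)
    return False

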
NEW_NOVA_COMPUTE_PODS = None


@fixture()
def reset_if_modified(request):
    if not container_helper.is_stx_openstack_deployed(applied_only=True):
        skip('stx-openstack application is not in Applied status. Skip test.')

    valid_hosts = get_valid_controllers()
    conf_path = '/etc/nova/nova.conf'

    def reset():
        app_name = 'stx-openstack'
        post_status = container_helper.get_apps(application=app_name,
                                                field='status')[0]
        if not post_status.endswith('ed'):
            LOG.fixture_step("Wait for application apply finish")
            container_helper.wait_for_apps_status(apps=app_name,
                                                  status=AppStatus.APPLIED,
                                                  timeout=1800,
                                                  check_interval=15,
                                                  fail_ok=False)

        user_overrides = container_helper.get_helm_override_values(
            chart='nova', namespace='openstack', fields='user_overrides')[0]
        if not user_overrides or user_overrides == 'None':
            LOG.info("No change in nova user_overrides. Do nothing.")
            return

        LOG.fixture_step("Update nova helm-override to reset values")
        container_helper.update_helm_override(chart='nova',
                                              namespace='openstack',
                                              reset_vals=True)
        user_overrides = container_helper.get_helm_override_values(
            chart='nova', namespace='openstack', fields='user_overrides')[0]
        assert not user_overrides, "nova helm user_overrides still exist " \
                                   "after reset-values"

        LOG.fixture_step("Re-apply stx-openstack application and ensure "
                         "it is applied")
        container_helper.apply_app(app_name='stx-openstack', check_first=False,
                                   applied_timeout=1800)

        check_cmd = 'grep foo {}'.format(conf_path)
        LOG.fixture_step("Ensure user_override is removed from {} in "
                         "nova-compute containers".format(conf_path))
        for host in valid_hosts:
            with host_helper.ssh_to_host(host) as host_ssh:
                LOG.info(
                    "Wait for nova-cell-setup completed on {}".format(host))
                kube_helper.wait_for_openstack_pods_status(
|
||||
application='nova', component='cell-setup',
|
||||
con_ssh=host_ssh, status=PodStatus.COMPLETED)
|
||||
|
||||
LOG.info("Check new release generated for nova compute "
|
||||
"pods on {}".format(host))
|
||||
nova_compute_pods = kube_helper.get_openstack_pods(
|
||||
field='NAME', application='nova', component='compute',
|
||||
con_ssh=host_ssh)[0]
|
||||
nova_compute_pods = sorted(nova_compute_pods)
|
||||
if NEW_NOVA_COMPUTE_PODS:
|
||||
assert NEW_NOVA_COMPUTE_PODS != nova_compute_pods, \
|
||||
"No new release generated after reset values"
|
||||
|
||||
LOG.info("Check custom conf is removed from {} in nova "
|
||||
"compute container on {}".format(conf_path, host))
|
||||
for nova_compute_pod in nova_compute_pods:
|
||||
code, output = kube_helper.exec_cmd_in_container(
|
||||
cmd=check_cmd, pod=nova_compute_pod, fail_ok=True,
|
||||
con_ssh=host_ssh, namespace='openstack',
|
||||
container_name='nova-compute')
|
||||
assert code == 1, \
|
||||
"{} on {} still contains user override info after " \
|
||||
"reset nova helm-override values and reapply " \
|
||||
"stx-openstack app: {}".format(conf_path, host, output)
|
||||
|
||||
request.addfinalizer(reset)
|
||||
|
||||
return valid_hosts, conf_path
|
||||
|
||||
|
||||
@mark.trylast
|
||||
@mark.sanity
|
||||
@mark.sx_sanity
|
||||
@mark.cpe_sanity
|
||||
def test_stx_openstack_helm_override_update_and_reset(reset_if_modified):
|
||||
"""
|
||||
Test helm override for openstack nova chart and reset
|
||||
Args:
|
||||
reset_if_modified:
|
||||
|
||||
Pre-requisite:
|
||||
- stx-openstack application in applied state
|
||||
|
||||
Test Steps:
|
||||
- Update nova helm-override default conf
|
||||
- Check nova helm-override is updated in system helm-override-show
|
||||
- Re-apply stx-openstack application and ensure it is applied (in
|
||||
applied status and alarm cleared)
|
||||
- On all controller(s):
|
||||
- Check nova compute pods names are changed in kubectl get
|
||||
- Check actual nova-compute.conf is updated in all nova-compute
|
||||
containers
|
||||
|
||||
Teardown:
|
||||
- Update nova helm-override to reset values
|
||||
- Re-apply stx-openstack application and ensure it is applied
|
||||
|
||||
"""
|
||||
valid_hosts, conf_path = reset_if_modified
|
||||
new_conf = 'conf.nova.DEFAULT.foo=bar'
|
||||
|
||||
LOG.tc_step("Update nova helm-override: {}".format(new_conf))
|
||||
container_helper.update_helm_override(
|
||||
chart='nova', namespace='openstack',
|
||||
kv_pairs={'conf.nova.DEFAULT.foo': 'bar'})
|
||||
|
||||
LOG.tc_step("Check nova helm-override is updated in system "
|
||||
"helm-override-show")
|
||||
fields = ('combined_overrides', 'system_overrides', 'user_overrides')
|
||||
combined_overrides, system_overrides, user_overrides = \
|
||||
container_helper.get_helm_override_values(chart='nova',
|
||||
namespace='openstack',
|
||||
fields=fields)
|
||||
|
||||
assert 'bar' == \
|
||||
user_overrides['conf']['nova'].get('DEFAULT', {}).get('foo'), \
|
||||
"{} is not shown in user overrides".format(new_conf)
|
||||
assert 'bar' == \
|
||||
combined_overrides['conf']['nova'].get('DEFAULT', {}).get('foo'), \
|
||||
"{} is not shown in combined overrides".format(new_conf)
|
||||
assert not system_overrides['conf']['nova'].get('DEFAULT', {}).get('foo'), \
|
||||
"User override {} listed in system overrides " \
|
||||
"unexpectedly".format(new_conf)
|
||||
|
||||
prev_nova_cell_setup_pods = kube_helper.get_openstack_pods(
|
||||
application='nova', component='cell-setup', fail_ok=False)
|
||||
prev_count = len(prev_nova_cell_setup_pods)
|
||||
prev_nova_compute_pods = sorted(kube_helper.get_openstack_pods(
|
||||
application='nova', component='compute'))
|
||||
|
||||
LOG.tc_step("Re-apply stx-openstack application and ensure it is applied")
|
||||
container_helper.apply_app(app_name='stx-openstack', check_first=False,
|
||||
applied_timeout=1800, fail_ok=False,
|
||||
check_interval=10)
|
||||
|
||||
post_names = None
|
||||
for host in valid_hosts:
|
||||
with host_helper.ssh_to_host(hostname=host) as host_ssh:
|
||||
LOG.tc_step("Wait for all nova-cell-setup pods reach completed "
|
||||
"status on {}".format(host))
|
||||
kube_helper.wait_for_openstack_pods_status(
|
||||
application='nova', component='cell-setup',
|
||||
status=PodStatus.COMPLETED, con_ssh=host_ssh)
|
||||
|
||||
LOG.tc_step("Check nova compute pods names are changed in kubectl "
|
||||
"get on {}".format(host))
|
||||
post_nova_cell_setup_pods = kube_helper.get_openstack_pods(
|
||||
application='nova', component='cell-setup', con_ssh=host_ssh)
|
||||
post_nova_compute_pods = sorted(kube_helper.get_openstack_pods(
|
||||
application='nova', component='compute', con_ssh=host_ssh))
|
||||
|
||||
assert prev_count + 1 == len(post_nova_cell_setup_pods), \
|
||||
"No new nova cell setup pod created"
|
||||
if post_names:
|
||||
assert post_nova_compute_pods == post_names, \
|
||||
"nova compute pods names differ on two controllers"
|
||||
else:
|
||||
post_names = post_nova_compute_pods
|
||||
assert prev_nova_compute_pods != post_names, \
|
||||
"No new release generated for nova compute pods"
|
||||
|
||||
LOG.tc_step("Check actual {} is updated in nova-compute "
|
||||
"containers on {}".format(conf_path, host))
|
||||
check_cmd = 'grep foo {}'.format(conf_path)
|
||||
for nova_compute_pod in post_nova_compute_pods:
|
||||
kube_helper.exec_cmd_in_container(cmd=check_cmd,
|
||||
pod=nova_compute_pod,
|
||||
fail_ok=False,
|
||||
con_ssh=host_ssh,
|
||||
namespace='openstack',
|
||||
container_name='nova-compute')
|
|
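The `helm list` retry in `test_reapply_stx_openstack_no_change` (poll for up to 180 seconds, every 30 seconds, until the command exits 0) is a pattern that could be factored into a standalone helper. A minimal sketch, illustrative only and not part of this change (the helper name `wait_for_cmd_success` is hypothetical):

```python
import time


def wait_for_cmd_success(run_cmd, timeout=180, interval=30):
    """Poll run_cmd until it returns exit code 0 or timeout expires.

    run_cmd is any zero-argument callable returning an exit code.
    Returns True on success, False if the timeout is reached first.
    """
    end_time = time.time() + timeout
    while time.time() < end_time:
        if run_cmd() == 0:
            return True
        time.sleep(interval)
    return False
```

In the test this would wrap `con_ssh.exec_cmd('helm list', expect_timeout=60)[0]`; keeping the loop inline, as the test does, avoids one layer of indirection at the cost of repetition.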
@@ -0,0 +1,41 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


from pytest import skip

from utils.tis_log import LOG


def get(rest_client, resource, auth=True):
    """
    Test GET of <resource> with valid authentication.

    Args:
        rest_client: Rest client instance to issue the request with
        resource: resource path to GET
        auth: whether to authenticate the request

    Prerequisites: system is running
    Test Setups:
        n/a
    Test Steps:
        - Using requests GET <resource> with proper authentication
        - Determine if expected status_code of 200 is received
    Test Teardown:
        n/a
    """
    message = "Using requests GET {} with proper authentication"
    LOG.info(message.format(resource))

    status_code, text = rest_client.get(resource=resource, auth=auth)
    message = "Retrieved: status_code: {} message: {}"
    LOG.debug(message.format(status_code, text))

    if status_code == 404:
        skip("Unsupported resource in this configuration.")
    else:
        LOG.info("Determine if expected status_code of 200 is received")
        message = "Expected status_code of 200 - received {} and message {}"
        assert status_code == 200, message.format(status_code, text)
@@ -0,0 +1,98 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import string

import pytest

from utils.tis_log import LOG
from utils.rest import Rest
from keywords import system_helper


@pytest.fixture(scope='module')
def sysinv_rest():
    r = Rest('sysinv', platform=True)
    return r


def test_GET_ihosts_host_id_shortUUID(sysinv_rest):
    """
    Test GET of <resource> with valid authentication but a truncated
    (too-short) UUID value, which should be rejected.

    Args:
        sysinv_rest

    Prerequisites: system is running
    Test Setups:
        n/a
    Test Steps:
        - Using requests GET <resource> with proper authentication but
          a shortened UUID
        - Determine if expected status_code of 400 is received
    Test Teardown:
        n/a
    """
    path = "/ihosts/{}/addresses"
    r = sysinv_rest
    LOG.info(path)
    LOG.info(system_helper.get_hosts())
    for host in system_helper.get_hosts():
        uuid = system_helper.get_host_values(host, 'uuid')[0]
        LOG.info("host: {} uuid: {}".format(host, uuid))
        message = "Using requests GET {} with proper authentication"
        LOG.tc_step(message.format(path))

        # drop the last character to form an invalid, short UUID
        short_uuid = uuid[:-1]
        status_code, text = r.get(resource=path.format(short_uuid),
                                  auth=True)
        message = "Retrieved: status_code: {} message: {}"
        LOG.info(message.format(status_code, text))
        LOG.tc_step("Determine if expected code of 400 is received")
        message = "Expected code of 400 - received {} and message {}"
        assert status_code == 400, message.format(status_code, text)


def test_GET_ihosts_host_id_invalidUUID(sysinv_rest):
    """
    Test GET of <resource> with valid authentication but an invalid
    UUID value, which should be rejected.

    Args:
        sysinv_rest

    Prerequisites: system is running
    Test Setups:
        n/a
    Test Steps:
        - Using requests GET <resource> with proper authentication but
          an invalid UUID
        - Determine if expected status_code of 400 is received
    Test Teardown:
        n/a
    """
    path = "/ihosts/{}/addresses"
    r = sysinv_rest
    LOG.info(path)
    LOG.info(system_helper.get_hosts())
    for host in system_helper.get_hosts():
        uuid = system_helper.get_host_values(host, 'uuid')[0]
        LOG.info("host: {} uuid: {}".format(host, uuid))
        message = "Using requests GET {} with proper authentication"
        LOG.tc_step(message.format(path))

        # shift a->g, b->h, etc. - hex letters a-f move outside the hex
        # range, so the result is an invalid uuid of the right shape
        shifted_uuid = \
            ''.join(map(lambda x: chr((ord(x) - ord('a') + 6) % 26 + ord(
                'a')) if x in string.ascii_lowercase else x, uuid.lower()))
        status_code, text = r.get(resource=path.format(shifted_uuid),
                                  auth=True)
        message = "Retrieved: status_code: {} message: {}"
        LOG.info(message.format(status_code, text))
        LOG.tc_step("Determine if expected code of 400 is received")
        message = "Expected code of 400 - received {} and message {}"
        assert status_code == 400, message.format(status_code, text)
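The letter-shift trick above can be read more easily as a named helper. A minimal sketch, illustrative only (the name `shift_letters` is not part of the change):

```python
import string


def shift_letters(uuid_str, offset=6):
    """Rotate lowercase letters by `offset` positions in the alphabet.

    With the default offset of 6, the hex letters a-f map to g-l, which
    lie outside the valid hexadecimal range, so the output keeps the
    shape of a UUID while being guaranteed invalid. Digits and hyphens
    pass through unchanged.
    """
    return ''.join(
        chr((ord(c) - ord('a') + offset) % 26 + ord('a'))
        if c in string.ascii_lowercase else c
        for c in uuid_str.lower())
```

For example, `shift_letters('abcdef-123')` yields `'ghijkl-123'`.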
@@ -0,0 +1,67 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import pytest

from utils.tis_log import LOG
from utils.rest import Rest

from testcases.rest import rest_test_helper


@pytest.fixture(scope='module')
def sysinv_rest():
    r = Rest('sysinv', platform=True)
    return r


@pytest.mark.parametrize(
    'operation,resource', [
        ('GET', '/addrpools'),
        ('GET', '/ceph_mon'),
        ('GET', '/clusters'),
        ('GET', '/controller_fs'),
        ('GET', '/drbdconfig'),
        ('GET', '/event_log'),
        ('GET', '/event_suppression'),
        ('GET', '/health'),
        ('GET', '/health/upgrade'),
        ('GET', '/ialarms'),
        ('GET', '/icommunity'),
        ('GET', '/idns'),
        ('GET', '/iextoam'),
        ('GET', '/ihosts'),
        ('GET', '/ihosts/bulk_export'),
        ('GET', '/iinfra'),
        ('GET', '/intp'),
        ('GET', '/ipm'),
        ('GET', '/iprofiles'),
        ('GET', '/istorconfig'),
        ('GET', '/isystems'),
        ('GET', '/itrapdest'),
        ('GET', '/lldp_agents'),
        ('GET', '/lldp_neighbors'),
        ('GET', '/loads'),
        ('GET', '/networks'),
        ('GET', '/remotelogging'),
        ('GET', '/sdn_controller'),
        ('GET', '/servicegroup'),
        ('GET', '/servicenodes'),
        ('GET', '/service_parameter'),
        ('GET', '/services'),
        ('GET', '/storage_backend'),
        ('GET', '/storage_backend/usage'),
        ('GET', '/storage_ceph'),
        ('GET', '/storage_lvm'),
        # ('GET', '/tpmconfig'),
        ('GET', '/upgrade'),
        ('GET', '/')
    ]
)
def test_good_authentication(sysinv_rest, operation, resource):
    if operation == "GET":
        LOG.info("getting... {}".format(resource))
        rest_test_helper.get(sysinv_rest, resource=resource)
@@ -0,0 +1,72 @@
#
# Copyright (c) 2019 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


import re

import pytest

from utils.tis_log import LOG
from utils.rest import Rest
from keywords import system_helper


@pytest.fixture(scope='module')
def sysinv_rest():
    r = Rest('sysinv', platform=True)
    return r


@pytest.mark.parametrize(
    'path', [
        '/ihosts/-/addresses',
        '/ihosts/-/idisks',
        '/ihosts/-/ilvgs',
        '/ihosts/-/imemories',
        '/ihosts/-/ipvs',
        '/ihosts/-/isensors',
        '/ihosts/-/isensorgroups',
        '/ihosts/-/istors',
        '/ihosts/-/pci_devices',
        '/ihosts/-/routes',
        '/ihosts/-',
    ]
)
def test_GET_various_host_id_valid(sysinv_rest, path):
    """
    Test GET of <resource> with valid authentication.

    Args:
        sysinv_rest
        path

    Prerequisites: system is running
    Test Setups:
        n/a
    Test Steps:
        - Using requests GET <resource> with proper authentication
        - Determine if expected status_code of 200 is received
    Test Teardown:
        n/a
    """
    r = sysinv_rest
    # replace the '-' placeholder with a format field for the host uuid
    path = re.sub("-", "{}", path)
    LOG.info(path)
    LOG.info(system_helper.get_hosts())
    for host in system_helper.get_hosts():
        uuid = system_helper.get_host_values(host, 'uuid')[0]
        res = path.format(uuid)
        message = "Using requests GET {} with proper authentication"
        LOG.tc_step(message.format(res))
        status_code, text = r.get(resource=res, auth=True)
        message = "Retrieved: status_code: {} message: {}"
        LOG.info(message.format(status_code, text))
        if status_code == 404:
            pytest.skip("Unsupported resource in this configuration.")
        else:
            message = "Determine if expected code of 200 is received"
            LOG.tc_step(message)
            message = "Expected code of 200 - received {} and message {}"
            assert status_code == 200, message.format(status_code, text)
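The `re.sub("-", "{}", path)` step in the test above turns a `-` placeholder path such as `/ihosts/-/idisks` into a format template, which is then filled with each host's uuid. A standalone sketch of the same two-step substitution, illustrative only (the helper name `fill_host_path` is hypothetical; it assumes the template contains exactly one `-` placeholder, as all paths in the parametrize list do):

```python
import re


def fill_host_path(template, uuid):
    """Turn a '-' placeholder path like '/ihosts/-/idisks' into a
    concrete resource path for the given host uuid.

    The '-' is replaced with a '{}' format field first, then formatted,
    so hyphens inside the uuid itself are never touched.
    """
    return re.sub("-", "{}", template).format(uuid)
```

For example, `fill_host_path('/ihosts/-/idisks', '123e4567')` yields `'/ihosts/123e4567/idisks'`.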