Add 'subcloud deploy create' command to dcmanager

This commit adds the subcloud deploy create command to dcmanager. It
accepts the same parameters as the subcloud add command, but only
performs the pre-deployment phase, where all parameters are validated
and the subcloud database entry is created. It does not perform the
install, bootstrap or config phases.

The commit does not modify the subcloud add command to use this phase
internally; that will be done in a separate commit, after the other
deployment phases are implemented.

Test Plan:
1. PASS - Create a subcloud using all the parameters and verify that
          the data is correctly stored in the DB;
2. PASS - Verify that the values from --install-values are correctly
          stored in the DB;
3. PASS - Verify that the values from --deploy-config and
          --bootstrap-values are stored correctly in the
          ANSIBLE_OVERRIDES_PATH directory;
4. PASS - Verify that it's not possible to create a subcloud without
          the required parameters;
5. PASS - Verify that it's not possible to create a subcloud while
          another one with the same name or address already exists;
6. PASS - Repeat previous tests after swacting to controller-1;
7. PASS - Repeat previous tests but directly call the API (using
          CURL) instead of using the CLI;
8. PASS - Call the API directly, passing bmc-password as plain text
          instead of base64-encoded, and verify that the response
          contains the correct error code and message.
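Test 8 exercises the server-side check that bmc_password arrives base64-encoded. A minimal sketch of the encoding a client is expected to apply before calling the API (the function names here are illustrative, not part of dcmanager):

```python
import base64


def encode_bmc_password(password: str) -> str:
    # The dcmanager API expects bmc_password as base64-encoded text.
    return base64.b64encode(password.encode("utf-8")).decode("utf-8")


def decode_bmc_password(encoded: str) -> str:
    # Mirrors the server-side validation: b64decode must succeed and
    # the result must decode as UTF-8, otherwise the request is rejected.
    return base64.b64decode(encoded).decode("utf-8")
```

Sending the password without this encoding triggers the 400 error verified in test 8.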

Story: 2010756
Task: 48030

Change-Id: Ia5321d08df7bec5aef1a8f90cb7292a522da9af9
Signed-off-by: Gustavo Herzmann <gustavo.herzmann@windriver.com>
Gustavo Herzmann 2023-05-17 10:43:53 -03:00
parent d273ef37de
commit cb8737316f
15 changed files with 1396 additions and 30 deletions

@@ -1673,7 +1673,6 @@ Subcloud Deploy
These APIs allow for the display and upload of the deployment manager common
files which include deploy playbook, deploy overrides, deploy helm charts, and prestage images list.
**************************
Show Subcloud Deploy Files
**************************
@@ -1765,3 +1764,83 @@ Response Example
.. literalinclude:: samples/subcloud-deploy/subcloud-deploy-post-response.json
:language: json
----------------------
Phased Subcloud Deploy
----------------------
These APIs allow for subcloud deployment to be done in phases.
******************
Creates a subcloud
******************
.. rest_method:: POST /v1.0/phased-subcloud-deploy
Accepts Content-Type multipart/form-data.
**Normal response codes**
200
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403), badMethod (405),
conflict (409), HTTPUnprocessableEntity (422), internalServerError (500),
serviceUnavailable (503)
**Request parameters**
.. rest_parameters:: parameters.yaml
- bmc_password: bmc_password
- bootstrap-address: bootstrap_address
- bootstrap_values: bootstrap_values
- deploy_config: deploy_config
- description: subcloud_description
- group_id: group_id
- install_values: install_values
- location: subcloud_location
- release: release
Request Example
----------------
.. literalinclude:: samples/phased-subcloud-deploy/phased-subcloud-deploy-post-request.json
:language: json
**Response parameters**
.. rest_parameters:: parameters.yaml
- id: subcloud_id
- name: subcloud_name
- description: subcloud_description
- location: subcloud_location
- software-version: software_version
- management-state: management_state
- availability-status: availability_status
- deploy-status: deploy_status
- backup-status: backup_status
- backup-datetime: backup_datetime
- error-description: error_description
- management-subnet: management_subnet
- management-start-ip: management_start_ip
- management-end-ip: management_end_ip
- management-gateway-ip: management_gateway_ip
- openstack-installed: openstack_installed
- systemcontroller-gateway-ip: systemcontroller_gateway_ip
- data_install: data_install
- data_upgrade: data_upgrade
- created-at: created_at
- updated-at: updated_at
- group_id: group_id
Response Example
----------------
.. literalinclude:: samples/phased-subcloud-deploy/phased-subcloud-deploy-post-response.json
:language: json
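The endpoint documented above accepts multipart/form-data. A sketch of a client call using only the standard library, following the field names in this commit's docs; the host, port, file contents, and helper names are illustrative assumptions, not part of dcmanager:

```python
import json
import urllib.request
import uuid


def build_multipart(fields: dict, files: dict) -> tuple:
    """Assemble a multipart/form-data body from plain fields and file parts."""
    boundary = uuid.uuid4().hex
    lines = []
    for name, value in fields.items():
        lines += [f'--{boundary}',
                  f'Content-Disposition: form-data; name="{name}"',
                  '', str(value)]
    for name, (filename, content) in files.items():
        lines += [f'--{boundary}',
                  f'Content-Disposition: form-data; name="{name}"; '
                  f'filename="{filename}"',
                  'Content-Type: application/octet-stream',
                  '', content]
    lines += [f'--{boundary}--', '']
    body = '\r\n'.join(lines).encode('utf-8')
    return body, f'multipart/form-data; boundary={boundary}'


def deploy_create(url: str, token: str, fields: dict, files: dict) -> dict:
    # POST the assembled form to the phased-subcloud-deploy endpoint.
    body, content_type = build_multipart(fields, files)
    req = urllib.request.Request(url, data=body, method='POST')
    req.add_header('Content-Type', content_type)
    req.add_header('X-Auth-Token', token)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

A hypothetical invocation, with the dcmanager endpoint reachable at some `<host>`: `deploy_create("http://<host>/v1.0/phased-subcloud-deploy", token, {"bootstrap-address": "10.10.10.12"}, {"bootstrap_values": ("bootstrap.yml", yaml_text)})`.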

@@ -0,0 +1,11 @@
{
"bmc_password": "YYYYYYY",
"bootstrap-address": "10.10.10.12",
"bootstrap_values": "content of bootstrap_values file",
"deploy_config": "content of deploy_config file",
"description": "Subcloud 1",
"group_id": 1,
"install_values": "content of install_values file",
"location": "Somewhere",
"release": "22.12"
}

@@ -0,0 +1,24 @@
{
"id": 1,
"name": "subcloud1",
"description": "Subcloud 1",
"location": "Somewhere",
"software-version": "22.12",
"management-state": "unmanaged",
"availability-status": "offline",
"deploy-status": "not-deployed",
"backup-status": null,
"backup-datetime": null,
"error-description": "No errors present",
"management-subnet": "192.168.102.0/24",
"management-start-ip": "192.168.102.2",
"management-end-ip": "192.168.102.50",
"management-gateway-ip": "192.168.102.1",
"openstack-installed": null,
"systemcontroller-gateway-ip": "192.168.204.101",
"data_install": null,
"data_upgrade": null,
"created-at": "2023-05-15 20: 58: 22.992609",
"updated-at": null,
"group_id": 1
}

@@ -0,0 +1,130 @@
#
# Copyright (c) 2023 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import http.client as httpclient
import os
from oslo_log import log as logging
from oslo_messaging import RemoteError
import pecan
import tsconfig.tsconfig as tsc
import yaml
from dcmanager.api.controllers import restcomm
from dcmanager.api.policies import phased_subcloud_deploy as \
phased_subcloud_deploy_policy
from dcmanager.api import policy
from dcmanager.common.context import RequestContext
from dcmanager.common.i18n import _
from dcmanager.common import phased_subcloud_deploy as psd_common
from dcmanager.common import utils
from dcmanager.rpc import client as rpc_client
LOG = logging.getLogger(__name__)
LOCK_NAME = 'PhasedSubcloudDeployController'
BOOTSTRAP_ADDRESS = 'bootstrap-address'
BOOTSTRAP_VALUES = 'bootstrap_values'
INSTALL_VALUES = 'install_values'
SUBCLOUD_CREATE_REQUIRED_PARAMETERS = (
BOOTSTRAP_VALUES,
BOOTSTRAP_ADDRESS
)
# The consts.DEPLOY_CONFIG is missing here because it's handled differently
# by the upload_deploy_config_file() function
SUBCLOUD_CREATE_GET_FILE_CONTENTS = (
BOOTSTRAP_VALUES,
INSTALL_VALUES,
)
def get_create_payload(request: pecan.Request) -> dict:
payload = dict()
for f in SUBCLOUD_CREATE_GET_FILE_CONTENTS:
if f in request.POST:
file_item = request.POST[f]
file_item.file.seek(0, os.SEEK_SET)
data = yaml.safe_load(file_item.file.read().decode('utf8'))
if f == BOOTSTRAP_VALUES:
payload.update(data)
else:
payload.update({f: data})
del request.POST[f]
payload.update(request.POST)
return payload
class PhasedSubcloudDeployController(object):
def __init__(self):
super().__init__()
self.dcmanager_rpc_client = rpc_client.ManagerClient()
def _deploy_create(self, context: RequestContext, request: pecan.Request):
policy.authorize(phased_subcloud_deploy_policy.POLICY_ROOT % "create",
{}, restcomm.extract_credentials_for_policy())
psd_common.check_required_parameters(
request, SUBCLOUD_CREATE_REQUIRED_PARAMETERS)
payload = get_create_payload(request)
if not payload:
pecan.abort(400, _('Body required'))
psd_common.validate_bootstrap_values(payload)
# If a subcloud release is not passed, use the current
# system controller software_version
payload['software_version'] = payload.get('release', tsc.SW_VERSION)
psd_common.validate_subcloud_name_availability(context, payload['name'])
psd_common.validate_system_controller_patch_status("create")
psd_common.validate_subcloud_config(context, payload)
psd_common.validate_install_values(payload)
psd_common.validate_k8s_version(payload)
psd_common.format_ip_address(payload)
# Upload the deploy config files if it is included in the request
# It has a dependency on the subcloud name, and it is called after
# the name has been validated
psd_common.upload_deploy_config_file(request, payload)
try:
# Add the subcloud details to the database
subcloud = psd_common.add_subcloud_to_database(context, payload)
# Ask dcmanager-manager to add the subcloud.
# It will do all the real work...
subcloud = self.dcmanager_rpc_client.subcloud_deploy_create(
context, subcloud.id, payload)
return subcloud
except RemoteError as e:
pecan.abort(httpclient.UNPROCESSABLE_ENTITY, e.value)
except Exception:
LOG.exception("Unable to create subcloud %s" % payload.get('name'))
pecan.abort(httpclient.INTERNAL_SERVER_ERROR,
_('Unable to create subcloud'))
@pecan.expose(generic=True, template='json')
def index(self):
# Route the request to specific methods with parameters
pass
@utils.synchronized(LOCK_NAME)
@index.when(method='POST', template='json')
def post(self):
context = restcomm.extract_context_from_environ()
return self._deploy_create(context, pecan.request)

@@ -1,5 +1,5 @@
# Copyright (c) 2017 Ericsson AB.
# Copyright (c) 2017-2021 Wind River Systems, Inc.
# Copyright (c) 2017-2023 Wind River Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
@@ -18,6 +18,7 @@ import pecan
from dcmanager.api.controllers.v1 import alarm_manager
from dcmanager.api.controllers.v1 import notifications
from dcmanager.api.controllers.v1 import phased_subcloud_deploy
from dcmanager.api.controllers.v1 import subcloud_backup
from dcmanager.api.controllers.v1 import subcloud_deploy
from dcmanager.api.controllers.v1 import subcloud_group
@@ -51,6 +52,8 @@ class Controller(object):
notifications.NotificationsController
sub_controllers["subcloud-backup"] = subcloud_backup.\
SubcloudBackupController
sub_controllers["phased-subcloud-deploy"] = phased_subcloud_deploy.\
PhasedSubcloudDeployController
for name, ctrl in sub_controllers.items():
setattr(self, name, ctrl)

@@ -1,5 +1,5 @@
#
# Copyright (c) 2022 Wind River Systems, Inc.
# Copyright (c) 2023 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -8,6 +8,7 @@ import itertools
from dcmanager.api.policies import alarm_manager
from dcmanager.api.policies import base
from dcmanager.api.policies import phased_subcloud_deploy
from dcmanager.api.policies import subcloud_backup
from dcmanager.api.policies import subcloud_deploy
from dcmanager.api.policies import subcloud_group
@@ -25,5 +26,6 @@ def list_rules():
sw_update_strategy.list_rules(),
sw_update_options.list_rules(),
subcloud_group.list_rules(),
subcloud_backup.list_rules()
subcloud_backup.list_rules(),
phased_subcloud_deploy.list_rules()
)

@@ -0,0 +1,29 @@
#
# Copyright (c) 2023 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
from dcmanager.api.policies import base
from oslo_policy import policy
POLICY_ROOT = 'dc_api:phased_subcloud_deploy:%s'
phased_subcloud_deploy_rules = [
policy.DocumentedRuleDefault(
name=POLICY_ROOT % 'create',
check_str='rule:' + base.ADMIN_IN_SYSTEM_PROJECTS,
description="Create a subcloud",
operations=[
{
'method': 'POST',
'path': '/v1.0/phased-subcloud-deploy'
}
]
),
]
def list_rules():
return phased_subcloud_deploy_rules

@@ -161,6 +161,9 @@ STRATEGY_STATE_PRESTAGE_IMAGES = "prestaging-images"
DEPLOY_STATE_NONE = 'not-deployed'
DEPLOY_STATE_PRE_DEPLOY = 'pre-deploy'
DEPLOY_STATE_DEPLOY_PREP_FAILED = 'deploy-prep-failed'
DEPLOY_STATE_CREATING = 'creating'
DEPLOY_STATE_CREATE_FAILED = 'create-failed'
DEPLOY_STATE_CREATED = 'create-complete'
DEPLOY_STATE_PRE_INSTALL = 'pre-install'
DEPLOY_STATE_PRE_INSTALL_FAILED = 'pre-install-failed'
DEPLOY_STATE_INSTALLING = 'installing'

@@ -0,0 +1,777 @@
#
# Copyright (c) 2023 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import base64
import json
import os
import netaddr
from oslo_log import log as logging
import pecan
import tsconfig.tsconfig as tsc
from dccommon import consts as dccommon_consts
from dccommon.drivers.openstack import patching_v1
from dccommon.drivers.openstack.patching_v1 import PatchingClient
from dccommon.drivers.openstack.sdk_platform import OpenStackDriver
from dccommon.drivers.openstack.sysinv_v1 import SysinvClient
from dccommon import install_consts
from dcmanager.common import consts
from dcmanager.common import exceptions
from dcmanager.common.i18n import _
from dcmanager.common import utils
from dcmanager.db import api as db_api
LOG = logging.getLogger(__name__)
ANSIBLE_BOOTSTRAP_VALIDATE_CONFIG_VARS = \
consts.ANSIBLE_CURRENT_VERSION_BASE_PATH + \
'/roles/bootstrap/validate-config/vars/main.yml'
FRESH_INSTALL_K8S_VERSION = 'fresh_install_k8s_version'
KUBERNETES_VERSION = 'kubernetes_version'
INSTALL_VALUES = 'install_values'
INSTALL_VALUES_ADDRESSES = [
'bootstrap_address', 'bmc_address', 'nexthop_gateway',
'network_address'
]
BOOTSTRAP_VALUES_ADDRESSES = [
'bootstrap-address', 'management_start_address', 'management_end_address',
'management_gateway_address', 'systemcontroller_gateway_address',
'external_oam_gateway_address', 'external_oam_floating_address',
'admin_start_address', 'admin_end_address', 'admin_gateway_address'
]
def get_ks_client(region_name=dccommon_consts.DEFAULT_REGION_NAME):
"""This will get a new keystone client (and new token)"""
try:
os_client = OpenStackDriver(region_name=region_name,
region_clients=None)
return os_client.keystone_client
except Exception:
LOG.warn('Failure initializing KeystoneClient '
'for region %s' % region_name)
raise
def validate_bootstrap_values(payload: dict):
name = payload.get('name')
if not name:
pecan.abort(400, _('name required'))
system_mode = payload.get('system_mode')
if not system_mode:
pecan.abort(400, _('system_mode required'))
# The admin network is optional, but takes precedence over the
# management network for communication between the subcloud and
# system controller if it is defined.
admin_subnet = payload.get('admin_subnet', None)
admin_start_ip = payload.get('admin_start_address', None)
admin_end_ip = payload.get('admin_end_address', None)
admin_gateway_ip = payload.get('admin_gateway_address', None)
if any([admin_subnet, admin_start_ip, admin_end_ip,
admin_gateway_ip]):
# If any admin parameter is defined, all admin parameters
# should be defined.
if not admin_subnet:
pecan.abort(400, _('admin_subnet required'))
if not admin_start_ip:
pecan.abort(400, _('admin_start_address required'))
if not admin_end_ip:
pecan.abort(400, _('admin_end_address required'))
if not admin_gateway_ip:
pecan.abort(400, _('admin_gateway_address required'))
management_subnet = payload.get('management_subnet')
if not management_subnet:
pecan.abort(400, _('management_subnet required'))
management_start_ip = payload.get('management_start_address')
if not management_start_ip:
pecan.abort(400, _('management_start_address required'))
management_end_ip = payload.get('management_end_address')
if not management_end_ip:
pecan.abort(400, _('management_end_address required'))
management_gateway_ip = payload.get('management_gateway_address')
if (admin_gateway_ip and management_gateway_ip):
pecan.abort(400, _('admin_gateway_address and '
'management_gateway_address cannot be '
'specified at the same time'))
elif (not admin_gateway_ip and not management_gateway_ip):
pecan.abort(400, _('management_gateway_address required'))
systemcontroller_gateway_ip = payload.get(
'systemcontroller_gateway_address')
if not systemcontroller_gateway_ip:
pecan.abort(400,
_('systemcontroller_gateway_address required'))
external_oam_subnet = payload.get('external_oam_subnet')
if not external_oam_subnet:
pecan.abort(400, _('external_oam_subnet required'))
external_oam_gateway_ip = payload.get('external_oam_gateway_address')
if not external_oam_gateway_ip:
pecan.abort(400, _('external_oam_gateway_address required'))
external_oam_floating_ip = payload.get('external_oam_floating_address')
if not external_oam_floating_ip:
pecan.abort(400, _('external_oam_floating_address required'))
def validate_system_controller_patch_status(operation: str):
ks_client = get_ks_client()
patching_client = PatchingClient(
dccommon_consts.DEFAULT_REGION_NAME,
ks_client.session,
endpoint=ks_client.endpoint_cache.get_endpoint('patching'))
patches = patching_client.query()
patch_ids = list(patches.keys())
for patch_id in patch_ids:
valid_states = [
patching_v1.PATCH_STATE_PARTIAL_APPLY,
patching_v1.PATCH_STATE_PARTIAL_REMOVE
]
if patches[patch_id]['patchstate'] in valid_states:
pecan.abort(422,
_('Subcloud %s is not allowed while system '
'controller patching is still in progress.')
% operation)
def validate_subcloud_config(context, payload, operation=None):
"""Check whether subcloud config is valid."""
# Validate the name
if payload.get('name').isdigit():
pecan.abort(400, _("name must contain alphabetic characters"))
# If a subcloud group is not passed, use the default
group_id = payload.get('group_id', consts.DEFAULT_SUBCLOUD_GROUP_ID)
if payload.get('name') in [dccommon_consts.DEFAULT_REGION_NAME,
dccommon_consts.SYSTEM_CONTROLLER_NAME]:
pecan.abort(400, _("name cannot be %(bad_name1)s or %(bad_name2)s")
% {'bad_name1': dccommon_consts.DEFAULT_REGION_NAME,
'bad_name2': dccommon_consts.SYSTEM_CONTROLLER_NAME})
admin_subnet = payload.get('admin_subnet', None)
admin_start_ip = payload.get('admin_start_address', None)
admin_end_ip = payload.get('admin_end_address', None)
admin_gateway_ip = payload.get('admin_gateway_address', None)
# Parse/validate the management subnet
subcloud_subnets = []
subclouds = db_api.subcloud_get_all(context)
for subcloud in subclouds:
subcloud_subnets.append(netaddr.IPNetwork(subcloud.management_subnet))
MIN_MANAGEMENT_SUBNET_SIZE = 8
# subtract 3 for network, gateway and broadcast addresses.
MIN_MANAGEMENT_ADDRESSES = MIN_MANAGEMENT_SUBNET_SIZE - 3
management_subnet = None
try:
management_subnet = utils.validate_network_str(
payload.get('management_subnet'),
minimum_size=MIN_MANAGEMENT_SUBNET_SIZE,
existing_networks=subcloud_subnets,
operation=operation)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("management_subnet invalid: %s") % e)
# Parse/validate the start/end addresses
management_start_ip = None
try:
management_start_ip = utils.validate_address_str(
payload.get('management_start_address'), management_subnet)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("management_start_address invalid: %s") % e)
management_end_ip = None
try:
management_end_ip = utils.validate_address_str(
payload.get('management_end_address'), management_subnet)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("management_end_address invalid: %s") % e)
if not management_start_ip < management_end_ip:
pecan.abort(
400,
_("management_start_address not less than "
"management_end_address"))
if not len(netaddr.IPRange(management_start_ip, management_end_ip)) >= \
MIN_MANAGEMENT_ADDRESSES:
pecan.abort(
400,
_("management address range must contain at least %d "
"addresses") % MIN_MANAGEMENT_ADDRESSES)
# Parse/validate the gateway
management_gateway_ip = None
if not admin_gateway_ip:
try:
management_gateway_ip = utils.validate_address_str(payload.get(
'management_gateway_address'), management_subnet)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("management_gateway_address invalid: %s") % e)
validate_admin_network_config(
admin_subnet,
admin_start_ip,
admin_end_ip,
admin_gateway_ip,
subcloud_subnets,
operation
)
# Ensure subcloud management gateway is not within the actual subcloud
# management subnet address pool for consistency with the
# systemcontroller gateway restriction below. Address collision
# is not a concern as the address is added to sysinv.
if admin_start_ip:
subcloud_mgmt_address_start = netaddr.IPAddress(admin_start_ip)
else:
subcloud_mgmt_address_start = management_start_ip
if admin_end_ip:
subcloud_mgmt_address_end = netaddr.IPAddress(admin_end_ip)
else:
subcloud_mgmt_address_end = management_end_ip
if admin_gateway_ip:
subcloud_mgmt_gw_ip = netaddr.IPAddress(admin_gateway_ip)
else:
subcloud_mgmt_gw_ip = management_gateway_ip
if ((subcloud_mgmt_gw_ip >= subcloud_mgmt_address_start) and
(subcloud_mgmt_gw_ip <= subcloud_mgmt_address_end)):
pecan.abort(400, _("%(network)s_gateway_address invalid, "
"is within management pool: %(start)s - "
"%(end)s") %
{'network': 'admin' if admin_gateway_ip else 'management',
'start': subcloud_mgmt_address_start,
'end': subcloud_mgmt_address_end})
# Ensure systemcontroller gateway is in the management subnet
# for the systemcontroller region.
management_address_pool = get_network_address_pool()
systemcontroller_subnet_str = "%s/%d" % (
management_address_pool.network,
management_address_pool.prefix)
systemcontroller_subnet = netaddr.IPNetwork(systemcontroller_subnet_str)
try:
systemcontroller_gw_ip = utils.validate_address_str(
payload.get('systemcontroller_gateway_address'),
systemcontroller_subnet
)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("systemcontroller_gateway_address invalid: %s") % e)
# Ensure systemcontroller gateway is not within the actual
# management subnet address pool to prevent address collision.
mgmt_address_start = netaddr.IPAddress(management_address_pool.ranges[0][0])
mgmt_address_end = netaddr.IPAddress(management_address_pool.ranges[0][1])
if ((systemcontroller_gw_ip >= mgmt_address_start) and
(systemcontroller_gw_ip <= mgmt_address_end)):
pecan.abort(400, _("systemcontroller_gateway_address invalid, "
"is within management pool: %(start)s - "
"%(end)s") %
{'start': mgmt_address_start, 'end': mgmt_address_end})
validate_oam_network_config(
payload.get('external_oam_subnet'),
payload.get('external_oam_gateway_address'),
payload.get('external_oam_floating_address'),
subcloud_subnets
)
validate_group_id(context, group_id)
def validate_admin_network_config(admin_subnet_str,
admin_start_address_str,
admin_end_address_str,
admin_gateway_address_str,
existing_networks,
operation):
"""validate whether admin network configuration is valid"""
if not (admin_subnet_str or admin_start_address_str or
admin_end_address_str or admin_gateway_address_str):
return
MIN_ADMIN_SUBNET_SIZE = 5
# subtract 3 for network, gateway and broadcast addresses.
MIN_ADMIN_ADDRESSES = MIN_ADMIN_SUBNET_SIZE - 3
admin_subnet = None
try:
admin_subnet = utils.validate_network_str(
admin_subnet_str,
minimum_size=MIN_ADMIN_SUBNET_SIZE,
existing_networks=existing_networks,
operation=operation)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("admin_subnet invalid: %s") % e)
# Parse/validate the start/end addresses
admin_start_ip = None
try:
admin_start_ip = utils.validate_address_str(
admin_start_address_str, admin_subnet)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("admin_start_address invalid: %s") % e)
admin_end_ip = None
try:
admin_end_ip = utils.validate_address_str(
admin_end_address_str, admin_subnet)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("admin_end_address invalid: %s") % e)
if not admin_start_ip < admin_end_ip:
pecan.abort(
400,
_("admin_start_address not less than "
"admin_end_address"))
if not len(netaddr.IPRange(admin_start_ip, admin_end_ip)) >= \
MIN_ADMIN_ADDRESSES:
pecan.abort(
400,
_("admin address range must contain at least %d "
"addresses") % MIN_ADMIN_ADDRESSES)
# Parse/validate the gateway
try:
utils.validate_address_str(
admin_gateway_address_str, admin_subnet)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("admin_gateway_address invalid: %s") % e)
subcloud_admin_address_start = netaddr.IPAddress(admin_start_address_str)
subcloud_admin_address_end = netaddr.IPAddress(admin_end_address_str)
subcloud_admin_gw_ip = netaddr.IPAddress(admin_gateway_address_str)
if ((subcloud_admin_gw_ip >= subcloud_admin_address_start) and
(subcloud_admin_gw_ip <= subcloud_admin_address_end)):
pecan.abort(400, _("admin_gateway_address invalid, "
"is within admin pool: %(start)s - "
"%(end)s") %
{'start': subcloud_admin_address_start,
'end': subcloud_admin_address_end})
def validate_oam_network_config(external_oam_subnet_str,
external_oam_gateway_address_str,
external_oam_floating_address_str,
existing_networks):
"""validate whether oam network configuration is valid"""
# Parse/validate the oam subnet
MIN_OAM_SUBNET_SIZE = 3
oam_subnet = None
try:
oam_subnet = utils.validate_network_str(
external_oam_subnet_str,
minimum_size=MIN_OAM_SUBNET_SIZE,
existing_networks=existing_networks)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("external_oam_subnet invalid: %s") % e)
# Parse/validate the addresses
try:
utils.validate_address_str(
external_oam_gateway_address_str, oam_subnet)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("oam_gateway_address invalid: %s") % e)
try:
utils.validate_address_str(
external_oam_floating_address_str, oam_subnet)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("oam_floating_address invalid: %s") % e)
def validate_group_id(context, group_id):
try:
# The DB API will raise an exception if the group_id is invalid
db_api.subcloud_group_get(context, group_id)
except Exception as e:
LOG.exception(e)
pecan.abort(400, _("Invalid group_id"))
def get_network_address_pool(network='management',
region_name=dccommon_consts.DEFAULT_REGION_NAME):
"""Get the region network address pool"""
ks_client = get_ks_client(region_name)
endpoint = ks_client.endpoint_cache.get_endpoint('sysinv')
sysinv_client = SysinvClient(region_name,
ks_client.session,
endpoint=endpoint)
if network == 'admin':
return sysinv_client.get_admin_address_pool()
return sysinv_client.get_management_address_pool()
def validate_install_values(payload, subcloud=None):
"""Validate install values if 'install_values' is present in payload.
The image in payload install values is optional, and if not provided,
the image is set to the available active/inactive load image.
:return boolean: True if bmc install requested, otherwise False
"""
install_values = payload.get('install_values')
if not install_values:
return False
original_install_values = None
if subcloud:
if subcloud.data_install:
original_install_values = json.loads(subcloud.data_install)
bmc_password = payload.get('bmc_password')
if not bmc_password:
pecan.abort(400, _('subcloud bmc_password required'))
try:
base64.b64decode(bmc_password).decode('utf-8')
except Exception:
msg = _('Failed to decode subcloud bmc_password, verify'
' the password is base64 encoded')
LOG.exception(msg)
pecan.abort(400, msg)
payload['install_values'].update({'bmc_password': bmc_password})
software_version = payload.get('software_version')
if not software_version and subcloud:
software_version = subcloud.software_version
if 'software_version' in install_values:
install_software_version = str(install_values.get('software_version'))
if software_version and software_version != install_software_version:
pecan.abort(400,
_("The software_version value %s in the install values "
"yaml file does not match with the specified/current "
"software version of %s. Please correct or remove "
"this parameter from the yaml file and try again.") %
(install_software_version, software_version))
else:
# Only install_values payload will be passed to the subcloud
# installation backend methods. The software_version is required by
# the installation, so it cannot be absent in the install_values.
LOG.debug("software_version (%s) is added to install_values" %
software_version)
payload['install_values'].update({'software_version': software_version})
if 'persistent_size' in install_values:
persistent_size = install_values.get('persistent_size')
if not isinstance(persistent_size, int):
pecan.abort(400, _("The install value persistent_size (in MB) must "
"be a whole number greater than or equal to %s") %
consts.DEFAULT_PERSISTENT_SIZE)
if persistent_size < consts.DEFAULT_PERSISTENT_SIZE:
# the expected value is less than the default. so throw an error.
pecan.abort(400, _("persistent_size of %s MB is less than "
"the permitted minimum %s MB ") %
(str(persistent_size), consts.DEFAULT_PERSISTENT_SIZE))
if 'hw_settle' in install_values:
hw_settle = install_values.get('hw_settle')
if not isinstance(hw_settle, int):
pecan.abort(400, _("The install value hw_settle (in seconds) must "
"be a whole number greater than or equal to 0"))
if hw_settle < 0:
pecan.abort(400, _("hw_settle of %s seconds is less than 0") %
(str(hw_settle)))
for k in install_consts.MANDATORY_INSTALL_VALUES:
if k not in install_values:
if original_install_values:
pecan.abort(400, _("Mandatory install value %s not present, "
"existing %s in DB: %s") %
(k, k, original_install_values.get(k)))
else:
pecan.abort(400,
_("Mandatory install value %s not present") % k)
# check for the image at the load vault location
matching_iso, err_msg = utils.get_matching_iso(software_version)
if err_msg:
LOG.exception(err_msg)
pecan.abort(400, _(err_msg))
LOG.info("Image in install_values is set to %s" % matching_iso)
payload['install_values'].update({'image': matching_iso})
if (install_values['install_type'] not in
list(range(install_consts.SUPPORTED_INSTALL_TYPES))):
pecan.abort(400, _("install_type invalid: %s") %
install_values['install_type'])
try:
ip_version = (netaddr.IPAddress(install_values['bootstrap_address']).
version)
except netaddr.AddrFormatError as e:
LOG.exception(e)
pecan.abort(400, _("bootstrap_address invalid: %s") % e)
try:
bmc_address = netaddr.IPAddress(install_values['bmc_address'])
except netaddr.AddrFormatError as e:
LOG.exception(e)
pecan.abort(400, _("bmc_address invalid: %s") % e)
if bmc_address.version != ip_version:
pecan.abort(400, _("bmc_address and bootstrap_address "
"must be the same IP version"))
if 'nexthop_gateway' in install_values:
try:
gateway_ip = netaddr.IPAddress(install_values['nexthop_gateway'])
except netaddr.AddrFormatError as e:
LOG.exception(e)
pecan.abort(400, _("nexthop_gateway address invalid: %s") % e)
if gateway_ip.version != ip_version:
pecan.abort(400, _("nexthop_gateway and bootstrap_address "
"must be the same IP version"))
if ('network_address' in install_values and
'nexthop_gateway' not in install_values):
pecan.abort(400, _("nexthop_gateway is required when "
"network_address is present"))
if 'nexthop_gateway' in install_values and 'network_address' in install_values:
if 'network_mask' not in install_values:
pecan.abort(400, _("The network mask is required when network "
"address is present"))
network_str = (install_values['network_address'] + '/' +
str(install_values['network_mask']))
try:
network = utils.validate_network_str(network_str, 1)
except exceptions.ValidateFail as e:
LOG.exception(e)
pecan.abort(400, _("network address invalid: %s") % e)
if network.version != ip_version:
pecan.abort(400, _("network address and bootstrap address "
"must be the same IP version"))
if 'rd.net.timeout.ipv6dad' in install_values:
try:
ipv6dad_timeout = int(install_values['rd.net.timeout.ipv6dad'])
if ipv6dad_timeout <= 0:
pecan.abort(400, _("rd.net.timeout.ipv6dad must be greater "
"than 0: %d") % ipv6dad_timeout)
except ValueError as e:
LOG.exception(e)
pecan.abort(400, _("rd.net.timeout.ipv6dad invalid: %s") % e)
return True
def validate_k8s_version(payload):
"""Validate k8s version.
If the specified release in the payload is not the active release,
the kubernetes_version value if specified in the subcloud bootstrap
yaml file must be of the same value as fresh_install_k8s_version of
the specified release.
"""
if payload['software_version'] == tsc.SW_VERSION:
return
kubernetes_version = payload.get(KUBERNETES_VERSION)
if kubernetes_version:
try:
bootstrap_var_file = utils.get_playbook_for_software_version(
ANSIBLE_BOOTSTRAP_VALIDATE_CONFIG_VARS,
payload['software_version'])
fresh_install_k8s_version = utils.get_value_from_yaml_file(
bootstrap_var_file,
FRESH_INSTALL_K8S_VERSION)
if not fresh_install_k8s_version:
pecan.abort(400, _("%s not found in %s")
% (FRESH_INSTALL_K8S_VERSION,
bootstrap_var_file))
if kubernetes_version != fresh_install_k8s_version:
pecan.abort(400, _("The kubernetes_version value (%s) "
"specified in the subcloud bootstrap "
"yaml file doesn't match "
"fresh_install_k8s_version value (%s) "
"of the specified release %s")
% (kubernetes_version,
fresh_install_k8s_version,
payload['software_version']))
except exceptions.PlaybookNotFound:
pecan.abort(400, _("The bootstrap playbook validate-config vars "
"not found for %s software version")
% payload['software_version'])
def validate_sysadmin_password(payload: dict):
sysadmin_password = payload.get('sysadmin_password')
if not sysadmin_password:
pecan.abort(400, _('subcloud sysadmin_password required'))
try:
payload['sysadmin_password'] = utils.decode_and_normalize_passwd(
sysadmin_password)
except Exception:
msg = _('Failed to decode subcloud sysadmin_password, '
'verify the password is base64 encoded')
LOG.exception(msg)
pecan.abort(400, msg)
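As exercised by test plan item 8, the API expects passwords base64-encoded and answers HTTP 400 otherwise. A standalone sketch of the decode step, assuming plain base64 decoding (the real utils.decode_and_normalize_passwd may additionally normalize the result), raising ValueError where the controller calls pecan.abort:

```python
import base64

def decode_sysadmin_password(encoded):
    # Reject anything that is not valid base64; the controller turns
    # this into an HTTP 400 with a hint to base64-encode the password.
    try:
        return base64.b64decode(encoded, validate=True).decode('utf-8')
    except Exception:
        raise ValueError('Failed to decode subcloud sysadmin_password, '
                         'verify the password is base64 encoded')
```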
def format_ip_address(payload):
"""Format IP addresses in 'bootstrap_values' and 'install_values'.
The IPv6 addresses can be represented in multiple ways. Format and
update the IP addresses in payload before saving it to database.
"""
if INSTALL_VALUES in payload:
for k in INSTALL_VALUES_ADDRESSES:
if k in payload[INSTALL_VALUES]:
try:
address = netaddr.IPAddress(payload[INSTALL_VALUES]
.get(k)).format()
except netaddr.AddrFormatError as e:
LOG.exception(e)
pecan.abort(400, _("%s invalid: %s") % (k, e))
payload[INSTALL_VALUES].update({k: address})
for k in BOOTSTRAP_VALUES_ADDRESSES:
if k in payload:
try:
address = netaddr.IPAddress(payload.get(k)).format()
except netaddr.AddrFormatError as e:
LOG.exception(e)
pecan.abort(400, _("%s invalid: %s") % (k, e))
payload.update({k: address})
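format_ip_address exists because one IPv6 address has many textual spellings; storing the canonical form keeps later comparisons (for example duplicate-address checks) reliable. A sketch of the normalization, assuming the stdlib ipaddress module in place of netaddr.IPAddress(...).format():

```python
import ipaddress

def normalize_address(addr):
    # Collapse an address to its canonical form, e.g. the fully
    # expanded 'FD00:0000:...:0001' spelling becomes 'fd00::1'.
    return str(ipaddress.ip_address(addr))
```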
def upload_deploy_config_file(request, payload):
if consts.DEPLOY_CONFIG in request.POST:
file_item = request.POST[consts.DEPLOY_CONFIG]
filename = getattr(file_item, 'filename', '')
if not filename:
pecan.abort(400, _("No %s file uploaded")
            % consts.DEPLOY_CONFIG)
file_item.file.seek(0, os.SEEK_SET)
contents = file_item.file.read()
# the deploy config needs to be uploaded to the override location
fn = get_config_file_path(payload['name'], consts.DEPLOY_CONFIG)
upload_config_file(contents, fn, consts.DEPLOY_CONFIG)
payload.update({consts.DEPLOY_CONFIG: fn})
get_common_deploy_files(payload, payload['software_version'])
def get_config_file_path(subcloud_name, config_file_type=None):
if config_file_type == consts.DEPLOY_CONFIG:
file_path = os.path.join(
consts.ANSIBLE_OVERRIDES_PATH,
subcloud_name + '_' + config_file_type + '.yml'
)
elif config_file_type == INSTALL_VALUES:
file_path = os.path.join(
consts.ANSIBLE_OVERRIDES_PATH + '/' + subcloud_name,
config_file_type + '.yml'
)
else:
file_path = os.path.join(
consts.ANSIBLE_OVERRIDES_PATH,
subcloud_name + '.yml'
)
return file_path
def upload_config_file(file_item, config_file, config_type):
try:
with open(config_file, "w") as f:
f.write(file_item.decode('utf8'))
except Exception:
msg = _("Failed to upload %s file") % config_type
LOG.exception(msg)
pecan.abort(400, msg)
def get_common_deploy_files(payload, software_version):
for f in consts.DEPLOY_COMMON_FILE_OPTIONS:
# Skip the prestage_images option as it is not relevant in this
# context
if f == consts.DEPLOY_PRESTAGE:
continue
filename = None
dir_path = os.path.join(dccommon_consts.DEPLOY_DIR, software_version)
if os.path.isdir(dir_path):
filename = utils.get_filename_by_prefix(dir_path, f + '_')
if filename is None:
pecan.abort(400, _("Missing required deploy file for %s") % f)
payload.update({f: os.path.join(dir_path, filename)})
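get_common_deploy_files locates each common deploy file by filename prefix inside the per-release deploy directory. The prefix lookup is roughly as follows (this helper is an assumption sketched for illustration; the real implementation is dcmanager's utils.get_filename_by_prefix):

```python
import os

def get_filename_by_prefix(dir_path, prefix):
    # Return the first filename in dir_path that starts with prefix,
    # or None when no match exists (the caller aborts with HTTP 400).
    for filename in sorted(os.listdir(dir_path)):
        if filename.startswith(prefix):
            return filename
    return None
```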
def validate_subcloud_name_availability(context, subcloud_name):
try:
db_api.subcloud_get_by_name(context, subcloud_name)
except exceptions.SubcloudNameNotFound:
pass
else:
msg = _("Subcloud with name=%s already exists") % subcloud_name
LOG.info(msg)
pecan.abort(409, msg)
def check_required_parameters(request, required_parameters):
missing_parameters = []
for p in required_parameters:
if p not in request.POST:
missing_parameters.append(p)
if missing_parameters:
parameters_str = ', '.join(missing_parameters)
pecan.abort(
400, _("Missing required parameter(s): %s") % parameters_str)
def add_subcloud_to_database(context, payload):
# if group_id has been omitted from payload, use 'Default'.
group_id = payload.get('group_id',
consts.DEFAULT_SUBCLOUD_GROUP_ID)
data_install = None
if 'install_values' in payload:
data_install = json.dumps(payload['install_values'])
subcloud = db_api.subcloud_create(
context,
payload['name'],
payload.get('description'),
payload.get('location'),
payload.get('software_version'),
utils.get_management_subnet(payload),
utils.get_management_gateway_address(payload),
utils.get_management_start_address(payload),
utils.get_management_end_address(payload),
payload['systemcontroller_gateway_address'],
consts.DEPLOY_STATE_NONE,
consts.ERROR_DESC_EMPTY,
False,
group_id,
data_install=data_install)
return subcloud


@@ -188,6 +188,15 @@ class DCManagerService(service.Service):
payload['subcloud_name'])
return self.subcloud_manager.prestage_subcloud(context, payload)
@request_context
def subcloud_deploy_create(self, context, subcloud_id, payload):
# Create the subcloud (pre-deployment phase)
LOG.info("Handling subcloud_deploy_create request for: %s" %
payload.get('name'))
return self.subcloud_manager.subcloud_deploy_create(context,
subcloud_id,
payload)
def _stop_rpc_server(self):
# Stop RPC connection to prevent new requests
LOG.debug(_("Attempting to stop RPC service..."))


@@ -127,6 +127,14 @@ MAX_PARALLEL_SUBCLOUD_BACKUP_DELETE = 250
MAX_PARALLEL_SUBCLOUD_BACKUP_RESTORE = 100
CENTRAL_BACKUP_DIR = '/opt/dc-vault/backups'
ENDPOINT_URLS = {
dccommon_consts.ENDPOINT_TYPE_PLATFORM: "https://{}:6386/v1",
dccommon_consts.ENDPOINT_TYPE_IDENTITY: "https://{}:5001/v3",
dccommon_consts.ENDPOINT_TYPE_PATCHING: "https://{}:5492",
dccommon_consts.ENDPOINT_TYPE_FM: "https://{}:18003",
dccommon_consts.ENDPOINT_TYPE_NFV: "https://{}:4546"
}
class SubcloudManager(manager.Manager):
"""Manages tasks related to subclouds."""
@@ -745,6 +753,145 @@ class SubcloudManager(manager.Manager):
return self._subcloud_operation_notice('restore', restore_subclouds,
failed_subclouds, invalid_subclouds)
def subcloud_deploy_create(self, context, subcloud_id, payload):
"""Create subcloud and notify orchestrators.
:param context: request context object
:param subcloud_id: subcloud_id from db
:param payload: subcloud configuration
"""
LOG.info("Creating subcloud %s." % payload['name'])
subcloud = db_api.subcloud_update(
context, subcloud_id,
deploy_status=consts.DEPLOY_STATE_CREATING)
try:
# Create a new route to this subcloud on the management interface
# on both controllers.
m_ks_client = OpenStackDriver(
region_name=dccommon_consts.DEFAULT_REGION_NAME,
region_clients=None).keystone_client
subcloud_subnet = netaddr.IPNetwork(
utils.get_management_subnet(payload))
endpoint = m_ks_client.endpoint_cache.get_endpoint('sysinv')
sysinv_client = SysinvClient(dccommon_consts.DEFAULT_REGION_NAME,
m_ks_client.session,
endpoint=endpoint)
LOG.debug("Getting cached regionone data for %s" % subcloud.name)
cached_regionone_data = self._get_cached_regionone_data(
m_ks_client, sysinv_client)
for mgmt_if_uuid in cached_regionone_data['mgmt_interface_uuids']:
sysinv_client.create_route(
mgmt_if_uuid,
str(subcloud_subnet.ip),
subcloud_subnet.prefixlen,
payload['systemcontroller_gateway_address'],
1)
# Create endpoints to this subcloud on the
# management-start-ip of the subcloud, which will be allocated
# as the floating management IP of the subcloud if the
# address pool is not shared. In case the endpoint entries
# are incorrect, or the management IP of the subcloud is changed
# in the future, the subcloud will not go managed or will show
# up as out of sync. To fix this, use OpenStack endpoint
# commands on the SystemController to change the subcloud
# endpoints. The non-identity endpoints are added to facilitate
# horizon access from the System Controller to the subcloud.
endpoint_config = []
endpoint_ip = utils.get_management_start_address(payload)
if netaddr.IPAddress(endpoint_ip).version == 6:
endpoint_ip = '[' + endpoint_ip + ']'
for service in m_ks_client.services_list:
admin_endpoint_url = ENDPOINT_URLS.get(service.type, None)
if admin_endpoint_url:
admin_endpoint_url = admin_endpoint_url.format(endpoint_ip)
endpoint_config.append(
{"id": service.id,
"admin_endpoint_url": admin_endpoint_url})
if len(endpoint_config) < len(ENDPOINT_URLS):
raise exceptions.BadRequest(
resource='subcloud',
msg='Missing service in SystemController')
for endpoint in endpoint_config:
try:
m_ks_client.keystone_client.endpoints.create(
endpoint["id"],
endpoint['admin_endpoint_url'],
interface=dccommon_consts.KS_ENDPOINT_ADMIN,
region=subcloud.name)
except Exception as e:
# The Keystone service may be temporarily busy; retry once
LOG.error(str(e))
m_ks_client.keystone_client.endpoints.create(
endpoint["id"],
endpoint['admin_endpoint_url'],
interface=dccommon_consts.KS_ENDPOINT_ADMIN,
region=subcloud.name)
# Inform orchestrator that subcloud has been added
self.dcorch_rpc_client.add_subcloud(
context, subcloud.name, subcloud.software_version)
# create entry into alarm summary table, will get real values later
alarm_updates = {'critical_alarms': -1,
'major_alarms': -1,
'minor_alarms': -1,
'warnings': -1,
'cloud_status': consts.ALARMS_DISABLED}
db_api.subcloud_alarms_create(context, subcloud.name,
alarm_updates)
# Regenerate the addn_hosts_dc file
self._create_addn_hosts_dc(context)
self._populate_payload_with_cached_keystone_data(
cached_regionone_data, payload, populate_passwords=False)
if "deploy_playbook" in payload:
self._prepare_for_deployment(payload, subcloud.name,
populate_passwords=False)
payload['users'] = dict()
for user in USERS_TO_REPLICATE:
payload['users'][user] = \
str(keyring.get_password(
user, dccommon_consts.SERVICES_USER_NAME))
# Ansible inventory filename for the specified subcloud
ansible_subcloud_inventory_file = utils.get_ansible_filename(
subcloud.name, INVENTORY_FILE_POSTFIX)
# Create the ansible inventory for the new subcloud
utils.create_subcloud_inventory(payload,
ansible_subcloud_inventory_file)
# create subcloud intermediate certificate and pass in keys
self._create_intermediate_ca_cert(payload)
# Write this subcloud's overrides to file
# NOTE: This file should not be deleted if subcloud add fails
# as it is used for debugging
self._write_subcloud_ansible_config(cached_regionone_data, payload)
subcloud = db_api.subcloud_update(
context, subcloud_id,
deploy_status=consts.DEPLOY_STATE_CREATED)
return db_api.subcloud_db_model_to_dict(subcloud)
except Exception:
LOG.exception("Failed to create subcloud %s" % payload['name'])
# If we failed to create the subcloud, update the deployment status
subcloud = db_api.subcloud_update(
context, subcloud.id,
deploy_status=consts.DEPLOY_STATE_CREATE_FAILED)
return db_api.subcloud_db_model_to_dict(subcloud)
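The endpoint construction inside subcloud_deploy_create brackets IPv6 management addresses before substituting them into the admin endpoint URL templates, since a bare IPv6 address is ambiguous next to a port number. A minimal sketch, assuming the stdlib ipaddress module in place of netaddr and a trimmed-down copy of the ENDPOINT_URLS map:

```python
import ipaddress

# Trimmed-down copy of the ENDPOINT_URLS map, for illustration only.
ENDPOINT_URLS = {
    'platform': 'https://{}:6386/v1',
    'identity': 'https://{}:5001/v3',
}

def build_admin_endpoints(endpoint_ip):
    # IPv6 addresses must be bracketed so the ':port' suffix parses.
    if ipaddress.ip_address(endpoint_ip).version == 6:
        endpoint_ip = '[' + endpoint_ip + ']'
    return {svc: url.format(endpoint_ip)
            for svc, url in ENDPOINT_URLS.items()}
```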
def _subcloud_operation_notice(
self, operation, restore_subclouds, failed_subclouds,
invalid_subclouds):
@@ -1492,14 +1639,16 @@ class SubcloudManager(manager.Manager):
with open(deploy_values_file, 'w') as f_out_deploy_values_file:
json.dump(payload['deploy_values'], f_out_deploy_values_file)
def _prepare_for_deployment(self, payload, subcloud_name):
def _prepare_for_deployment(self, payload, subcloud_name,
populate_passwords=True):
payload['deploy_values'] = dict()
payload['deploy_values']['ansible_become_pass'] = \
payload['sysadmin_password']
payload['deploy_values']['ansible_ssh_pass'] = \
payload['sysadmin_password']
payload['deploy_values']['admin_password'] = \
str(keyring.get_password('CGCS', 'admin'))
if populate_passwords:
payload['deploy_values']['ansible_become_pass'] = \
payload['sysadmin_password']
payload['deploy_values']['ansible_ssh_pass'] = \
payload['sysadmin_password']
payload['deploy_values']['admin_password'] = \
str(keyring.get_password('CGCS', 'admin'))
payload['deploy_values']['deployment_config'] = \
payload[consts.DEPLOY_CONFIG]
payload['deploy_values']['deployment_manager_chart'] = \
@@ -2108,7 +2257,8 @@ class SubcloudManager(manager.Manager):
cached_regionone_data = SubcloudManager.regionone_data
return cached_regionone_data
def _populate_payload_with_cached_keystone_data(self, cached_data, payload):
def _populate_payload_with_cached_keystone_data(self, cached_data, payload,
populate_passwords=True):
payload['system_controller_keystone_admin_user_id'] = \
cached_data['admin_user_id']
payload['system_controller_keystone_admin_project_id'] = \
@@ -2120,9 +2270,10 @@ class SubcloudManager(manager.Manager):
payload['system_controller_keystone_dcmanager_user_id'] = \
cached_data['dcmanager_user_id']
# While at it, add the admin and service user passwords to the payload so
# they get copied to the overrides file
payload['ansible_become_pass'] = payload['sysadmin_password']
payload['ansible_ssh_pass'] = payload['sysadmin_password']
payload['admin_password'] = str(keyring.get_password('CGCS',
'admin'))
if populate_passwords:
# While at it, add the admin and service user passwords to the
# payload so they get copied to the overrides file
payload['ansible_become_pass'] = payload['sysadmin_password']
payload['ansible_ssh_pass'] = payload['sysadmin_password']
payload['admin_password'] = str(keyring.get_password('CGCS',
'admin'))


@@ -187,6 +187,11 @@ class ManagerClient(RPCClient):
return self.call(ctxt, self.make_msg('prestage_subcloud',
payload=payload))
def subcloud_deploy_create(self, ctxt, subcloud_id, payload):
return self.call(ctxt, self.make_msg('subcloud_deploy_create',
subcloud_id=subcloud_id,
payload=payload))
class DCManagerNotifications(RPCClient):
"""DC Manager Notification interface to broadcast subcloud state changed


@@ -15,6 +15,7 @@
# under the License.
#
import contextlib
import mock
from six.moves import http_client
@@ -95,18 +96,22 @@ class APIMixin(object):
# upload_files kwarg is not supported by the json methods in web_test
class PostMixin(object):
@mock.patch.object(rpc_client, 'ManagerClient')
def test_create_success(self, mock_client):
def test_create_success(self):
# Test that a POST operation is supported by the API
params = self.get_post_params()
upload_files = self.get_post_upload_files()
response = self.app.post(self.get_api_prefix(),
params=params,
upload_files=upload_files,
headers=self.get_api_headers())
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.status_code, http_client.OK)
self.assert_fields(response.json)
with contextlib.ExitStack() as stack:
# Only mock it if it's not already mocked by the derived class
if not isinstance(rpc_client.ManagerClient, mock.Mock):
stack.enter_context(mock.patch.object(rpc_client,
'ManagerClient'))
params = self.get_post_params()
upload_files = self.get_post_upload_files()
response = self.app.post(self.get_api_prefix(),
params=params,
upload_files=upload_files,
headers=self.get_api_headers())
self.assertEqual(response.content_type, 'application/json')
self.assertEqual(response.status_code, http_client.OK)
self.assert_fields(response.json)
class PostRejectedMixin(object):


@@ -0,0 +1,58 @@
#
# Copyright (c) 2023 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
import mock
from dcmanager.common import phased_subcloud_deploy as psd_common
from dcmanager.db import api as db_api
from dcmanager.tests.unit.api.v1.controllers.test_subclouds import \
TestSubcloudPost
class FakeRPCClient(object):
def subcloud_deploy_create(self, context, subcloud_id, _):
subcloud = db_api.subcloud_get(context, subcloud_id)
return db_api.subcloud_db_model_to_dict(subcloud)
# Apply the TestSubcloudPost parameter validation tests to the subcloud
# deploy create endpoint, as it uses the same parameter validation functions
class TestSubcloudDeployCreate(TestSubcloudPost):
API_PREFIX = '/v1.0/phased-subcloud-deploy'
RESULT_KEY = 'phased-subcloud-deploy'
def setUp(self):
super().setUp()
p = mock.patch.object(psd_common, 'get_network_address_pool')
self.mock_get_network_address_pool = p.start()
self.mock_get_network_address_pool.return_value = \
self.management_address_pool
self.addCleanup(p.stop)
p = mock.patch.object(psd_common, 'get_ks_client')
self.mock_get_ks_client = p.start()
self.addCleanup(p.stop)
p = mock.patch.object(psd_common.PatchingClient, 'query')
self.mock_query = p.start()
self.addCleanup(p.stop)
self.mock_rpc_client.return_value = FakeRPCClient()
def test_subcloud_create_missing_bootstrap_address(self):
"""Test POST operation without bootstrap-address."""
params = self.get_post_params()
del params['bootstrap-address']
upload_files = self.get_post_upload_files()
response = self.app.post(self.get_api_prefix(),
params=params,
upload_files=upload_files,
headers=self.get_api_headers(),
expect_errors=True)
self._verify_post_failure(response, "bootstrap-address", None)


@@ -424,6 +424,85 @@ class TestSubcloudManager(base.DCManagerTestCase):
self.assertEqual('localhost', sm.host)
self.assertEqual(self.ctx, sm.context)
@mock.patch.object(subcloud_manager.SubcloudManager,
'_create_intermediate_ca_cert')
@mock.patch.object(cutils, 'delete_subcloud_inventory')
@mock.patch.object(subcloud_manager, 'OpenStackDriver')
@mock.patch.object(subcloud_manager, 'SysinvClient')
@mock.patch.object(subcloud_manager.SubcloudManager,
'_get_cached_regionone_data')
@mock.patch.object(subcloud_manager.SubcloudManager,
'_create_addn_hosts_dc')
@mock.patch.object(cutils, 'create_subcloud_inventory')
@mock.patch.object(subcloud_manager.SubcloudManager,
'_write_subcloud_ansible_config')
@mock.patch.object(subcloud_manager,
'keyring')
def test_subcloud_deploy_create(self, mock_keyring,
mock_write_subcloud_ansible_config,
mock_create_subcloud_inventory,
mock_create_addn_hosts,
mock_get_cached_regionone_data,
mock_sysinv_client,
mock_keystone_client,
mock_delete_subcloud_inventory,
mock_create_intermediate_ca_cert):
values = utils.create_subcloud_dict(base.SUBCLOUD_SAMPLE_DATA_0)
values['deploy_status'] = consts.DEPLOY_STATE_NONE
# dcmanager subcloud_deploy_create queries the data from the db
subcloud = self.create_subcloud_static(self.ctx, name=values['name'])
values['id'] = subcloud.id
mock_keystone_client().keystone_client = FakeKeystoneClient()
mock_keyring.get_password.return_value = "testpassword"
mock_get_cached_regionone_data.return_value = FAKE_CACHED_REGIONONE_DATA
sm = subcloud_manager.SubcloudManager()
subcloud_dict = sm.subcloud_deploy_create(self.ctx, subcloud.id,
payload=values)
mock_get_cached_regionone_data.assert_called_once()
mock_sysinv_client().create_route.assert_called()
self.fake_dcorch_api.add_subcloud.assert_called_once()
mock_create_addn_hosts.assert_called_once()
mock_create_subcloud_inventory.assert_called_once()
mock_write_subcloud_ansible_config.assert_called_once()
mock_keyring.get_password.assert_called()
mock_create_intermediate_ca_cert.assert_called_once()
# Verify subcloud was updated with correct values
self.assertEqual(consts.DEPLOY_STATE_CREATED,
subcloud_dict['deploy-status'])
# Verify the subcloud deploy status was updated in the db
updated_subcloud = db_api.subcloud_get_by_name(self.ctx, values['name'])
self.assertEqual(consts.DEPLOY_STATE_CREATED,
updated_subcloud.deploy_status)
@mock.patch.object(subcloud_manager, 'OpenStackDriver')
def test_subcloud_deploy_create_failed(self, mock_keystone_client):
values = utils.create_subcloud_dict(base.SUBCLOUD_SAMPLE_DATA_0)
values['deploy_status'] = consts.DEPLOY_STATE_NONE
# dcmanager subcloud_deploy_create queries the data from the db
subcloud = self.create_subcloud_static(self.ctx, name=values['name'])
values['id'] = subcloud.id
mock_keystone_client.side_effect = FakeException('boom')
sm = subcloud_manager.SubcloudManager()
subcloud_dict = sm.subcloud_deploy_create(self.ctx, subcloud.id,
payload=values)
# Verify subcloud was updated with correct values
self.assertEqual(consts.DEPLOY_STATE_CREATE_FAILED,
subcloud_dict['deploy-status'])
# Verify the subcloud deploy status was updated in the db
updated_subcloud = db_api.subcloud_get_by_name(self.ctx, values['name'])
self.assertEqual(consts.DEPLOY_STATE_CREATE_FAILED,
updated_subcloud.deploy_status)
@mock.patch.object(subcloud_manager.SubcloudManager,
'compose_apply_command')
@mock.patch.object(subcloud_manager.SubcloudManager,
@@ -1957,7 +2036,8 @@ class TestSubcloudManager(base.DCManagerTestCase):
sm = subcloud_manager.SubcloudManager()
sm._backup_subcloud(self.ctx, payload=values, subcloud=subcloud)
mock_create_subcloud_inventory_file.side_effort = Exception('FakeFailure')
mock_create_subcloud_inventory_file.side_effect = Exception(
'FakeFailure')
updated_subcloud = db_api.subcloud_get_by_name(self.ctx, subcloud.name)
self.assertEqual(consts.BACKUP_STATE_PREP_FAILED,