Deprecate dcmanager subcloud restore API

With the new dcmanager subcloud-backup restore command [1][2], the
legacy restore code is no longer needed. This change removes all code
related to the legacy restore and returns HTTP 410 (Gone) when a user
tries to access the old API.

1: https://review.opendev.org/c/starlingx/distcloud/+/862431
2: https://review.opendev.org/c/starlingx/distcloud/+/860598
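For reference, a minimal client-side sketch of the behaviour change.
The endpoint paths are the ones touched by this change; the dcmanager
host/port and token handling are illustrative placeholders only:

    import requests

    DCMANAGER_API = "http://<dcmanager-host>:8119/v1.0"  # assumed endpoint
    HEADERS = {"X-Auth-Token": "<keystone-token>"}       # assumed auth

    # The legacy per-subcloud restore verb now answers 410 (Gone).
    resp = requests.patch(DCMANAGER_API + "/subclouds/subcloud1/restore",
                          headers=HEADERS)
    print(resp.status_code)  # 410
    # Clients should use the subcloud-backup restore API instead:
    # /v1.0/subcloud-backup/restore (method and body as documented in [1][2]).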

Test Plan:
PASS - Test add subcloud

Story: 2010116
Task: 46763

Signed-off-by: Hugo Brito <hugo.brito@windriver.com>
Change-Id: I185464424da7b853a644ec905bdb383ce5c857f9
Hugo Brito 2022-11-08 11:55:21 -03:00
parent 8c199722a7
commit 183e62525c
9 changed files with 28 additions and 727 deletions


@ -26,7 +26,7 @@ This operation does not accept a request body.
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -57,7 +57,7 @@ This operation does not accept a request body.
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -171,7 +171,7 @@ Request Example
- management-gateway-ip: management_gateway_ip
- management-start-ip: management_start_ip
- management-end-ip: management_end_ip
Response Example
----------------
@ -191,7 +191,7 @@ Shows information about a specific subcloud
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -253,7 +253,7 @@ Shows additional information about a specific subcloud
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -525,79 +525,8 @@ Response Example
.. literalinclude:: samples/subclouds/subcloud-patch-reinstall-response.json
:language: json
********************************************************
Restores a specific subcloud from platform backup data
********************************************************
.. rest_method:: PATCH /v1.0/subclouds/{subcloud}/restore
Accepts Content-Type multipart/form-data.
**Normal response codes**
200
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403), badMethod (405),
HTTPUnprocessableEntity (422), internalServerError (500),
serviceUnavailable (503)
**Request parameters**
.. rest_parameters:: parameters.yaml
- subcloud: subcloud_uri
- restore_values: restore_values
- sysadmin_password: sysadmin_password
- with_install: with_install
Request Example
----------------
.. literalinclude:: samples/subclouds/subcloud-patch-restore-request.json
:language: json
**Response parameters**
.. rest_parameters:: parameters.yaml
- id: subcloud_id
- group_id: group_id
- name: subcloud_name
- description: subcloud_description
- location: subcloud_location
- software-version: software_version
- availability-status: availability_status
- error-description: error_description
- deploy-status: deploy_status
- backup-status: backup_status
- backup-datetime: backup_datetime
- openstack-installed: openstack_installed
- management-state: management_state
- systemcontroller-gateway-ip: systemcontroller_gateway_ip
- management-start-ip: management_start_ip
- management-end-ip: management_end_ip
- management-subnet: management_subnet
- management-gateway-ip: management_gateway_ip
- created-at: created_at
- updated-at: updated_at
- data_install: data_install
- data_upgrade: data_upgrade
- endpoint_sync_status: endpoint_sync_status
- sync_status: sync_status
- endpoint_type: sync_status_type
Response Example
----------------
.. literalinclude:: samples/subclouds/subcloud-patch-restore-response.json
:language: json
*****************************************
Update the status of a specific subcloud
*****************************************
.. rest_method:: PATCH /v1.0/subclouds/{subcloud}/update_status
@ -680,7 +609,7 @@ This operation does not accept a request body.
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -767,7 +696,7 @@ Shows information about a specific subcloud group
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -810,7 +739,7 @@ Shows subclouds that are part of a subcloud group
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -1198,7 +1127,7 @@ Shows the details of the update strategy
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -1303,7 +1232,7 @@ Deletes the update strategy
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -1410,7 +1339,7 @@ This operation does not accept a request body.
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -1448,7 +1377,7 @@ Shows the details of patch strategy steps for a particular cloud
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -1500,7 +1429,7 @@ This operation does not accept a request body.
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)
@ -1539,7 +1468,7 @@ Shows sw-update options (defaults or per subcloud). Use ``RegionOne`` as subclou
**Error response codes**
badRequest (400), unauthorized (401), forbidden (403),
itemNotFound (404), badMethod (405), HTTPUnprocessableEntity (422),
internalServerError (500), serviceUnavailable (503)


@ -1,5 +0,0 @@
{
"sysadmin_password": "XXXXXXX",
"restore_values": "path to some file",
"with_install": false
}


@ -1,25 +0,0 @@
{
"id": 1,
"name": "subcloud1",
"created-at": "2021-11-08T18:41:19.530228",
"updated-at": "2021-11-15T14:15:59.944851",
"availability-status": "offline",
"data_install": {
"bootstrap_interface": "eno1"
},
"data_upgrade": null,
"deploy-status": "pre-restore",
"backup-status": "complete",
"backup-datetime": "2022-07-08 11:23:58.132134",
"description": "Ottawa Site",
"group_id": 1,
"location": "YOW",
"management-end-ip": "192.168.101.50",
"management-gateway-ip": "192.168.101.1",
"management-start-ip": "192.168.101.2",
"management-state": "unmanaged",
"management-subnet": "192.168.101.0/24",
"openstack-installed": false,
"software-version": "21.12",
"systemcontroller-gateway-ip": "192.168.204.101",
}


@ -68,7 +68,6 @@ LOCK_NAME = 'SubcloudsController'
BOOTSTRAP_VALUES = 'bootstrap_values'
INSTALL_VALUES = 'install_values'
RESTORE_VALUES = 'restore_values'
SUBCLOUD_ADD_MANDATORY_FILE = [
BOOTSTRAP_VALUES,
@ -78,10 +77,6 @@ SUBCLOUD_RECONFIG_MANDATORY_FILE = [
consts.DEPLOY_CONFIG,
]
SUBCLOUD_RESTORE_MANDATORY_FILE = [
RESTORE_VALUES,
]
SUBCLOUD_ADD_GET_FILE_CONTENTS = [
BOOTSTRAP_VALUES,
INSTALL_VALUES,
@ -98,19 +93,6 @@ INSTALL_VALUES_ADDRESSES = [
'network_address'
]
# The following parameters can be provided by the user for
# remote subcloud restore
# - initial_backup_dir (default to /opt/platform-backup)
# - backup_filename (mandatory parameter)
# - ansible_ssh_pass (sysadmin_password)
# - ansible_become_pass (sysadmin_password)
# - on_box_data (default to true)
# - wipe_ceph_osds (default to false)
# - ansible_remote_tmp (default to /tmp)
MANDATORY_RESTORE_VALUES = [
'backup_filename',
]
def _get_multipart_field_name(part):
content = part.headers[b"Content-Disposition"].decode("utf8")
@ -278,32 +260,6 @@ class SubcloudsController(object):
self._get_common_deploy_files(payload)
return payload
@staticmethod
def _get_restore_payload(request):
payload = dict()
for f in SUBCLOUD_RESTORE_MANDATORY_FILE:
if f not in request.POST:
pecan.abort(400, _("Missing required file for %s") % f)
multipart_data = decoder.MultipartDecoder(
request.body, pecan.request.headers.get('Content-Type'))
for f in SUBCLOUD_RESTORE_MANDATORY_FILE:
for part in multipart_data.parts:
for hk, hv in part.headers.items():
hv = hv.decode('utf8')
if hk.decode('utf8') == 'Content-Disposition':
if f in hv:
file_item = request.POST[f]
file_item.file.seek(0, os.SEEK_SET)
data = yaml.safe_load(
file_item.file.read().decode('utf8'))
payload.update({RESTORE_VALUES: data})
elif "sysadmin_password" in hv:
payload.update({'sysadmin_password': part.content})
elif "with_install" in hv:
payload.update({'with_install': part.content})
return payload
def _get_config_file_path(self, subcloud_name, config_file_type=None):
if config_file_type == consts.DEPLOY_CONFIG:
file_path = os.path.join(
@ -694,14 +650,6 @@ class SubcloudsController(object):
return True
@staticmethod
def _validate_restore_values(payload):
"""Validate the restore values to ensure parameters for remote restore are present"""
restore_values = payload.get(RESTORE_VALUES)
for p in MANDATORY_RESTORE_VALUES:
if p not in restore_values:
pecan.abort(400, _('Mandatory restore value %s not present') % p)
def _get_subcloud_users(self):
"""Get the subcloud users and passwords from keyring"""
DEFAULT_SERVICE_PROJECT_NAME = 'services'
@ -1354,89 +1302,8 @@ class SubcloudsController(object):
LOG.exception("Unable to reinstall subcloud %s" % subcloud.name) LOG.exception("Unable to reinstall subcloud %s" % subcloud.name)
pecan.abort(500, _('Unable to reinstall subcloud')) pecan.abort(500, _('Unable to reinstall subcloud'))
elif verb == "restore": elif verb == "restore":
payload = self._get_restore_payload(request) pecan.abort(410, _('This API is deprecated. '
if not payload: 'Please use /v1.0/subcloud-backup/restore'))
pecan.abort(400, _('Body required'))
if subcloud.management_state != dccommon_consts.MANAGEMENT_UNMANAGED:
pecan.abort(400, _('Subcloud can not be restored while it is still '
'in managed state. Please unmanage the subcloud '
'and try again.'))
elif subcloud.deploy_status in [consts.DEPLOY_STATE_INSTALLING,
consts.DEPLOY_STATE_BOOTSTRAPPING,
consts.DEPLOY_STATE_DEPLOYING]:
pecan.abort(400, _('This operation is not allowed while subcloud install, '
'bootstrap or deploy is in progress.'))
sysadmin_password = payload.get('sysadmin_password')
if not sysadmin_password:
pecan.abort(400, _('subcloud sysadmin_password required'))
try:
payload['sysadmin_password'] = base64.b64decode(
sysadmin_password).decode('utf-8')
except Exception:
msg = _('Failed to decode subcloud sysadmin_password, '
'verify the password is base64 encoded')
LOG.exception(msg)
pecan.abort(400, msg)
with_install = payload.get('with_install')
if with_install is not None:
if with_install == 'true' or with_install == 'True':
payload.update({'with_install': True})
elif with_install == 'false' or with_install == 'False':
payload.update({'with_install': False})
else:
pecan.abort(400, _('Invalid with_install value'))
self._validate_restore_values(payload)
if with_install:
# Request to remote install as part of subcloud restore. Confirm the
# subcloud install data in the db still contain the required parameters
# for remote install.
install_values = self._get_subcloud_db_install_values(subcloud)
payload.update({
'install_values': install_values,
})
# Get the active system controller load is still in dc-vault
matching_iso, err_msg = utils.get_matching_iso()
if err_msg:
LOG.exception(err_msg)
pecan.abort(400, _(err_msg))
else:
# Not Redfish capable subcloud. The subcloud has been reinstalled
# and required patches have been applied.
#
# Pseudo code:
# - Retrieve install_values of the subcloud from the database.
# If it does not exist, try to retrieve the bootstrap address
# from its ansible inventory file (/var/opt/dc/ansible).
# - If the bootstrap address can be obtained, add install_values
# to the payload and continue.
# - If the bootstrap address cannot be obtained, abort with an
# error message advising the user to run "dcmanager subcloud
# update --bootstrap-address <bootstrap_address>" command
msg = _('This operation is not yet supported for subclouds without '
'remote install capability.')
LOG.exception(msg)
pecan.abort(400, msg)
try:
self.dcmanager_rpc_client.restore_subcloud(context,
subcloud_id,
payload)
# Return deploy_status as pre-restore
subcloud.deploy_status = consts.DEPLOY_STATE_PRE_RESTORE
return db_api.subcloud_db_model_to_dict(subcloud)
except RemoteError as e:
pecan.abort(422, e.value)
except Exception:
LOG.exception("Unable to restore subcloud %s" % subcloud.name)
pecan.abort(500, _('Unable to restore subcloud'))
+pecan.abort(410, _('This API is deprecated. '
+'Please use /v1.0/subcloud-backup/restore'))
elif verb == 'update_status':
res = self.updatestatus(subcloud.name)
return res


@ -162,14 +162,6 @@ class DCManagerService(service.Service):
(entity, (payload.get('subcloud') or payload.get('group'))))
return self.subcloud_manager.restore_subcloud_backups(context, payload)
@request_context
def restore_subcloud(self, context, subcloud_id, payload):
# Restore a subcloud
LOG.info("Handling restore_subcloud request for: %s" % subcloud_id)
return self.subcloud_manager.restore_subcloud(context,
subcloud_id,
payload)
@request_context
def update_subcloud_sync_endpoint_type(self, context, subcloud_name,
endpoint_type_list,


@ -21,16 +21,17 @@ import datetime
import filecmp
import functools
import json
-import keyring
-import netaddr
import os
import threading
import time
from eventlet import greenpool
+from fm_api import constants as fm_const
+from fm_api import fm_api
+import keyring
+import netaddr
from oslo_log import log as logging
from oslo_messaging import RemoteError
from tsconfig.tsconfig import CONFIG_PATH
from tsconfig.tsconfig import SW_VERSION
@ -41,25 +42,20 @@ from dccommon.exceptions import PlaybookExecutionFailed
from dccommon import kubeoperator
from dccommon.subcloud_install import SubcloudInstall
from dccommon.utils import run_playbook
-from dcmanager.common.exceptions import DCManagerException
-from dcmanager.db.sqlalchemy.models import Subcloud
-from dcorch.rpc import client as dcorch_rpc_client
from dcmanager.audit import rpcapi as dcmanager_audit_rpc_client
from dcmanager.common import consts
from dcmanager.common.consts import INVENTORY_FILE_POSTFIX
from dcmanager.common import context as dcmanager_context
from dcmanager.common import exceptions
+from dcmanager.common.exceptions import DCManagerException
from dcmanager.common.i18n import _
from dcmanager.common import manager
from dcmanager.common import prestage
from dcmanager.common import utils
from dcmanager.db import api as db_api
+from dcmanager.db.sqlalchemy.models import Subcloud
from dcmanager.rpc import client as dcmanager_rpc_client
+from dcorch.rpc import client as dcorch_rpc_client
-from fm_api import constants as fm_const
-from fm_api import fm_api
LOG = logging.getLogger(__name__)
@ -74,16 +70,12 @@ ANSIBLE_SUBCLOUD_BACKUP_DELETE_PLAYBOOK = \
'/usr/share/ansible/stx-ansible/playbooks/delete_subcloud_backup.yml'
ANSIBLE_SUBCLOUD_BACKUP_RESTORE_PLAYBOOK = \
'/usr/share/ansible/stx-ansible/playbooks/restore_subcloud_backup.yml'
ANSIBLE_HOST_VALIDATION_PLAYBOOK = \
'/usr/share/ansible/stx-ansible/playbooks/validate_host.yml'
ANSIBLE_SUBCLOUD_PLAYBOOK = \
'/usr/share/ansible/stx-ansible/playbooks/bootstrap.yml'
ANSIBLE_SUBCLOUD_INSTALL_PLAYBOOK = \
'/usr/share/ansible/stx-ansible/playbooks/install.yml'
ANSIBLE_SUBCLOUD_REHOME_PLAYBOOK = \
'/usr/share/ansible/stx-ansible/playbooks/rehome_subcloud.yml'
ANSIBLE_SUBCLOUD_RESTORE_PLAYBOOK = \
'/usr/share/ansible/stx-ansible/playbooks/restore_platform.yml'
USERS_TO_REPLICATE = [
'sysinv',
@ -92,7 +84,8 @@ USERS_TO_REPLICATE = [
'mtce',
'fm',
'barbican',
-'dcmanager']
+'dcmanager'
+]
# The timeout of the rehome playbook is set to 180 seconds as it takes a
# long time for privilege escalation before resetting the host route and
@ -252,28 +245,6 @@ class SubcloudManager(manager.Manager):
]
return deploy_command
def compose_check_target_command(self, subcloud_name,
ansible_subcloud_inventory_file, payload):
check_target_command = [
"ansible-playbook", ANSIBLE_HOST_VALIDATION_PLAYBOOK,
"-i", ansible_subcloud_inventory_file,
"--limit", subcloud_name,
"-e", "@%s" % consts.ANSIBLE_OVERRIDES_PATH + "/" +
subcloud_name + "_check_target_values.yml"]
return check_target_command
def compose_restore_command(self, subcloud_name,
ansible_subcloud_inventory_file, payload):
restore_command = [
"ansible-playbook", ANSIBLE_SUBCLOUD_RESTORE_PLAYBOOK,
"-i", ansible_subcloud_inventory_file,
"--limit", subcloud_name,
"-e", "@%s" % consts.ANSIBLE_OVERRIDES_PATH + "/" +
subcloud_name + "_restore_values.yml"]
return restore_command
def compose_backup_command(self, subcloud_name, ansible_subcloud_inventory_file):
backup_command = [
"ansible-playbook", ANSIBLE_SUBCLOUD_BACKUP_CREATE_PLAYBOOK,
@ -479,7 +450,7 @@ class SubcloudManager(manager.Manager):
apply_thread = threading.Thread(
target=self.run_deploy,
args=(subcloud, payload, context,
-None, None, None, None, None, rehome_command))
+None, None, None, rehome_command))
else:
install_command = None
if "install_values" in payload:
@ -622,52 +593,6 @@ class SubcloudManager(manager.Manager):
context, subcloud_id,
deploy_status=consts.DEPLOY_STATE_PRE_INSTALL_FAILED)
def _create_check_target_override_file(self, payload, subcloud_name):
check_target_override_file = os.path.join(
consts.ANSIBLE_OVERRIDES_PATH, subcloud_name +
'_check_target_values.yml')
with open(check_target_override_file, 'w') as f_out:
f_out.write(
'---\n'
)
for k, v in payload['check_target_values'].items():
f_out.write("%s: %s\n" % (k, json.dumps(v)))
def _create_restore_override_file(self, payload, subcloud_name):
restore_override_file = os.path.join(
consts.ANSIBLE_OVERRIDES_PATH, subcloud_name +
'_restore_values.yml')
with open(restore_override_file, 'w') as f_out:
f_out.write(
'---\n'
)
for k, v in payload['restore_values'].items():
f_out.write("%s: %s\n" % (k, json.dumps(v)))
def _prepare_for_restore(self, payload, subcloud_name):
payload['check_target_values'] = dict()
payload['check_target_values']['ansible_ssh_pass'] = \
payload['sysadmin_password']
payload['check_target_values']['software_version'] = SW_VERSION
payload['check_target_values']['bootstrap_address'] = \
payload['bootstrap-address']
payload['check_target_values']['check_bootstrap_address'] = 'true'
payload['check_target_values']['check_patches'] = 'false'
self._create_check_target_override_file(payload, subcloud_name)
payload['restore_values']['ansible_ssh_pass'] = \
payload['sysadmin_password']
payload['restore_values']['ansible_become_pass'] = \
payload['sysadmin_password']
payload['restore_values']['admin_password'] = \
str(keyring.get_password('CGCS', 'admin'))
payload['restore_values']['skip_patches_restore'] = 'true'
self._create_restore_override_file(payload, subcloud_name)
def create_subcloud_backups(self, context, payload):
"""Backup subcloud or group of subclouds
@ -1211,87 +1136,12 @@ class SubcloudManager(manager.Manager):
except Exception as e:
LOG.exception(e)
def restore_subcloud(self, context, subcloud_id, payload):
"""Restore subcloud
:param context: request context object
:param subcloud_id: subcloud id from db
:param payload: subcloud restore detail
"""
# Retrieve the subcloud details from the database
subcloud = db_api.subcloud_get(context, subcloud_id)
if subcloud.management_state != dccommon_consts.MANAGEMENT_UNMANAGED:
raise exceptions.SubcloudNotUnmanaged()
db_api.subcloud_update(context, subcloud_id,
deploy_status=consts.DEPLOY_STATE_PRE_RESTORE)
try:
# Ansible inventory filename for the specified subcloud
ansible_subcloud_inventory_file = self._get_ansible_filename(
subcloud.name, INVENTORY_FILE_POSTFIX)
# Add parameters used to generate inventory
payload['name'] = subcloud.name
payload['bootstrap-address'] = \
payload['install_values']['bootstrap_address']
payload['software_version'] = SW_VERSION
install_command = None
if payload['with_install']:
# Redfish capable subclouds
LOG.info("Reinstalling subcloud %s." % subcloud.name)
# Disegard the current 'image' config. Always reinstall with
# the system controller active image in dc-vault.
matching_iso, matching_sig = utils.get_vault_load_files(SW_VERSION)
payload['install_values'].update({'image': matching_iso})
payload['install_values']['ansible_ssh_pass'] = \
payload['sysadmin_password']
utils.create_subcloud_inventory(payload,
ansible_subcloud_inventory_file)
install_command = self.compose_install_command(
subcloud.name, ansible_subcloud_inventory_file)
else:
# Non Redfish capable subcloud
# Shouldn't get here as the API has already rejected the request.
return
# Prepare for restore
self._prepare_for_restore(payload, subcloud.name)
check_target_command = self.compose_check_target_command(
subcloud.name, ansible_subcloud_inventory_file, payload)
restore_command = self.compose_restore_command(
subcloud.name, ansible_subcloud_inventory_file, payload)
apply_thread = threading.Thread(
target=self.run_deploy,
args=(subcloud, payload, context,
install_command, None, None, check_target_command, restore_command))
apply_thread.start()
return db_api.subcloud_db_model_to_dict(subcloud)
except Exception:
LOG.exception("Failed to restore subcloud %s" % subcloud.name)
db_api.subcloud_update(
context, subcloud_id,
deploy_status=consts.DEPLOY_STATE_RESTORE_PREP_FAILED)
# TODO(kmacleod) add outer try/except here to catch and log unexpected
# exception. As this stands, any uncaught exception is a silent (unlogged)
# failure
def run_deploy(self, subcloud, payload, context,
install_command=None, apply_command=None,
-deploy_command=None, check_target_command=None,
-restore_command=None, rehome_command=None):
+deploy_command=None, rehome_command=None):
log_file = os.path.join(consts.DC_ANSIBLE_LOG_DIR, subcloud.name) + \
'_playbook_output.log'
@ -1302,28 +1152,6 @@ class SubcloudManager(manager.Manager):
)
if not install_success:
return
# Leave the following block here in case there is another use
# case besides subcloud restore where validating host post
# fresh install is necessary.
if check_target_command:
try:
run_playbook(log_file, check_target_command)
except PlaybookExecutionFailed:
msg = "Failed to run the validate host playbook" \
" for subcloud %s, check individual log at " \
"%s for detailed output." % (
subcloud.name,
log_file)
LOG.error(msg)
if restore_command:
db_api.subcloud_update(
context, subcloud.id,
deploy_status=consts.DEPLOY_STATE_RESTORE_PREP_FAILED)
return
LOG.info("Successfully checked subcloud %s" % subcloud.name)
if apply_command:
try:
# Update the subcloud to bootstrapping
@ -1349,7 +1177,6 @@ class SubcloudManager(manager.Manager):
error_description=msg[0:consts.ERROR_DESCRIPTION_LENGTH])
return
LOG.info("Successfully bootstrapped %s" % subcloud.name)
if deploy_command:
# Run the custom deploy playbook
LOG.info("Starting deploy of %s" % subcloud.name)
@ -1370,28 +1197,6 @@ class SubcloudManager(manager.Manager):
error_description=msg[0:consts.ERROR_DESCRIPTION_LENGTH])
return
LOG.info("Successfully deployed %s" % subcloud.name)
elif restore_command:
db_api.subcloud_update(
context, subcloud.id,
deploy_status=consts.DEPLOY_STATE_RESTORING)
# Run the restore platform playbook
try:
run_playbook(log_file, restore_command)
except PlaybookExecutionFailed:
msg = "Failed to run the subcloud restore playbook" \
" for subcloud %s, check individual log at " \
"%s for detailed output." % (
subcloud.name,
log_file)
LOG.error(msg)
db_api.subcloud_update(
context, subcloud.id,
deploy_status=consts.DEPLOY_STATE_RESTORE_FAILED)
return
LOG.info("Successfully restored controller-0 of subcloud %s" %
subcloud.name)
if rehome_command:
# Update the deploy status to rehoming
db_api.subcloud_update(

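With check_target_command and restore_command gone, run_deploy now drives only
four optional stages. The standalone sketch below mirrors the new signature and
the updated threading.Thread() call shown earlier; it is illustrative only (the
real method runs ansible playbooks and updates the database):

    def run_deploy(subcloud, payload, context,
                   install_command=None, apply_command=None,
                   deploy_command=None, rehome_command=None):
        # Stages run in this order; a stage is skipped when its command is None.
        stages = (("install", install_command),
                  ("bootstrap", apply_command),
                  ("deploy", deploy_command),
                  ("rehome", rehome_command))
        for label, command in stages:
            if command:
                print("would run %s playbook: %s" % (label, command))

    # Rehome-only caller, matching the updated positional arguments:
    #   threading.Thread(target=self.run_deploy,
    #                    args=(subcloud, payload, context,
    #                          None, None, None, rehome_command))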

@ -158,11 +158,6 @@ class ManagerClient(RPCClient):
release_version=release_version,
payload=payload))
def restore_subcloud(self, ctxt, subcloud_id, payload):
return self.cast(ctxt, self.make_msg('restore_subcloud',
subcloud_id=subcloud_id,
payload=payload))
def restore_subcloud_backups(self, ctxt, payload):
return self.cast(ctxt, self.make_msg('restore_subcloud_backups',
payload=payload))


@ -821,14 +821,6 @@ class TestSubcloudPost(testroot.DCManagerApiTest,
class TestSubcloudAPIOther(testroot.DCManagerApiTest):
FAKE_RESTORE_PAYLOAD = {
'sysadmin_password':
(base64.b64encode('testpass'.encode("utf-8"))).decode('ascii'),
'with_install': 'true',
'restore_values': {'on_box_data': 'false',
'backup_filename': 'some_fake_tarfile'}
}
"""Test GET, delete and patch API calls""" """Test GET, delete and patch API calls"""
def setUp(self): def setUp(self):
super(TestSubcloudAPIOther, self).setUp() super(TestSubcloudAPIOther, self).setUp()
@ -1582,144 +1574,6 @@ class TestSubcloudAPIOther(testroot.DCManagerApiTest):
str(subcloud.id) + '/reinstall',
headers=FAKE_HEADERS, params=reinstall_data)
@mock.patch.object(rpc_client, 'ManagerClient')
@mock.patch.object(subclouds.SubcloudsController, '_get_restore_payload')
def test_restore_subcloud_no_body(self, mock_get_restore_payload,
mock_rpc_client):
subcloud = fake_subcloud.create_fake_subcloud(self.ctx)
restore_payload = {}
mock_rpc_client().restore_subcloud.return_value = True
mock_get_restore_payload.return_value = restore_payload
six.assertRaisesRegex(self, webtest.app.AppError, "400 *",
self.app.patch_json, FAKE_URL + '/' +
str(subcloud.id) + '/restore',
headers=FAKE_HEADERS, params=restore_payload)
@mock.patch.object(rpc_client, 'ManagerClient')
def test_restore_subcloud_missing_restore_values(self, mock_rpc_client):
subcloud = fake_subcloud.create_fake_subcloud(self.ctx)
restore_payload = copy.copy(self.FAKE_RESTORE_PAYLOAD)
del restore_payload['restore_values']
mock_rpc_client().restore_subcloud.return_value = True
six.assertRaisesRegex(self, webtest.app.AppError, "400 *",
self.app.patch_json, FAKE_URL + '/' +
str(subcloud.id) + '/restore',
headers=FAKE_HEADERS, params=restore_payload)
@mock.patch.object(rpc_client, 'ManagerClient')
@mock.patch.object(subclouds.SubcloudsController, '_get_restore_payload')
def test_restore_subcloud_in_managed_state(self, mock_get_restore_payload,
mock_rpc_client):
subcloud = fake_subcloud.create_fake_subcloud(self.ctx)
db_api.subcloud_update(self.ctx,
subcloud.id,
management_state=dccommon_consts.MANAGEMENT_MANAGED)
restore_payload = copy.copy(self.FAKE_RESTORE_PAYLOAD)
mock_rpc_client().restore_subcloud.return_value = True
mock_get_restore_payload.return_value = restore_payload
six.assertRaisesRegex(self, webtest.app.AppError, "400 *",
self.app.patch_json, FAKE_URL + '/' +
str(subcloud.id) + '/restore',
headers=FAKE_HEADERS, params=restore_payload)
@mock.patch.object(rpc_client, 'ManagerClient')
@mock.patch.object(subclouds.SubcloudsController, '_get_restore_payload')
def test_restore_subcloud_undergoing_bootstrap(self, mock_get_restore_payload,
mock_rpc_client):
subcloud = fake_subcloud.create_fake_subcloud(self.ctx)
db_api.subcloud_update(self.ctx,
subcloud.id,
deploy_status=consts.DEPLOY_STATE_BOOTSTRAPPING)
restore_payload = copy.copy(self.FAKE_RESTORE_PAYLOAD)
mock_rpc_client().restore_subcloud.return_value = True
mock_get_restore_payload.return_value = restore_payload
six.assertRaisesRegex(self, webtest.app.AppError, "400 *",
self.app.patch_json, FAKE_URL + '/' +
str(subcloud.id) + '/restore',
headers=FAKE_HEADERS, params=restore_payload)
@mock.patch.object(rpc_client, 'ManagerClient')
@mock.patch.object(subclouds.SubcloudsController, '_get_restore_payload')
def test_restore_subcloud_bad_sysadmin_password(self, mock_get_restore_payload,
mock_rpc_client):
subcloud = fake_subcloud.create_fake_subcloud(self.ctx)
restore_payload = copy.copy(self.FAKE_RESTORE_PAYLOAD)
restore_payload['sysadmin_password'] = 'not_base64_encoded'
mock_rpc_client().restore_subcloud.return_value = True
mock_get_restore_payload.return_value = restore_payload
six.assertRaisesRegex(self, webtest.app.AppError, "400 *",
self.app.patch_json, FAKE_URL + '/' +
str(subcloud.id) + '/restore',
headers=FAKE_HEADERS, params=restore_payload)
@mock.patch.object(rpc_client, 'ManagerClient')
@mock.patch.object(subclouds.SubcloudsController, '_get_restore_payload')
def test_restore_subcloud_without_remote_install(self, mock_get_restore_payload,
mock_rpc_client):
subcloud = fake_subcloud.create_fake_subcloud(self.ctx)
restore_payload = copy.copy(self.FAKE_RESTORE_PAYLOAD)
del restore_payload['with_install']
mock_rpc_client().restore_subcloud.return_value = True
mock_get_restore_payload.return_value = restore_payload
six.assertRaisesRegex(self, webtest.app.AppError, "400 *",
self.app.patch_json, FAKE_URL + '/' +
str(subcloud.id) + '/restore',
headers=FAKE_HEADERS, params=restore_payload)
@mock.patch.object(rpc_client, 'ManagerClient')
@mock.patch.object(subclouds.SubcloudsController, '_get_restore_payload')
def test_restore_subcloud_missing_mandatory_restore_parameter(
self, mock_get_restore_payload, mock_rpc_client):
subcloud = fake_subcloud.create_fake_subcloud(self.ctx)
restore_payload = copy.copy(self.FAKE_RESTORE_PAYLOAD)
restore_payload['restore_values'] = {'on_box_data': 'false'}
mock_rpc_client().restore_subcloud.return_value = True
mock_get_restore_payload.return_value = restore_payload
six.assertRaisesRegex(self, webtest.app.AppError, "400 *",
self.app.patch_json, FAKE_URL + '/' +
str(subcloud.id) + '/restore',
headers=FAKE_HEADERS, params=restore_payload)
@mock.patch.object(cutils, 'get_vault_load_files')
@mock.patch.object(rpc_client, 'ManagerClient')
@mock.patch.object(subclouds.SubcloudsController, '_get_subcloud_db_install_values')
@mock.patch.object(subclouds.SubcloudsController, '_get_restore_payload')
def test_restore_subcloud(self, mock_get_restore_payload,
mock_get_subcloud_db_install_values,
mock_rpc_client, mock_get_vault_load_files):
subcloud = fake_subcloud.create_fake_subcloud(self.ctx)
install_data = copy.copy(FAKE_SUBCLOUD_INSTALL_VALUES)
restore_payload = copy.copy(self.FAKE_RESTORE_PAYLOAD)
mock_get_subcloud_db_install_values.return_value = install_data
mock_rpc_client().restore_subcloud.return_value = True
mock_get_restore_payload.return_value = restore_payload
mock_get_vault_load_files.return_value = ('iso_file_path', 'sig_file_path')
response = self.app.patch_json(FAKE_URL + '/' + str(subcloud.id) +
'/restore',
headers=FAKE_HEADERS,
params=restore_payload)
mock_rpc_client().restore_subcloud.assert_called_once_with(
mock.ANY,
subcloud.id,
mock.ANY)
self.assertEqual(response.status_int, 200)
@mock.patch.object(rpc_client, 'ManagerClient')
@mock.patch.object(prestage, '_get_system_controller_upgrades')
@mock.patch.object(prestage, '_get_prestage_subcloud_info')

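The removed tests only exercised the legacy restore flow. A minimal replacement
test for the new behaviour could reuse the helpers visible in this module
(FAKE_URL, FAKE_HEADERS, fake_subcloud, webtest, six); this is a sketch, not
part of the change:

    @mock.patch.object(rpc_client, 'ManagerClient')
    def test_restore_subcloud_deprecated(self, mock_rpc_client):
        # The deprecated restore verb should now be rejected with 410 (Gone).
        subcloud = fake_subcloud.create_fake_subcloud(self.ctx)
        six.assertRaisesRegex(self, webtest.app.AppError, "410 *",
                              self.app.patch_json, FAKE_URL + '/' +
                              str(subcloud.id) + '/restore',
                              headers=FAKE_HEADERS, params={})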

@ -251,24 +251,6 @@ class FakeException(Exception):
pass
FAKE_RESTORE_VALUES = {
"backup_filename": "subcloud_platform_backup.tgz",
"on_box_data": "false",
"initial_backup_dir": "/home/sysadmin",
"skip_patches_restore": "true"
}
FAKE_SUBCLOUD_RESTORE_PAYLOAD = {
"install_values": fake_subcloud.FAKE_SUBCLOUD_INSTALL_VALUES,
"with_install": True,
"bootstrap-address": "bootstrap_ip",
"software_version": "20.12",
"sysadmin_password": "testpasswd",
"restore_values": FAKE_RESTORE_VALUES
}
FAKE_SUBCLOUD_PRESTAGE_PAYLOAD = {
"install_values": fake_subcloud.FAKE_SUBCLOUD_INSTALL_VALUES,
"subcloud_name": 'subcloud1',
@ -1620,99 +1602,6 @@ class TestSubcloudManager(base.DCManagerTestCase):
self.assertEqual(consts.DEPLOY_STATE_DONE,
subcloud.deploy_status)
def test_compose_check_target_command(self):
sm = subcloud_manager.SubcloudManager()
check_target_command = sm.compose_check_target_command(
'subcloud1', '/var/opt/dc/ansible/subcloud1_inventory.yml',
FAKE_SUBCLOUD_RESTORE_PAYLOAD)
self.assertEqual(
check_target_command,
[
'ansible-playbook',
subcloud_manager.ANSIBLE_HOST_VALIDATION_PLAYBOOK,
'-i', '/var/opt/dc/ansible/subcloud1_inventory.yml',
'--limit', 'subcloud1',
'-e', '@/var/opt/dc/ansible/subcloud1_check_target_values.yml'
]
)
def test_compose_restore_command(self):
sm = subcloud_manager.SubcloudManager()
restore_command = sm.compose_restore_command(
'subcloud1', '/var/opt/dc/ansible/subcloud1_inventory.yml',
FAKE_SUBCLOUD_RESTORE_PAYLOAD)
self.assertEqual(
restore_command,
[
'ansible-playbook',
subcloud_manager.ANSIBLE_SUBCLOUD_RESTORE_PLAYBOOK,
'-i', '/var/opt/dc/ansible/subcloud1_inventory.yml',
'--limit', 'subcloud1',
'-e', '@/var/opt/dc/ansible/subcloud1_restore_values.yml'
]
)
def test_restore_managed_subcloud(self):
subcloud = self.create_subcloud_static(
self.ctx,
name='subcloud1',
deploy_status=consts.DEPLOY_STATE_DONE)
db_api.subcloud_update(self.ctx,
subcloud.id,
management_state=dccommon_consts.MANAGEMENT_MANAGED)
fake_dcmanager_cermon_api = FakeDCManagerNotifications()
p = mock.patch('dcmanager.rpc.client.DCManagerNotifications')
mock_dcmanager_api = p.start()
mock_dcmanager_api.return_value = fake_dcmanager_cermon_api
sm = subcloud_manager.SubcloudManager()
self.assertRaises(exceptions.SubcloudNotUnmanaged,
sm.restore_subcloud, self.ctx,
subcloud.id, FAKE_SUBCLOUD_RESTORE_PAYLOAD)
@mock.patch.object(cutils, 'get_vault_load_files')
@mock.patch.object(cutils, 'create_subcloud_inventory')
@mock.patch.object(
subcloud_manager.SubcloudManager, 'compose_install_command')
@mock.patch.object(subcloud_manager.SubcloudManager,
'_prepare_for_restore')
@mock.patch.object(
subcloud_manager.SubcloudManager, 'compose_check_target_command')
@mock.patch.object(
subcloud_manager.SubcloudManager, 'compose_restore_command')
@mock.patch.object(threading.Thread, 'start')
def test_restore_subcloud(
self, mock_thread_start,
mock_compose_restore_command, mock_compose_check_target_command,
mock_prepare_for_restore, mock_compose_install_command,
mock_create_subcloud_inventory, mock_get_vault_load_files):
subcloud = self.create_subcloud_static(
self.ctx,
name='subcloud1',
deploy_status=consts.DEPLOY_STATE_PRE_RESTORE)
sm = subcloud_manager.SubcloudManager()
mock_get_vault_load_files.return_value = ("iso file path", "sig file path")
sm.restore_subcloud(self.ctx, subcloud.id, FAKE_SUBCLOUD_RESTORE_PAYLOAD)
mock_get_vault_load_files.assert_called_once_with(SW_VERSION)
mock_create_subcloud_inventory.assert_called_once_with(
FAKE_SUBCLOUD_RESTORE_PAYLOAD, mock.ANY)
mock_compose_install_command.assert_called_once_with(subcloud.name, mock.ANY)
mock_compose_check_target_command.assert_called_once_with(
subcloud.name, mock.ANY, FAKE_SUBCLOUD_RESTORE_PAYLOAD)
mock_compose_restore_command.assert_called_once_with(
subcloud.name, mock.ANY, FAKE_SUBCLOUD_RESTORE_PAYLOAD)
mock_thread_start.assert_called_once()
# Verify that subcloud has the correct deploy status
updated_subcloud = db_api.subcloud_get_by_name(self.ctx, subcloud.name)
self.assertEqual(consts.DEPLOY_STATE_PRE_RESTORE,
updated_subcloud.deploy_status)
@mock.patch.object(subcloud_manager.SubcloudManager,
'_run_parallel_group_operation')
def test_backup_create_managed_online(self, mock_parallel_group_operation):