Ceph uprev v13.2.2 Mimic

* ceph: update crushmap for ceph mimic

* puppet-ceph: remove ceph jewel rest-api configuration

    ceph-rest-api functionality is provided by ceph-mgr in Ceph
    Mimic v13.2.2. Remove the configuration that was specific to
    ceph-rest-api in Ceph Jewel v10.2.6.

* puppet-ceph: enable mgr-restful-plugin

    Ceph configuration is under puppet control. The ceph-mgr restful
    plugin is started by the mgr-restful-plugin script.

    Log output when starting mgr-restful-plugin and log the executed
    commands in the puppet log.
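
    For reference, a rough sketch of the equivalent manual steps on
    Mimic (the mgr-restful-plugin script wraps these; port 5001
    matches the endpoint used elsewhere in this change, and exact
    option names may differ by release):

      # enable the ceph-mgr restful plugin and give it a certificate
      ceph mgr module enable restful
      ceph restful create-self-signed-cert
      # create an API key for the admin user
      ceph restful create-key admin
      # bind the plugin to the port sysinv expects
      ceph config set mgr mgr/restful/server_port 5001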

* puppet-ceph: pass osdid to ceph::osd when creating resources

    ceph::osd needs to be created with the same OSD ID that's
    already present in the sysinv database.

* puppet-ceph: update ceph.conf with osd device path

* puppet-ceph: fix aio-dx unlock issue caused by ceph-mon

* puppet-ceph: ensure radosgw systemd service is not started

    Make sure radosgw service is not accidentally
    started by systemd.

* puppet-sm: provision mgr-restful-plugin

    After mgr-restful-plugin is enabled by ceph.pp, SM will
    monitor the mgr-restful-plugin service and control its status.

* sysinv-common: ceph use status instead of overall_status

    'overall_status' is deprecated in Ceph Mimic. Use 'status' instead.
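
    A quick way to check the field sysinv now reads (a sketch,
    assuming a reachable cluster and python available on the
    controller):

      # Mimic keeps 'overall_status' only as a deprecated placeholder;
      # the real health value is under 'status'
      ceph status --format json | \
        python -c 'import json,sys; print(json.load(sys.stdin)["health"]["status"])'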

* sysinv-common: ceph fix incorrect parsing of osd_crush_tree output

    len(body) was used to iterate over the OSD crush tree, which is
    incorrect because the crush tree data is stored in
    body['output'].

* sysinv-common: ceph refactor crushmap_tiers_add

    Refactor crushmap_tiers_add() to always check for and create
    missing Ceph tiers and their corresponding crush rules. This is
    currently gated by tier.status == constants.SB_TIER_STATUS_DEFINED.

* sysinv-conductor: remove api/v0.1 from ceph api endpoint

    "restapi base url"(ceph.conf) is removed from ceph Mimic
    version. remove the base url now.

* sysinv-conductor: ceph convert pool quota None to zero

    On a non-Kubernetes setup kube_pool_gib is None, which
    raises an exception when trying to do integer
    arithmetic.

* sysinv-conductor: remove unused update_ceph_services

    update_ceph_services() triggers the application of runtime
    manifests, but that is no longer supported on stx/containers.

    Remove this dead/unused code.

* helm: rbd-provisioner setup kube-rbd pool

    Ceph Mimic no longer supports "ceph osd pool set <pool-name>
    crush_rule <ruleset>" with a numeric ruleset value. Crush
    rule name should be used instead.

    Starting with Ceph Luminous, pools require an application tag to
    be configured with "ceph osd pool application enable <pool-name>
    <app-name>", otherwise a health warning is reported.

    Enable the "rbd" application on the "kube-rbd" pool.

* sysinv-helm: remove custom ceph_config_helper_image

    Remove the custom ceph config helper image that was needed to
    adapt upstream helm charts to the Ceph Jewel release. Since we
    are now using Ceph Mimic, this helper image is no longer needed.

* sysinv-helm: ceph use rule name instead of id

    The Ceph OSD pool crush_rule is now set by name (the Jewel
    release used a numerical crush ruleset value).
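
    For example (pool name is illustrative):

      # Jewel: numeric ruleset
      #   ceph osd pool set cinder-volumes crush_ruleset 0
      # Mimic: rule name
      ceph osd pool set cinder-volumes crush_rule storage_tier_ruleset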

Story: 2003605
Task: 24932

Signed-off-by: Changcheng Liu <changcheng.liu@intel.com>
Signed-off-by: Ovidiu Poncea <ovidiu.poncea@windriver.com>
Signed-off-by: Dehao Shang <dehao.shang@intel.com>
Signed-off-by: Yong Hu <yong.hu@intel.com>
Signed-off-by: Daniel Badea <daniel.badea@windriver.com>

Depends-On: Ibfbecf0a8beb38009b9d7192ca9455a841402040
Change-Id: Ia322e5468026842d86e738ece82afd803dec315c

@@ -126,11 +126,6 @@ data:
- type: job
labels:
app: rbd-provisioner
values:
images:
tags:
# TODO: Remove after ceph upgrade
rbd_provisioner_storage_init: docker.io/starlingx/stx-ceph-config-helper:master-centos-stable-latest
source:
type: tar
location: http://172.17.0.1/helm_charts/rbd-provisioner-0.1.0.tgz
@@ -529,7 +524,6 @@ data:
bootstrap: docker.io/starlingx/stx-heat:master-centos-stable-latest
db_drop: docker.io/starlingx/stx-heat:master-centos-stable-latest
db_init: docker.io/starlingx/stx-heat:master-centos-stable-latest
glance_storage_init: docker.io/starlingx/stx-ceph-config-helper:master-centos-stable-latest
glance_api: docker.io/starlingx/stx-glance:master-centos-stable-latest
glance_db_sync: docker.io/starlingx/stx-glance:master-centos-stable-latest
glance_registry: docker.io/starlingx/stx-glance:master-centos-stable-latest
@@ -601,10 +595,8 @@ data:
bootstrap: docker.io/starlingx/stx-heat:master-centos-stable-latest
cinder_api: docker.io/starlingx/stx-cinder:master-centos-stable-latest
cinder_backup: docker.io/starlingx/stx-cinder:master-centos-stable-latest
cinder_backup_storage_init: docker.io/starlingx/stx-ceph-config-helper:master-centos-stable-latest
cinder_db_sync: docker.io/starlingx/stx-cinder:master-centos-stable-latest
cinder_scheduler: docker.io/starlingx/stx-cinder:master-centos-stable-latest
cinder_storage_init: docker.io/starlingx/stx-ceph-config-helper:master-centos-stable-latest
cinder_volume: docker.io/starlingx/stx-cinder:master-centos-stable-latest
cinder_volume_usage_audit: docker.io/starlingx/stx-cinder:master-centos-stable-latest
db_drop: docker.io/starlingx/stx-heat:master-centos-stable-latest
@@ -826,7 +818,6 @@ data:
nova_scheduler: docker.io/starlingx/stx-nova:master-centos-stable-latest
nova_spiceproxy: docker.io/starlingx/stx-nova:master-centos-stable-latest
nova_spiceproxy_assets: docker.io/starlingx/stx-nova:master-centos-stable-latest
nova_storage_init: docker.io/starlingx/stx-ceph-config-helper:master-centos-stable-latest
pod:
# TODO(rchurch):
# Change-Id: I5a60efd133c156ce2ecac31d22e94b25e4e837bf broke armada apply

@@ -60,13 +60,12 @@ data:
set -ex
# Get the ruleset from the rule name.
ruleset=$(ceph osd crush rule dump $POOL_CRUSH_RULE_NAME | grep "\"ruleset\":" | grep -Eo '[0-9]*')
# Make sure the pool exists.
ceph osd pool stats $POOL_NAME || ceph osd pool create $POOL_NAME $POOL_CHUNK_SIZE
# Set pool configuration.
ceph osd pool application enable $POOL_NAME rbd
ceph osd pool set $POOL_NAME size $POOL_REPLICATION
ceph osd pool set $POOL_NAME crush_rule $ruleset
ceph osd pool set $POOL_NAME crush_rule $POOL_CRUSH_RULE_NAME
if [[ -z $USER_ID && -z $CEPH_USER_SECRET ]]; then
msg="No need to create secrets for pool $POOL_NAME"

@@ -145,9 +145,6 @@ platform::memcached::params::udp_port: 0
platform::memcached::params::max_connections: 8192
platform::memcached::params::max_memory: 782
# ceph
platform::ceph::params::restapi_public_addr: '127.0.0.1:5001'
# sysinv
sysinv::journal_max_size: 51200
sysinv::journal_min_size: 1024

@@ -35,7 +35,6 @@ class platform::ceph::params(
$rgw_gc_obj_min_wait = '600',
$rgw_gc_processor_max_time = '300',
$rgw_gc_processor_period = '300',
$restapi_public_addr = undef,
$configure_ceph_mon_info = false,
$ceph_config_file = '/etc/ceph/ceph.conf',
$ceph_config_ready_path = '/var/run/.ceph_started',
@@ -70,7 +69,6 @@ class platform::ceph
}
-> ceph_config {
'mon/mon clock drift allowed': value => '.1';
'client.restapi/public_addr': value => $restapi_public_addr;
}
if $system_type == 'All-in-one' {
# 1 and 2 node configurations have a single monitor
@@ -305,6 +303,9 @@ define osd_crush_location(
$journal_path,
$tier_name,
) {
ceph_config{
"osd.${$osd_id}/devs": value => $data_path;
}
# Only set the crush location for additional tiers
if $tier_name != 'storage' {
ceph_config {
@@ -335,7 +336,8 @@ define platform_ceph_osd(
command => template('platform/ceph.osd.create.erb'),
}
-> ceph::osd { $disk_path:
uuid => $osd_uuid,
uuid => $osd_uuid,
osdid => $osd_id,
}
-> exec { "configure journal location ${name}":
logoutput => true,
@@ -414,6 +416,7 @@ class platform::ceph::rgw
rgw_frontends => "${rgw_frontend_type} port=${auth_host}:${rgw_port}",
# service is managed by SM
rgw_enable => false,
rgw_ensure => false,
# The location of the log file should be the same as what's specified in
# /etc/logrotate.d/radosgw in order for log rotation to work properly
log_file => $rgw_log_file,
@@ -476,12 +479,13 @@ class platform::ceph::runtime_base {
include ::platform::ceph::monitor
include ::platform::ceph
# Make sure ceph-rest-api is running as it is needed by sysinv config
# Make sure mgr-restful-plugin is running as it is needed by sysinv config
# TODO(oponcea): Remove when sm supports in-service config reload
if str2bool($::is_controller_active) {
Ceph::Mon <| |>
-> exec { '/etc/init.d/ceph-rest-api start':
command => '/etc/init.d/ceph-rest-api start'
-> exec { '/etc/init.d/mgr-restful-plugin start':
command => '/etc/init.d/mgr-restful-plugin start',
logoutput => true,
}
}
}

@@ -744,18 +744,18 @@ class platform::sm
}
}
# Ceph-Rest-Api
exec { 'Provision Ceph-Rest-Api (service-domain-member storage-services)':
# Ceph mgr RESTful plugin
exec { 'Provision mgr-restful-plugin (service-domain-member storage-services)':
command => 'sm-provision service-domain-member controller storage-services',
}
-> exec { 'Provision Ceph-Rest-Api (service-group storage-services)':
-> exec { 'Provision mgr-restful-plugin (service-group storage-services)':
command => 'sm-provision service-group storage-services',
}
-> exec { 'Provision Ceph-Rest-Api (service-group-member ceph-rest-api)':
command => 'sm-provision service-group-member storage-services ceph-rest-api',
-> exec { 'Provision mgr-restful-plugin (service-group-member mgr-restful-plugin)':
command => 'sm-provision service-group-member storage-services mgr-restful-plugin',
}
-> exec { 'Provision Ceph-Rest-Api (service ceph-rest-api)':
command => 'sm-provision service ceph-rest-api',
-> exec { 'Provision mgr-restful-plugin (service mgr-restful-plugin)':
command => 'sm-provision service mgr-restful-plugin',
}
# Ceph-Manager

@@ -45,7 +45,7 @@ root storage-tier {
# rules
rule storage_tier_ruleset {
ruleset 0
id 0
type replicated
min_size 1
max_size 10

@@ -52,7 +52,7 @@ root storage-tier {
# rules
rule storage_tier_ruleset {
ruleset 0
id 0
type replicated
min_size 1
max_size 10

@@ -52,7 +52,7 @@ root storage-tier {
# rules
rule storage_tier_ruleset {
ruleset 0
id 0
type replicated
min_size 1
max_size 10

@@ -694,15 +694,7 @@ def _check_and_update_rbd_provisioner(new_storceph, remove=False):
def _apply_backend_changes(op, sb_obj):
services = api_helper.getListFromServices(sb_obj.as_dict())
if op == constants.SB_API_OP_CREATE:
if sb_obj.name != constants.SB_DEFAULT_NAMES[
constants.SB_TYPE_CEPH]:
# Enable the service(s) use of the backend
if constants.SB_SVC_CINDER in services:
pecan.request.rpcapi.update_ceph_services(
pecan.request.context, sb_obj.uuid)
elif op == constants.SB_API_OP_MODIFY:
if op == constants.SB_API_OP_MODIFY:
if sb_obj.name == constants.SB_DEFAULT_NAMES[
constants.SB_TYPE_CEPH]:
@@ -710,13 +702,6 @@ def _apply_backend_changes(op, sb_obj):
pecan.request.rpcapi.update_ceph_config(pecan.request.context,
sb_obj.uuid,
services)
else:
# Services have been added or removed
pecan.request.rpcapi.update_ceph_services(
pecan.request.context, sb_obj.uuid)
elif op == constants.SB_API_OP_DELETE:
pass
def _apply_nova_specific_changes(sb_obj, old_sb_obj=None):

@@ -12,17 +12,18 @@
from __future__ import absolute_import
from sysinv.api.controllers.v1 import utils
import subprocess
import os
import pecan
import requests
from cephclient import wrapper as ceph
from sysinv.api.controllers.v1 import utils
from sysinv.common import constants
from sysinv.common import exception
from sysinv.common import utils as cutils
from sysinv.openstack.common import log as logging
import subprocess
import pecan
import os
import requests
from sysinv.api.controllers.v1.utils import is_aio_system
@@ -36,7 +37,7 @@ class CephApiOperator(object):
def __init__(self):
self._ceph_api = ceph.CephWrapper(
endpoint='http://localhost:5001/api/v0.1/')
endpoint='http://localhost:5001')
self._default_tier = constants.SB_TIER_DEFAULT_NAMES[
constants.SB_TIER_TYPE_CEPH]
@@ -140,7 +141,8 @@ class CephApiOperator(object):
depth=depth + 1,
rollback=rollback)
LOG.error("bucket_name = %s, depth = %d, ret_code = %s" % (bucket_name, depth, ret_code))
LOG.error("bucket_name = %s, depth = %d, ret_code = %s" % (
bucket_name, depth, ret_code))
self._crush_bucket_remove(bucket_name)
if ret_code != 0 and depth == 0:
@@ -172,9 +174,7 @@ class CephApiOperator(object):
# Scan for the destination root, should not be present
dest_root = [r for r in body['output'] if r['name'] == dest_root_name]
if dest_root:
reason = "Tier '%s' already exists." % dest_root_name
raise exception.CephCrushInvalidTierUse(tier=dest_root_name,
reason=reason)
raise exception.CephCrushTierAlreadyExists(tier=dest_root_name)
src_root = [r for r in body['output'] if r['name'] == src_root_name]
if not src_root:
@@ -244,7 +244,7 @@ class CephApiOperator(object):
for l in reversed(rule):
file_contents.insert(insertion_index, l)
def _crushmap_rule_add(self, name, replicate_by):
def _crushmap_rule_add(self, tier, replicate_by):
"""Add a tier crushmap rule."""
crushmap_flag_file = os.path.join(constants.SYSINV_CONFIG_PATH,
@@ -254,20 +254,16 @@ class CephApiOperator(object):
raise exception.CephCrushMapNotApplied(reason=reason)
default_root_name = self._format_root_name(self._default_tier)
root_name = self._format_root_name(name)
root_name = self._format_root_name(tier)
if root_name == default_root_name:
reason = ("Rule for the default storage tier '%s' already exists." %
default_root_name)
raise exception.CephCrushInvalidTierUse(tier=name, reason=reason)
raise exception.CephCrushRuleAlreadyExists(
tier=tier, rule='default')
# get the current rule count
rule_is_present, rule_name, rule_count = self._crush_rule_status(root_name)
if rule_is_present:
reason = (("Rule '%s' is already present in the crushmap. No action "
"taken.") % rule_name)
raise exception.CephCrushInvalidRuleOperation(rule=rule_name,
reason=reason)
raise exception.CephCrushRuleAlreadyExists(
tier=tier, rule=rule_name)
# NOTE: The Ceph API only supports simple single step rule creation.
# Because of this we need to update the crushmap the hard way.
@@ -369,40 +365,39 @@ class CephApiOperator(object):
except exception.CephCrushMaxRecursion as e:
raise e
def _crushmap_add_tier(self, tier):
# create crush map tree for tier mirroring default root
try:
self._crushmap_root_mirror(self._default_tier, tier.name)
except exception.CephCrushTierAlreadyExists:
pass
if utils.is_aio_simplex_system(pecan.request.dbapi):
# Since we have a single host replication is done on OSDs
# to ensure disk based redundancy.
replicate_by = 'osd'
else:
# Replication is done on different nodes of the same peer
# group ensuring host based redundancy.
replicate_by = 'host'
try:
self._crushmap_rule_add(tier.name, replicate_by=replicate_by)
except exception.CephCrushRuleAlreadyExists:
pass
def crushmap_tiers_add(self):
"""Add all custom storage tiers to the crushmap. """
ceph_cluster_name = constants.CLUSTER_CEPH_DEFAULT_NAME
cluster = pecan.request.dbapi.clusters_get_all(name=ceph_cluster_name)
# get the list of tiers
cluster = pecan.request.dbapi.clusters_get_all(
name=constants.CLUSTER_CEPH_DEFAULT_NAME)
tiers = pecan.request.dbapi.storage_tier_get_by_cluster(
cluster[0].uuid)
for t in tiers:
if (t.type == constants.SB_TIER_TYPE_CEPH and
t.name != self._default_tier and
t.status == constants.SB_TIER_STATUS_DEFINED):
try:
# First: Mirror the default hierarchy
self._crushmap_root_mirror(self._default_tier, t.name)
# Second: Add ruleset
# PG replication can be done per OSD or per host, hence replicate_by
# is set to either 'osd' or 'host'.
if utils.is_aio_simplex_system(pecan.request.dbapi):
# Since we have a single host replication is done on OSDs
# to ensure disk based redundancy.
self._crushmap_rule_add(t.name, replicate_by='osd')
else:
# Replication is done on different nodes of the same peer
# group ensuring host based redundancy.
self._crushmap_rule_add(t.name, replicate_by='host')
except exception.CephCrushInvalidTierUse as e:
if 'already exists' in e:
continue
except exception.CephCrushMaxRecursion as e:
raise e
if t.type != constants.SB_TIER_TYPE_CEPH:
continue
if t.name == self._default_tier:
continue
self._crushmap_add_tier(t)
def _crushmap_tiers_bucket_add(self, bucket_name, bucket_type):
"""Add a new bucket to all the tiers in the crushmap. """
@@ -473,7 +468,7 @@ class CephApiOperator(object):
try:
response, body = self._ceph_api.status(body='json',
timeout=timeout)
ceph_status = body['output']['health']['overall_status']
ceph_status = body['output']['health']['status']
if ceph_status != constants.CEPH_HEALTH_OK:
LOG.warn("ceph status=%s " % ceph_status)
rc = False
@@ -506,7 +501,7 @@ class CephApiOperator(object):
def osd_host_lookup(self, osd_id):
response, body = self._ceph_api.osd_crush_tree(body='json')
for i in range(0, len(body)):
for i in range(0, len(body['output'])):
# there are 2 chassis lists - cache-tier and root-tier
# that can be seen in the output of 'ceph osd crush tree':
# [{"id": -2,"name": "cache-tier", "type": "root",
@@ -710,7 +705,7 @@ def fix_crushmap(dbapi=None):
try:
open(crushmap_flag_file, "w").close()
except IOError as e:
LOG.warn(_('Failed to create flag file: {}. '
LOG.warn(('Failed to create flag file: {}. '
'Reason: {}').format(crushmap_flag_file, e))
if not dbapi:

@@ -164,10 +164,18 @@ class CephCrushInvalidTierUse(CephFailure):
message = _("Cannot use tier '%(tier)s' for this operation. %(reason)s")
class CephCrushTierAlreadyExists(CephCrushInvalidTierUse):
message = _("Tier '%(tier)s' already exists")
class CephCrushInvalidRuleOperation(CephFailure):
message = _("Cannot perform operation on rule '%(rule)s'. %(reason)s")
class CephCrushRuleAlreadyExists(CephCrushInvalidRuleOperation):
message = _("Rule '%(rule)s' for storage tier '%(tier)s' already exists")
class CephPoolCreateFailure(CephFailure):
message = _("Creating OSD pool %(name)s failed: %(reason)s")

@@ -53,7 +53,7 @@ class CephOperator(object):
self._fm_api = fm_api.FaultAPIs()
self._db_api = db_api
self._ceph_api = ceph.CephWrapper(
endpoint='http://localhost:5001/api/v0.1/')
endpoint='http://localhost:5001')
self._db_cluster = None
self._db_primary_tier = None
self._cluster_name = 'ceph_cluster'
@@ -99,7 +99,7 @@ class CephOperator(object):
try:
response, body = self._ceph_api.status(body='json',
timeout=timeout)
if (body['output']['health']['overall_status'] !=
if (body['output']['health']['status'] !=
constants.CEPH_HEALTH_OK):
rc = False
except Exception as e:
@@ -1313,11 +1313,11 @@ class CephOperator(object):
"upgrade checks.")
# Grab the current values
cinder_pool_gib = storage_ceph.cinder_pool_gib
kube_pool_gib = storage_ceph.kube_pool_gib
glance_pool_gib = storage_ceph.glance_pool_gib
ephemeral_pool_gib = storage_ceph.ephemeral_pool_gib
object_pool_gib = storage_ceph.object_pool_gib
cinder_pool_gib = storage_ceph.cinder_pool_gib or 0
kube_pool_gib = storage_ceph.kube_pool_gib or 0
glance_pool_gib = storage_ceph.glance_pool_gib or 0
ephemeral_pool_gib = storage_ceph.ephemeral_pool_gib or 0
object_pool_gib = storage_ceph.object_pool_gib or 0
# Initial cluster provisioning after cluster is up
# glance_pool_gib = 20 GiB
@@ -1403,8 +1403,8 @@ class CephOperator(object):
else:
# Grab the current values
cinder_pool_gib = storage_ceph.cinder_pool_gib
kube_pool_gib = storage_ceph.kube_pool_gib
cinder_pool_gib = storage_ceph.cinder_pool_gib or 0
kube_pool_gib = storage_ceph.kube_pool_gib or 0
# Secondary tiers: only cinder and kube pool supported.
tiers_size = self.get_ceph_tiers_size()

@@ -156,7 +156,7 @@ class ConductorManager(service.PeriodicService):
self._app = None
self._ceph = None
self._ceph_api = ceph.CephWrapper(
endpoint='http://localhost:5001/api/v0.1/')
endpoint='http://localhost:5001')
self._kube = None
self._fernet = None
@@ -5925,40 +5925,6 @@ class ConductorManager(service.PeriodicService):
self.config_update_nova_local_backed_hosts(
context, constants.LVG_NOVA_BACKING_REMOTE)
def update_ceph_services(self, context, sb_uuid):
"""Update service configs for Ceph tier pools."""
LOG.info("Updating configuration for ceph services")
personalities = [constants.CONTROLLER]
config_uuid = self._config_update_hosts(context, personalities)
ctrls = self.dbapi.ihost_get_by_personality(constants.CONTROLLER)
valid_ctrls = [ctrl for ctrl in ctrls if
(utils.is_host_active_controller(ctrl) and
ctrl.administrative == constants.ADMIN_LOCKED and
ctrl.availability == constants.AVAILABILITY_ONLINE) or
(ctrl.administrative == constants.ADMIN_UNLOCKED and
ctrl.operational == constants.OPERATIONAL_ENABLED)]
if not valid_ctrls:
raise exception.SysinvException("Ceph services were not updated. "
"No valid controllers were found.")
config_dict = {
"personalities": personalities,
"classes": ['openstack::cinder::backends::ceph::runtime'],
"host_uuids": [ctrl.uuid for ctrl in valid_ctrls],
puppet_common.REPORT_STATUS_CFG:
puppet_common.REPORT_CEPH_SERVICES_CONFIG,
}
self.dbapi.storage_ceph_update(sb_uuid,
{'state': constants.SB_STATE_CONFIGURING,
'task': str({h.hostname: constants.SB_TASK_APPLY_MANIFESTS for h in valid_ctrls})})
self._config_apply_runtime_manifest(context, config_uuid, config_dict)
def _update_storage_backend_alarm(self, alarm_state, backend, reason_text=None):
""" Update storage backend configuration alarm"""
entity_instance_id = "%s=%s" % (fm_constants.FM_ENTITY_TYPE_STORAGE_BACKEND,

@@ -927,15 +927,6 @@ class ConductorAPI(sysinv.openstack.common.rpc.proxy.RpcProxy):
return self.call(context,
self.make_msg('update_external_cinder_config'))
def update_ceph_services(self, context, sb_uuid):
"""Synchronously, have the conductor update Ceph tier services
:param context: request context
:param sb_uuid: uuid of the storage backed to apply the service update.
"""
return self.call(context,
self.make_msg('update_ceph_services', sb_uuid=sb_uuid))
def get_k8s_namespaces(self, context):
"""Synchronously, get Kubernetes namespaces

@@ -47,7 +47,7 @@ class CephPoolsAuditHelm(base.BaseHelm):
if not tier:
raise Exception("No tier present for backend %s" % bk.name)
# Get the ruleset name.
# Get the tier rule name.
rule_name = "{0}{1}{2}".format(
tier.name,
constants.CEPH_CRUSH_TIER_SUFFIX,

@@ -60,9 +60,11 @@ class CinderHelm(openstack.OpenstackBaseHelm):
replication, min_replication =\
StorageBackendConfig.get_ceph_pool_replication(self.dbapi)
# We don't use the chart to configure the cinder-volumes
# pool, so these values don't have any impact right now.
ruleset = 0
rule_name = "{0}{1}{2}".format(
constants.SB_TIER_DEFAULT_NAMES[
constants.SB_TIER_TYPE_CEPH],
constants.CEPH_CRUSH_TIER_SUFFIX,
"-ruleset").replace('-', '_')
conf_ceph = {
'monitors': self._get_formatted_ceph_monitor_ips(),
@@ -73,13 +75,13 @@ class CinderHelm(openstack.OpenstackBaseHelm):
# it's safe to use the same replication as for the primary
# tier pools.
'replication': replication,
'crush_rule': ruleset,
'crush_rule': rule_name,
},
'volume': {
# The cinder chart doesn't currently support specifying
# the config for multiple volume/backup pools.
'replication': replication,
'crush_rule': ruleset,
'crush_rule': rule_name,
}
}
}

@@ -119,9 +119,12 @@ class GlanceHelm(openstack.OpenstackBaseHelm):
replication, min_replication = \
StorageBackendConfig.get_ceph_pool_replication(self.dbapi)
# Only the primary Ceph tier is used for the glance images pool, so
# the crush ruleset is 0.
ruleset = 0
# Only the primary Ceph tier is used for the glance images pool
rule_name = "{0}{1}{2}".format(
constants.SB_TIER_DEFAULT_NAMES[
constants.SB_TIER_TYPE_CEPH],
constants.CEPH_CRUSH_TIER_SUFFIX,
"-ruleset").replace('-', '_')
conf = {
'glance': {
@@ -134,7 +137,7 @@ class GlanceHelm(openstack.OpenstackBaseHelm):
'rbd_store_pool': rbd_store_pool,
'rbd_store_user': rbd_store_user,
'rbd_store_replication': replication,
'rbd_store_crush_rule': ruleset,
'rbd_store_crush_rule': rule_name,
}
}
}

@@ -451,15 +451,18 @@ class NovaHelm(openstack.OpenstackBaseHelm):
StorageBackendConfig.get_ceph_pool_replication(self.dbapi)
# For now, the ephemeral pool will only be on the primary Ceph tier
# that's using the 0 crush ruleset.
ruleset = 0
rule_name = "{0}{1}{2}".format(
constants.SB_TIER_DEFAULT_NAMES[
constants.SB_TIER_TYPE_CEPH],
constants.CEPH_CRUSH_TIER_SUFFIX,
"-ruleset").replace('-', '_')
# Form the dictionary with the info for the ephemeral pool.
# If needed, multiple pools can be specified.
ephemeral_pool = {
'rbd_pool_name': constants.CEPH_POOL_EPHEMERAL_NAME,
'rbd_user': RBD_POOL_USER,
'rbd_crush_rule': ruleset,
'rbd_crush_rule': rule_name,
'rbd_replication': replication,
'rbd_chunk_size': constants.CEPH_POOL_EPHEMERAL_PG_NUM
}