Update snapshot related cephfs and rbd provisioner error messages (dsR8MR2+, dsr8MR3 r9)

Create sections Create Cephfs Volume Snapshot Class and Create RBD Volume Snapshot Class
Update helm-override-show outputs
Fix conflict

Closes-bug: 2055207

Change-Id: I203643805d26871fc6de1671d57e72ee2179b545
Signed-off-by: Elisamara Aoki Goncalves <elisamaraaoki.goncalves@windriver.com>
Elisamara Aoki Goncalves 2024-02-27 17:46:24 +00:00 committed by Juanita-Balaraj
parent da8043d782
commit c325161a57
10 changed files with 535 additions and 41 deletions


@ -0,0 +1,2 @@
.. fresh-install-begin
.. fresh-install-end


@ -0,0 +1,2 @@
.. fresh-install-begin
.. fresh-install-end


@ -183,7 +183,7 @@ with read/write type access to a single private namespace
.. code-block:: none
% cat <<EOF > rbd-namespaces.yaml
classes:
storageClasses:
- additionalNamespaces: [default, kube-public, billing-dept-ns]
chunk_size: 64
crush_rule_name: storage_tier_ruleset


@ -0,0 +1,235 @@
.. _create-cephfs-volume-snapshot-class-92f4ad13d166:

===================================
Create Cephfs Volume Snapshot Class
===================================

A Volume Snapshot Class for the Cephfs provisioner can be created via Helm
overrides to support |PVC| snapshots.

.. rubric:: |context|
A Volume Snapshot Class enables the creation of snapshots for |PVCs|, allowing
for efficient backups and data restoration. This functionality ensures data
protection, facilitating point-in-time recovery and minimizing the risk of data
loss in Kubernetes clusters.

The procedure below demonstrates how to create a Volume Snapshot Class and a
Volume Snapshot for the Cephfs provisioner.

.. note::

   The |CRDs| and a running ``snapshot-controller`` pod must be present on
   the system in order to create the Volume Snapshot Class.

   The |CRDs| and ``snapshot-controller`` are created by default during
   installation when running the bootstrap playbook.

.. only:: partner

   .. include:: /_includes/create-cephfs-volume-snapshot-class-92f4ad13d166.rest
      :start-after: fresh-install-begin
      :end-before: fresh-install-end
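
For reference, the object that the chart creates is a standard Kubernetes
``VolumeSnapshotClass``. A minimal equivalent manifest might look like the
following sketch; the driver name and deletion policy match the output shown
later in this procedure, while the ``clusterID`` placeholder and the
``parameters`` secret entries are illustrative assumptions and may differ on
your system.

.. code-block:: none

   # Illustrative sketch only; the Helm chart renders the actual object.
   apiVersion: snapshot.storage.k8s.io/v1
   kind: VolumeSnapshotClass
   metadata:
     name: cephfs-snapshot
   driver: cephfs.csi.ceph.com
   deletionPolicy: Delete
   parameters:
     clusterID: <ceph-cluster-id>
     csi.storage.k8s.io/snapshotter-secret-name: ceph-pool-kube-cephfs-data
     csi.storage.k8s.io/snapshotter-secret-namespace: kube-system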
.. rubric:: |proc|

#. List the installed Helm chart overrides for ``platform-integ-apps``.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-list platform-integ-apps
      +--------------------+----------------------+
      | chart name         | overrides namespaces |
      +--------------------+----------------------+
      | ceph-pools-audit   | ['kube-system']      |
      | cephfs-provisioner | ['kube-system']      |
      | rbd-provisioner    | ['kube-system']      |
      +--------------------+----------------------+

#. Review the existing overrides for the ``cephfs-provisioner`` chart.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system

#. Check if ``provisioner.snapshotter.enabled`` is set to ``true``.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
      +--------------------+------------------------------------------------------+
      | Property           | Value                                                |
      +--------------------+------------------------------------------------------+
      | attributes         | enabled: true                                        |
      |                    |                                                      |
      | combined_overrides | ...                                                  |
      |                    |   provisioner:                                       |
      |                    |     replicaCount: 1                                  |
      |                    |     snapshotter:                                     |
      |                    |       enabled: true                                  |
      +--------------------+------------------------------------------------------+

   A value of ``true`` means that the ``csi-snapshotter`` container is created
   inside the Cephfs provisioner pod, and that the |CRDs| and
   ``snapshot-controller`` matching the Kubernetes version are created.

   If the value is ``false``, and the |CRDs| and snapshot controller present
   on the system are newer than the version recommended for Kubernetes, you
   can set the value to ``true`` via Helm overrides and create the container
   as follows:

   #. Update the value to ``true`` via Helm overrides.

      .. code-block:: none

         ~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps cephfs-provisioner kube-system --set provisioner.snapshotter.enabled=true

   #. Create the container.

      .. code-block:: none

         ~(keystone_admin)$ system application-apply platform-integ-apps

   .. important::

      To proceed with the creation of the snapshot class and volume
      snapshot, it is strictly necessary that the ``csi-snapshotter``
      container is created.
#. Update ``snapshotClass.create`` to ``true`` via Helm.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps cephfs-provisioner kube-system --set snapshotClass.create=True

#. Confirm that the new overrides have been applied to the chart.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-show platform-integ-apps cephfs-provisioner kube-system
      +--------------------+------------------------------------------------------+
      | Property           | Value                                                |
      +--------------------+------------------------------------------------------+
      | attributes         | enabled: true                                        |
      |                    |                                                      |
      | combined_overrides | classdefaults:                                       |
      |                    |   adminId: admin                                     |
      |                    |   adminSecretName: ceph-secret-admin                 |
      |                    |   monitors:                                          |
      |                    |   - 192.168.204.2:6789                               |
      |                    | csiConfig:                                           |
      |                    | - cephFS:                                            |
      |                    |     subvolumeGroup: csi                              |
      |                    |   clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
      |                    |   monitors:                                          |
      |                    |   - 192.168.204.2:6789                               |
      |                    | provisioner:                                         |
      |                    |   replicaCount: 1                                    |
      |                    |   snapshotter:                                       |
      |                    |     enabled: true                                    |
      |                    | snapshotClass:                                       |
      |                    |   clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
      |                    |   create: true                                       |
      |                    |   provisionerSecret: ceph-pool-kube-cephfs-data      |
      |                    | storageClasses:                                      |
      |                    | - additionalNamespaces:                              |
      |                    |   - default                                          |
      |                    |   - kube-public                                      |
      |                    |   chunk_size: 64                                     |
      |                    |   clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
      |                    |   controllerExpandSecret: ceph-pool-kube-cephfs-data |
      |                    |   crush_rule_name: storage_tier_ruleset              |
      |                    |   data_pool_name: kube-cephfs-data                   |
      |                    |   fs_name: kube-cephfs                               |
      |                    |   metadata_pool_name: kube-cephfs-metadata           |
      |                    |   name: cephfs                                       |
      |                    |   nodeStageSecret: ceph-pool-kube-cephfs-data        |
      |                    |   provisionerSecret: ceph-pool-kube-cephfs-data      |
      |                    |   replication: 1                                     |
      |                    |   userId: ceph-pool-kube-cephfs-data                 |
      |                    |   userSecretName: ceph-pool-kube-cephfs-data         |
      |                    |   volumeNamePrefix: pvc-volumes-                     |
      |                    |                                                      |
      | name               | cephfs-provisioner                                   |
      | namespace          | kube-system                                          |
      | system_overrides   | ...                                                  |
      |                    |                                                      |
      | user_overrides     | snapshotClass:                                       |
      |                    |   create: true                                       |
      |                    |                                                      |
      +--------------------+------------------------------------------------------+
#. Apply the overrides.

   #. Run the :command:`application-apply` command.

      .. code-block:: none

         ~(keystone_admin)$ system application-apply platform-integ-apps
         +---------------+--------------------------------------+
         | Property      | Value                                |
         +---------------+--------------------------------------+
         | active        | True                                 |
         | app_version   | 1.0-65                               |
         | created_at    | 2024-01-08T18:15:07.178753+00:00     |
         | manifest_file | fluxcd-manifests                     |
         | manifest_name | platform-integ-apps-fluxcd-manifests |
         | name          | platform-integ-apps                  |
         | progress      | None                                 |
         | status        | applying                             |
         | updated_at    | 2024-01-08T18:39:10.251660+00:00     |
         +---------------+--------------------------------------+

   #. Monitor progress using the :command:`application-list` command.

      .. code-block:: none

         ~(keystone_admin)$ system application-list
         +---------------------+---------+--------------------------------------+------------------+---------+-----------+
         | application         | version | manifest name                        | manifest file    | status  | progress  |
         +---------------------+---------+--------------------------------------+------------------+---------+-----------+
         | platform-integ-apps | 1.0-65  | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | applied | completed |
         +---------------------+---------+--------------------------------------+------------------+---------+-----------+

   #. Confirm the creation of the Volume Snapshot Class after a few seconds.

      .. code-block:: none

         ~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
         NAME              DRIVER                DELETIONPOLICY   AGE
         cephfs-snapshot   cephfs.csi.ceph.com   Delete           5s
#. With the Cephfs Volume Snapshot Class created, you can now create Cephfs
   |PVC| snapshots.

   #. Consider the Cephfs Volume Snapshot yaml example:

      .. code-block:: none

         ~(keystone_admin)$ cat << EOF > ~/cephfs-volume-snapshot.yaml
         ---
         apiVersion: snapshot.storage.k8s.io/v1
         kind: VolumeSnapshot
         metadata:
           name: <cephfs-pvc-snapshot-name>
         spec:
           volumeSnapshotClassName: cephfs-snapshot
           source:
             persistentVolumeClaimName: <cephfs-pvc-name>
         EOF

   #. Replace the values in the ``persistentVolumeClaimName`` and ``name``
      fields.

   #. Create the Volume Snapshot.

      .. code-block:: none

         ~(keystone_admin)$ kubectl create -f cephfs-volume-snapshot.yaml

   #. Confirm that it was created successfully.

      .. code-block:: none

         ~(keystone_admin)$ kubectl get volumesnapshots.snapshot.storage.k8s.io
         NAME                  READYTOUSE   SOURCEPVC        SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS     SNAPSHOTCONTENT                                    CREATIONTIME   AGE
         cephfs-pvc-snapshot   true         csi-cephfs-pvc                           1Gi           cephfs-snapshot   snapcontent-3953fe61-6c25-4536-9da5-efc05a216d27   3s             5s
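
Once ``READYTOUSE`` is ``true``, the snapshot can be used as the data source
of a new |PVC|. The following is a sketch rather than part of the verified
procedure: the claim name and size are illustrative, and the ``cephfs``
storage class name is taken from the overrides shown above.

.. code-block:: none

   ~(keystone_admin)$ cat << EOF > ~/cephfs-pvc-restore.yaml
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: <restored-cephfs-pvc-name>
   spec:
     storageClassName: cephfs
     dataSource:
       name: <cephfs-pvc-snapshot-name>
       kind: VolumeSnapshot
       apiGroup: snapshot.storage.k8s.io
     accessModes:
       - ReadWriteMany
     resources:
       requests:
         storage: 1Gi
   EOF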


@ -0,0 +1,232 @@
.. _create-rbd-volume-snapshot-class-0318eed94b92:

================================
Create RBD Volume Snapshot Class
================================

A Volume Snapshot Class for the |RBD| provisioner can be created via Helm
overrides to support |PVC| snapshots.

.. rubric:: |context|
A Volume Snapshot Class enables the creation of snapshots for |PVCs|, allowing
efficient backups and data restoration. This functionality ensures data
protection, facilitating point-in-time recovery and minimizing the risk of data
loss in Kubernetes clusters.

The procedure below demonstrates how to create a Volume Snapshot Class and a
Volume Snapshot for the |RBD| provisioner.

.. note::

   The |CRDs| and a running ``snapshot-controller`` pod must be present on
   the system in order to create the Volume Snapshot Class.

   The |CRDs| and ``snapshot-controller`` are created by default during
   installation when running the bootstrap playbook.

.. only:: partner

   .. include:: /_includes/create-rbd-volume-snapshot-class-0318eed94b92.rest
      :start-after: fresh-install-begin
      :end-before: fresh-install-end
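
For reference, the object that the chart creates is a standard Kubernetes
``VolumeSnapshotClass``. A minimal equivalent manifest might look like the
following sketch; the driver name and deletion policy match the output shown
later in this procedure, while the ``clusterID`` placeholder and the
``parameters`` secret entries are illustrative assumptions and may differ on
your system.

.. code-block:: none

   # Illustrative sketch only; the Helm chart renders the actual object.
   apiVersion: snapshot.storage.k8s.io/v1
   kind: VolumeSnapshotClass
   metadata:
     name: rbd-snapshot
   driver: rbd.csi.ceph.com
   deletionPolicy: Delete
   parameters:
     clusterID: <ceph-cluster-id>
     csi.storage.k8s.io/snapshotter-secret-name: ceph-pool-kube-rbd
     csi.storage.k8s.io/snapshotter-secret-namespace: kube-system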
.. rubric:: |proc|

#. List the installed Helm chart overrides for ``platform-integ-apps``.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-list platform-integ-apps
      +--------------------+----------------------+
      | chart name         | overrides namespaces |
      +--------------------+----------------------+
      | ceph-pools-audit   | ['kube-system']      |
      | cephfs-provisioner | ['kube-system']      |
      | rbd-provisioner    | ['kube-system']      |
      +--------------------+----------------------+

#. Review the existing overrides for the ``rbd-provisioner`` chart.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system

#. Check if ``provisioner.snapshotter.enabled`` is set to ``true``.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
      +--------------------+------------------------------------------------------+
      | Property           | Value                                                |
      +--------------------+------------------------------------------------------+
      | attributes         | enabled: true                                        |
      |                    |                                                      |
      | combined_overrides | ...                                                  |
      |                    |   provisioner:                                       |
      |                    |     replicaCount: 1                                  |
      |                    |     snapshotter:                                     |
      |                    |       enabled: true                                  |
      +--------------------+------------------------------------------------------+

   A value of ``true`` means that the ``csi-snapshotter`` container is created
   inside the |RBD| provisioner pod, and that the |CRDs| and
   ``snapshot-controller`` matching the Kubernetes version are created.

   If the value is ``false``, and the |CRDs| and snapshot controller present
   on the system are newer than the version recommended for Kubernetes, you
   can set the value to ``true`` via Helm overrides and create the container
   as follows:

   #. Update the value to ``true`` via Helm overrides.

      .. code-block:: none

         ~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps rbd-provisioner kube-system --set provisioner.snapshotter.enabled=true

   #. Create the container.

      .. code-block:: none

         ~(keystone_admin)$ system application-apply platform-integ-apps

   .. important::

      To proceed with the creation of the snapshot class and volume snapshot,
      it is strictly necessary that the ``csi-snapshotter`` container is
      created.
#. Update ``snapshotClass.create`` to ``true`` via Helm.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-update --reuse-values platform-integ-apps rbd-provisioner kube-system --set snapshotClass.create=True

#. Confirm that the new overrides have been applied to the chart.

   .. code-block:: none

      ~(keystone_admin)$ system helm-override-show platform-integ-apps rbd-provisioner kube-system
      +--------------------+------------------------------------------------------+
      | Property           | Value                                                |
      +--------------------+------------------------------------------------------+
      | attributes         | enabled: true                                        |
      |                    |                                                      |
      | combined_overrides | classdefaults:                                       |
      |                    |   adminId: admin                                     |
      |                    |   adminSecretName: ceph-admin                        |
      |                    |   monitors:                                          |
      |                    |   - 192.168.204.2:6789                               |
      |                    |   storageClass: general                              |
      |                    | csiConfig:                                           |
      |                    | - clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
      |                    |   monitors:                                          |
      |                    |   - 192.168.204.2:6789                               |
      |                    | provisioner:                                         |
      |                    |   replicaCount: 1                                    |
      |                    |   snapshotter:                                       |
      |                    |     enabled: true                                    |
      |                    | snapshotClass:                                       |
      |                    |   clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
      |                    |   create: true                                       |
      |                    |   provisionerSecret: ceph-pool-kube-rbd              |
      |                    | storageClasses:                                      |
      |                    | - additionalNamespaces:                              |
      |                    |   - default                                          |
      |                    |   - kube-public                                      |
      |                    |   chunk_size: 64                                     |
      |                    |   clusterID: c10448eb-6dee-4992-a93c-a1c628b9165e    |
      |                    |   controllerExpandSecret: ceph-pool-kube-rbd         |
      |                    |   crush_rule_name: storage_tier_ruleset              |
      |                    |   name: general                                      |
      |                    |   nodeStageSecret: ceph-pool-kube-rbd                |
      |                    |   pool_name: kube-rbd                                |
      |                    |   provisionerSecret: ceph-pool-kube-rbd              |
      |                    |   replication: 1                                     |
      |                    |   userId: ceph-pool-kube-rbd                         |
      |                    |   userSecretName: ceph-pool-kube-rbd                 |
      |                    |                                                      |
      | name               | rbd-provisioner                                      |
      | namespace          | kube-system                                          |
      | system_overrides   | ...                                                  |
      |                    |                                                      |
      | user_overrides     | snapshotClass:                                       |
      |                    |   create: true                                       |
      |                    |                                                      |
      +--------------------+------------------------------------------------------+
#. Apply the overrides.

   #. Run the :command:`application-apply` command.

      .. code-block:: none

         ~(keystone_admin)$ system application-apply platform-integ-apps
         +---------------+--------------------------------------+
         | Property      | Value                                |
         +---------------+--------------------------------------+
         | active        | True                                 |
         | app_version   | 1.0-65                               |
         | created_at    | 2024-01-08T18:15:07.178753+00:00     |
         | manifest_file | fluxcd-manifests                     |
         | manifest_name | platform-integ-apps-fluxcd-manifests |
         | name          | platform-integ-apps                  |
         | progress      | None                                 |
         | status        | applying                             |
         | updated_at    | 2024-01-08T18:39:10.251660+00:00     |
         +---------------+--------------------------------------+

   #. Monitor progress using the :command:`application-list` command.

      .. code-block:: none

         ~(keystone_admin)$ system application-list
         +---------------------+---------+--------------------------------------+------------------+---------+-----------+
         | application         | version | manifest name                        | manifest file    | status  | progress  |
         +---------------------+---------+--------------------------------------+------------------+---------+-----------+
         | platform-integ-apps | 1.0-65  | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | applied | completed |
         +---------------------+---------+--------------------------------------+------------------+---------+-----------+

   #. Confirm the creation of the Volume Snapshot Class after a few seconds.

      .. code-block:: none

         ~(keystone_admin)$ kubectl get volumesnapshotclasses.snapshot.storage.k8s.io
         NAME           DRIVER             DELETIONPOLICY   AGE
         rbd-snapshot   rbd.csi.ceph.com   Delete           5s
#. With the |RBD| Volume Snapshot Class created, you can now create |RBD| |PVC|
   snapshots.

   #. Consider the |RBD| Volume Snapshot yaml example:

      .. code-block:: none

         ~(keystone_admin)$ cat << EOF > ~/rbd-volume-snapshot.yaml
         ---
         apiVersion: snapshot.storage.k8s.io/v1
         kind: VolumeSnapshot
         metadata:
           name: <rbd-pvc-snapshot-name>
         spec:
           volumeSnapshotClassName: rbd-snapshot
           source:
             persistentVolumeClaimName: <rbd-pvc-name>
         EOF

   #. Replace the values in the ``persistentVolumeClaimName`` and ``name``
      fields.

   #. Create the Volume Snapshot.

      .. code-block:: none

         ~(keystone_admin)$ kubectl create -f rbd-volume-snapshot.yaml

   #. Confirm that it was created successfully.

      .. code-block:: none

         ~(keystone_admin)$ kubectl get volumesnapshots.snapshot.storage.k8s.io
         NAME               READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
         rbd-pvc-snapshot   true         rbd-pvc                             1Gi           rbd-snapshot    snapcontent-1bb7e2cb-9123-47c4-9e56-7d16f24f973e   13s            17s
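
Once ``READYTOUSE`` is ``true``, the snapshot can be used as the data source
of a new |PVC|. The following is a sketch rather than part of the verified
procedure: the claim name and size are illustrative, and the ``general``
storage class name is taken from the overrides shown above.

.. code-block:: none

   ~(keystone_admin)$ cat << EOF > ~/rbd-pvc-restore.yaml
   ---
   apiVersion: v1
   kind: PersistentVolumeClaim
   metadata:
     name: <restored-rbd-pvc-name>
   spec:
     storageClassName: general
     dataSource:
       name: <rbd-pvc-snapshot-name>
       kind: VolumeSnapshot
       apiGroup: snapshot.storage.k8s.io
     accessModes:
       - ReadWriteOnce
     resources:
       requests:
         storage: 1Gi
   EOF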


@ -61,7 +61,7 @@ utilized by a specific namespace.
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > ~/update-storageclass.yaml
classes:
storageClasses:
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
chunk_size: 64
crush_rule_name: storage_tier_ruleset
@ -90,7 +90,7 @@ utilized by a specific namespace.
+----------------+-----------------------------------------+
| name | rbd-provisioner |
| namespace | kube-system |
| user_overrides | classes: |
| user_overrides | storageClasses: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
@ -133,7 +133,7 @@ utilized by a specific namespace.
| namespace | |
| system_overrides | ... |
| | |
| user_overrides | classes: |
| user_overrides | storageClasses: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |


@ -56,7 +56,20 @@ application-specific namespaces to access the **cephfs-provisioner**
| | adminSecretName: ceph-secret-admin |
| | monitors: |
| | - 192.168.204.2:6789 |
| | classes: |
| | csiConfig: |
| | - cephFS: |
| | subvolumeGroup: csi |
| | clusterID: 6d273112-f2a6-4aec-8727-76b690274c60 |
| | monitors: |
| | - 192.168.204.2:6789 |
| | provisioner: |
| | replicaCount: 1 |
| | snapshotter: |
| | enabled: true |
| | snapshotClass: |
| | clusterID: 6d273112-f2a6-4aec-8727-76b690274c60 |
| | provisionerSecret: ceph-pool-kube-cephfs-data |
| | storageClasses: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
@ -74,14 +87,6 @@ application-specific namespaces to access the **cephfs-provisioner**
| | userId: ceph-pool-kube-cephfs-data |
| | userSecretName: ceph-pool-kube-cephfs-data |
| | volumeNamePrefix: pvc-volumes- |
| | csiConfig: |
| | - cephFS: |
| | subvolumeGroup: csi |
| | clusterID: 6d273112-f2a6-4aec-8727-76b690274c60 |
| | monitors: |
| | - 192.168.204.2:6789 |
| | provisioner: |
| | replicaCount: 1 |
| | |
| name | cephfs-provisioner |
| namespace | kube-system |
@ -89,7 +94,19 @@ application-specific namespaces to access the **cephfs-provisioner**
| | adminId: admin |
| | adminSecretName: ceph-secret-admin |
| | monitors: ['192.168.204.2:6789'] |
| | classes: |
| | csiConfig: |
| | - cephFS: {subvolumeGroup: csi} |
| | clusterID: !!binary | |
| | NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
| | monitors: ['192.168.204.2:6789'] |
| | provisioner: |
| | replicaCount: 1 |
| | snapshotter: {enabled: true} |
| | snapshotClass: |
| | clusterID: !!binary | |
| | NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
| | provisionerSecret: ceph-pool-kube-cephfs-data |
| | storageClasses: |
| | - additionalNamespaces: [default, kube-public] |
| | chunk_size: 64 |
| | clusterID: !!binary | |
@ -106,12 +123,6 @@ application-specific namespaces to access the **cephfs-provisioner**
| | userId: ceph-pool-kube-cephfs-data |
| | userSecretName: ceph-pool-kube-cephfs-data |
| | volumeNamePrefix: pvc-volumes- |
| | csiConfig: |
| | - cephFS: {subvolumeGroup: csi} |
| | clusterID: !!binary | |
| | NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
| | monitors: ['192.168.204.2:6789'] |
| | provisioner: {replicaCount: 1} |
| | |
| user_overrides | None |
+--------------------+------------------------------------------------------+
@ -124,7 +135,7 @@ application-specific namespaces to access the **cephfs-provisioner**
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > ~/update-namespaces.yaml
classes:
storageClasses:
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
chunk_size: 64
claim_root: /pvc-volumes
@ -148,7 +159,7 @@ application-specific namespaces to access the **cephfs-provisioner**
+----------------+----------------------------------------------+
| name | cephfs-provisioner |
| namespace | kube-system |
| user_overrides | classes: |
| user_overrides | storageClasses: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
@ -177,7 +188,7 @@ application-specific namespaces to access the **cephfs-provisioner**
+--------------------+---------------------------------------------+
| Property | Value |
+--------------------+---------------------------------------------+
| user_overrides | classes: |
| user_overrides | storageClasses: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |


@ -54,7 +54,18 @@ application-specific namespaces to access the |RBD| provisioner's **general stor
| | monitors: |
| | - 192.168.204.2:6789 |
| | storageClass: general |
| | classes: |
| | csiConfig: |
| | - clusterID: 6d273112-f2a6-4aec-8727-76b690274c60 |
| | monitors: |
| | - 192.168.204.2:6789 |
| | provisioner: |
| | replicaCount: 1 |
| | snapshotter: |
| | enabled: true |
| | snapshotClass: |
| | clusterID: 6d273112-f2a6-4aec-8727-76b690274c60 |
| | provisionerSecret: ceph-pool-kube-rbd |
| | storageClasses: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
@ -69,12 +80,6 @@ application-specific namespaces to access the |RBD| provisioner's **general stor
| | replication: 1 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | csiConfig: |
| | - clusterID: 6d273112-f2a6-4aec-8727-76b690274c60 |
| | monitors: |
| | - 192.168.204.2:6789 |
| | provisioner: |
| | replicaCount: 1 |
| | |
| name | rbd-provisioner |
| namespace | kube-system |
@ -83,7 +88,18 @@ application-specific namespaces to access the |RBD| provisioner's **general stor
| | adminSecretName: ceph-admin |
| | monitors: ['192.168.204.2:6789'] |
| | storageClass: general |
| | classes: |
| | csiConfig: |
| | - clusterID: !!binary | |
| | NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
| | monitors: ['192.168.204.2:6789'] |
| | provisioner: |
| | replicaCount: 1 |
| | snapshotter: {enabled: true} |
| | snapshotClass: |
| | clusterID: !!binary | |
| | NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
| | provisionerSecret: ceph-pool-kube-rbd |
| | storageClasses: |
| | - additionalNamespaces: [default, kube-public] |
| | chunk_size: 64 |
| | clusterID: !!binary | |
@ -97,16 +113,10 @@ application-specific namespaces to access the |RBD| provisioner's **general stor
| | replication: 1 |
| | userId: ceph-pool-kube-rbd |
| | userSecretName: ceph-pool-kube-rbd |
| | csiConfig: |
| | - clusterID: !!binary | |
| | NmQyNzMxMTItZjJhNi00YWVjLTg3MjctNzZiNjkwMjc0YzYw |
| | monitors: ['192.168.204.2:6789'] |
| | provisioner: {replicaCount: 1} |
| | |
| user_overrides | None |
+--------------------+------------------------------------------------------+
#. Create an overrides yaml file defining the new namespaces. In this example
we will create the file ``/home/sysadmin/update-namespaces.yaml`` with the
following content:
@ -114,7 +124,7 @@ application-specific namespaces to access the |RBD| provisioner's **general stor
.. code-block:: none
~(keystone_admin)]$ cat <<EOF > ~/update-namespaces.yaml
classes:
storageClasses:
- additionalNamespaces: [default, kube-public, new-app, new-app2, new-app3]
chunk_size: 64
crush_rule_name: storage_tier_ruleset
@ -135,7 +145,7 @@ application-specific namespaces to access the |RBD| provisioner's **general stor
+----------------+-----------------------------------------+
| name | rbd-provisioner |
| namespace | kube-system |
| user_overrides | classes: |
| user_overrides | storageClasses: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |
@ -168,7 +178,7 @@ application-specific namespaces to access the |RBD| provisioner's **general stor
| system_overrides | ... |
| | |
| | |
| user_overrides | classes: |
| user_overrides | storageClasses: |
| | - additionalNamespaces: |
| | - default |
| | - kube-public |


@ -153,6 +153,7 @@ RBD Provisioner
enable-readwriteonce-pvc-support-in-additional-namespaces
enable-rbd-readwriteonly-additional-storage-classes
install-additional-rbd-provisioners
create-rbd-volume-snapshot-class-0318eed94b92
****************************
Ceph File System Provisioner
@ -165,6 +166,7 @@ Ceph File System Provisioner
create-readwritemany-persistent-volume-claims
mount-readwritemany-persistent-volumes-in-containers
enable-readwritemany-pvc-support-in-additional-namespaces
create-cephfs-volume-snapshot-class-92f4ad13d166
----------------------------
Storage-Related CLI Commands


@ -49,7 +49,7 @@ This procedure uses standard Helm mechanisms to install a second
classdefaults:
monitors:
${MON_LIST}
classes:
storageClasses:
- additionalNamespaces:
- default
- kube-public