config/kubernetes
Irina Mihai e4ff1ef2c3 Decouple Ceph pools creation from sysinv
For a containerized deployment:
- the following Ceph pools are no longer created by sysinv:
  cinder-volumes, images, ephemeral, kube-rbd; their creation has
  moved to the helm charts as follows:
    -> the cinder chart: cinder-volumes, cinder.backup
    -> the rbd-provisioner chart: the kube-rbd pools and
       the ephemeral pool for nova (temporary)
    -> the glance chart: images
- sysinv no longer supports updating the pool quotas
- sysinv no longer supports updating the replication for Ceph pools:
  the replication is still updated in the DB through the
  'system storage-backend-modify' command, but the change is applied
  by the chart through the helm overrides when the application is
  (re)applied (see the sketch after this list)
- sysinv no longer audits the Ceph pools and adjusts the PG num
- sysinv no longer generates the Ceph keys for the Ceph pools and the
  k8s secrets, as these have been moved to the rbd-provisioner chart
- upon storage node lock, we determine which data Ceph pools exist
  and deny the lock if they are not empty
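
For illustration, a replication change would flow roughly as follows
(a minimal sketch; the override keys shown are assumptions, not the
chart's confirmed values schema):

  system storage-backend-modify ceph-store replication=2

  # hypothetical rbd-provisioner override, picked up the next time
  # the application is (re)applied:
  classes:
    - name: general
      pool_name: kube-rbd
      replication: 2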

NOTE: There are still parts of the pool management code that
will have to be removed once we switch to a containerized-only
deployment. I've marked these with "TODO(CephPoolsDecouple)"
to make them easy to track.

Validation:
- install AIO-SX & Storage setups with --kubernetes and
  -> add Ceph (primary and secondary tier)
  -> lock/unlock host/storage hosts
  -> check pools are not created by sysinv
  -> generate the stx-openstack application tarball
  -> system application-upload stx-openstack
     helm-charts-manifest-no-tests.tgz
  -> system application-apply stx-openstack

- install AIO-SX without --kubernetes
  -> check lock/unlock

- install Storage setup without --kubernetes
  -> check lock/unlock of storage nodes
  -> check Ceph pools are created
  -> test quotas can be changed

Story: 2002844
Task: 28190
Change-Id: Ic2190e488917bffebcd16daf895dcddd92c6d9c5
Signed-off-by: Irina Mihai <irina.mihai@windriver.com>

README

The expected layout for this subdirectory is as follows:

kubernetes
|-- applications
|   `-- <application>
|       `-- <application>-helm RPM
|           `-- centos
|               `-- build_srpm.data
|               `-- <application>-helm.spec
|           `-- <application>-helm
|               `-- manifests
|                   `-- main-manifest.yaml
|                   `-- alt-manifest-1.yaml
|                   `-- ...
|                   `-- alt-manifest-N.yaml
|               `-- custom chart 1
|                   `-- Chart.yaml
|                   `-- ...
|               `-- ...
|               `-- custom chart N
|                   `-- Chart.yaml
|                   `-- ...
|-- helm-charts
|   `-- chart
|       `-- chart
`-- README
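
For example, a hypothetical application "stx-openstack" carrying one
custom chart would be laid out like this (a sketch; file names are
illustrative):

kubernetes
|-- applications
|   `-- stx-openstack
|       `-- stx-openstack-helm
|           `-- centos
|               `-- build_srpm.data
|               `-- stx-openstack-helm.spec
|           `-- stx-openstack-helm
|               `-- manifests
|                   `-- manifest.yaml
|               `-- nova-api-proxy
|                   `-- Chart.yaml
|-- helm-charts
|   `-- rbd-provisioner
|       `-- rbd-provisioner
`-- README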

The idea is that all our custom helm charts that are common across
applications go under "helm-charts", with each chart in its own
subdirectory.
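
Each chart subdirectory holds a standard helm chart, so it carries a
Chart.yaml at its top level. A minimal sketch (the version and
description values are placeholders):

  apiVersion: v1
  name: rbd-provisioner
  version: 0.1.0
  description: RBD volume provisioner for Kubernetes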

Custom applications would generally consist of one or more armada
manifests referencing multiple helm charts (both ours and upstream
ones). Each application is packaged as an RPM. These application RPMs
are used to produce the build artifacts (helm tarballs + armada
manifests) but are not installed on the system. These artifacts are
extracted later for proper application packaging with additional
required metadata (TBD).
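
As a rough illustration, an armada manifest document referencing one
of these chart tarballs might look like the following (a sketch only;
the location URL and release name are placeholders):

  schema: armada/Chart/v1
  metadata:
    schema: metadata/Document/v1
    name: rbd-provisioner
  data:
    chart_name: rbd-provisioner
    release: rbd-provisioner
    namespace: kube-system
    source:
      type: tar
      location: http://<build-server>/helm-charts/rbd-provisioner-0.1.0.tgz
      subpath: rbd-provisioner
    dependencies: []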

These applications would each get their own subdirectory under
"applications".