integ/ceph
Daniel Badea 9faad45703 ceph-init-wrapper use flock instead of flag files
When a swact occurs and ceph-init-wrapper is slow to respond
to a status request, it is killed by SM. This leaves behind
the flag file that marks a status check as in progress.
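
The fragile pattern looks roughly like this (a simplified
sketch; the flag path, timeout, and status command are
placeholders, not the actual ceph-init-wrapper code):

    STATUS_FLAG=/var/run/ceph/.status_in_progress   # placeholder path

    status() {
        # Wait (up to 30s) for a previous status check to finish by
        # polling for the flag file.
        local waited=0
        while [ -f "$STATUS_FLAG" ] && [ "$waited" -lt 30 ]; do
            sleep 1; waited=$((waited + 1))
        done
        touch "$STATUS_FLAG"
        /etc/init.d/ceph status mon     # the slow operation
        rm -f "$STATUS_FLAG"            # never runs if the script is killed
    }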

When the controller swacts back, ceph-init-wrapper sees the
status check still in progress and waits for it to finish
(with a timeout). Because it does not respond fast enough,
SM tries again to start ceph-init-wrapper to get the
ceph-mon service up and running.

This happens a couple of times until the service is declared
failed and the controller swacts back.

To fix this, use flock instead of flag files: the locks are
released automatically by the OS when the process holding
them is killed.
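
A minimal sketch of the flock-based approach (the lock file
path, fd number, and timeout are illustrative assumptions,
not the exact values used by this change):

    LOCK_FILE=/var/run/ceph/.status.lock    # placeholder path

    status() {
        (
            # Exclusive lock on fd 200, waiting up to 30 seconds for
            # a previous holder. If this process is killed, the kernel
            # closes the fd and releases the lock automatically, so no
            # stale "in progress" state is left behind.
            flock -w 30 200 || exit 1
            /etc/init.d/ceph status mon
        ) 200>"$LOCK_FILE"
    }

Unlike the flag file, no cleanup step has to run after the
critical section for the lock to be released.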

Change-Id: If1912e8575258a4f79321d8435c8ae1b96b78b98
Closes-bug: 1840176
Signed-off-by: Daniel Badea <daniel.badea@windriver.com>
2019-08-27 14:53:32 +00:00
ceph                 ceph-init-wrapper use flock instead of flag files                  2019-08-27 14:53:32 +00:00
ceph-manager         ceph-manager: fix tox issues                                       2019-04-26 08:54:44 +00:00
python-cephclient    python-cephclient: populate items list for all nodes except osd   2019-07-24 16:41:10 +00:00