Remove package io-monitor.

Change-Id: If9f55fb6eacb82005f778af1e7320ce03ac55e43
Story: 2002801
Task: 22687
Signed-off-by: Scott Little <scott.little@windriver.com>

commit 9c15b0fc0e
parent 49d9ec50b3
@@ -1,2 +1 @@
-middleware/io-monitor/recipes-common/io-monitor
 middleware/recipes-common/build-info
@@ -1,6 +0,0 @@
!.distro
.distro/centos7/rpmbuild/RPMS
.distro/centos7/rpmbuild/SRPMS
.distro/centos7/rpmbuild/BUILD
.distro/centos7/rpmbuild/BUILDROOT
.distro/centos7/rpmbuild/SOURCES/io-monitor*tar.gz
@@ -1,202 +0,0 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
@@ -1,13 +0,0 @@
Metadata-Version: 1.1
Name: io-monitor
Version: 1.0
Summary: Poll iostat and raise alarms for excessive conditions
Home-page:
Author: Windriver
Author-email: info@windriver.com
License: Apache-2.0

Description: Poll iostat and raise alarms for excessive conditions

Platform: UNKNOWN
@@ -1,3 +0,0 @@
SRC_DIR="io-monitor"
COPY_LIST_TO_TAR="files scripts"
TIS_PATCH_VER=6
@@ -1,81 +0,0 @@
Summary: Poll iostat and raise alarms for excessive conditions
Name: io-monitor
Version: 1.0
Release: %{tis_patch_ver}%{?_tis_dist}
License: Apache-2.0
Group: base
Packager: Wind River <info@windriver.com>
URL: unknown
Source0: %{name}-%{version}.tar.gz

BuildRequires: python-setuptools
BuildRequires: systemd-units
BuildRequires: systemd-devel
BuildRequires: fm-api
Requires: /bin/systemctl

%description
Poll iostat and raise alarms for excessive conditions

%define local_bindir /usr/bin/
%define local_etc /etc/
%define local_etc_initd /etc/init.d/
%define local_etc_pmond /etc/pmon.d/
%define local_etc_logrotated /etc/logrotate.d/
%define pythonroot /usr/lib64/python2.7/site-packages

%define debug_package %{nil}

%prep
%setup

%build
%{__python} setup.py build

%install
%{__python} setup.py install --root=$RPM_BUILD_ROOT \
    --install-lib=%{pythonroot} \
    --prefix=/usr \
    --install-data=/usr/share \
    --single-version-externally-managed

install -d -m 755 %{buildroot}%{local_etc}%{name}
install -p -D -m 700 files/io-monitor.conf %{buildroot}%{local_etc}%{name}/io-monitor.conf

install -d -m 755 %{buildroot}%{local_etc_pmond}
install -p -D -m 644 scripts/pmon.d/io-monitor.conf %{buildroot}%{local_etc_pmond}/io-monitor.conf

install -d -m 755 %{buildroot}%{local_etc_initd}
install -p -D -m 700 scripts/init.d/io-monitor-manager %{buildroot}%{local_etc_initd}/io-monitor-manager

install -d -m 755 %{buildroot}%{local_bindir}
install -p -D -m 700 scripts/bin/io-monitor-manager %{buildroot}%{local_bindir}/io-monitor-manager

install -d -m 755 %{buildroot}%{local_etc_logrotated}
install -p -D -m 644 files/io-monitor.logrotate %{buildroot}%{local_etc_logrotated}/io-monitor.logrotate

install -d -m 755 %{buildroot}%{_unitdir}
install -m 644 -p -D files/%{name}-manager.service %{buildroot}%{_unitdir}/%{name}-manager.service

%post
/bin/systemctl enable %{name}-manager.service

%clean
rm -rf $RPM_BUILD_ROOT

# Note: The package name is io-monitor but the import name is io_monitor so
# can't use '%{name}'.
%files
%defattr(-,root,root,-)
%doc LICENSE
%{local_bindir}/*
%{local_etc}%{name}/*
%{local_etc_initd}/*
%{local_etc_pmond}/*
%{_unitdir}/%{name}-manager.service
%dir %{local_etc_logrotated}
%{local_etc_logrotated}/*
%dir %{pythonroot}/io_monitor
%{pythonroot}/io_monitor/*
%dir %{pythonroot}/io_monitor-%{version}.0-py2.7.egg-info
%{pythonroot}/io_monitor-%{version}.0-py2.7.egg-info/*
@@ -1,18 +0,0 @@
[Unit]
Description=Daemon for polling iostat status
After=local-fs.target
Before=pmon.service

[Service]
Type=forking
Restart=no
KillMode=process
RemainAfterExit=yes
ExecStart=/etc/rc.d/init.d/io-monitor-manager start
ExecStop=/etc/rc.d/init.d/io-monitor-manager stop
ExecReload=/etc/rc.d/init.d/io-monitor-manager reload
PIDFile=/var/run/io-monitor/io-monitor-manager.pid

[Install]
WantedBy=multi-user.target
@@ -1,60 +0,0 @@
[DEFAULT]
# Run as a daemon
#daemon_mode = True

# Sleep interval (in seconds) between iostat executions [1..59]
#wait_time = 1

# Global debug level. Note: All monitors will be clipped at this setting.
#global_log_level = DEBUG

[cinder_congestion]
# SSD: Large moving average window size (in samples).
#ssd_large_window_size = 30

# SSD: Medium moving average window size (in samples).
#ssd_medium_window_size = 60

# SSD: Small moving average window size (in samples).
#ssd_small_window_size = 90

# SSD: Value required in a moving average window to trigger next state.
#ssd_thresh_sustained_await = 1000

# SSD: Max await time. Anomalous data readings are clipped to this.
#ssd_thresh_max_await = 5000

# HDD: Large moving average window size (in samples).
#hdd_large_window_size = 240

# HDD: Medium moving average window size (in samples).
#hdd_medium_window_size = 180

# HDD: Small moving average window size (in samples).
#hdd_small_window_size = 120

# HDD: Value required in a moving average window to trigger next state.
#hdd_thresh_sustained_await = 1500

# HDD: Max await time. Anomalous data readings are clipped to this.
#hdd_thresh_max_await = 5000

# Monitor debug level. Note: the global level must be equivalent or lower.
#log_level = INFO

# Modify how often status messages appear in the log. 0.0 is never, 1.0 is for
# every iostat execution.
#status_log_rate_modifier = 0.2

# Enable FM alarm generation
#generate_fm_alarms = True

# Number of identical consecutive congestion states seen before raising/clearing alarms.
#fm_alarm_debounce = 5

# Write monitor data to a CSV for analysis
#output_write_csv = False

# Directory where monitor output will be located.
#output_csv_dir = /tmp
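The deleted io-monitor.conf above ships with every option commented out, so the daemon's defaults apply unless a line is uncommented. A minimal sketch (not part of the removed package; the sample values and variable names here are illustrative assumptions) of how such a file can be read with `configparser` fallbacks matching the documented defaults:

```python
# Sketch: read an io-monitor.conf-style INI file, falling back to the
# defaults documented in the commented-out entries above.
import configparser
import io

# Hypothetical user config overriding two of the documented defaults.
SAMPLE = """\
[DEFAULT]
wait_time = 2

[cinder_congestion]
ssd_thresh_max_await = 4000
"""

cfg = configparser.ConfigParser()
cfg.read_file(io.StringIO(SAMPLE))

# Options absent from the file fall back to the documented defaults.
wait_time = cfg.getint("DEFAULT", "wait_time", fallback=1)
fm_alarms = cfg.getboolean("cinder_congestion", "generate_fm_alarms",
                           fallback=True)
max_await = cfg.getint("cinder_congestion", "ssd_thresh_max_await",
                       fallback=5000)
```

This mirrors the convention in the file: a commented `#option = value` line documents the built-in default rather than setting anything.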
@@ -1,11 +0,0 @@
/var/log/io-monitor.log {
    nodateext
    size 10M
    start 1
    rotate 10
    missingok
    notifempty
    compress
    delaycompress
    copytruncate
}
@@ -1,10 +0,0 @@
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

import pbr.version

__version__ = pbr.version.VersionInfo('io-monitor').version_string()
__release__ = pbr.version.VersionInfo('io-monitor').release_string()
@@ -1,38 +0,0 @@
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

import oslo_i18n as i18n

DOMAIN = 'io_monitor'
_translators = i18n.TranslatorFactory(domain=DOMAIN)

# The primary translation function using the well-known name "_"
_ = _translators.primary

# HOST OS

WRLINUX = 'wrlinux'
CENTOS = 'CentOS Linux'

# ALARMS

# Reasons for alarm
ALARM_REASON_BUILDING = _('Cinder I/O Congestion is above normal range and '
                          'is building')
ALARM_REASON_CONGESTED = _('Cinder I/O Congestion is high and impacting '
                           'guest performance')

# Repair actions for alarm
REPAIR_ACTION_MAJOR_ALARM = _('Reduce the I/O load on the Cinder LVM '
                              'backend. Use Cinder QoS mechanisms on high '
                              'usage volumes.')
REPAIR_ACTION_CRITICAL_ALARM = _('Reduce the I/O load on the Cinder LVM '
                                 'backend. Cinder actions may fail until '
                                 'congestion is reduced. Use Cinder QoS '
                                 'mechanisms on high usage volumes.')

# All cinder volume group device mapper names begin with this
CINDER_DM_PREFIX = 'cinder--volumes'
@@ -1,189 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#


# IMPORTS
import logging
import time
import math
import os
import sys

from daemon import runner
from io_monitor import __version__
from io_monitor.constants import DOMAIN
from io_monitor.options import CONF
from io_monitor.options import add_common_opts
from io_monitor.monitors.cinder.congestion import CinderCongestionMonitor
import subprocess

# OPTIONS

# CONSTANTS
LOG_FILE = '/var/log/io-monitor.log'
PID_FILE = '/var/run/io-monitor/io-monitor-manager.pid'
CONFIG_COMPLETE = '/etc/platform/.initial_config_complete'

LOG = logging.getLogger(DOMAIN)

LOG_FORMAT_DEBUG = '%(asctime)s.%(msecs)03d: ' \
    + os.path.basename(sys.argv[0]) + '[%(process)s]: ' \
    + '%(filename)s(%(lineno)s) - %(funcName)-20s: ' \
    + '%(levelname)s: %(message)s'

LOG_FORMAT_NORMAL = '%(asctime)s.%(msecs)03d: [%(process)s]: ' \
    + '%(levelname)s: %(message)s'


# METHODS
def _start_polling(log_handle):
    io_monitor_daemon = IOMonitorDaemon()
    io_monitor_runner = runner.DaemonRunner(io_monitor_daemon)
    io_monitor_runner.daemon_context.umask = 0o022
    io_monitor_runner.daemon_context.files_preserve = [log_handle.stream]
    io_monitor_runner.do_action()


def handle_exception(exc_type, exc_value, exc_traceback):
    """
    Exception handler to log any uncaught exceptions
    """
    LOG.error("Uncaught exception",
              exc_info=(exc_type, exc_value, exc_traceback))
    sys.__excepthook__(exc_type, exc_value, exc_traceback)


def configure_logging():

    level_dict = {'ERROR': logging.ERROR,
                  'WARN': logging.WARN,
                  'INFO': logging.INFO,
                  'DEBUG': logging.DEBUG}

    if CONF.global_log_level in level_dict.keys():
        level = level_dict[CONF.global_log_level]
    else:
        level = logging.INFO

    # When we deamonize the default logging stream handler is closed. We need
    # manually setup logging so that we can pass the file_handler into the
    # monitor classes.
    LOG.setLevel(level)
    h = logging.FileHandler(LOG_FILE)
    h.setLevel(level)
    f = logging.Formatter(LOG_FORMAT_NORMAL, datefmt='%Y-%m-%d %H:%M:%S')
    h.setFormatter(f)
    LOG.addHandler(h)

    # Log uncaught exceptions to file
    sys.excepthook = handle_exception

    return h


def main():
    # Set up configuration options
    add_common_opts()
    CONF(project='io-monitor', version=__version__)

    # Set up logging. Allow all levels. The monitor will restrict the level
    # further as it sees fit
    log_handle = configure_logging()

    # Dump config
    CONF.log_opt_values(LOG, logging.INFO)
    if CONF.daemon_mode:
        sys.argv = [sys.argv[0], 'start']
        _start_polling(log_handle)


# CLASSES

class IOMonitorDaemon():
    """ Daemon process representation of
        the iostat monitoring program
    """
    def __init__(self):
        # Daemon-specific init
        self.stdin_path = '/dev/null'
        self.stdout_path = '/dev/null'
        self.stderr_path = '/dev/null'
        self.pidfile_path = PID_FILE
        self.pidfile_timeout = 5

        # Monitors
        self.ccm = None

    def run(self):

        # We are started by systemd so wait for initial config to be completed
        while not os.path.exists(CONFIG_COMPLETE):
            LOG.info("Waiting: Initial configuration is not complete")
            time.sleep(30)

        LOG.info("Initializing monitors..")
        # Cinder Congestion Monitor
        self.ccm = CinderCongestionMonitor()

        # Ensure system is monitorable
        if not self.ccm.is_system_monitorable():
            LOG.error("This system in not configured for Cinder LVM")

            # Wait for something to kill us. Since we are managed by pmon
            # we don't want to exit at this point
            def sleepy_time(t):
                while True:
                    t = t * 2
                    yield t

            LOG.info("Will standby performing no further actions")
            for s in sleepy_time(1):
                time.sleep(s)

            sys.exit()

        LOG.info("Starting: Running iostat %d times per minute" %
                 math.ceil(60/(CONF.wait_time+1)))

        try:
            command = "iostat -dx -t -p ALL"
            while True:
                process = subprocess.Popen(command.split(),
                                           stdout=subprocess.PIPE,
                                           stderr=subprocess.PIPE)
                output, error = process.communicate()
                if output:
                    # Send the iostat input to the monitor
                    self._monitor_ccm_send_inputs(output)

                    # Instruct the monitor to process the data
                    self._monitor_ccm_generate_output()

                time.sleep(CONF.wait_time)
        except KeyboardInterrupt:
            LOG.info('Exiting...')

        return_code = process.poll()
        LOG.error("return code = %s " % return_code)

    def _monitor_ccm_send_inputs(self, inputs):
        # LOG.debug(inputs)

        # Process output from iteration
        lines = inputs.split('\n')
        for pline in lines[2:]:
            self.ccm.parse_iostats(pline.strip())

    def _monitor_ccm_generate_output(self):
        self.ccm.generate_status()

if __name__ == "__main__":

    if not os.geteuid() == 0:
        sys.exit("\nOnly root can run this\n")

    main()
@@ -1,5 +0,0 @@
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -1,5 +0,0 @@
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -1,774 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Copyright (c) 2016-2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

import collections
import logging
import pyudev
import math
import operator
import os
import platform
import re
import subprocess

from fm_api import fm_api
from fm_api import constants as fm_constants
from io_monitor import constants
from io_monitor.constants import DOMAIN
from io_monitor.utils.data_collector import DeviceDataCollector
from io_monitor.constants import _
from oslo_config import cfg

ccm_opts = [
    cfg.IntOpt('ssd_small_window_size',
               default=30,
               help=('SSD: Small moving average window size (in seconds).')),
    cfg.IntOpt('ssd_medium_window_size',
               default=60,
               help=('SSD: Medium moving average window size (in seconds).')),
    cfg.IntOpt('ssd_large_window_size',
               default=90,
               help=('SSD: Large moving average window size (in seconds).')),
    cfg.IntOpt('ssd_thresh_sustained_await',
               default=1000,
               help=('SSD: Value required in a moving average window to '
                     'trigger next state.')),
    cfg.IntOpt('ssd_thresh_max_await',
               default=5000,
               help=('SSD: Max await time. Anomalous data readings are clipped'
                     ' to this.')),
    cfg.IntOpt('hdd_small_window_size',
               default=120,
               help=('HDD: Small moving average window size (in seconds).')),
    cfg.IntOpt('hdd_medium_window_size',
               default=180,
               help=('HDD: Medium moving average window size (in seconds).')),
    cfg.IntOpt('hdd_large_window_size',
               default=240,
               help=('HDD: Large moving average window size (in seconds).')),
    cfg.IntOpt('hdd_thresh_sustained_await',
               default=1500,
               help=('HDD: Value required in a moving average window to '
                     'trigger next state.')),
    cfg.IntOpt('hdd_thresh_max_await',
               default=5000,
               help=('HDD: Max await time. Anomalous data readings are clipped'
                     ' to this.')),
    cfg.StrOpt('log_level',
               default='INFO',
               choices=('ERROR', 'WARN', 'INFO', 'DEBUG'),
               help=('Monitor debug level. Note: global level must be'
                     ' equialent or lower.')),
    cfg.FloatOpt('status_log_rate_modifier', default=0.2,
                 help=('Modify how often status messages appear in the log.'
                       '0.0 is never, 1.0 is for every iostat execution.')),
    cfg.BoolOpt('generate_fm_alarms', default=True,
                help=('Enable FM Alarm generation')),
    cfg.IntOpt('fm_alarm_debounce', default=5,
               help=('Number of consecutive same congestion states seen '
                     'before raising/clearing alarms.')),
    cfg.BoolOpt('output_write_csv', default=False,
                help=('Write monitor data to a csv for analysis')),
    cfg.StrOpt('output_csv_dir', default='/tmp',
               help=('Directory where monitor output will be located.')),
]

CONF = cfg.CONF
CONF.register_opts(ccm_opts, group="cinder_congestion")

LOG = logging.getLogger(DOMAIN)


class CinderCongestionMonitor(object):
    # Congestion States
    STATUS_NORMAL = "Normal"
    STATUS_BUILDING = "Building"
    STATUS_CONGESTED = "Limiting"

    # disk type
    CINDER_DISK_SSD = 0
    CINDER_DISK_HDD = 1

    def __init__(self):
        # Setup logging
        level_dict = {'ERROR': logging.ERROR,
                      'WARN': logging.WARN,
                      'INFO': logging.INFO,
                      'DEBUG': logging.DEBUG}

        if CONF.cinder_congestion.log_level in level_dict.keys():
            LOG.setLevel(level_dict[CONF.cinder_congestion.log_level])
        else:
            LOG.setLevel(logging.INFO)

        LOG.info("Initializing %s..." % self.__class__.__name__)

        # DRBD file
        self.drbd_file = '/etc/drbd.d/drbd-cinder.res'

        # iostat parsing regex
        self.ts_regex = re.compile(r"(\d{2}/\d{2}/\d{2,4}) "
                                   "(\d{2}:\d{2}:\d{2})")
        self.device_regex = re.compile(
            r"(\w+-?\w+)\s+(\d+.\d+)\s+(\d+.\d+)\s+(\d+.\d+)\s+(\d+.\d+)"
            "\s+(\d+.\d+)\s+(\d+.\d+)\s+(\d+.\d+)\s+(\d+.\d+)\s+(\d+.\d+)\s+"
            "(\d+.\d+)\s+(\d+.\d+)\s+(\d+.\d+)\s+(\d+.\d+)")

        # window sizes
        self.s_window_sec = CONF.cinder_congestion.ssd_small_window_size
        self.m_window_sec = CONF.cinder_congestion.ssd_medium_window_size
        self.l_window_sec = CONF.cinder_congestion.ssd_large_window_size

        # state variables
        self.latest_time = None
        self.congestion_status = self.STATUS_NORMAL

        # init data collector
        self.device_dict = {}

        # devices
        self.phys_cinder_device = None
        self.base_cinder_devs = []
        self.base_cinder_tracking_devs = []
        self.non_cinder_dynamic_devs = ['drbd0', 'drbd1', 'drbd2', 'drbd3',
                                        'drbd5']
        self.non_cinder_phys_devs = []

        # set the default operational scenarios
        self.await_minimal_spike = CONF.cinder_congestion.ssd_thresh_max_await
        self.await_sustained_congestion = (
            CONF.cinder_congestion.ssd_thresh_sustained_await)

        # FM
        self.fm_api = fm_api.FaultAPIs()
        self.fm_state_count = collections.Counter()

        # CSV handle
        self.csv = None

        # status logging
        self.status_skip_count = 0

        # to compare with current g_count
        self.last_g_count = 0

        message_rate = math.ceil(60 / (CONF.wait_time+1))
        self.status_skip_total = math.ceil(
            message_rate/(message_rate *
                          CONF.cinder_congestion.status_log_rate_modifier))
        LOG.info("Display status message at %d per minute..." %
                 (message_rate *
                  CONF.cinder_congestion.status_log_rate_modifier))

        # Clear any exiting alarms
        self._clear_fm()

    def _is_number(self, s):
        try:
            float(s)
            return True
        except ValueError:
            return False

    def command(self, arguments, **kwargs):
        """ Execute e command and capture stdout, stderr & return code """
        process = subprocess.Popen(
            arguments,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            **kwargs)
        out, err = process.communicate()
        return out, err, process.returncode

    def device_path_to_device_node(self, device_path):
        try:
            output, _, _ = self.command(["udevadm", "settle", "-E",
                                         device_path])
            out, err, retcode = self.command(["readlink", "-f", device_path])
            out = out.rstrip()
        except Exception as e:
            return None

        return out

    def _get_disk_type(self, device_node):
        if device_node:
            proc_device_file = '/sys/block/' + device_node + \
                               '/queue/rotational'
            if os.path.exists(proc_device_file):
                with open(proc_device_file) as fileobject:
                    for line in fileobject:
                        return int(line.rstrip())

        # If the disk is unknown assume an SSD.
        return self.CINDER_DISK_SSD

    def _is_cinder_related_device(self, device_node):
        name = ""
        if device_node:
            proc_device_file = '/sys/block/' + device_node + \
                               '/dm/name'

            if os.path.exists(proc_device_file):
                with open(proc_device_file) as fileobject:
                    for line in fileobject:
                        name = line.rstrip()

            if constants.CINDER_DM_PREFIX in name:
                return True

        return False

    def _is_cinder_backing_device(self, device_node):
        name = ""
        if device_node:
            proc_device_file = '/sys/block/' + device_node + \
                               '/dm/name'
            if os.path.exists(proc_device_file):
                with open(proc_device_file) as fileobject:
                    for line in fileobject:
                        name = line.rstrip()

            if any(s in name for s in ['pool', 'anchor']):
                if device_node not in self.base_cinder_devs:
                    self.base_cinder_devs.append(device_node)
            if any(s in name for s in ['tdata', 'tmeta']):
                if device_node not in self.base_cinder_tracking_devs:
                    self.base_cinder_tracking_devs.append(device_node)

                LOG.info("Cinder Base Devices = %s; Tracking %s" % (
                    self.base_cinder_devs, self.base_cinder_tracking_devs))
                return True

        return False

    def _determine_cinder_devices(self):
        # Check to see if we have DRBD device we are syncing
        if os.path.exists(self.drbd_file):

            # grab the data
            with open(self.drbd_file) as fileobject:

                drbd_dev_regex = re.compile(r"device\s+/dev/(\w+);")
                drbd_disk_path_regex = re.compile(
                    r"disk\s+\"(/dev/disk/by-path/(.+))\";")
                drbd_disk_node_regex = re.compile(r"/dev/(\w+)")
                partition_regex = re.compile(r"(sd\w+)\d+")

                for line in fileobject:
                    m = drbd_dev_regex.match(line.strip())
                    if m:
                        self.base_cinder_devs.append(m.group(1))

                    m = drbd_disk_path_regex.match(line.strip())
                    if m:
                        drbd_disk = self.device_path_to_device_node(m.group(1))

                        drbd_disk_sd = drbd_disk_node_regex.match(drbd_disk)
                        if drbd_disk_sd:
                            self.base_cinder_devs.append(drbd_disk_sd.group(1))

                            d = partition_regex.match(drbd_disk_sd.group(1))
                            if d:
                                self.phys_cinder_device = d.group(1)
                                self.base_cinder_devs.append(d.group(1))

        # Which host OS?
        if platform.linux_distribution()[0] == constants.WRLINUX:
            dm_major = 252
        else:
            dm_major = 253

        # Grab the device mapper devices and pull out the base cinder
        # devices
        dmsetup_regex = re.compile(r'^([\w-]+)\s+\((\d+):(\d+)\)')

        dmsetup_command = 'dmsetup ls'
        dmsetup_process = subprocess.Popen(dmsetup_command,
                                           stdout=subprocess.PIPE,
                                           shell=True)
        dmsetup_output = dmsetup_process.stdout.read()
        lines = dmsetup_output.split('\n')
        for l in lines:
            m = dmsetup_regex.match(l.strip())
            if m:
                if m.group(2) == str(dm_major):
                    # LOG.debug("%s %s %s" % (m.group(1),
                    #                         m.group(2),
                    #                         m.group(3)))
                    if constants.CINDER_DM_PREFIX in m.group(1):
                        if 'pool' in m.group(1) or 'anchor' in m.group(1):
                            self.base_cinder_devs.append(
                                "dm-" + m.group(3))
                        if 'tdata' in m.group(1) or 'tmeta' in m.group(1):
                            self.base_cinder_tracking_devs.append(
                                "dm-" + m.group(3))
                    else:
                        self.non_cinder_dynamic_devs.append(
                            "dm-" + m.group(3))

        # If the tracking devs are non existant, then we didn't find any
        # thin pool entries. Therefore we are thickly provisioned and need
        # to track the physical device
        if len(self.base_cinder_tracking_devs) == 0:
            self.base_cinder_tracking_devs.append(
                self.phys_cinder_device)

        # Use UDEV info to grab all phyical disks
        context = pyudev.Context()
        for device in context.list_devices(subsystem='block',
                                           DEVTYPE='disk'):
            if device['MAJOR'] == '8':
                device = str(os.path.basename(device['DEVNAME']))
                if device != self.phys_cinder_device:
                    self.non_cinder_phys_devs.append(device)

    def _update_device_stats(self, ts, device, current_iops, current_await):
        if device not in self.device_dict:
            # For AIO systems nova-local will be provisioned later and
            # differently based on the instance_backing value for the compute
            # functionality. Check for cinder specific dm devices and ignore
            # all others
            if not self._is_cinder_related_device(device):
                return
            self._is_cinder_backing_device(device)
            self.device_dict.update(
                {device: DeviceDataCollector(
                    device,
                    [DeviceDataCollector.DATA_IOPS,
                     DeviceDataCollector.DATA_AWAIT],
                    self.s_window_sec,
                    self.m_window_sec,
                    self.l_window_sec)})
            self.device_dict[device].set_data_caps(
                DeviceDataCollector.DATA_AWAIT,
                self.await_minimal_spike)
            self.device_dict[device].set_congestion_thresholds(
                self.await_minimal_spike,
                self.await_sustained_congestion)

        self.device_dict[device].update_data(ts,
                                             DeviceDataCollector.DATA_IOPS,
                                             current_iops)
        self.device_dict[device].update_data(ts,
                                             DeviceDataCollector.DATA_AWAIT,
                                             current_await)
        self.device_dict[device].update_congestion_status()

    def is_system_monitorable(self):
        if not os.path.exists(self.drbd_file):
            LOG.error("%s does not exist" % self.drbd_file)
            return False

        # Discover devices on this host
        self._determine_cinder_devices()

        # Get the cinder disk type and set the monitor values accordingly
        disk_type = self._get_disk_type(self.phys_cinder_device)
        if disk_type:
            self.s_window_sec = CONF.cinder_congestion.hdd_small_window_size
            self.m_window_sec = CONF.cinder_congestion.hdd_medium_window_size
            self.l_window_sec = CONF.cinder_congestion.hdd_large_window_size
            self.await_minimal_spike = (
                CONF.cinder_congestion.hdd_thresh_max_await)
            self.await_sustained_congestion = (
                CONF.cinder_congestion.hdd_thresh_sustained_await)
        else:
            self.s_window_sec = CONF.cinder_congestion.ssd_small_window_size
            self.m_window_sec = CONF.cinder_congestion.ssd_medium_window_size
            self.l_window_sec = CONF.cinder_congestion.ssd_large_window_size
            self.await_minimal_spike = (
                CONF.cinder_congestion.ssd_thresh_max_await)
            self.await_sustained_congestion = (
                CONF.cinder_congestion.ssd_thresh_sustained_await)

        LOG.info("Physical Cinder Disk = %s - %s" %
                 (self.phys_cinder_device,
                  "HDD" if disk_type else "SSD"))
        LOG.info("Cinder Base Devices = %s; Tracking %s" % (
            self.base_cinder_devs, self.base_cinder_tracking_devs))
        LOG.info("Non-Cinder Devices = %s" % (
            self.non_cinder_dynamic_devs + self.non_cinder_phys_devs))

        return True

    def get_operational_thresholds(self):
        return (self.await_minimal_spike,
                self.await_sustained_congestion)

    def set_operational_thresholds(self,
                                   await_minimal_spike,
                                   await_sustained_congestion):
        if await_minimal_spike:
            self.await_minimal_spike = await_minimal_spike
        if await_sustained_congestion:
            self.await_sustained_congestion = await_sustained_congestion

    def _flush_stale_devices(self):
        for d in self.device_dict.keys():
            if self.device_dict[d].is_data_stale(self.latest_time):
                self.device_dict.pop(d, None)

    def _log_device_data_windows(self, device):
        LOG.debug("%-6s: %s %s" % (
            device,
            self.device_dict[device].get_element_windows_avg_string(
                DeviceDataCollector.DATA_AWAIT),
            self.device_dict[device].get_element_windows_avg_string(
                DeviceDataCollector.DATA_IOPS)))

    def _log_congestion_status(self, congestion_data):
        congestion_data.c_freq_dict.update(
            dict.fromkeys(
                set(['N', 'B', 'L']).difference(
                    congestion_data.c_freq_dict), 0))
        congestion_data.g_freq_dict.update(
            dict.fromkeys(
                set(['N', 'B', 'L']).difference(
                    congestion_data.g_freq_dict), 0))

        LOG.info("Status (%-8s): Cinder Devs IOPS [ %10.2f, %10.2f, %10.2f ] "
                 "Guests Counts %s; Guest Await[ %10.2f, %10.2f, %10.2f ]" % (
                     congestion_data.status,
                     congestion_data.c_iops_avg_list[0],
                     congestion_data.c_iops_avg_list[1],
                     congestion_data.c_iops_avg_list[2],
                     dict(congestion_data.g_freq_dict),
                     congestion_data.g_await_avg_list[0],
                     congestion_data.g_await_avg_list[1],
                     congestion_data.g_await_avg_list[2]))

    def _determine_congestion_state(self):

        # Analyze devices
        cinder_congestion_freq = collections.Counter()
        cinder_iops_avg = [0.0, 0.0, 0.0]
        guest_congestion_freq = collections.Counter()
        guest_await_avg = [0.0, 0.0, 0.0]

        for d, dc in self.device_dict.iteritems():
            if d in self.base_cinder_devs:
                if d in self.base_cinder_tracking_devs:
                    cinder_congestion_freq.update(dc.get_congestion_status())
                    cinder_iops_avg = map(operator.add,
                                          cinder_iops_avg,
                                          dc.get_element_windows_avg_list(
                                              DeviceDataCollector.DATA_IOPS))
                    # LOG.debug("C: %s " % cinder_iops_avg)
                    # self._log_device_data_windows(d)

            elif d not in (self.base_cinder_devs +
                           self.non_cinder_dynamic_devs +
                           self.non_cinder_phys_devs):
                guest_congestion_freq.update(
                    dc.get_congestion_status(debug=True))
                guest_await_avg = map(operator.add,
                                      guest_await_avg,
                                      dc.get_element_windows_avg_list(
                                          DeviceDataCollector.DATA_AWAIT))
                # LOG.debug("G: %s " % guest_await_avg)
                # self._log_device_data_windows(d)

        if list(cinder_congestion_freq.elements()):
            cinder_iops_avg[:] = [i/len(list(
                cinder_congestion_freq.elements())) for i in cinder_iops_avg]

        if list(guest_congestion_freq.elements()):
            guest_await_avg[:] = [i/len(list(
                guest_congestion_freq.elements())) for i in guest_await_avg]

        self.congestion_status = self.STATUS_NORMAL
        if DeviceDataCollector.STATUS_BUILDING in guest_congestion_freq:
            self.congestion_status = self.STATUS_BUILDING
        if DeviceDataCollector.STATUS_CONGESTED in guest_congestion_freq:
            self.congestion_status = self.STATUS_CONGESTED

        congestion_data = collections.namedtuple("congestion_data",
                                                 ["timestamp", "status",
                                                  "c_freq_dict",
                                                  "c_iops_avg_list",
                                                  "g_count",
                                                  "g_freq_dict",
                                                  "g_await_avg_list"])

        return congestion_data(self.latest_time,
                               self.congestion_status,
                               cinder_congestion_freq,
                               cinder_iops_avg,
                               sum(guest_congestion_freq.values()),
                               guest_congestion_freq,
                               guest_await_avg)

    def _clear_fm(self):
        building = fm_constants.FM_ALARM_ID_STORAGE_CINDER_IO_BUILDING
        limiting = fm_constants.FM_ALARM_ID_STORAGE_CINDER_IO_LIMITING

        entity_instance_id = "cinder_io_monitor"
        ccm_alarm_ids = [building, limiting]

        existing_alarms = []
        for alarm_id in ccm_alarm_ids:
            alarm_list = self.fm_api.get_faults_by_id(alarm_id)
            if not alarm_list:
                continue
            for alarm in alarm_list:
                existing_alarms.append(alarm)

        if len(existing_alarms) > 1:
            LOG.warn("WARNING: we have more than one existing alarm")

        for a in existing_alarms:
            self.fm_api.clear_fault(a.alarm_id, entity_instance_id)
            LOG.info(
                _("Clearing congestion alarm {} - severity: {}, "
                  "reason: {}, service_affecting: {}").format(
                      a.uuid, a.severity, a.reason_text, True))

    def _update_fm(self, debounce_count, override=None):

        building = fm_constants.FM_ALARM_ID_STORAGE_CINDER_IO_BUILDING
        limiting = fm_constants.FM_ALARM_ID_STORAGE_CINDER_IO_LIMITING

        if override:
            self.congestion_status = override

        # Update the status count
        self.fm_state_count.update(self.congestion_status[0])

        # Debounce alarms: If I have more than one congestion type then clear
        # the counts as we have crossed a threshold
        if len(self.fm_state_count) > 1:
            self.fm_state_count.clear()
            self.fm_state_count.update(self.congestion_status[0])
            return

        # Debounce alarms: Make sure we have see this alarm state for a specifc
        # number of samples
        count = self.fm_state_count.itervalues().next()
        if count < debounce_count:
            return

        # We are past the debounce state. Now take action.
        entity_instance_id = "cinder_io_monitor"
        ccm_alarm_ids = [building, limiting]

        existing_alarms = []
        for alarm_id in ccm_alarm_ids:
            alarm_list = self.fm_api.get_faults_by_id(alarm_id)
            if not alarm_list:
                continue
            for alarm in alarm_list:
                existing_alarms.append(alarm)

        if len(existing_alarms) > 1:
            LOG.warn("WARNING: we have more than one existing alarm")

        if self.congestion_status is self.STATUS_NORMAL:
            for a in existing_alarms:
                self.fm_api.clear_fault(a.alarm_id, entity_instance_id)
                LOG.info(
                    _("Clearing congestion alarm {} - severity: {}, "
                      "reason: {}, service_affecting: {}").format(
                          a.uuid, a.severity, a.reason_text, True))

        elif self.congestion_status is self.STATUS_BUILDING:
            alarm_is_raised = False
            for a in existing_alarms:
                if a.alarm_id != building:
                    self.fm_api.clear_fault(a.alarm_id, entity_instance_id)
                    LOG.info(
                        _("Clearing congestion alarm {} - severity: {}, "
                          "reason: {}, service_affecting: {}").format(
                              a.uuid, a.severity, a.reason_text, True))
                else:
                    alarm_is_raised = True

            if not alarm_is_raised:
                severity = fm_constants.FM_ALARM_SEVERITY_MAJOR
                reason_text = constants.ALARM_REASON_BUILDING

                fault = fm_api.Fault(
                    alarm_id=building,
                    alarm_type=fm_constants.FM_ALARM_TYPE_2,
                    alarm_state=fm_constants.FM_ALARM_STATE_SET,
                    entity_type_id=fm_constants.FM_ENTITY_TYPE_CLUSTER,
                    entity_instance_id=entity_instance_id,
                    severity=severity,
                    reason_text=reason_text,
                    probable_cause=fm_constants.ALARM_PROBABLE_CAUSE_8,
                    proposed_repair_action=constants.REPAIR_ACTION_MAJOR_ALARM,
                    service_affecting=True)
                alarm_uuid = self.fm_api.set_fault(fault)
|
|
||||||
if alarm_uuid:
|
|
||||||
LOG.info(
|
|
||||||
_("Created congestion alarm {} - severity: {}, "
|
|
||||||
"reason: {}, service_affecting: {}").format(
|
|
||||||
alarm_uuid, severity, reason_text, True))
|
|
||||||
else:
|
|
||||||
LOG.error(
|
|
||||||
_("Failed to create congestion alarm - severity: {},"
|
|
||||||
"reason: {}, service_affecting: {}").format(
|
|
||||||
severity, reason_text, True))
|
|
||||||
|
|
||||||
elif self.congestion_status is self.STATUS_CONGESTED:
|
|
||||||
alarm_is_raised = False
|
|
||||||
for a in existing_alarms:
|
|
||||||
if a.alarm_id != limiting:
|
|
||||||
self.fm_api.clear_fault(a.alarm_id, entity_instance_id)
|
|
||||||
LOG.info(
|
|
||||||
_("Clearing congestion alarm {} - severity: {}, "
|
|
||||||
"reason: {}, service_affecting: {}").format(
|
|
||||||
a.uuid, a.severity, a.reason_text, True))
|
|
||||||
else:
|
|
||||||
alarm_is_raised = True
|
|
||||||
|
|
||||||
if not alarm_is_raised:
|
|
||||||
severity = fm_constants.FM_ALARM_SEVERITY_CRITICAL
|
|
||||||
reason_text = constants.ALARM_REASON_CONGESTED
|
|
||||||
repair = constants.REPAIR_ACTION_CRITICAL_ALARM
|
|
||||||
fault = fm_api.Fault(
|
|
||||||
alarm_id=limiting,
|
|
||||||
alarm_type=fm_constants.FM_ALARM_TYPE_2,
|
|
||||||
alarm_state=fm_constants.FM_ALARM_STATE_SET,
|
|
||||||
entity_type_id=fm_constants.FM_ENTITY_TYPE_CLUSTER,
|
|
||||||
entity_instance_id=entity_instance_id,
|
|
||||||
severity=severity,
|
|
||||||
reason_text=reason_text,
|
|
||||||
probable_cause=fm_constants.ALARM_PROBABLE_CAUSE_8,
|
|
||||||
proposed_repair_action=repair,
|
|
||||||
service_affecting=True)
|
|
||||||
alarm_uuid = self.fm_api.set_fault(fault)
|
|
||||||
if alarm_uuid:
|
|
||||||
LOG.info(
|
|
||||||
_("Created congestion alarm {} - severity: {}, "
|
|
||||||
"reason: {}, service_affecting: {}").format(
|
|
||||||
alarm_uuid, severity, reason_text, True))
|
|
||||||
else:
|
|
||||||
LOG.error(
|
|
||||||
_("Failed to congestion storage alarm - severity: {},"
|
|
||||||
"reason: {}, service_affecting: {}").format(
|
|
||||||
severity, reason_text, True))
|
|
||||||
|
|
||||||
def _create_output(self, output_dir, congestion_data):
|
|
||||||
if not self.csv:
|
|
||||||
LOG.info("Creating output")
|
|
||||||
if os.path.exists(output_dir):
|
|
||||||
if output_dir.endswith('/'):
|
|
||||||
fn = output_dir + 'ccm.csv'
|
|
||||||
else:
|
|
||||||
fn = output_dir + '/ccm.csv'
|
|
||||||
else:
|
|
||||||
fn = '/tmp/ccm.csv'
|
|
||||||
try:
|
|
||||||
self.csv = open(fn, 'w')
|
|
||||||
except Exception as e:
|
|
||||||
raise e
|
|
||||||
|
|
||||||
self.csv.write("Timestamp, Congestion Status, "
|
|
||||||
"Cinder Devs Normal, "
|
|
||||||
"Cinder Devs Building, Cinder Devs Limiting,"
|
|
||||||
"Cinder IOPS Small, "
|
|
||||||
"Cinder IOPS Med, Cinder IOPS Large,"
|
|
||||||
"Guest Vols Normal, "
|
|
||||||
"Guest Vols Building, Guest Vols Limiting,"
|
|
||||||
"Guest Await Small, "
|
|
||||||
"Guest Await Med, Guest Await Large")
|
|
||||||
LOG.info("Done writing")
|
|
||||||
|
|
||||||
congestion_data.c_freq_dict.update(
|
|
||||||
dict.fromkeys(set(['N', 'B', 'L']).difference(
|
|
||||||
congestion_data.c_freq_dict), 0))
|
|
||||||
congestion_data.g_freq_dict.update(
|
|
||||||
dict.fromkeys(set(['N', 'B', 'L']).difference(
|
|
||||||
congestion_data.g_freq_dict), 0))
|
|
||||||
|
|
||||||
self.csv.write(
|
|
||||||
",".join(
|
|
||||||
(str(congestion_data.timestamp),
|
|
||||||
str(congestion_data.status[0]),
|
|
||||||
str(congestion_data.c_freq_dict[
|
|
||||||
DeviceDataCollector.STATUS_NORMAL]),
|
|
||||||
str(congestion_data.c_freq_dict[
|
|
||||||
DeviceDataCollector.STATUS_BUILDING]),
|
|
||||||
str(congestion_data.c_freq_dict[
|
|
||||||
DeviceDataCollector.STATUS_CONGESTED]),
|
|
||||||
str(congestion_data.c_iops_avg_list[0]),
|
|
||||||
str(congestion_data.c_iops_avg_list[1]),
|
|
||||||
str(congestion_data.c_iops_avg_list[2]),
|
|
||||||
str(congestion_data.g_freq_dict[
|
|
||||||
DeviceDataCollector.STATUS_NORMAL]),
|
|
||||||
str(congestion_data.g_freq_dict[
|
|
||||||
DeviceDataCollector.STATUS_BUILDING]),
|
|
||||||
str(congestion_data.g_freq_dict[
|
|
||||||
DeviceDataCollector.STATUS_CONGESTED]),
|
|
||||||
str(congestion_data.g_await_avg_list[0]),
|
|
||||||
str(congestion_data.g_await_avg_list[1]),
|
|
||||||
str(congestion_data.g_await_avg_list[2]))
|
|
||||||
) + '\n'
|
|
||||||
)
|
|
||||||
|
|
||||||
# flush the python buffer
|
|
||||||
self.csv.flush()
|
|
||||||
|
|
||||||
# make sure the os pushes the data to disk
|
|
||||||
os.fsync(self.csv.fileno())
|
|
||||||
|
|
||||||
def generate_status(self):
|
|
||||||
# Purge stale devices
|
|
||||||
self._flush_stale_devices()
|
|
||||||
|
|
||||||
# Get congestion state
|
|
||||||
data = self._determine_congestion_state()
|
|
||||||
if self.status_skip_count < self.status_skip_total:
|
|
||||||
self.status_skip_count += 1
|
|
||||||
else:
|
|
||||||
self._log_congestion_status(data)
|
|
||||||
self.status_skip_count = 0
|
|
||||||
|
|
||||||
# Send alarm updates to FM if configured and there are guest volumes
|
|
||||||
# present (won't be on the standby controller)
|
|
||||||
if CONF.cinder_congestion.generate_fm_alarms:
|
|
||||||
if data.g_count > 0:
|
|
||||||
self._update_fm(CONF.cinder_congestion.fm_alarm_debounce)
|
|
||||||
elif data.g_count == 0 and self.last_g_count > 0:
|
|
||||||
self._clear_fm()
|
|
||||||
|
|
||||||
# Save the current guest count view
|
|
||||||
self.last_g_count = data.g_count
|
|
||||||
|
|
||||||
# Save output
|
|
||||||
if CONF.cinder_congestion.output_write_csv:
|
|
||||||
self._create_output(CONF.cinder_congestion.output_csv_dir,
|
|
||||||
data)
|
|
||||||
|
|
||||||
def parse_iostats(self, line):
|
|
||||||
# LOG.debug(line)
|
|
||||||
m = self.ts_regex.match(line)
|
|
||||||
if m:
|
|
||||||
self.latest_time = m.group(0)
|
|
||||||
|
|
||||||
m = self.device_regex.match(line)
|
|
||||||
if m:
|
|
||||||
# LOG.debug(line)
|
|
||||||
# LOG.debug("%s: %f %f" % (m.group(1) ,
|
|
||||||
# float(m.group(4)) + float(m.group(5)),
|
|
||||||
# float(m.group(10))))
|
|
||||||
if not (self._is_number(m.group(4)) and
|
|
||||||
self._is_number(m.group(5)) and
|
|
||||||
self._is_number(m.group(10))):
|
|
||||||
LOG.error("ValueError: invalid input: r/s = %s, w/s = %s "
|
|
||||||
"await = %s" % (m.group(4), m.group(5), m.group(10)))
|
|
||||||
else:
|
|
||||||
if not any(s in m.group(1) for s in ['loop', 'ram', 'nb',
|
|
||||||
'md', 'scd'] +
|
|
||||||
self.non_cinder_phys_devs):
|
|
||||||
self._update_device_stats(self.latest_time,
|
|
||||||
m.group(1),
|
|
||||||
(float(m.group(4)) +
|
|
||||||
float(m.group(5))),
|
|
||||||
float(m.group(10)))
|
|
|
@ -1,27 +0,0 @@
|
||||||
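The debounce logic in `_update_fm` treats `fm_state_count` as a `collections.Counter` over single-character status codes ('N', 'B', 'L'): a new status clears the count, and an alarm action is taken only after the same status has been sampled `debounce_count` times in a row. A minimal runnable sketch of that sampling pattern, assuming those Counter semantics (and using the Python 3 spelling of `itervalues().next()`):

```python
from collections import Counter

def debounced(counter, status_char, debounce_count):
    """Count consecutive samples of one status character; reset on change.

    Returns True once the current status has persisted for debounce_count
    samples, mirroring the early-return structure of _update_fm above.
    """
    counter.update(status_char)           # count this sample
    if len(counter) > 1:                  # status changed: restart the count
        counter.clear()
        counter.update(status_char)
        return False
    count = next(iter(counter.values()))  # the single surviving status count
    return count >= debounce_count

c = Counter()
results = [debounced(c, s, 3) for s in "BBNNN"]
# The two 'B' samples never reach the threshold; the third 'N' does.
```

This explains why a flapping status never raises an alarm: every change of status restarts the count from one.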
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

from oslo_config import cfg

CONF = cfg.CONF

common_opts = [
    cfg.BoolOpt('daemon_mode', default=True,
                help=('Run as a daemon')),
    cfg.IntOpt('wait_time', default=1, min=1, max=59,
               help=('Sleep interval (in seconds) between iostat executions '
                     '[1..59]')),
    cfg.StrOpt('global_log_level',
               default='DEBUG',
               choices=['DEBUG', 'INFO', 'WARN', 'ERROR'],
               help=('Global debug level. Note: All monitors will be clipped '
                     'at this setting.'))
]


def add_common_opts():
    CONF.register_cli_opts(common_opts)

@@ -1,86 +0,0 @@
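For readers unfamiliar with oslo.config, the three options above map roughly onto stdlib `argparse`. This is an illustration only (the real daemon registers them through `cfg.CONF.register_cli_opts`), with the `IntOpt` min/max bounds expressed as an explicit type check since argparse has no built-in range option:

```python
import argparse

def wait_time(value):
    # Mirrors IntOpt('wait_time', min=1, max=59)
    iv = int(value)
    if not 1 <= iv <= 59:
        raise argparse.ArgumentTypeError("wait_time must be in [1..59]")
    return iv

parser = argparse.ArgumentParser()
parser.add_argument("--daemon_mode", action="store_true", default=True,
                    help="Run as a daemon")
parser.add_argument("--wait_time", type=wait_time, default=1,
                    help="Sleep interval (in seconds) between iostat "
                         "executions [1..59]")
parser.add_argument("--global_log_level", default="DEBUG",
                    choices=["DEBUG", "INFO", "WARN", "ERROR"],
                    help="Global debug level")

args = parser.parse_args(["--wait_time", "5", "--global_log_level", "INFO"])
```

Out-of-range values (e.g. `--wait_time 0`) are rejected at parse time, just as oslo.config rejects values outside the declared min/max.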
#!/bin/bash

#
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root" 1>&2
    exit 1
fi

TEST_ROOT=$PWD
HEAT_CHECK=${TEST_ROOT}/heat_check.sh

STRESSOR_CREATE=${TEST_ROOT}/cinder_stress_increment_create.sh
STRESSOR_DELETE=${TEST_ROOT}/cinder_stress_increment_delete.sh

## One volume/VM/stack
#YAML=${TEST_ROOT}/yaml/cinder_v1_bon0.yaml
#YAML=${TEST_ROOT}/yaml/cinder_v1_bon1.yaml
#YAML=${TEST_ROOT}/yaml/cinder_v1_bon1_cpuburn.yaml

## Two volumes/VM/stack
#YAML=${TEST_ROOT}/yaml/cinder_v2_bon0.yaml
#YAML=${TEST_ROOT}/yaml/cinder_v2_bon2.yaml
#YAML=${TEST_ROOT}/yaml/cinder_v2_bon2_cpuburn.yaml

## Four volumes/VM/stack
#YAML=${TEST_ROOT}/yaml/cinder_v4_bon0.yaml
#YAML=${TEST_ROOT}/yaml/cinder_v4_bon4.yaml
#YAML=${TEST_ROOT}/yaml/cinder_v4_bon4_cpuburn.yaml

## Test
#YAML=${TEST_ROOT}/yaml/cinder_nokia_v5_bon0.yaml
YAML=${TEST_ROOT}/yaml/cinder_nokia_v5_bon1.yaml
#YAML=${TEST_ROOT}/yaml/cinder_nokia_v5_bon2.yaml
#YAML=${TEST_ROOT}/yaml/cinder_nokia_v5_bon3.yaml
#YAML=${TEST_ROOT}/yaml/cinder_nokia_v5_bon4.yaml
#YAML=${TEST_ROOT}/yaml/cinder_nokia_v5_bon4_cpuburn.yaml

for stack_num in 1 2 4 8 14
#for stack_num in $(seq 1 32)
do
    echo "$stack_num: Creating stacks"
    sudo -u wrsroot ${STRESSOR_CREATE} $YAML $stack_num

    source /etc/nova/openrc
    AM_I_CREATING="sudo -u wrsroot $HEAT_CHECK | grep CREATE_IN_PROGRESS"
    while [[ $(eval $AM_I_CREATING) != "" ]]; do
        echo "$stack_num: Creating..."
        sleep 15
    done

    ANY_CREATE_ERRORS="sudo -u wrsroot $HEAT_CHECK | grep CREATE_FAILED"
    if [[ $(eval $ANY_CREATE_ERRORS) != "" ]]; then
        echo "$stack_num: Creating stacks failed"
        exit 1
    else
        # Run at steady state before tearing down
        echo "$stack_num: Running at steady state for an additional 10 seconds"
        sleep 10
    fi

    echo "$stack_num: Deleting stacks"
    sudo -u wrsroot ${STRESSOR_DELETE} $stack_num

    AM_I_DELETING="sudo -u wrsroot $HEAT_CHECK | grep DELETE_IN_PROGRESS"
    while [[ $(eval $AM_I_DELETING) != "" ]]; do
        echo "$stack_num: Deleting..."
        sleep 15
    done

    ANY_DELETE_ERRORS="sudo -u wrsroot $HEAT_CHECK | grep DELETE_FAILED"
    if [[ $(eval $ANY_DELETE_ERRORS) != "" ]]; then
        echo "$stack_num: Deleting stacks failed"
    else
        echo "$stack_num: Create/Delete successful"
    fi

    sleep 10
done

@@ -1,24 +0,0 @@
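The stress runner above polls Heat every 15 seconds until no stack reports `CREATE_IN_PROGRESS` (and again for `DELETE_IN_PROGRESS`). A language-neutral sketch of that poll-until-settled loop in Python, with the heat query stubbed out as a callable (`check_in_progress` is a stand-in for `heat_check.sh | grep ..._IN_PROGRESS`, not part of the scripts):

```python
import itertools

def wait_until_settled(check_in_progress, sleep=lambda s: None, interval=15):
    """Keep polling while any stack is still IN_PROGRESS.

    Returns the number of polls that still saw work in progress. The
    injectable sleep lets a test run the loop without real delays.
    """
    for polls in itertools.count():
        if not check_in_progress():
            return polls
        sleep(interval)

# Simulate a stack that needs two more polls before it settles.
states = iter([True, True, False])
polls = wait_until_settled(lambda: next(states))
```

Note the shell script's delete loop needs a `sleep` in its body for exactly this reason; without one it busy-polls the Heat API.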
#!/bin/bash

#
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

case $# in
    0|1)
        echo "Usage: $(basename $0) <yaml> <# of stacks>"
        exit 1
        ;;
esac

YAML=$1
NUM_STACKS=$2

for i in $(seq 1 $NUM_STACKS)
do
    source $HOME/openrc.tenant1
    heat stack-create -f $YAML stack-$i
done

@@ -1,23 +0,0 @@
#!/bin/bash

#
# Copyright (c) 2017 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

case $# in
    0)
        echo "Usage: $(basename $0) <# of stacks>"
        exit 1
        ;;
esac

NUM_STACKS=$1

for i in $(seq 1 $NUM_STACKS)
do
    source /etc/nova/openrc
    heat stack-delete stack-$i
done

@@ -1,88 +0,0 @@
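The Heat templates that follow attach their data volumes as `vdb` through `vdf` via `block_device_mapping`. A hypothetical helper showing how those device names line up with the volume count (`device_names` is illustrative only, not part of the package):

```python
def device_names(count, start="b"):
    """Return virtio device names for `count` data volumes.

    The first data volume attaches as vdb, the next as vdc, and so on,
    matching the block_device_mapping entries in the templates below.
    """
    return ["vd" + chr(ord(start) + i) for i in range(count)]

names = device_names(5)  # the five Test_volume_N attachments
```

When a template boots from a root volume instead (mapped to `vda`), the data volumes shift down to `vdb` through `vde`, which is `device_names(4)`.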
heat_template_version: '2013-05-23'

description:
  One Bonnie, 5 volumes. No root volumes, CoW images

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  Test_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_1
      size: 5

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      image: centos_nkstress
      flavor: smallvol
      #key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: Test_volume_1 }, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_2 }, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_3 }, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_4 }, device_name: "vde" }
        - { volume_id: { get_resource: Test_volume_5 }, device_name: "vdf" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - mkfs.ext4 /dev/vdf
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mkdir /mnt/f
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - mount /dev/vdf /mnt/f/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&

@@ -1,84 +0,0 @@
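The five `Test_volume_N` resources above are identical except for their index. A hypothetical generator for that resource dictionary (illustrative only; the package ships the templates as static YAML):

```python
def volume_resources(count, size=5):
    """Build the Test_volume_N resource entries for a Heat template.

    Each entry is an OS::Cinder::Volume of `size` GB, named after its
    1-based index, matching the template above.
    """
    return {
        "Test_volume_%d" % i: {
            "type": "OS::Cinder::Volume",
            "properties": {"name": "Test_volume_%d" % i, "size": size},
        }
        for i in range(1, count + 1)
    }

res = volume_resources(5)
```

Dumping `res` under a template's `resources:` key (e.g. with a YAML serializer) would reproduce the five blocks verbatim.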
heat_template_version: '2013-05-23'

description:
  No Bonnie, one 20GB root volume and four non-root volumes

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  root_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: root_volume_1
      image: centos_nkstress
      size: 20

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    depends_on: root_volume_1
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      flavor: smallvol
      key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: root_volume_1}, device_name: "vda" }
        - { volume_id: { get_resource: Test_volume_2}, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_3}, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_4}, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_5}, device_name: "vde" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/

@@ -1,89 +0,0 @@
heat_template_version: '2013-05-23'

description:
  One Bonnie, 5 volumes. No root volumes, CoW images

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  Test_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_1
      size: 5

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      image: centos_nkstress
      flavor: smallvol
      #key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: Test_volume_1 }, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_2 }, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_3 }, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_4 }, device_name: "vde" }
        - { volume_id: { get_resource: Test_volume_5 }, device_name: "vdf" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - mkfs.ext4 /dev/vdf
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mkdir /mnt/f
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - mount /dev/vdf /mnt/f/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/b -u root -x 9999999 >> /root/stabi_1.log&

@@ -1,87 +0,0 @@
heat_template_version: '2013-05-23'

description:
  One Bonnie, one 10GB root volume and four 5GB non-root volumes

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  root_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: root_volume_1
      image: centos_nkstress
      size: 10

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    depends_on: root_volume_1
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      flavor: smallvol
      key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: root_volume_1}, device_name: "vda" }
        - { volume_id: { get_resource: Test_volume_2}, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_3}, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_4}, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_5}, device_name: "vde" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /home/centos -u root -x 1000 >> /root/stabi_1.log&

@@ -1,87 +0,0 @@
heat_template_version: '2013-05-23'

description:
  One Bonnie, one 20GB root volume and four non-root volumes

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  root_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: root_volume_1
      image: centos_nkstress
      size: 20

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    depends_on: root_volume_1
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      flavor: smallvol
      key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: root_volume_1}, device_name: "vda" }
        - { volume_id: { get_resource: Test_volume_2}, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_3}, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_4}, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_5}, device_name: "vde" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /home/centos -u root -x 1000 >> /root/stabi_1.log&

@@ -1,87 +0,0 @@
heat_template_version: '2013-05-23'

description:
  One Bonnie, one 50GB root volume and four non-root volumes

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  root_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: root_volume_1
      image: centos_nkstress
      size: 50

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    depends_on: root_volume_1
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      flavor: smallvol
      key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: root_volume_1}, device_name: "vda" }
        - { volume_id: { get_resource: Test_volume_2}, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_3}, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_4}, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_5}, device_name: "vde" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /home/centos -u root -x 1000 >> /root/stabi_1.log&

@@ -1,90 +0,0 @@
heat_template_version: '2013-05-23'

description:
  Two Bonnies, 5 volumes. No root volumes, CoW images

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  Test_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_1
      size: 5

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      image: centos_nkstress
      flavor: smallvol
      #key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: Test_volume_1 }, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_2 }, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_3 }, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_4 }, device_name: "vde" }
        - { volume_id: { get_resource: Test_volume_5 }, device_name: "vdf" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - mkfs.ext4 /dev/vdf
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mkdir /mnt/f
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
|
|
||||||
- mount /dev/vdf /mnt/f/
|
|
||||||
- echo "Starting bonnie++..." >> /root/stabi_1.log&
|
|
||||||
- date >> /root/stabi_1.log&
|
|
||||||
- /usr/sbin/bonnie++ -b -n 100 -d /mnt/b -u root -x 999 >> /root/stabi_1.log&
|
|
||||||
- /usr/sbin/bonnie++ -b -n 100 -d /mnt/c -u root -x 999 >> /root/stabi_2.log&
|
|
||||||
|
|
|
@ -1,88 +0,0 @@
heat_template_version: '2013-05-23'

description:
  Two Bonnies, One root volume 50GB and non-root 4 volumes

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  root_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: root_volume_1
      image: centos_nkstress
      size: 20

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    depends_on: root_volume_1
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      flavor: smallvol
      key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: root_volume_1}, device_name: "vda" }
        - { volume_id: { get_resource: Test_volume_2}, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_3}, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_4}, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_5}, device_name: "vde" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /home/centos -u root -x 1000 >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/b -u root -x 1000 >> /root/stabi_2.log&
@ -1,88 +0,0 @@
heat_template_version: '2013-05-23'

description:
  One Bonnie, One root volume 50GB and non-root 4 volumes

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  root_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: root_volume_1
      image: centos_nkstress
      size: 50

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    depends_on: root_volume_1
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      flavor: smallvol
      key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: root_volume_1}, device_name: "vda" }
        - { volume_id: { get_resource: Test_volume_2}, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_3}, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_4}, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_5}, device_name: "vde" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /home/centos -u root -x 1000 >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/b -u root -x 1000 >> /root/stabi_2.log&
@ -1,91 +0,0 @@
heat_template_version: '2013-05-23'

description:
  One Bonnie, 5 volumes. No root volumes, CoW images

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  Test_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_1
      size: 5

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      image: centos_nkstress
      flavor: smallvol
      #key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: Test_volume_1 }, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_2 }, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_3 }, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_4 }, device_name: "vde" }
        - { volume_id: { get_resource: Test_volume_5 }, device_name: "vdf" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - mkfs.ext4 /dev/vdf
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mkdir /mnt/f
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - mount /dev/vdf /mnt/f/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/b -u root -x 999 >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/c -u root -x 999 >> /root/stabi_2.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/d -u root -x 999 >> /root/stabi_3.log&
@ -1,89 +0,0 @@
heat_template_version: '2013-05-23'

description:
  Two Bonnies, One root volume 50GB and non-root 4 volumes

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  root_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: root_volume_1
      image: centos_nkstress
      size: 20

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    depends_on: root_volume_1
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      flavor: smallvol
      key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: root_volume_1}, device_name: "vda" }
        - { volume_id: { get_resource: Test_volume_2}, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_3}, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_4}, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_5}, device_name: "vde" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /home/centos -u root -x 1000 >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/b -u root -x 1000 >> /root/stabi_2.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/c -u root -x 1000 >> /root/stabi_2.log&
@ -1,89 +0,0 @@
heat_template_version: '2013-05-23'

description:
  Four Bonnies, One root volume 50GB and non-root 4 volumes

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  root_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: root_volume_1
      image: centos_nkstress
      size: 50

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    depends_on: root_volume_1
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      flavor: smallvol
      key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: root_volume_1}, device_name: "vda" }
        - { volume_id: { get_resource: Test_volume_2}, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_3}, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_4}, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_5}, device_name: "vde" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /home/centos -u root -x 1000 >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/b -u root -x 1000 >> /root/stabi_2.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/c -u root -x 1000 >> /root/stabi_3.log&
@ -1,92 +0,0 @@
heat_template_version: '2013-05-23'

description:
  One Bonnie, 5 volumes. No root volumes, CoW images

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  Test_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_1
      size: 5

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      image: centos_nkstress
      flavor: smallvol
      #key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: Test_volume_1 }, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_2 }, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_3 }, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_4 }, device_name: "vde" }
        - { volume_id: { get_resource: Test_volume_5 }, device_name: "vdf" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - mkfs.ext4 /dev/vdf
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mkdir /mnt/f
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - mount /dev/vdf /mnt/f/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/b -u root -x 999 >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/c -u root -x 999 >> /root/stabi_2.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/d -u root -x 999 >> /root/stabi_3.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/e -u root -x 999 >> /root/stabi_4.log&
@ -1,90 +0,0 @@
heat_template_version: '2013-05-23'

description:
  Four Bonnies, One root volume 50GB and non-root 4 volumes

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  root_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: root_volume_1
      image: centos_nkstress
      size: 20

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    depends_on: root_volume_1
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      flavor: smallvol
      key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: root_volume_1}, device_name: "vda" }
        - { volume_id: { get_resource: Test_volume_2}, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_3}, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_4}, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_5}, device_name: "vde" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /home/centos -u root -x 1000 >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/b -u root -x 1000 >> /root/stabi_2.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/c -u root -x 1000 >> /root/stabi_2.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/d -u root -x 1000 >> /root/stabi_2.log&
@ -1,90 +0,0 @@
heat_template_version: '2013-05-23'

description:
  Four Bonnie, One root volume 50GB and non-root 4 volumes

parameters:
  Network_Name:
    type: string
    description: Network which is used for servers
    default: tenant1-mgmt-net

resources:

  root_volume_1:
    type: OS::Cinder::Volume
    properties:
      name: root_volume_1
      image: centos_nkstress
      size: 50

  Test_volume_2:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_2
      size: 5

  Test_volume_3:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_3
      size: 5

  Test_volume_4:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_4
      size: 5

  Test_volume_5:
    type: OS::Cinder::Volume
    properties:
      name: Test_volume_5
      size: 5

  Stabi_volume_write:
    type: OS::Nova::Server
    depends_on: root_volume_1
    properties:
      name: { list_join : [ "-", [{get_param: 'OS::stack_name'}, 'Stabi_volume_write']]}
      flavor: smallvol
      #key_name: newkey
      availability_zone: "nova"
      networks:
        - network: { get_param: Network_Name }
      block_device_mapping:
        - { volume_id: { get_resource: root_volume_1}, device_name: "vda" }
        - { volume_id: { get_resource: Test_volume_2}, device_name: "vdb" }
        - { volume_id: { get_resource: Test_volume_3}, device_name: "vdc" }
        - { volume_id: { get_resource: Test_volume_4}, device_name: "vdd" }
        - { volume_id: { get_resource: Test_volume_5}, device_name: "vde" }

      user_data_format: RAW
      user_data: |
        #cloud-config
        user: centos
        password: centos
        chpasswd: {expire: False}
        ssh_pwauth: True
        runcmd:
          - echo "Creating file systems..." > /root/stabi_1.log&
          - mkfs.ext4 /dev/vdb
          - mkfs.ext4 /dev/vdc
          - mkfs.ext4 /dev/vdd
          - mkfs.ext4 /dev/vde
          - echo "Mounting directories..." >> /root/stabi_1.log&
          - mkdir /mnt/b
          - mkdir /mnt/c
          - mkdir /mnt/d
          - mkdir /mnt/e
          - mount /dev/vdb /mnt/b/
          - mount /dev/vdc /mnt/c/
          - mount /dev/vdd /mnt/d/
          - mount /dev/vde /mnt/e/
          - echo "Starting bonnie++..." >> /root/stabi_1.log&
          - date >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /home/centos -u root -x 1000 >> /root/stabi_1.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/b -u root -x 1000 >> /root/stabi_2.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/c -u root -x 1000 >> /root/stabi_2.log&
          - /usr/sbin/bonnie++ -b -n 100 -d /mnt/d -u root -x 1000 >> /root/stabi_2.log&
@ -1,5 +0,0 @@
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
@ -1,147 +0,0 @@
|
||||||
# vim: tabstop=4 shiftwidth=4 softtabstop=4
|
|
||||||
#
|
|
||||||
# Copyright (c) 2016 Wind River Systems, Inc.
|
|
||||||
#
|
|
||||||
# SPDX-License-Identifier: Apache-2.0
|
|
||||||
#
|
|
||||||
|
|
||||||
import logging
|
|
||||||
import os
|
|
||||||
|
|
||||||
from io_monitor.constants import DOMAIN
|
|
||||||
from io_monitor.utils.data_window import DataCollectionWindow
|
|
||||||
|
|
||||||
LOG = logging.getLogger(DOMAIN)
|
|
||||||
|
|
||||||
|
|
||||||
class DeviceDataCollector(object):
|
|
||||||
# Moving average windows
|
|
||||||
MA_WINDOW_SMA = 0
|
|
||||||
MA_WINDOW_MED = 1
|
|
||||||
MA_WINDOW_LAR = 2
|
|
||||||
|
|
||||||
# Device status
|
|
||||||
STATUS_NORMAL = "N"
|
|
||||||
STATUS_BUILDING = "B"
|
|
||||||
STATUS_CONGESTED = "L"
|
|
||||||
|
|
||||||
# Data tracked
|
|
||||||
DATA_IOPS = "iops"
|
|
||||||
DATA_AWAIT = "await"
|
|
||||||
|
|
||||||
def __init__(self, device_node, data_elements,
|
|
||||||
size_sma, size_med, size_lar):
|
|
||||||
|
|
||||||
self.node = device_node
|
|
||||||
|
|
||||||
if os.path.exists('/sys/block/' + self.node + '/dm/name'):
|
|
||||||
self.name = open('/sys/block/' + self.node + '/dm/name',
|
|
||||||
'r').read().rstrip()
|
|
||||||
else:
|
|
||||||
self.name = self.node
|
|
||||||
|
|
||||||
self.data_dict = {}
|
|
||||||
self.data_caps = {self.DATA_AWAIT: -1, self.DATA_IOPS: -1}
|
|
||||||
self.timestamp = None
|
|
||||||
|
|
||||||
self.congestion_status = self.STATUS_NORMAL
|
|
||||||
self.congestion_await_minimal_spike = -1
|
|
||||||
self.congestion_await_sustained = -1
|
|
||||||
|
|
||||||
for element in data_elements:
|
|
||||||
self.data_dict.update({element: [
|
|
||||||
DataCollectionWindow(size_sma, stuck_data_override=True),
|
|
||||||
DataCollectionWindow(size_med, stuck_data_override=True),
|
|
||||||
DataCollectionWindow(size_lar, stuck_data_override=True)]})
|
|
||||||
|
|
||||||
def update_congestion_status(self):
|
|
||||||
# Bail if threshold is not set
|
|
||||||
if self.congestion_await_sustained == -1:
|
|
||||||
return
|
|
||||||
|
|
||||||
ma_sma = self.get_average(self.DATA_AWAIT, self.MA_WINDOW_SMA)
|
|
||||||
ma_med = self.get_average(self.DATA_AWAIT, self.MA_WINDOW_MED)
|
|
||||||
ma_lar = self.get_average(self.DATA_AWAIT, self.MA_WINDOW_LAR)
|
|
||||||
|
|
||||||
# Set the congestion status based on await moving average
|
|
||||||
if self.congestion_status is self.STATUS_NORMAL:
|
|
||||||
if ma_sma > self.congestion_await_sustained:
|
|
||||||
        self.congestion_status = self.STATUS_BUILDING

        if self.congestion_status is self.STATUS_BUILDING:
            if ma_lar > self.congestion_await_sustained:
                self.congestion_status = self.STATUS_CONGESTED
                LOG.warn("Node %s (%s) is experiencing high await times."
                         % (self.node, self.name))
            elif ma_sma < self.congestion_await_sustained:
                self.congestion_status = self.STATUS_NORMAL

        if self.congestion_status is self.STATUS_CONGESTED:
            if ma_med < self.congestion_await_sustained:
                self.congestion_status = self.STATUS_BUILDING

    def update_data(self, ts, element, value):
        self.timestamp = ts

        # LOG.debug("%s: e = %s, v= %f" % (self.node, element, value))
        for w in [self.MA_WINDOW_SMA,
                  self.MA_WINDOW_MED,
                  self.MA_WINDOW_LAR]:
            self.data_dict[element][w].update(value, self.data_caps[element])

    def get_latest(self, element):
        if element not in self.data_dict:
            LOG.error("Error: invalid element requested = %s" % element)
            return 0

        return self.data_dict[element][self.MA_WINDOW_SMA].get_latest()

    def get_average(self, element, window):
        if window not in [self.MA_WINDOW_SMA,
                          self.MA_WINDOW_MED,
                          self.MA_WINDOW_LAR]:
            LOG.error("WindowError: invalid window requested = %s" % window)
            return 0

        if element not in self.data_dict:
            LOG.error("Error: invalid element requested = %s" % element)
            return 0

        return self.data_dict[element][window].get_average()

    def is_data_stale(self, ts):
        return not (ts == self.timestamp)

    def get_congestion_status(self, debug=False):
        if debug:
            ma_sma = self.get_average(self.DATA_AWAIT, self.MA_WINDOW_SMA)
            ma_med = self.get_average(self.DATA_AWAIT, self.MA_WINDOW_MED)
            ma_lar = self.get_average(self.DATA_AWAIT, self.MA_WINDOW_LAR)

            LOG.debug("%s [ %6.2f %6.2f %6.2f ] %d" %
                      (self.node, ma_sma, ma_med, ma_lar,
                       self.congestion_await_sustained))

        return self.congestion_status

    def set_data_caps(self, element, cap):
        if element in self.data_caps:
            self.data_caps[element] = cap

    def set_congestion_thresholds(self, await_minimal_spike,
                                  await_sustained_congestion):
        self.congestion_await_minimal_spike = await_minimal_spike
        self.congestion_await_sustained = await_sustained_congestion

    def get_element_windows_avg_list(self, element):
        return [self.get_average(element, self.MA_WINDOW_SMA),
                self.get_average(element, self.MA_WINDOW_MED),
                self.get_average(element, self.MA_WINDOW_LAR)]

    def get_element_windows_avg_string(self, element):
        return "%s [ %9.2f, %9.2f, %9.2f ]" % (
            element,
            self.get_average(element, self.MA_WINDOW_SMA),
            self.get_average(element, self.MA_WINDOW_MED),
            self.get_average(element, self.MA_WINDOW_LAR))
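The deleted monitor above is effectively a small three-state machine driven by the await moving averages over small/medium/large windows. A standalone sketch of that logic (hypothetical names, not the io-monitor code itself; the NORMAL-to-BUILDING transition falls outside this excerpt, so NORMAL is simply held here):

```python
STATUS_NORMAL, STATUS_BUILDING, STATUS_CONGESTED = "normal", "building", "congested"


def next_status(status, ma_sma, ma_med, ma_lar, threshold):
    """Advance the congestion state machine one step.

    BUILDING escalates to CONGESTED when the large-window average exceeds
    the sustained threshold, and relaxes to NORMAL when the small-window
    average drops below it; CONGESTED relaxes to BUILDING when the
    medium-window average drops below it.
    """
    if status == STATUS_BUILDING:
        if ma_lar > threshold:
            return STATUS_CONGESTED
        if ma_sma < threshold:
            return STATUS_NORMAL
    elif status == STATUS_CONGESTED:
        if ma_med < threshold:
            return STATUS_BUILDING
    return status
```

Checking the large window before the small one mirrors the original `if`/`elif` ordering, so a sample that satisfies both conditions escalates rather than relaxes.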
@@ -1,61 +0,0 @@
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

import collections


class DataCollectionWindow(object):
    # If the same data is seen repeatedly, then override with 0.0 as this
    # device is no longer updating
    CONSECUTIVE_SAME_DATA = 5

    def __init__(self, size, stuck_data_override=False):
        self.window = collections.deque(size*[0.0], size)
        self.timestamp = None
        self.last_value = 0.0
        self.total = 0.0
        self.avg = 0.0

        # iostat will produce a "stuck data" scenario when called with less
        # than two iterations and I/O has stopped on the device
        self.stuck_override = stuck_data_override
        self.stuck_count = 0

    def update(self, value, cap):
        # Handle stuck data and override
        if self.stuck_override and value != 0:
            if value == self.last_value:
                self.stuck_count += 1
            else:
                self.stuck_count = 0

        # Save latest value
        self.last_value = value

        if self.stuck_count > self.CONSECUTIVE_SAME_DATA:
            value = 0.0
        else:
            # Cap the values due to squirrelly data
            if cap > 0:
                value = min(value, cap)

        expired_value = self.window.pop()

        # Push the new value
        self.window.appendleft(value)

        # Adjust the running total
        self.total += (value - expired_value)

        # Adjust the average
        self.avg = max(0.0, self.total/len(self.window))

    def get_latest(self):
        return self.last_value

    def get_average(self):
        return self.avg
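`DataCollectionWindow` keeps a running total so each `update()` is O(1): the sample that expires from the deque is subtracted and the new one added, instead of re-summing the whole window. A minimal standalone sketch of that trick (hypothetical class name, stuck-data handling omitted):

```python
from collections import deque


class RollingAverage(object):
    """Fixed-size rolling average maintained via an incremental total."""

    def __init__(self, size):
        # Pre-fill with zeros so the window length is constant from the start.
        self.window = deque(size * [0.0], size)
        self.total = 0.0

    def update(self, value, cap=0.0):
        if cap > 0:
            value = min(value, cap)      # clamp spikes, as update() does
        expired = self.window.pop()      # oldest sample falls out
        self.window.appendleft(value)    # newest sample pushed in
        self.total += value - expired    # adjust total incrementally
        return max(0.0, self.total / len(self.window))
```

With a window of 4, feeding in 8.0 repeatedly walks the average up toward 8.0 one quarter at a time, and a capped spike contributes only the cap.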
@@ -1,18 +0,0 @@
#!/usr/bin/env python
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

import setuptools

setuptools.setup(name='io_monitor',
                 version='1.0.0',
                 description='IO Monitor',
                 license='Apache-2.0',
                 packages=['io_monitor', 'io_monitor.monitors',
                           'io_monitor.monitors.cinder', 'io_monitor.utils'],
                 entry_points={
                 })
@@ -1,17 +0,0 @@
#!/usr/bin/env python
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

import sys

try:
    from io_monitor import io_monitor_manager
except EnvironmentError as e:
    print >> sys.stderr, "Error importing io_monitor_manager: ", str(e)
    sys.exit(1)

io_monitor_manager.main()
@@ -1,100 +0,0 @@
#!/bin/sh
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#

### BEGIN INIT INFO
# Provides:          io-monitor-manager
# Required-Start:
# Required-Stop:
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Daemon for polling iostat status
# Description:       Daemon for polling iostat status
### END INIT INFO

DESC="io-monitor-manager"
DAEMON="/usr/bin/io-monitor-manager"
RUNDIR="/var/run/io-monitor"
PIDFILE=$RUNDIR/$DESC.pid

start()
{
    if [ -e $PIDFILE ]; then
        PIDDIR=/proc/$(cat $PIDFILE)
        if [ -d ${PIDDIR} ]; then
            echo "$DESC already running."
            exit 0
        else
            echo "Removing stale PID file $PIDFILE"
            rm -f $PIDFILE
        fi
    fi

    echo -n "Starting $DESC..."
    mkdir -p $RUNDIR
    start-stop-daemon --start --quiet \
        --pidfile ${PIDFILE} --exec ${DAEMON} -- --daemon_mode
    #--make-pidfile

    if [ $? -eq 0 ]; then
        echo "done."
    else
        echo "failed."
        exit 1
    fi
}

stop()
{
    echo -n "Stopping $DESC..."
    start-stop-daemon --stop --quiet --pidfile $PIDFILE
    if [ $? -eq 0 ]; then
        echo "done."
    else
        echo "failed."
    fi
    rm -f $PIDFILE
}

status()
{
    pid=`cat $PIDFILE 2>/dev/null`
    if [ -n "$pid" ]; then
        if ps -p $pid > /dev/null 2>&1 ; then
            echo "$DESC is running"
            exit 0
        else
            echo "$DESC is not running but has pid file"
            exit 1
        fi
    fi
    echo "$DESC is not running"
    exit 3
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart|force-reload|reload)
        stop
        start
        ;;
    status)
        status
        ;;
    *)
        echo "Usage: $0 {start|stop|force-reload|restart|reload|status}"
        exit 1
        ;;
esac

exit 0
@@ -1,19 +0,0 @@
[process]
process     = io-monitor-manager
pidfile     = /var/run/io-monitor/io-monitor-manager.pid
script      = /etc/init.d/io-monitor-manager
style       = lsb      ; ocf or lsb
severity    = minor    ; Process failure severity
                       ;     critical : host is failed
                       ;     major    : host is degraded
                       ;     minor    : log is generated
restarts    = 5        ; Number of back to back unsuccessful restarts before severity assertion
interval    = 10       ; Number of seconds to wait between back-to-back unsuccessful restarts
debounce    = 20       ; Number of seconds the process needs to run before declaring
                       ; it as running O.K. after a restart.
                       ; Time after which back-to-back restart count is cleared.
startuptime = 10       ; Seconds to wait after process start before starting the debounce monitor
mode        = passive  ; Monitoring mode: passive (default) or active
                       ; passive: process death monitoring (default: always)
                       ; active : heartbeat monitoring, i.e. request / response messaging
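The pmon snippet above is plain INI with `;` inline comments. As a hypothetical illustration only (pmond's real parser is not this), Python's stdlib `configparser` can read such a section once inline comments are enabled:

```python
import configparser

# A trimmed copy of the [process] section above. configparser strips the
# ';' comments only when inline_comment_prefixes is set and the prefix is
# preceded by whitespace.
conf_text = """
[process]
process  = io-monitor-manager
pidfile  = /var/run/io-monitor/io-monitor-manager.pid
style    = lsb      ; ocf or lsb
severity = minor    ; process failure severity
restarts = 5
interval = 10
"""

parser = configparser.ConfigParser(inline_comment_prefixes=(';',))
parser.read_string(conf_text)

proc = parser["process"]
assert proc["process"] == "io-monitor-manager"
assert proc["style"] == "lsb"          # inline comment stripped
assert proc.getint("restarts") == 5    # typed accessor
```

This only demonstrates the file format; the monitoring semantics (severity, debounce, restart counting) are enforced by pmond itself.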