Remove openstack files that are no longer in use

Containerized builds do not use src rpms, but instead
build directly from git repos or upstream sources.

A clean workspace build passed, as did building wheels and
containers.

A later commit will remove or rename the containerized pike
files, most of which are no longer being built.

Removing:
 aodh
 ceilometer
 cinder
 glance
 glance-store
 gnocchi
 ironic
 heat
 magnum
 magnum-ui
 murano
 murano-ui
 nova
 neutron
 neutron-lib
 python-networking-bgpvpn
 python-networking-sfc
 python-networking-odl
 python-neutron-dynamic-routing
 panko
 panko-config

This also fixes some minor tox issues:
1) the xargs invocation of bashate needed the -r argument to
handle empty input (see the sketch after this list).
2) bashate needs to process files individually; otherwise
failures may not be reported.
3) the bashate "line too long" suppression is no longer needed
since the file with the issue was removed.
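
As a rough illustration of the invocation described in items 1) and 2)
above (the find pattern and the lack of bashate options are assumptions
for this sketch, not the project's actual tox configuration):

  # -r (--no-run-if-empty) keeps xargs from invoking bashate when find
  # matches nothing, and -n 1 passes files one at a time so a failure on
  # one file cannot mask failures reported for the others.
  find . -name '*.sh' -print0 | xargs -0 -r -n 1 bashate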

The folders themselves are kept to provide the docker image
directives.  A future commit may relocate these directives into
their own centralized location.

Story: 2004764
Task: 30213
Change-Id: I4b724e4630593326dead7e86b0bfc74b556cfb9f
Signed-off-by: Al Bailey <Al.Bailey@windriver.com>
Al Bailey 2019-03-22 13:20:25 -05:00
parent a18fa853f7
commit c3aafc8cea
220 changed files with 1 addition and 24938 deletions

View File

@@ -1,17 +1,6 @@
distributedcloud-client-wheels distributedcloud-client-wheels
distributedcloud-wheels distributedcloud-wheels
gnocchi-wheels
openstack-ceilometer-wheels
openstack-cinder-wheels
openstack-glance-wheels
openstack-heat-wheels
openstack-ironic-wheels
openstack-keystone-wheels openstack-keystone-wheels
openstack-magnum-ui-wheels
openstack-magnum-wheels
openstack-murano-wheels
openstack-neutron-wheels
openstack-nova-wheels
python-cinderclient-wheels python-cinderclient-wheels
python-django-horizon-wheels python-django-horizon-wheels
python-glanceclient-wheels python-glanceclient-wheels
@@ -19,11 +8,6 @@ python-gnocchiclient-wheels
python-ironicclient-wheels python-ironicclient-wheels
python-magnumclient-wheels python-magnumclient-wheels
python-muranoclient-wheels python-muranoclient-wheels
python-networking-bgpvpn-wheels
python-networking-odl-wheels
python-networking-sfc-wheels
python-neutronclient-wheels python-neutronclient-wheels
python-neutron-dynamic-routing-wheels
python-neutron-lib-wheels
python-novaclient-wheels python-novaclient-wheels
python-openstacksdk-wheels python-openstacksdk-wheels

View File

@@ -1,38 +1,19 @@
openstack/openstack-murano
openstack/python-muranoclient openstack/python-muranoclient
openstack/openstack-ironic
openstack/python-ironicclient openstack/python-ironicclient
openstack/python-magnumclient openstack/python-magnumclient
openstack/openstack-magnum
openstack/openstack-magnum-ui
openstack/openstack-ras openstack/openstack-ras
openstack/openstack-panko
openstack/openstack-panko-config
openstack/openstack-os-vif openstack/openstack-os-vif
openstack/python-aodhclient openstack/python-aodhclient
openstack/python-barbicanclient openstack/python-barbicanclient
openstack/python-ceilometer
openstack/python-cinder
openstack/python-cinderclient openstack/python-cinderclient
openstack/python-glance
openstack/python-glance-store
openstack/python-glanceclient openstack/python-glanceclient
openstack/python-gnocchi
openstack/python-gnocchiclient openstack/python-gnocchiclient
openstack/python-heat/openstack-heat
openstack/python-heatclient openstack/python-heatclient
openstack/python-horizon openstack/python-horizon
openstack/python-keystone openstack/python-keystone
openstack/python-keystoneauth1 openstack/python-keystoneauth1
openstack/python-keystoneclient openstack/python-keystoneclient
openstack/python-networking-bgpvpn
openstack/python-networking-sfc
openstack/python-networking-odl
openstack/python-neutron
openstack/python-neutron-dynamic-routing
openstack/python-neutron-lib
openstack/python-neutronclient openstack/python-neutronclient
openstack/python-nova
openstack/python-novaclient openstack/python-novaclient
openstack/python-openstackdocstheme openstack/python-openstackdocstheme
openstack/python-oslo-service openstack/python-oslo-service

View File

@@ -1,17 +1,6 @@
distributedcloud-client-wheels distributedcloud-client-wheels
distributedcloud-wheels distributedcloud-wheels
gnocchi-wheels
openstack-ceilometer-wheels
openstack-cinder-wheels
openstack-glance-wheels
openstack-heat-wheels
openstack-ironic-wheels
openstack-keystone-wheels openstack-keystone-wheels
openstack-magnum-ui-wheels
openstack-magnum-wheels
openstack-murano-wheels
openstack-neutron-wheels
openstack-nova-wheels
python-cinderclient-wheels python-cinderclient-wheels
python-django-horizon-wheels python-django-horizon-wheels
python-glanceclient-wheels python-glanceclient-wheels
@@ -19,11 +8,6 @@ python-gnocchiclient-wheels
python-ironicclient-wheels python-ironicclient-wheels
python-magnumclient-wheels python-magnumclient-wheels
python-muranoclient-wheels python-muranoclient-wheels
python-networking-bgpvpn-wheels
python-networking-odl-wheels
python-networking-sfc-wheels
python-neutronclient-wheels python-neutronclient-wheels
python-neutron-dynamic-routing-wheels
python-neutron-lib-wheels
python-novaclient-wheels python-novaclient-wheels
python-openstacksdk-wheels python-openstacksdk-wheels

View File

@@ -1,2 +0,0 @@
SRC_DIR="files"
TIS_PATCH_VER=0

View File

@@ -1,56 +0,0 @@
#
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (C) 2019 Intel Corporation
#
Summary: openstack-aodh-config
Name: openstack-aodh-config
Version: 1.0
Release: %{tis_patch_ver}%{?_tis_dist}
License: Apache-2.0
Group: openstack
Packager: StarlingX
URL: unknown
BuildArch: noarch
Source: %name-%version.tar.gz
Requires: openstack-aodh-common
Requires: openstack-aodh-api
Requires: openstack-aodh-evaluator
Requires: openstack-aodh-notifier
Requires: openstack-aodh-expirer
Requires: openstack-aodh-listener
Summary: package StarlingX configuration files of openstack-aodh to system folder.
%description
package StarlingX configuration files of openstack-aodh to system folder.
%prep
%setup
%build
%install
%{__install} -d %{buildroot}%{_sysconfdir}/systemd/system
%{__install} -d %{buildroot}%{_bindir}
%{__install} -m 0644 openstack-aodh-api.service %{buildroot}%{_sysconfdir}/systemd/system/openstack-aodh-api.service
%{__install} -m 0644 openstack-aodh-evaluator.service %{buildroot}%{_sysconfdir}/systemd/system/openstack-aodh-evaluator.service
%{__install} -m 0644 openstack-aodh-expirer.service %{buildroot}%{_sysconfdir}/systemd/system/openstack-aodh-expirer.service
%{__install} -m 0644 openstack-aodh-listener.service %{buildroot}%{_sysconfdir}/systemd/system/openstack-aodh-listener.service
%{__install} -m 0644 openstack-aodh-notifier.service %{buildroot}%{_sysconfdir}/systemd/system/openstack-aodh-notifier.service
%{__install} -m 0750 aodh-expirer-active %{buildroot}%{_bindir}/aodh-expirer-active
%post
if test -s %{_sysconfdir}/logrotate.d/openstack-aodh ; then
echo '#See /etc/logrotate.d/syslog for aodh rules' > %{_sysconfdir}/logrotate.d/openstack-aodh
fi
%files
%{_sysconfdir}/systemd/system/openstack-aodh-api.service
%{_sysconfdir}/systemd/system/openstack-aodh-evaluator.service
%{_sysconfdir}/systemd/system/openstack-aodh-expirer.service
%{_sysconfdir}/systemd/system/openstack-aodh-listener.service
%{_sysconfdir}/systemd/system/openstack-aodh-notifier.service
%{_bindir}/aodh-expirer-active

View File

@@ -1,61 +0,0 @@
#!/bin/bash
#
# Wrapper script to run aodh-expirer when on active controller only
#
AODH_EXPIRER_INFO="/var/run/aodh-expirer.info"
AODH_EXPIRER_CMD="/usr/bin/nice -n 2 /usr/bin/aodh-expirer"
function is_active_pgserver()
{
# Determine whether we're running on the same controller as the service.
local service=postgres
local enabledactive=$(/usr/bin/sm-query service $service| grep enabled-active)
if [ "x$enabledactive" == "x" ]
then
# enabled-active not found for that service on this controller
return 1
else
# enabled-active found for that resource
return 0
fi
}
if is_active_pgserver
then
if [ ! -f ${AODH_EXPIRER_INFO} ]
then
echo delay_count=0 > ${AODH_EXPIRER_INFO}
fi
source ${AODH_EXPIRER_INFO}
sudo -u postgres psql -d sysinv -c "SELECT alarm_id, entity_instance_id from i_alarm;" | grep -P "^(?=.*100.101)(?=.*${HOSTNAME})" &>/dev/null
if [ $? -eq 0 ]
then
source /etc/platform/platform.conf
if [ "${system_type}" = "All-in-one" ]
then
source /etc/init.d/task_affinity_functions.sh
idle_core=$(get_most_idle_core)
if [ "$idle_core" -ne "0" ]
then
sh -c "exec taskset -c $idle_core ${AODH_EXPIRER_CMD}"
sed -i "/delay_count/s/=.*/=0/" ${AODH_EXPIRER_INFO}
exit 0
fi
fi
if [ "$delay_count" -lt "3" ]
then
newval=$(($delay_count+1))
sed -i "/delay_count/s/=.*/=$newval/" ${AODH_EXPIRER_INFO}
(sleep 3600; /usr/bin/aodh-expirer-active) &
exit 0
fi
fi
eval ${AODH_EXPIRER_CMD}
sed -i "/delay_count/s/=.*/=0/" ${AODH_EXPIRER_INFO}
fi
exit 0

View File

@@ -1,11 +0,0 @@
[Unit]
Description=OpenStack Alarm API service
After=syslog.target network.target
[Service]
Type=simple
User=root
ExecStart=/bin/python /usr/bin/gunicorn --config /usr/share/aodh/aodh-api.conf --pythonpath /usr/share/aodh aodh-api
[Install]
WantedBy=multi-user.target

View File

@@ -1,11 +0,0 @@
[Unit]
Description=OpenStack Alarm evaluator service
After=syslog.target network.target
[Service]
Type=simple
User=root
ExecStart=/usr/bin/aodh-evaluator
[Install]
WantedBy=multi-user.target

View File

@@ -1,11 +0,0 @@
[Unit]
Description=OpenStack Alarm expirer service
After=syslog.target network.target
[Service]
Type=simple
User=root
ExecStart=/usr/bin/aodh-expirer
[Install]
WantedBy=multi-user.target

View File

@@ -1,11 +0,0 @@
[Unit]
Description=OpenStack Alarm listener service
After=syslog.target network.target
[Service]
Type=simple
User=root
ExecStart=/usr/bin/aodh-listener
[Install]
WantedBy=multi-user.target

View File

@@ -1,11 +0,0 @@
[Unit]
Description=OpenStack Alarm notifier service
After=syslog.target network.target
[Service]
Type=simple
User=root
ExecStart=/usr/bin/aodh-notifier
[Install]
WantedBy=multi-user.target

View File

@@ -1 +0,0 @@
TIS_PATCH_VER=7

View File

@@ -1,26 +0,0 @@
From 21b64cda6c2cd29c287c08c85962609f8d93d18f Mon Sep 17 00:00:00 2001
From: Scott Little <scott.little@windriver.com>
Date: Mon, 2 Oct 2017 14:28:46 -0400
Subject: [PATCH 2/6] WRS: 0001-Update-package-versioning-for-TIS-format.patch
Signed-off-by: zhipengl <zhipengs.liu@intel.com>
---
SPECS/openstack-aodh.spec | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/SPECS/openstack-aodh.spec b/SPECS/openstack-aodh.spec
index 9fffb3c..01615e7 100644
--- a/SPECS/openstack-aodh.spec
+++ b/SPECS/openstack-aodh.spec
@@ -4,7 +4,7 @@
Name: openstack-aodh
Version: 5.1.0
-Release: 1%{?dist}
+Release: 1.el7%{?_tis_dist}.%{tis_patch_ver}
Summary: OpenStack Telemetry Alarming
License: ASL 2.0
URL: https://github.com/openstack/aodh.git
--
1.8.3.1

View File

@@ -1,2 +0,0 @@
0001-Update-package-versioning-for-TIS-format.patch
spec-include-TiS-patches.patch

View File

@@ -1,75 +0,0 @@
From 8620101244ea5be1ff0cc0e127fa57dca4be468a Mon Sep 17 00:00:00 2001
From: Scott Little <scott.little@windriver.com>
Date: Mon, 2 Oct 2017 14:28:46 -0400
Subject: [PATCH] WRS: spec-include-TiS-patches.patch
Signed-off-by: zhipengl <zhipengs.liu@intel.com>
---
SPECS/openstack-aodh.spec | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/SPECS/openstack-aodh.spec b/SPECS/openstack-aodh.spec
index 01615e7..f840861 100644
--- a/SPECS/openstack-aodh.spec
+++ b/SPECS/openstack-aodh.spec
@@ -18,6 +18,9 @@ Source12: %{name}-notifier.service
Source13: %{name}-expirer.service
Source14: %{name}-listener.service
+#WRS: Include patches here:
+Patch1: 0001-modify-aodh-api.patch
+Patch2: 0002-Add-drivername-support-for-postgresql-connection-set.patch
BuildArch: noarch
BuildRequires: openstack-macros
@@ -212,6 +215,10 @@ This package contains the Aodh test files.
%prep
%setup -q -n %{pypi_name}-%{upstream_version}
+#WRS: Apply patches here
+%patch1 -p1
+%patch2 -p1
+
find . \( -name .gitignore -o -name .placeholder \) -delete
find aodh -name \*.py -exec sed -i '/\/usr\/bin\/env python/{d;q}' {} +
@@ -259,10 +266,12 @@ install -p -D -m 640 %{SOURCE1} %{buildroot}%{_datadir}/aodh/aodh-dist.conf
install -p -D -m 640 aodh/aodh.conf %{buildroot}%{_sysconfdir}/aodh/aodh.conf
install -p -D -m 640 aodh/api/policy.json %{buildroot}%{_sysconfdir}/aodh/policy.json
+#WRS
+install -p -D -m 640 aodh/api/aodh-api.py %{buildroot}%{_datadir}/aodh/aodh-api.py
# Setup directories
install -d -m 755 %{buildroot}%{_sharedstatedir}/aodh
install -d -m 755 %{buildroot}%{_sharedstatedir}/aodh/tmp
-install -d -m 750 %{buildroot}%{_localstatedir}/log/aodh
+install -d -m 755 %{buildroot}%{_localstatedir}/log/aodh
# Install logrotate
install -p -D -m 644 %{SOURCE2} %{buildroot}%{_sysconfdir}/logrotate.d/%{name}
@@ -340,12 +349,12 @@ exit 0
%files common -f %{pypi_name}.lang
%doc README.rst
%dir %{_sysconfdir}/aodh
+%{_datadir}/aodh/aodh-api.*
%attr(-, root, aodh) %{_datadir}/aodh/aodh-dist.conf
%config(noreplace) %attr(-, root, aodh) %{_sysconfdir}/aodh/aodh.conf
%config(noreplace) %attr(-, root, aodh) %{_sysconfdir}/aodh/policy.json
%config(noreplace) %{_sysconfdir}/logrotate.d/%{name}
-%dir %attr(0750, aodh, root) %{_localstatedir}/log/aodh
-%{_bindir}/aodh-dbsync
+%dir %attr(0755, aodh, root) %{_localstatedir}/log/aodh
%{_bindir}/aodh-config-generator
%defattr(-, aodh, aodh, -)
@@ -353,6 +362,7 @@ exit 0
%dir %{_sharedstatedir}/aodh/tmp
%files api
+%{_bindir}/aodh-dbsync
%{_bindir}/aodh-api
%{_unitdir}/%{name}-api.service
--
2.7.4

View File

@@ -1,61 +0,0 @@
From 4307eaa7d2cf3dbcf1eb25c7b2636aeaef13b6c7 Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Wed, 15 Feb 2017 15:59:26 -0500
Subject: [PATCH 1/2] modify-aodh-api
---
aodh/api/aodh-api.py | 7 +++++++
aodh/api/app.py | 12 +++++++++---
2 files changed, 16 insertions(+), 3 deletions(-)
create mode 100644 aodh/api/aodh-api.py
diff --git a/aodh/api/aodh-api.py b/aodh/api/aodh-api.py
new file mode 100644
index 0000000..565f2e3
--- /dev/null
+++ b/aodh/api/aodh-api.py
@@ -0,0 +1,7 @@
+from aodh.api import app as build_wsgi_app
+import sys
+
+sys.argv = sys.argv[:1]
+args = {'config_file' : 'etc/aodh/aodh.conf', }
+application = build_wsgi_app.build_wsgi_app(None, args)
+
diff --git a/aodh/api/app.py b/aodh/api/app.py
index b7c0900..ba0a438 100644
--- a/aodh/api/app.py
+++ b/aodh/api/app.py
@@ -53,7 +53,7 @@ def setup_app(root, conf):
)
-def load_app(conf):
+def load_app(conf, args):
global APPCONFIGS
# Build the WSGI app
@@ -64,6 +64,12 @@ def load_app(conf):
if cfg_path is None or not os.path.exists(cfg_path):
raise cfg.ConfigFilesNotFoundError([conf.api.paste_config])
+ config = dict([(key, value) for key, value in args.iteritems()
+ if key in conf and value is not None])
+ for key, value in config.iteritems():
+ if key == 'config_file':
+ conf.config_file = value
+
config = dict(conf=conf)
configkey = str(uuid.uuid4())
APPCONFIGS[configkey] = config
@@ -83,5 +89,5 @@ def app_factory(global_config, **local_conf):
return setup_app(root=local_conf.get('root'), **appconfig)
-def build_wsgi_app(argv=None):
- return load_app(service.prepare_service(argv=argv))
+def build_wsgi_app(argv=None, args=None):
+ return load_app(service.prepare_service(argv=argv), args)
--
2.7.4

View File

@@ -1,37 +0,0 @@
From c322e9795933ff8665d7590f082672bd839df431 Mon Sep 17 00:00:00 2001
From: Al Bailey <Al.Bailey@windriver.com>
Date: Thu, 21 Dec 2017 13:38:09 -0600
Subject: [PATCH 2/2] Add drivername support for postgresql connection settings
---
aodh/api/aodh-api.py | 3 +--
setup.cfg | 1 +
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/aodh/api/aodh-api.py b/aodh/api/aodh-api.py
index 565f2e3..7c413d6 100644
--- a/aodh/api/aodh-api.py
+++ b/aodh/api/aodh-api.py
@@ -2,6 +2,5 @@ from aodh.api import app as build_wsgi_app
import sys
sys.argv = sys.argv[:1]
-args = {'config_file' : 'etc/aodh/aodh.conf', }
+args = {'config_file': 'etc/aodh/aodh.conf', }
application = build_wsgi_app.build_wsgi_app(None, args)
-
diff --git a/setup.cfg b/setup.cfg
index 5bfc817..8f8db57 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -64,6 +64,7 @@ aodh.storage =
mysql = aodh.storage.impl_sqlalchemy:Connection
mysql+pymysql = aodh.storage.impl_sqlalchemy:Connection
postgresql = aodh.storage.impl_sqlalchemy:Connection
+ postgresql+psycopg2 = aodh.storage.impl_sqlalchemy:Connection
sqlite = aodh.storage.impl_sqlalchemy:Connection
aodh.alarm.rule =
threshold = aodh.api.controllers.v2.alarm_rules.threshold:AlarmThresholdRule
--
2.7.4

View File

@@ -1 +0,0 @@
mirror:Source/openstack-aodh-5.1.0-1.el7.src.rpm

View File

@@ -1,6 +0,0 @@
TAR_NAME="ironic"
SRC_DIR="$CGCS_BASE/git/ironic"
COPY_LIST="$FILES_BASE/*"
TIS_BASE_SRCREV=47179d9fca337f32324f8e8a68541358fdac8649
TIS_PATCH_VER=GITREVCOUNT

View File

@@ -1,4 +0,0 @@
[DEFAULT]
log_dir = /var/log/ironic
state_path = /var/lib/ironic
use_stderr = True

View File

@@ -1,2 +0,0 @@
Defaults:ironic !requiretty
ironic ALL = (root) NOPASSWD: /usr/bin/ironic-rootwrap /etc/ironic/rootwrap.conf *

View File

@@ -1,12 +0,0 @@
[Unit]
Description=OpenStack Ironic API service
After=syslog.target network.target
[Service]
Type=simple
User=ironic
ExecStart=/usr/bin/ironic-api
[Install]
WantedBy=multi-user.target

View File

@@ -1,12 +0,0 @@
[Unit]
Description=OpenStack Ironic Conductor service
After=syslog.target network.target
[Service]
Type=simple
User=ironic
ExecStart=/usr/bin/ironic-conductor
[Install]
WantedBy=multi-user.target

View File

@@ -1,297 +0,0 @@
%global full_release ironic-%{version}
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: openstack-ironic
# Liberty semver reset
# https://review.openstack.org/#/q/I1a161b2c1d1e27268065b6b4be24c8f7a5315afb,n,z
Epoch: 1
Summary: OpenStack Baremetal Hypervisor API (ironic)
Version: 9.1.2
Release: 0%{?_tis_dist}.%{tis_patch_ver}
License: ASL 2.0
URL: http://www.openstack.org
Source0: https://tarballs.openstack.org/ironic/ironic-%{version}.tar.gz
Source1: openstack-ironic-api.service
Source2: openstack-ironic-conductor.service
Source3: ironic-rootwrap-sudoers
Source4: ironic-dist.conf
BuildArch: noarch
BuildRequires: openstack-macros
BuildRequires: python-setuptools
BuildRequires: python2-pip
BuildRequires: python2-wheel
BuildRequires: python2-devel
BuildRequires: python-pbr
BuildRequires: openssl-devel
BuildRequires: libxml2-devel
BuildRequires: libxslt-devel
BuildRequires: gmp-devel
BuildRequires: python-sphinx
BuildRequires: systemd
# Required to compile translation files
BuildRequires: python-babel
# Required to run unit tests
BuildRequires: pysendfile
BuildRequires: python-alembic
BuildRequires: python-automaton
BuildRequires: python-cinderclient
BuildRequires: python-dracclient
BuildRequires: python-eventlet
BuildRequires: python-futurist
BuildRequires: python-glanceclient
BuildRequires: python-ironic-inspector-client
BuildRequires: python-ironic-lib
BuildRequires: python-jinja2
BuildRequires: python-jsonpatch
BuildRequires: python-jsonschema
BuildRequires: python-keystoneauth1
BuildRequires: python-keystonemiddleware
BuildRequires: python-mock
BuildRequires: python-neutronclient
BuildRequires: python-oslo-concurrency
BuildRequires: python-oslo-config
BuildRequires: python-oslo-context
BuildRequires: python-oslo-db
BuildRequires: python-oslo-db-tests
BuildRequires: python-oslo-i18n
BuildRequires: python-oslo-log
BuildRequires: python-oslo-messaging
BuildRequires: python-oslo-middleware
BuildRequires: python-oslo-policy
BuildRequires: python-oslo-reports
BuildRequires: python-oslo-rootwrap
BuildRequires: python-oslo-serialization
BuildRequires: python-oslo-service
BuildRequires: python-oslo-utils
BuildRequires: python-oslo-versionedobjects
BuildRequires: python-oslotest
BuildRequires: python-osprofiler
BuildRequires: python-os-testr
BuildRequires: python-pbr
BuildRequires: python-pecan
BuildRequires: python-proliantutils
BuildRequires: python-psutil
BuildRequires: python-requests
BuildRequires: python-retrying
BuildRequires: python-scciclient
BuildRequires: python-six
BuildRequires: python-sqlalchemy
BuildRequires: python-stevedore
BuildRequires: python-sushy
BuildRequires: python-swiftclient
BuildRequires: python-testresources
BuildRequires: python-tooz
BuildRequires: python-UcsSdk
BuildRequires: python-webob
BuildRequires: python-wsme
BuildRequires: pysnmp
BuildRequires: pytz
%prep
%setup -q -n ironic-%{upstream_version}
rm requirements.txt test-requirements.txt
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
# Generate i18n files
%{__python2} setup.py compile_catalog -d build/lib/ironic/locale
%py2_build_wheel
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install -O1 --skip-build --root=%{buildroot}
mkdir -p $RPM_BUILD_ROOT/wheels
install -m 644 dist/*.whl $RPM_BUILD_ROOT/wheels/
# Create fake egg-info for the tempest plugin
# TODO switch to %{service} everywhere as in openstack-example.spec
%global service ironic
%py2_entrypoint %{service} %{service}
# install systemd scripts
mkdir -p %{buildroot}%{_unitdir}
install -p -D -m 644 %{SOURCE1} %{buildroot}%{_unitdir}
install -p -D -m 644 %{SOURCE2} %{buildroot}%{_unitdir}
# install sudoers file
mkdir -p %{buildroot}%{_sysconfdir}/sudoers.d
install -p -D -m 440 %{SOURCE3} %{buildroot}%{_sysconfdir}/sudoers.d/ironic
mkdir -p %{buildroot}%{_sharedstatedir}/ironic/
mkdir -p %{buildroot}%{_localstatedir}/log/ironic/
mkdir -p %{buildroot}%{_sysconfdir}/ironic/rootwrap.d
#Populate the conf dir
install -p -D -m 640 etc/ironic/ironic.conf.sample %{buildroot}/%{_sysconfdir}/ironic/ironic.conf
install -p -D -m 640 etc/ironic/policy.json %{buildroot}/%{_sysconfdir}/ironic/policy.json
install -p -D -m 640 etc/ironic/rootwrap.conf %{buildroot}/%{_sysconfdir}/ironic/rootwrap.conf
install -p -D -m 640 etc/ironic/rootwrap.d/* %{buildroot}/%{_sysconfdir}/ironic/rootwrap.d/
# Install distribution config
install -p -D -m 640 %{SOURCE4} %{buildroot}/%{_datadir}/ironic/ironic-dist.conf
# Install i18n .mo files (.po and .pot are not required)
install -d -m 755 %{buildroot}%{_datadir}
rm -f %{buildroot}%{python2_sitelib}/ironic/locale/*/LC_*/ironic*po
rm -f %{buildroot}%{python2_sitelib}/ironic/locale/*pot
mv %{buildroot}%{python2_sitelib}/ironic/locale %{buildroot}%{_datadir}/locale
# Find language files
%find_lang ironic --all-name
%description
Ironic provides an API for management and provisioning of physical machines
%package common
Summary: Ironic common
Requires: ipmitool
Requires: pysendfile
Requires: python-alembic
Requires: python-automaton >= 0.5.0
Requires: python-cinderclient >= 3.1.0
Requires: python-dracclient >= 1.3.0
Requires: python-eventlet
Requires: python-futurist >= 0.11.0
Requires: python-glanceclient >= 1:2.7.0
Requires: python-ironic-inspector-client >= 1.5.0
Requires: python-ironic-lib >= 2.5.0
Requires: python-jinja2
Requires: python-jsonpatch
Requires: python-jsonschema
Requires: python-keystoneauth1 >= 3.1.0
Requires: python-keystonemiddleware >= 4.12.0
Requires: python-neutronclient >= 6.3.0
Requires: python-oslo-concurrency >= 3.8.0
Requires: python-oslo-config >= 2:4.0.0
Requires: python-oslo-context >= 2.14.0
Requires: python-oslo-db >= 4.24.0
Requires: python-oslo-i18n >= 2.1.0
Requires: python-oslo-log >= 3.22.0
Requires: python-oslo-messaging >= 5.24.2
Requires: python-oslo-middleware >= 3.27.0
Requires: python-oslo-policy >= 1.23.0
Requires: python-oslo-reports >= 0.6.0
Requires: python-oslo-rootwrap >= 5.0.0
Requires: python-oslo-serialization >= 1.10.0
Requires: python-oslo-service >= 1.10.0
Requires: python-oslo-utils >= 3.20.0
Requires: python-oslo-versionedobjects >= 1.17.0
Requires: python-osprofiler >= 1.4.0
Requires: python-pbr
Requires: python-pecan
Requires: python-proliantutils >= 2.4.0
Requires: python-psutil
Requires: python-requests
Requires: python-retrying
Requires: python-rfc3986 >= 0.3.1
Requires: python-scciclient >= 0.5.0
Requires: python-six
Requires: python-sqlalchemy
Requires: python-stevedore >= 1.20.0
Requires: python-sushy
Requires: python-swiftclient >= 3.2.0
Requires: python-tooz >= 1.47.0
Requires: python-UcsSdk >= 0.8.2.2
Requires: python-webob >= 1.7.1
Requires: python-wsme
Requires: pysnmp
Requires: pytz
Requires(pre): shadow-utils
%description common
Components common to all OpenStack Ironic services
%files common -f ironic.lang
%doc README.rst
%license LICENSE
%{_bindir}/ironic-dbsync
%{_bindir}/ironic-rootwrap
%{python2_sitelib}/ironic
%{python2_sitelib}/ironic-*.egg-info
%exclude %{python2_sitelib}/ironic/tests
%exclude %{python2_sitelib}/ironic_tempest_plugin
%{_sysconfdir}/sudoers.d/ironic
%config(noreplace) %attr(-,root,ironic) %{_sysconfdir}/ironic
%attr(-,ironic,ironic) %{_sharedstatedir}/ironic
%attr(0755,ironic,ironic) %{_localstatedir}/log/ironic
%attr(-, root, ironic) %{_datadir}/ironic/ironic-dist.conf
%exclude %{python2_sitelib}/ironic_tests.egg_info
%package api
Summary: The Ironic API
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
%description api
Ironic API for management and provisioning of physical machines
%files api
%{_bindir}/ironic-api
%{_unitdir}/openstack-ironic-api.service
%package conductor
Summary: The Ironic Conductor
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
%description conductor
Ironic Conductor for management and provisioning of physical machines
%files conductor
%{_bindir}/ironic-conductor
%{_unitdir}/openstack-ironic-conductor.service
%package -n python-ironic-tests
Summary: Ironic tests
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires: python-mock
Requires: python-oslotest
Requires: python-os-testr
Requires: python-testresources
%description -n python-ironic-tests
This package contains the Ironic test files.
%files -n python-ironic-tests
%{python2_sitelib}/ironic/tests
%{python2_sitelib}/ironic_tempest_plugin
%{python2_sitelib}/%{service}_tests.egg-info
%package wheels
Summary: %{name} wheels
%description wheels
Contains python wheels for %{name}
%files wheels
/wheels/*
%changelog
* Fri Nov 03 2017 RDO <dev@lists.rdoproject.org> 1:9.1.2-1
- Update to 9.1.2
* Mon Sep 25 2017 rdo-trunk <javier.pena@redhat.com> 1:9.1.1-1
- Update to 9.1.1
* Thu Aug 24 2017 Alfredo Moralejo <amoralej@redhat.com> 1:9.1.0-1
- Update to 9.1.0

View File

@@ -1,6 +0,0 @@
TAR_NAME="magnum-ui"
SRC_DIR="$CGCS_BASE/git/magnum-ui"
TIS_BASE_SRCREV=0b9fc50aada1a3e214acaad1204b48c96a549e5f
TIS_PATCH_VER=1

View File

@@ -1,107 +0,0 @@
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%global library magnum-ui
%global module magnum_ui
Name: openstack-%{library}
Version: 3.0.0
Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: OpenStack Magnum UI Horizon plugin
License: ASL 2.0
URL: http://launchpad.net/%{library}/
Source0: https://tarballs.openstack.org/%{library}/%{library}-%{upstream_version}.tar.gz
BuildArch: noarch
BuildRequires: python2-devel
BuildRequires: python-pbr
BuildRequires: python-setuptools
BuildRequires: python2-pip
BuildRequires: python2-wheel
BuildRequires: git
Requires: python-pbr
Requires: python-babel
Requires: python-magnumclient >= 2.0.0
Requires: openstack-dashboard >= 8.0.0
Requires: python-django >= 1.8
Requires: python-django-babel
Requires: python-django-compressor >= 2.0
Requires: python-django-openstack-auth >= 3.5.0
Requires: python-django-pyscss >= 2.0.2
%description
OpenStack Magnum UI Horizon plugin
# Documentation package
%package -n python-%{library}-doc
Summary: OpenStack example library documentation
BuildRequires: python-sphinx
BuildRequires: python-django
BuildRequires: python-django-nose
BuildRequires: openstack-dashboard
BuildRequires: python-openstackdocstheme
BuildRequires: python-magnumclient
BuildRequires: python-mock
BuildRequires: python-mox3
%description -n python-%{library}-doc
OpenStack Magnum UI Horizon plugin documentation
This package contains the documentation.
%prep
%autosetup -n %{library}-%{upstream_version} -S git
# Let's handle dependencies ourseleves
rm -f *requirements.txt
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
%py2_build_wheel
# generate html docs
export PYTHONPATH=/usr/share/openstack-dashboard
#%{__python2} setup.py build_sphinx -b html
# remove the sphinx-build leftovers
#rm -rf doc/build/html/.{doctrees,buildinfo}
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install --skip-build --root %{buildroot}
mkdir -p $RPM_BUILD_ROOT/wheels
install -m 644 dist/*.whl $RPM_BUILD_ROOT/wheels/
# Move config to horizon
install -p -D -m 640 %{module}/enabled/_1370_project_container_infra_panel_group.py %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/_1370_project_container_infra_panel_group.py
install -p -D -m 640 %{module}/enabled/_1371_project_container_infra_clusters_panel.py %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/_1371_project_container_infra_clusters_panel.py
install -p -D -m 640 %{module}/enabled/_1372_project_container_infra_cluster_templates_panel.py %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/_1372_project_container_infra_cluster_templates_panel.py
%files
%license LICENSE
%{python2_sitelib}/%{module}
%{python2_sitelib}/*.egg-info
%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/_137*
%files -n python-%{library}-doc
%license LICENSE
#%doc doc/build/html README.rst
%package wheels
Summary: %{name} wheels
%description wheels
Contains python wheels for %{name}
%files wheels
/wheels/*
%changelog
* Thu Aug 24 2017 Alfredo Moralejo <amoralej@redhat.com> 3.0.0-1
- Update to 3.0.0

View File

@@ -1,6 +0,0 @@
TAR_NAME="magnum"
SRC_DIR="$CGCS_BASE/git/magnum"
COPY_LIST="$FILES_BASE/*"
TIS_BASE_SRCREV=ca4b29087a4af00060870519e5897348ccc61161
TIS_PATCH_VER=1

View File

@@ -1,15 +0,0 @@
[Unit]
Description=OpenStack Magnum API Service
After=syslog.target network.target
[Service]
Type=simple
User=magnum
ExecStart=/usr/bin/magnum-api
PrivateTmp=true
NotifyAccess=all
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target

View File

@@ -1,15 +0,0 @@
[Unit]
Description=Openstack Magnum Conductor Service
After=syslog.target network.target qpidd.service mysqld.service tgtd.service
[Service]
Type=simple
User=magnum
ExecStart=/usr/bin/magnum-conductor
PrivateTmp=true
NotifyAccess=all
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target

View File

@@ -1,339 +0,0 @@
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%global with_doc %{!?_without_doc:1}%{?_without_doc:0}
%global service magnum
Name: openstack-%{service}
Summary: Container Management project for OpenStack
Version: 5.0.1
Release: 1%{?_tis_dist}.%{tis_patch_ver}
License: ASL 2.0
URL: https://github.com/openstack/magnum.git
Source0: https://tarballs.openstack.org/%{service}/%{service}-%{version}.tar.gz
Source2: %{name}-api.service
Source3: %{name}-conductor.service
BuildArch: noarch
BuildRequires: git
BuildRequires: python2-devel
BuildRequires: python-pbr
BuildRequires: python-setuptools
BuildRequires: python2-pip
BuildRequires: python2-wheel
BuildRequires: python-werkzeug
BuildRequires: systemd-units
# Required for config file generation
BuildRequires: python-pycadf
BuildRequires: python-osprofiler
Requires: %{name}-common = %{version}-%{release}
Requires: %{name}-conductor = %{version}-%{release}
Requires: %{name}-api = %{version}-%{release}
%description
Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.
%package -n python-%{service}
Summary: Magnum Python libraries
Requires: python-pbr
Requires: python-babel
Requires: PyYAML
Requires: python-sqlalchemy
Requires: python-wsme
Requires: python-webob
Requires: python-alembic
Requires: python-decorator
Requires: python-docker >= 2.0.0
Requires: python-enum34
Requires: python-eventlet
Requires: python-iso8601
Requires: python-jsonpatch
Requires: python-keystonemiddleware >= 4.12.0
Requires: python-netaddr
Requires: python-oslo-concurrency >= 3.8.0
Requires: python-oslo-config >= 2:4.0.0
Requires: python-oslo-context >= 2.14.0
Requires: python-oslo-db >= 4.24.0
Requires: python-oslo-i18n >= 2.1.0
Requires: python-oslo-log >= 3.22.0
Requires: python-oslo-messaging >= 5.24.2
Requires: python-oslo-middleware >= 3.27.0
Requires: python-oslo-policy >= 1.23.0
Requires: python-oslo-serialization >= 1.10.0
Requires: python-oslo-service >= 1.10.0
Requires: python-oslo-utils >= 3.20.0
Requires: python-oslo-versionedobjects >= 1.17.0
Requires: python-oslo-reports >= 0.6.0
Requires: python-osprofiler
Requires: python-pycadf
Requires: python-pecan
Requires: python-barbicanclient >= 4.0.0
Requires: python-glanceclient >= 1:2.8.0
Requires: python-heatclient >= 1.6.1
Requires: python-neutronclient >= 6.3.0
Requires: python-novaclient >= 1:9.0.0
Requires: python-kubernetes
Requires: python-keystoneclient >= 1:3.8.0
Requires: python-keystoneauth1 >= 3.1.0
Requires: python-cliff >= 2.8.0
Requires: python-requests
Requires: python-six
Requires: python-stevedore >= 1.20.0
Requires: python-taskflow
Requires: python-cryptography
Requires: python-werkzeug
Requires: python-marathon
%description -n python-%{service}
Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.
%package common
Summary: Magnum common
Requires: python-%{service} = %{version}-%{release}
Requires(pre): shadow-utils
%description common
Components common to all OpenStack Magnum services
%package conductor
Summary: The Magnum conductor
Requires: %{name}-common = %{version}-%{release}
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
%description conductor
OpenStack Magnum Conductor
%package api
Summary: The Magnum API
Requires: %{name}-common = %{version}-%{release}
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
%description api
OpenStack-native ReST API to the Magnum Engine
%if 0%{?with_doc}
%package -n %{name}-doc
Summary: Documentation for OpenStack Magnum
Requires: python-%{service} = %{version}-%{release}
BuildRequires: python-sphinx
BuildRequires: python-openstackdocstheme
BuildRequires: python-stevedore
BuildRequires: graphviz
%description -n %{name}-doc
Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.
This package contains documentation files for Magnum.
%endif
# tests
%package -n python-%{service}-tests
Summary: Tests for OpenStack Magnum
Requires: python-%{service} = %{version}-%{release}
BuildRequires: python-fixtures
BuildRequires: python-hacking
BuildRequires: python-mock
BuildRequires: python-oslotest
BuildRequires: python-os-testr
BuildRequires: python-subunit
BuildRequires: python-testrepository
BuildRequires: python-testscenarios
BuildRequires: python-testtools
BuildRequires: python-tempest
BuildRequires: openstack-macros
# copy-paste from runtime Requires
BuildRequires: python-babel
BuildRequires: PyYAML
BuildRequires: python-sqlalchemy
BuildRequires: python-wsme
BuildRequires: python-webob
BuildRequires: python-alembic
BuildRequires: python-decorator
BuildRequires: python-docker >= 2.0.0
BuildRequires: python-enum34
BuildRequires: python-eventlet
BuildRequires: python-iso8601
BuildRequires: python-jsonpatch
BuildRequires: python-keystonemiddleware
BuildRequires: python-netaddr
BuildRequires: python-oslo-concurrency
BuildRequires: python-oslo-config
BuildRequires: python-oslo-context
BuildRequires: python-oslo-db
BuildRequires: python-oslo-i18n
BuildRequires: python-oslo-log
BuildRequires: python-oslo-messaging
BuildRequires: python-oslo-middleware
BuildRequires: python-oslo-policy
BuildRequires: python-oslo-serialization
BuildRequires: python-oslo-service
BuildRequires: python-oslo-utils
BuildRequires: python-oslo-versionedobjects
BuildRequires: python2-oslo-versionedobjects-tests
BuildRequires: python-oslo-reports
BuildRequires: python-pecan
BuildRequires: python-barbicanclient
BuildRequires: python-glanceclient
BuildRequires: python-heatclient
BuildRequires: python-neutronclient
BuildRequires: python-novaclient
BuildRequires: python-kubernetes
BuildRequires: python-keystoneclient
BuildRequires: python-requests
BuildRequires: python-six
BuildRequires: python-stevedore
BuildRequires: python-taskflow
BuildRequires: python-cryptography
BuildRequires: python-marathon
%description -n python-%{service}-tests
Magnum is an OpenStack project which offers container orchestration engines
for deploying and managing containers as first class resources in OpenStack.
%prep
%autosetup -n %{service}-%{upstream_version} -S git
# Let's handle dependencies ourselves
rm -rf {test-,}requirements{-bandit,}.txt tools/{pip,test}-requires
# Remove tests in contrib
find contrib -name tests -type d | xargs rm -rf
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
%py2_build_wheel
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install -O1 --skip-build --root=%{buildroot}
mkdir -p $RPM_BUILD_ROOT/wheels
install -m 644 dist/*.whl $RPM_BUILD_ROOT/wheels/
# Create fake egg-info for the tempest plugin
%py2_entrypoint %{service} %{service}
# docs generation requires everything to be installed first
%if 0%{?with_doc}
%{__python2} setup.py build_sphinx -b html
# Fix hidden-file-or-dir warnings
rm -fr doc/build/html/.doctrees doc/build/html/.buildinfo
%endif
mkdir -p %{buildroot}%{_localstatedir}/log/%{service}/
mkdir -p %{buildroot}%{_localstatedir}/run/%{service}/
# install systemd unit files
install -p -D -m 644 %{SOURCE2} %{buildroot}%{_unitdir}/%{name}-api.service
install -p -D -m 644 %{SOURCE3} %{buildroot}%{_unitdir}/%{name}-conductor.service
mkdir -p %{buildroot}%{_sharedstatedir}/%{service}/
mkdir -p %{buildroot}%{_sharedstatedir}/%{service}/certificates/
mkdir -p %{buildroot}%{_sysconfdir}/%{service}/
oslo-config-generator --config-file etc/magnum/magnum-config-generator.conf --output-file %{buildroot}%{_sysconfdir}/%{service}/magnum.conf
chmod 640 %{buildroot}%{_sysconfdir}/%{service}/magnum.conf
install -p -D -m 640 etc/magnum/policy.json %{buildroot}%{_sysconfdir}/%{service}
install -p -D -m 640 etc/magnum/api-paste.ini %{buildroot}%{_sysconfdir}/%{service}
%check
%{__python2} setup.py test || true
%files -n python-%{service}
%license LICENSE
%{python2_sitelib}/%{service}
%{python2_sitelib}/%{service}-*.egg-info
%exclude %{python2_sitelib}/%{service}/tests
%files common
%{_bindir}/magnum-db-manage
%{_bindir}/magnum-driver-manage
%license LICENSE
%dir %attr(0750,%{service},root) %{_localstatedir}/log/%{service}
%dir %attr(0755,%{service},root) %{_localstatedir}/run/%{service}
%dir %attr(0755,%{service},root) %{_sharedstatedir}/%{service}
%dir %attr(0755,%{service},root) %{_sharedstatedir}/%{service}/certificates
%dir %attr(0755,%{service},root) %{_sysconfdir}/%{service}
%config(noreplace) %attr(-, root, %{service}) %{_sysconfdir}/%{service}/magnum.conf
%config(noreplace) %attr(-, root, %{service}) %{_sysconfdir}/%{service}/policy.json
%config(noreplace) %attr(-, root, %{service}) %{_sysconfdir}/%{service}/api-paste.ini
%pre common
# 1870:1870 for magnum - rhbz#845078
getent group %{service} >/dev/null || groupadd -r --gid 1870 %{service}
getent passwd %{service} >/dev/null || \
useradd --uid 1870 -r -g %{service} -d %{_sharedstatedir}/%{service} -s /sbin/nologin \
-c "OpenStack Magnum Daemons" %{service}
exit 0
%files conductor
%doc README.rst
%license LICENSE
%{_bindir}/magnum-conductor
%{_unitdir}/%{name}-conductor.service
%files api
%doc README.rst
%license LICENSE
%{_bindir}/magnum-api
%{_unitdir}/%{name}-api.service
%if 0%{?with_doc}
%files -n %{name}-doc
%license LICENSE
%doc doc/build/html
%endif
%files -n python-%{service}-tests
%license LICENSE
%{python2_sitelib}/%{service}/tests
%{python2_sitelib}/%{service}_tests.egg-info
%package wheels
Summary: %{name} wheels
%description wheels
Contains python wheels for %{name}
%files wheels
/wheels/*
%changelog
* Mon Aug 28 2017 rdo-trunk <javier.pena@redhat.com> 5.0.1-1
- Update to 5.0.1
* Thu Aug 24 2017 Alfredo Moralejo <amoralej@redhat.com> 5.0.0-1
- Update to 5.0.0

View File

@@ -1,5 +0,0 @@
TAR_NAME="murano-dashboard"
SRC_DIR="$CGCS_BASE/git/murano-dashboard"
TIS_BASE_SRCREV=c950e248c2dfdc7a040d6984d84ed19c82a04e7d
TIS_PATCH_VER=1

View File

@@ -1,161 +0,0 @@
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%global pypi_name murano-dashboard
%global mod_name muranodashboard
Name: openstack-murano-ui
Version: 4.0.0
Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: The UI component for the OpenStack murano service
Group: Applications/Communications
License: ASL 2.0
URL: https://github.com/openstack/%{pypi_name}
Source0: https://tarballs.openstack.org/%{pypi_name}/%{pypi_name}-%{upstream_version}.tar.gz
#
BuildRequires: gettext
BuildRequires: git
BuildRequires: openstack-dashboard
BuildRequires: python-beautifulsoup4
BuildRequires: python-castellan
BuildRequires: python-devel
BuildRequires: python-django-formtools
BuildRequires: python-django-nose
BuildRequires: python-mock
BuildRequires: python-mox3
BuildRequires: python-muranoclient
BuildRequires: python-nose
BuildRequires: python-openstack-nose-plugin
BuildRequires: python-oslo-config >= 2:3.14.0
BuildRequires: python-pbr >= 1.6
BuildRequires: python-semantic-version
BuildRequires: python-setuptools
BuildRequires: python2-pip
BuildRequires: python2-wheel
BuildRequires: python-testtools
BuildRequires: python-yaql >= 1.1.0
BuildRequires: tsconfig
Requires: openstack-dashboard
Requires: PyYAML >= 3.10
Requires: python-babel >= 2.3.4
Requires: python-beautifulsoup4
Requires: python-castellan >= 0.7.0
Requires: python-django >= 1.8
Requires: python-django-babel
Requires: python-django-formtools
Requires: python-iso8601 >= 0.1.11
Requires: python-muranoclient >= 0.8.2
Requires: python-oslo-log >= 3.22.0
Requires: python-pbr
Requires: python-semantic-version
Requires: python-six >= 1.9.0
Requires: python-yaql >= 1.1.0
Requires: pytz
BuildArch: noarch
%description
Murano Dashboard
Sytem package - murano-dashboard
Python package - murano-dashboard
Murano Dashboard is an extension for OpenStack Dashboard that provides a UI
for Murano. With murano-dashboard, a user is able to easily manage and control
an application catalog, running applications and created environments alongside
with all other OpenStack resources.
%package doc
Summary: Documentation for OpenStack murano dashboard
BuildRequires: python-sphinx
BuildRequires: python-openstackdocstheme
BuildRequires: python-reno
%description doc
Murano Dashboard is an extension for OpenStack Dashboard that provides a UI
for Murano. With murano-dashboard, a user is able to easily manage and control
an application catalog, running applications and created environments alongside
with all other OpenStack resources.
This package contains the documentation.
%prep
%autosetup -n %{pypi_name}-%{upstream_version} -S git
# Let RPM handle the dependencies
rm -rf {test-,}requirements.txt tools/{pip,test}-requires
# disable warning-is-error, this project has intersphinx in docs
# so some warnings are generated in network isolated build environment
# as koji
sed -i 's/^warning-is-error.*/warning-is-error = 0/g' setup.cfg
%build
export PBR_VERSION=%{version}
%py2_build
# Generate i18n files
pushd build/lib/%{mod_name}
django-admin compilemessages
popd
# generate html docs
export OSLO_PACKAGE_VERSION=%{upstream_version}
%{__python2} setup.py build_sphinx -b html
# remove the sphinx-build leftovers
rm -rf doc/build/html/.{doctrees,buildinfo}
%py2_build_wheel
%install
export PBR_VERSION=%{version}
%py2_install
mkdir -p $RPM_BUILD_ROOT/wheels
install -m 644 dist/*.whl $RPM_BUILD_ROOT/wheels/
mkdir -p %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled
mkdir -p %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/local_settings.d
mkdir -p %{buildroot}/var/cache/murano-dashboard
# Enable Horizon plugin for murano-dashboard
cp %{_builddir}/%{pypi_name}-%{upstream_version}/muranodashboard/local/local_settings.d/_50_murano.py %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/local_settings.d/
cp %{_builddir}/%{pypi_name}-%{upstream_version}/muranodashboard/local/enabled/_*.py %{buildroot}%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/
# install policy file, makes horizon side bar dissapear without it. Can be fixed by refreshing page, but annoying
install -p -D -m 644 muranodashboard/conf/murano_policy.json %{buildroot}%{_sysconfdir}/openstack-dashboard/murano_policy.json
%check
export PYTHONPATH="%{_datadir}/openstack-dashboard:%{python2_sitearch}:%{python2_sitelib}:%{buildroot}%{python2_sitelib}"
#%{__python2} manage.py test muranodashboard --settings=muranodashboard.tests.settings
%post
HORIZON_SETTINGS='/etc/openstack-dashboard/local_settings'
if grep -Eq '^METADATA_CACHE_DIR=' $HORIZON_SETTINGS; then
sed -i '/^METADATA_CACHE_DIR=/{s#.*#METADATA_CACHE_DIR="/var/cache/murano-dashboard"#}' $HORIZON_SETTINGS
else
sed -i '$aMETADATA_CACHE_DIR="/var/cache/murano-dashboard"' $HORIZON_SETTINGS
fi
%systemd_postun_with_restart httpd.service
%postun
%systemd_postun_with_restart httpd.service
%files
%license LICENSE
%doc README.rst
%{python2_sitelib}/muranodashboard
%{python2_sitelib}/murano_dashboard*.egg-info
%{_datadir}/openstack-dashboard/openstack_dashboard/local/local_settings.d/*
%{_datadir}/openstack-dashboard/openstack_dashboard/local/enabled/*
%dir %attr(755, apache, apache) /var/cache/murano-dashboard
%{_sysconfdir}/openstack-dashboard/murano_policy.json
%files doc
%license LICENSE
%doc doc/build/html
%package wheels
Summary: %{name} wheels
%description wheels
Contains python wheels for %{name}
%files wheels
/wheels/*
%changelog
* Wed Aug 30 2017 rdo-trunk <javier.pena@redhat.com> 4.0.0-1
- Update to 4.0.0
* Thu Aug 24 2017 Alfredo Moralejo <amoralej@redhat.com> 4.0.0-0.1.0rc2
- Update to 4.0.0.0rc2

View File

@@ -1,6 +0,0 @@
TAR_NAME="murano"
SRC_DIR="$CGCS_BASE/git/murano"
COPY_LIST="$FILES_BASE/*"
TIS_BASE_SRCREV=de53ba8f9a97ad30c492063d9cc497ca56093e38
TIS_PATCH_VER=1

View File

@@ -1,12 +0,0 @@
[Unit]
Description=OpenStack Murano API Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
User=murano
ExecStart=/usr/bin/murano-api --config-file /etc/murano/murano.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target

View File

@@ -1,12 +0,0 @@
[Unit]
Description=OpenStack Murano Cloud Foundry API Service
After=syslog.target network.target mysqld.service
[Service]
Type=simple
User=murano
ExecStart=/usr/bin/murano-cfapi --config-file /etc/murano/murano.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target

View File

@@ -1,12 +0,0 @@
[Unit]
Description=Openstack Murano Engine Service
After=syslog.target network.target mysqld.service openstack-keystone.service
[Service]
Type=simple
User=murano
ExecStart=/usr/bin/murano-engine --config-file /etc/murano/murano.conf
Restart=on-failure
[Install]
WantedBy=multi-user.target

View File

@@ -1,304 +0,0 @@
%global pypi_name murano
%global with_doc %{!?_without_doc:1}%{?_without_doc:0}
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%if 0%{?fedora}
%global with_python3 1
%{!?python3_shortver: %global python3_shortver %(%{__python3} -c 'import sys; print(str(sys.version_info.major) + "." + str(sys.version_info.minor))')}
%endif
Name: openstack-%{pypi_name}
Version: 4.0.0
Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: OpenStack Murano Service
License: ASL 2.0
URL: https://pypi.python.org/pypi/murano
Source0: https://tarballs.openstack.org/%{pypi_name}/%{pypi_name}-%{upstream_version}.tar.gz
#
Source1: openstack-murano-api.service
Source2: openstack-murano-engine.service
Source4: openstack-murano-cf-api.service
BuildArch: noarch
BuildRequires: git
BuildRequires: python2-devel
BuildRequires: python-setuptools
BuildRequires: python2-pip
BuildRequires: python2-wheel
BuildRequires: python-jsonschema >= 2.0.0
BuildRequires: python-keystonemiddleware
BuildRequires: python-oslo-config
BuildRequires: python-oslo-db
BuildRequires: python-oslo-i18n
BuildRequires: python-oslo-log
BuildRequires: python-oslo-messaging
BuildRequires: python-oslo-middleware
BuildRequires: python-oslo-policy
BuildRequires: python-oslo-serialization
BuildRequires: python-oslo-service
BuildRequires: python-openstackdocstheme
BuildRequires: python-pbr >= 2.0.0
BuildRequires: python-routes >= 2.3.1
BuildRequires: python-sphinx
BuildRequires: python-sphinxcontrib-httpdomain
BuildRequires: python-castellan
BuildRequires: pyOpenSSL
BuildRequires: systemd
BuildRequires: openstack-macros
# Required to compile translation files
BuildRequires: python-babel
%description
Murano Project introduces an application catalog service
# MURANO-COMMON
%package common
Summary: Murano common
Requires: python-alembic >= 0.8.7
Requires: python-babel >= 2.3.4
Requires: python-debtcollector >= 1.2.0
Requires: python-eventlet >= 0.18.2
Requires: python-iso8601 >= 0.1.9
Requires: python-jsonpatch >= 1.1
Requires: python-jsonschema >= 2.0.0
Requires: python-keystonemiddleware >= 4.12.0
Requires: python-keystoneauth1 >= 3.1.0
Requires: python-kombu >= 1:4.0.0
Requires: python-netaddr >= 0.7.13
Requires: python-oslo-concurrency >= 3.8.0
Requires: python-oslo-config >= 2:4.0.0
Requires: python-oslo-context >= 2.14.0
Requires: python-oslo-db >= 4.24.0
Requires: python-oslo-i18n >= 2.1.0
Requires: python-oslo-log >= 3.22.0
Requires: python-oslo-messaging >= 5.24.2
Requires: python-oslo-middleware >= 3.27.0
Requires: python-oslo-policy >= 1.23.0
Requires: python-oslo-serialization >= 1.10.0
Requires: python-oslo-service >= 1.10.0
Requires: python-oslo-utils >= 3.20.0
Requires: python-paste
Requires: python-paste-deploy >= 1.5.0
Requires: python-pbr >= 2.0.0
Requires: python-psutil >= 3.2.2
Requires: python-congressclient >= 1.3.0
Requires: python-heatclient >= 1.6.1
Requires: python-keystoneclient >= 1:3.8.0
Requires: python-mistralclient >= 3.1.0
Requires: python-muranoclient >= 0.8.2
Requires: python-neutronclient >= 6.3.0
Requires: PyYAML >= 3.10
Requires: python-routes >= 2.3.1
Requires: python-semantic_version >= 2.3.1
Requires: python-six >= 1.9.0
Requires: python-stevedore >= 1.20.0
Requires: python-sqlalchemy >= 1.0.10
Requires: python-tenacity >= 3.2.1
Requires: python-webob >= 1.7.1
Requires: python-yaql >= 1.1.0
Requires: python-castellan >= 0.7.0
Requires: %{name}-doc = %{version}-%{release}
%description common
Components common to all OpenStack Murano services
# MURANO-ENGINE
%package engine
Summary: The Murano engine
Group: Applications/System
Requires: %{name}-common = %{version}-%{release}
%description engine
OpenStack Murano Engine daemon
# MURANO-API
%package api
Summary: The Murano API
Group: Applications/System
Requires: %{name}-common = %{version}-%{release}
%description api
OpenStack rest API to the Murano Engine
# MURANO-CF-API
%package cf-api
Summary: The Murano Cloud Foundry API
Group: System Environment/Base
Requires: %{name}-common = %{version}-%{release}
%description cf-api
OpenStack rest API for Murano to the Cloud Foundry
%if 0%{?with_doc}
%package doc
Summary: Documentation for OpenStack Murano services
%description doc
This package contains documentation files for Murano.
%endif
%package -n python-murano-tests
Summary: Murano tests
Requires: %{name}-common = %{version}-%{release}
%description -n python-murano-tests
This package contains the murano test files.
%prep
%autosetup -S git -n %{pypi_name}-%{upstream_version}
# Remove the requirements file so that pbr hooks don't add it
# to distutils requires_dist config
rm -rf {test-,}requirements.txt tools/{pip,test}-requires
%build
export PBR_VERSION=%{version}
%{__python2} setup.py build
# Generate i18n files
%{__python2} setup.py compile_catalog -d build/lib/%{pypi_name}/locale
# Generate sample config and add the current directory to PYTHONPATH so
# oslo-config-generator doesn't skip heat's entry points.
PYTHONPATH=. oslo-config-generator --config-file=./etc/oslo-config-generator/murano.conf
PYTHONPATH=. oslo-config-generator --config-file=./etc/oslo-config-generator/murano-cfapi.conf
%py2_build_wheel
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install -O1 --skip-build --root %{buildroot}
mkdir -p $RPM_BUILD_ROOT/wheels
install -m 644 dist/*.whl $RPM_BUILD_ROOT/wheels/
# Create fake egg-info for the tempest plugin
# TODO switch to %{service} everywhere as in openstack-example.spec
%global service murano
%py2_entrypoint %{service} %{service}
# DOCs
pushd doc
%if 0%{?with_doc}
SPHINX_DEBUG=1 sphinx-build -b html source build/html
# Fix hidden-file-or-dir warnings
rm -fr build/html/.doctrees build/html/.buildinfo
%endif
popd
mkdir -p %{buildroot}/var/log/murano
mkdir -p %{buildroot}/var/run/murano
mkdir -p %{buildroot}/var/cache/murano/meta
mkdir -p %{buildroot}/etc/murano/
# install systemd unit files
install -p -D -m 644 %{SOURCE1} %{buildroot}%{_unitdir}/murano-api.service
install -p -D -m 644 %{SOURCE2} %{buildroot}%{_unitdir}/murano-engine.service
install -p -D -m 644 %{SOURCE4} %{buildroot}%{_unitdir}/murano-cf-api.service
# install default config files
cd %{_builddir}/%{pypi_name}-%{upstream_version} && oslo-config-generator --config-file ./etc/oslo-config-generator/murano.conf --output-file %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/murano.conf.sample
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/murano.conf.sample %{buildroot}%{_sysconfdir}/murano/murano.conf
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/netconfig.yaml.sample %{buildroot}%{_sysconfdir}/murano/netconfig.yaml.sample
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/murano-paste.ini %{buildroot}%{_sysconfdir}/murano/murano-paste.ini
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/logging.conf.sample %{buildroot}%{_sysconfdir}/murano/logging.conf
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/murano-cfapi.conf.sample %{buildroot}%{_sysconfdir}/murano/murano-cfapi.conf
install -p -D -m 640 %{_builddir}/%{pypi_name}-%{upstream_version}/etc/murano/murano-cfapi-paste.ini %{buildroot}%{_sysconfdir}/murano/murano-cfapi-paste.ini
# Creating murano core library archive(murano meta packages written in muranoPL with execution plan main minimal logic)
pushd meta/io.murano
zip -r %{buildroot}%{_localstatedir}/cache/murano/meta/io.murano.zip .
popd
# Creating murano core library archive(murano meta packages written in muranoPL with execution plan main minimal logic)
pushd meta/io.murano.applications
zip -r %{buildroot}%{_localstatedir}/cache/murano/meta/io.murano.applications.zip .
popd
# Install i18n .mo files (.po and .pot are not required)
install -d -m 755 %{buildroot}%{_datadir}
rm -f %{buildroot}%{python2_sitelib}/%{pypi_name}/locale/*/LC_*/%{pypi_name}*po
rm -f %{buildroot}%{python2_sitelib}/%{pypi_name}/locale/*pot
mv %{buildroot}%{python2_sitelib}/%{pypi_name}/locale %{buildroot}%{_datadir}/locale
# Find language files
%find_lang %{pypi_name} --all-name
%files common -f %{pypi_name}.lang
%license LICENSE
%{python2_sitelib}/murano
%{python2_sitelib}/murano-*.egg-info
%exclude %{python2_sitelib}/murano/tests
%exclude %{python2_sitelib}/murano_tempest_tests
%exclude %{python2_sitelib}/%{service}_tests.egg-info
%{_bindir}/murano-manage
%{_bindir}/murano-db-manage
%{_bindir}/murano-test-runner
%{_bindir}/murano-cfapi-db-manage
%dir %attr(0750,murano,root) %{_localstatedir}/log/murano
%dir %attr(0755,murano,root) %{_localstatedir}/run/murano
%dir %attr(0755,murano,root) %{_localstatedir}/cache/murano
%dir %attr(0755,murano,root) %{_sysconfdir}/murano
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/murano.conf
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/murano-paste.ini
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/netconfig.yaml.sample
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/logging.conf
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/murano-cfapi.conf
%config(noreplace) %attr(-, root, murano) %{_sysconfdir}/murano/murano-cfapi-paste.ini
%files engine
%doc README.rst
%license LICENSE
%{_bindir}/murano-engine
%{_unitdir}/murano-engine.service
%post engine
%systemd_post murano-engine.service
%preun engine
%systemd_preun murano-engine.service
%postun engine
%systemd_postun_with_restart murano-engine.service
%files api
%doc README.rst
%license LICENSE
%{_localstatedir}/cache/murano/*
%{_bindir}/murano-api
%{_bindir}/murano-wsgi-api
%{_unitdir}/murano-api.service
%files cf-api
%doc README.rst
%license LICENSE
%{_bindir}/murano-cfapi
%{_unitdir}/murano-cf-api.service
%files doc
%doc doc/build/html
%files -n python-murano-tests
%license LICENSE
%{python2_sitelib}/murano/tests
%{python2_sitelib}/murano_tempest_tests
%{python2_sitelib}/%{service}_tests.egg-info
%package wheels
Summary: %{name} wheels
%description wheels
Contains python wheels for %{name}
%files wheels
/wheels/*
%changelog
* Wed Aug 30 2017 rdo-trunk <javier.pena@redhat.com> 4.0.0-1
- Update to 4.0.0
* Fri Aug 25 2017 Alfredo Moralejo <amoralej@redhat.com> 4.0.0-0.2.0rc2
- Update to 4.0.0.0rc2
* Mon Aug 21 2017 Alfredo Moralejo <amoralej@redhat.com> 4.0.0-0.1.0rc1
- Update to 4.0.0.0rc1

@ -1,2 +0,0 @@
SRC_DIR="files"
TIS_PATCH_VER=0

@ -1,40 +0,0 @@
#
# SPDX-License-Identifier: Apache-2.0
#
# Copyright (C) 2019 Intel Corporation
#
Summary: openstack-panko-config
Name: openstack-panko-config
Version: 1.0
Release: %{tis_patch_ver}%{?_tis_dist}
License: Apache-2.0
Group: openstack
Packager: StarlingX
URL: unknown
BuildArch: noarch
Source: %name-%version.tar.gz
Requires: openstack-panko-common
Requires: openstack-panko-api
Summary: StarlingX configuration files for openstack-panko
%description
Packages the StarlingX configuration files of openstack-panko into the system folders.
%prep
%setup
%build
%install
%{__install} -d %{buildroot}%{_bindir}
%{__install} -m 0755 panko-expirer-active %{buildroot}%{_bindir}/panko-expirer-active
%post
if test -s %{_sysconfdir}/logrotate.d/openstack-panko ; then
echo '#See /etc/logrotate.d/syslog for panko rules' > %{_sysconfdir}/logrotate.d/openstack-panko
fi
%files
%{_bindir}/panko-expirer-active

@ -1,60 +0,0 @@
#!/bin/bash
#
# Wrapper script to run panko-expirer when on active controller only
#
PANKO_EXPIRER_INFO="/var/run/panko-expirer.info"
PANKO_EXPIRER_CMD="/usr/bin/nice -n 2 /usr/bin/panko-expirer"
function is_active_pgserver()
{
# Determine whether we're running on the same controller as the service.
local service=postgres
local enabledactive=$(/usr/bin/sm-query service $service| grep enabled-active)
if [ "x$enabledactive" == "x" ]
then
# enabled-active not found for that service on this controller
return 1
else
# enabled-active found for that resource
return 0
fi
}
if is_active_pgserver
then
if [ ! -f ${PANKO_EXPIRER_INFO} ]
then
echo skip_count=0 > ${PANKO_EXPIRER_INFO}
fi
source ${PANKO_EXPIRER_INFO}
sudo -u postgres psql -d sysinv -c "SELECT alarm_id, entity_instance_id from i_alarm;" | grep -P "^(?=.*100.101)(?=.*${HOSTNAME})" &>/dev/null
if [ $? -eq 0 ]
then
source /etc/platform/platform.conf
if [ "${system_type}" = "All-in-one" ]
then
source /etc/init.d/task_affinity_functions.sh
idle_core=$(get_most_idle_core)
if [ "$idle_core" -ne "0" ]
then
sh -c "exec taskset -c $idle_core ${PANKO_EXPIRER_CMD}"
sed -i "/skip_count/s/=.*/=0/" ${PANKO_EXPIRER_INFO}
exit 0
fi
fi
if [ "$skip_count" -lt "3" ]
then
newval=$(($skip_count+1))
sed -i "/skip_count/s/=.*/=$newval/" ${PANKO_EXPIRER_INFO}
exit 0
fi
fi
eval ${PANKO_EXPIRER_CMD}
sed -i "/skip_count/s/=.*/=0/" ${PANKO_EXPIRER_INFO}
fi
exit 0

@ -1 +0,0 @@
TIS_PATCH_VER=6

@ -1,86 +0,0 @@
From 39121ea596ec8137f2d56b8a35ebba73feb6b5c8 Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Fri, 20 Oct 2017 10:07:03 -0400
Subject: [PATCH 1/1] panko config
---
SOURCES/panko-dist.conf | 2 +-
SPECS/openstack-panko.spec | 17 ++++++++++++++++-
2 files changed, 17 insertions(+), 2 deletions(-)
create mode 100644 SOURCES/panko-expirer-active
diff --git a/SOURCES/panko-dist.conf b/SOURCES/panko-dist.conf
index c33a2ee..ac6f79f 100644
--- a/SOURCES/panko-dist.conf
+++ b/SOURCES/panko-dist.conf
@@ -1,4 +1,4 @@
[DEFAULT]
-log_dir = /var/log/panko
+#log_dir = /var/log/panko
use_stderr = False
diff --git a/SPECS/openstack-panko.spec b/SPECS/openstack-panko.spec
index d12da57..90471d9 100644
--- a/SPECS/openstack-panko.spec
+++ b/SPECS/openstack-panko.spec
@@ -4,7 +4,7 @@
Name: openstack-panko
Version: 3.1.0
-Release: 1%{?dist}
+Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: Panko provides Event storage and REST API
License: ASL 2.0
@@ -12,12 +12,19 @@ URL: http://github.com/openstack/panko
Source0: https://tarballs.openstack.org/%{pypi_name}/%{pypi_name}-%{upstream_version}.tar.gz
Source1: %{pypi_name}-dist.conf
Source2: %{pypi_name}.logrotate
+
+# WRS: Include patches here
+Patch1: 0001-modify-panko-api.patch
+Patch2: 0002-Change-event-list-descending.patch
+Patch3: 0003-Fix-event-query-to-sqlalchemy-with-non-admin-user.patch
+
BuildArch: noarch
BuildRequires: python-setuptools
BuildRequires: python-pbr
BuildRequires: python2-devel
BuildRequires: openstack-macros
+BuildRequires: python-tenacity >= 3.1.0
%description
HTTP API to store events.
@@ -116,6 +123,11 @@ This package contains documentation files for panko.
%prep
%setup -q -n %{pypi_name}-%{upstream_version}
+# WRS: Apply patches here
+%patch1 -p1
+%patch2 -p1
+%patch3 -p1
+
find . \( -name .gitignore -o -name .placeholder \) -delete
find panko -name \*.py -exec sed -i '/\/usr\/bin\/env python/{d;q}' {} +
@@ -158,6 +170,8 @@ mkdir -p %{buildroot}/%{_var}/log/%{name}
install -p -D -m 640 %{SOURCE1} %{buildroot}%{_datadir}/panko/panko-dist.conf
install -p -D -m 640 etc/panko/panko.conf %{buildroot}%{_sysconfdir}/panko/panko.conf
install -p -D -m 640 etc/panko/api_paste.ini %{buildroot}%{_sysconfdir}/panko/api_paste.ini
+# WRS
+install -p -D -m 640 panko/api/panko-api.py %{buildroot}%{_datadir}/panko/panko-api.py
#TODO(prad): build the docs at run time, once the we get rid of postgres setup dependency
@@ -204,6 +218,7 @@ exit 0
%files common
%dir %{_sysconfdir}/panko
+%{_datadir}/panko/panko-api.*
%attr(-, root, panko) %{_datadir}/panko/panko-dist.conf
%config(noreplace) %attr(-, root, panko) %{_sysconfdir}/panko/policy.json
%config(noreplace) %attr(-, root, panko) %{_sysconfdir}/panko/panko.conf
--
1.8.3.1

@ -1 +0,0 @@
0001-panko-config.patch

@ -1,63 +0,0 @@
From 3583e2afbae8748f05dc12c51eefc4983358759c Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Mon, 6 Nov 2017 17:32:46 -0500
Subject: [PATCH 1/1] modify panko api
---
panko/api/app.py | 12 +++++++++---
panko/api/panko-api.py | 6 ++++++
2 files changed, 15 insertions(+), 3 deletions(-)
create mode 100644 panko/api/panko-api.py
diff --git a/panko/api/app.py b/panko/api/app.py
index 9867e18..4eedaea 100644
--- a/panko/api/app.py
+++ b/panko/api/app.py
@@ -51,7 +51,7 @@ global APPCONFIGS
APPCONFIGS = {}
-def load_app(conf, appname='panko+keystone'):
+def load_app(conf, args, appname='panko+keystone'):
global APPCONFIGS
# Build the WSGI app
@@ -62,6 +62,12 @@ def load_app(conf, appname='panko+keystone'):
if cfg_path is None or not os.path.exists(cfg_path):
raise cfg.ConfigFilesNotFoundError([conf.api_paste_config])
+ config_args = dict([(key, value) for key, value in args.iteritems()
+ if key in conf and value is not None])
+ for key, value in config_args.iteritems():
+ if key == 'config_file':
+ conf.config_file = value
+
config = dict(conf=conf)
configkey = str(uuid.uuid4())
APPCONFIGS[configkey] = config
@@ -71,8 +77,8 @@ def load_app(conf, appname='panko+keystone'):
global_conf={'configkey': configkey})
-def build_wsgi_app(argv=None):
- return load_app(service.prepare_service(argv=argv))
+def build_wsgi_app(argv=None, args=None):
+ return load_app(service.prepare_service(argv=argv), args)
def app_factory(global_config, **local_conf):
diff --git a/panko/api/panko-api.py b/panko/api/panko-api.py
new file mode 100644
index 0000000..87d917d
--- /dev/null
+++ b/panko/api/panko-api.py
@@ -0,0 +1,6 @@
+from panko.api import app as build_wsgi_app
+import sys
+
+sys.argv = sys.argv[:1]
+args = {'config_file' : 'etc/panko/panko.conf', }
+application = build_wsgi_app.build_wsgi_app(args=args)
--
1.8.3.1

@ -1,27 +0,0 @@
From 05b89c2f78357ad39b0cd9eb74903e14d1f56758 Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Thu, 16 Nov 2017 15:14:17 -0500
Subject: [PATCH 1/1] Change event list descending
---
panko/storage/models.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/panko/storage/models.py b/panko/storage/models.py
index 9c578c8..ed4c9a8 100644
--- a/panko/storage/models.py
+++ b/panko/storage/models.py
@@ -35,8 +35,8 @@ class Event(base.Model):
SUPPORT_DIRS = ('asc', 'desc')
SUPPORT_SORT_KEYS = ('message_id', 'generated')
- DEFAULT_DIR = 'asc'
- DEFAULT_SORT = [('generated', 'asc'), ('message_id', 'asc')]
+ DEFAULT_DIR = 'desc'
+ DEFAULT_SORT = [('generated', 'desc'), ('message_id', 'desc')]
PRIMARY_KEY = 'message_id'
def __init__(self, message_id, event_type, generated, traits, raw):
--
1.8.3.1

@ -1,101 +0,0 @@
From c390a3bc6920728806f581b85d46f02d75eb651c Mon Sep 17 00:00:00 2001
From: Angie Wang <angie.Wang@windriver.com>
Date: Mon, 11 Dec 2017 16:21:42 -0500
Subject: [PATCH 1/1] Fix event query to sqlalchemy with non admin user
This is an upstream fix.
https://github.com/openstack/panko/commit/99d591df950271594ee049caa3ff22304437a228
Do not port this patch in the next panko rebase.
---
panko/storage/impl_sqlalchemy.py | 34 +++++++++++++++-------
.../functional/storage/test_storage_scenarios.py | 4 +--
2 files changed, 25 insertions(+), 13 deletions(-)
diff --git a/panko/storage/impl_sqlalchemy.py b/panko/storage/impl_sqlalchemy.py
index 670c8d7..29b5b97 100644
--- a/panko/storage/impl_sqlalchemy.py
+++ b/panko/storage/impl_sqlalchemy.py
@@ -24,6 +24,7 @@ from oslo_log import log
from oslo_utils import timeutils
import sqlalchemy as sa
from sqlalchemy.engine import url as sqlalchemy_url
+from sqlalchemy.orm import aliased
from panko import storage
from panko.storage import base
@@ -61,8 +62,8 @@ trait_models_dict = {'string': models.TraitText,
'float': models.TraitFloat}
-def _build_trait_query(session, trait_type, key, value, op='eq'):
- trait_model = trait_models_dict[trait_type]
+def _get_model_and_conditions(trait_type, key, value, op='eq'):
+ trait_model = aliased(trait_models_dict[trait_type])
op_dict = {'eq': (trait_model.value == value),
'lt': (trait_model.value < value),
'le': (trait_model.value <= value),
@@ -70,8 +71,7 @@ def _build_trait_query(session, trait_type, key, value, op='eq'):
'ge': (trait_model.value >= value),
'ne': (trait_model.value != value)}
conditions = [trait_model.key == key, op_dict[op]]
- return (session.query(trait_model.event_id.label('ev_id'))
- .filter(*conditions))
+ return (trait_model, conditions)
class Connection(base.Connection):
@@ -274,16 +274,28 @@ class Connection(base.Connection):
key = trait_filter.pop('key')
op = trait_filter.pop('op', 'eq')
trait_type, value = list(trait_filter.items())[0]
- trait_subq = _build_trait_query(session, trait_type,
- key, value, op)
- for trait_filter in filters:
+
+ trait_model, conditions = _get_model_and_conditions(
+ trait_type, key, value, op)
+ trait_subq = (session
+ .query(trait_model.event_id.label('ev_id'))
+ .filter(*conditions))
+
+ first_model = trait_model
+ for label_num, trait_filter in enumerate(filters):
key = trait_filter.pop('key')
op = trait_filter.pop('op', 'eq')
trait_type, value = list(trait_filter.items())[0]
- q = _build_trait_query(session, trait_type,
- key, value, op)
- trait_subq = trait_subq.filter(
- trait_subq.subquery().c.ev_id == q.subquery().c.ev_id)
+ trait_model, conditions = _get_model_and_conditions(
+ trait_type, key, value, op)
+ trait_subq = (
+ trait_subq
+ .add_columns(
+ trait_model.event_id.label('l%d' % label_num))
+ .filter(
+ first_model.event_id == trait_model.event_id,
+ *conditions))
+
trait_subq = trait_subq.subquery()
query = (session.query(models.Event.id)
diff --git a/panko/tests/functional/storage/test_storage_scenarios.py b/panko/tests/functional/storage/test_storage_scenarios.py
index 3af76b4..9af75c8 100644
--- a/panko/tests/functional/storage/test_storage_scenarios.py
+++ b/panko/tests/functional/storage/test_storage_scenarios.py
@@ -340,8 +340,8 @@ class GetEventTest(EventTestBase):
def test_get_event_multiple_trait_filter(self):
trait_filters = [{'key': 'trait_B', 'integer': 1},
- {'key': 'trait_A', 'string': 'my_Foo_text'},
- {'key': 'trait_C', 'float': 0.123456}]
+ {'key': 'trait_C', 'float': 0.123456},
+ {'key': 'trait_A', 'string': 'my_Foo_text'}]
event_filter = storage.EventFilter(self.start, self.end,
traits_filter=trait_filters)
events = [event for event in self.conn.get_events(event_filter)]
--
1.8.3.1

@ -1 +0,0 @@
mirror:Source/openstack-panko-3.1.0-1.el7.src.rpm

@ -1,11 +0,0 @@
SRC_DIR="$CGCS_BASE/git/ceilometer"
TAR_NAME=ceilometer
COPY_LIST="$FILES_BASE/* \
python-ceilometer/static/ceilometer-expirer-active \
python-ceilometer/static/ceilometer-polling \
python-ceilometer/static/ceilometer-polling.conf \
python-ceilometer/static/ceilometer-polling.conf.pmon.centos \
python-ceilometer/static/ceilometer-polling-compute.conf.pmon.centos \
python-ceilometer/static/ceilometer-agent-compute"
TIS_BASE_SRCREV=105788514dadcd881fc86d4b9a03d0d10e2e0874
TIS_PATCH_VER=GITREVCOUNT

@ -1,6 +0,0 @@
[DEFAULT]
#log_dir = /var/log/ceilometer
use_stderr = False
[database]
connection = mongodb://localhost:27017/ceilometer

@ -1,2 +0,0 @@
Defaults:ceilometer !requiretty
ceilometer ALL = (root) NOPASSWD: /usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf *

File diff suppressed because it is too large
@ -1,9 +0,0 @@
compress
/var/log/ceilometer/*.log {
rotate 14
size 10M
missingok
compress
copytruncate
}

@ -1,13 +0,0 @@
[Unit]
Description=OpenStack ceilometer API service
After=syslog.target network.target
[Service]
Type=simple
User=root
ExecStart=/bin/python /usr/bin/gunicorn --config /usr/share/ceilometer/ceilometer-api.conf --pythonpath /usr/share/ceilometer ceilometer-api
#Restart=on-failure
[Install]
WantedBy=multi-user.target

@ -1,13 +0,0 @@
[Unit]
Description=OpenStack ceilometer collection service
After=syslog.target network.target
[Service]
Type=simple
User=root
ExecStart=/usr/bin/ceilometer-collector --logfile /var/log/ceilometer/ceilometer-collector.log
#Restart=on-failure
[Install]
WantedBy=multi-user.target

@ -1,13 +0,0 @@
[Unit]
Description=OpenStack ceilometer ipmi agent
After=syslog.target network.target
[Service]
Type=simple
User=root
ExecStart=/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /var/log/ceilometer/agent-ipmi.log
#Restart=on-failure
[Install]
WantedBy=multi-user.target

@ -1,13 +0,0 @@
[Unit]
Description=OpenStack ceilometer notification agent
After=syslog.target network.target
[Service]
Type=simple
User=root
ExecStart=/usr/bin/ceilometer-agent-notification --logfile /var/log/ceilometer/ceilometer-agent-notification.log
#Restart=on-failure
[Install]
WantedBy=multi-user.target

@ -1 +0,0 @@
OPTIONS="--polling-namespace 'central' 'compute'"

@ -1,16 +0,0 @@
[Unit]
Description=OpenStack ceilometer polling agent
After=syslog.target network.target
[Service]
Type=forking
Restart=no
KillMode=process
RemainAfterExit=yes
ExecStart=/etc/rc.d/init.d/openstack-ceilometer-polling start
ExecStop=/etc/rc.d/init.d/openstack-ceilometer-polling stop
ExecReload=/etc/rc.d/init.d/openstack-ceilometer-polling reload
[Install]
WantedBy=multi-user.target

@ -1,709 +0,0 @@
%global _without_doc 1
%global with_doc %{!?_without_doc:1}%{?_without_doc:0}
%global pypi_name ceilometer
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
Name: openstack-ceilometer
# Liberty semver reset
# https://review.openstack.org/#/q/I6a35fa0dda798fad93b804d00a46af80f08d475c,n,z
Epoch: 1
Version: 9.0.1
Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: OpenStack measurement collection service
Group: Applications/System
License: ASL 2.0
URL: https://wiki.openstack.org/wiki/Ceilometer
Source0: %{pypi_name}-%{upstream_version}.tar.gz
Source1: %{pypi_name}-dist.conf
Source4: ceilometer-rootwrap-sudoers
Source7: ceilometer-expirer-active
Source8: ceilometer-polling
Source9: ceilometer-polling.conf
Source10: %{name}-api.service
Source11: %{name}-collector.service
%if 0%{?with_compute}
Source12: %{name}-compute.service
%endif
%if 0%{?with_central}
Source13: %{name}-central.service
%endif
Source16: %{name}-notification.service
Source17: %{name}-ipmi.service
Source18: %{name}-polling.service
Source20: ceilometer-polling.conf.pmon.centos
Source21: ceilometer-polling-compute.conf.pmon.centos
BuildArch: noarch
BuildRequires: intltool
BuildRequires: openstack-macros
BuildRequires: python-cotyledon
BuildRequires: python-sphinx
BuildRequires: python-setuptools
BuildRequires: python2-pip
BuildRequires: python2-wheel
BuildRequires: python-pbr >= 1.10.0
BuildRequires: git
BuildRequires: python-d2to1
BuildRequires: python2-devel
# Required to compile translation files
BuildRequires: python-babel
BuildRequires: systemd-devel
BuildRequires: systemd
BuildRequires: systemd-units
%description
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
%package -n python-ceilometer
Summary: OpenStack ceilometer python libraries
Group: Applications/System
Requires: python-babel
Requires: python-cachetools >= 1.1.0
Requires: python-debtcollector >= 1.2.0
Requires: python-eventlet
Requires: python-futurist >= 0.11.0
Requires: python-cotyledon
Requires: python-dateutil
Requires: python-greenlet
Requires: python-iso8601
Requires: python-keystoneauth1 >= 2.1.0
Requires: python-lxml
Requires: python-anyjson
Requires: python-jsonpath-rw
Requires: python-jsonpath-rw-ext
Requires: python-stevedore >= 1.9.0
Requires: python-msgpack >= 0.4.0
Requires: python-pbr
Requires: python-six >= 1.9.0
Requires: python-tenacity >= 3.2.1
Requires: python-sqlalchemy
Requires: python-alembic
Requires: python-migrate
Requires: python-webob
Requires: python-oslo-config >= 2:3.22.0
Requires: PyYAML
Requires: python-netaddr
Requires: python-oslo-rootwrap
Requires: python-oslo-vmware >= 0.6.0
Requires: python-requests >= 2.8.1
Requires: pysnmp
Requires: pytz
Requires: python-croniter
Requires: python-retrying
Requires: python-jsonschema
Requires: python-werkzeug
Requires: python-oslo-context
Requires: python-oslo-concurrency >= 3.5.0
Requires: python-oslo-i18n >= 2.1.0
Requires: python-oslo-log >= 1.14.0
Requires: python-oslo-middleware >= 3.0.0
Requires: python-oslo-policy >= 0.5.0
Requires: python-oslo-reports >= 0.6.0
Requires: python-monotonic
Requires: python-futures
%description -n python-ceilometer
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
This package contains the ceilometer python library.
%package common
Summary: Components common to all OpenStack ceilometer services
Group: Applications/System
Requires: python-ceilometer = %{epoch}:%{version}-%{release}
Requires: python-oslo-messaging >= 5.12.0
Requires: python-oslo-serialization >= 1.10.0
Requires: python-oslo-utils >= 3.5.0
Requires: python-pecan >= 1.0.0
Requires: python-posix_ipc
Requires: python-gnocchiclient
Requires: python-wsme >= 0.8
Requires: python-os-xenapi >= 0.1.1
Requires(post): systemd-units
Requires(preun): systemd-units
Requires(postun): systemd-units
Requires(pre): shadow-utils
# Config file generation
BuildRequires: python-os-xenapi
BuildRequires: python-oslo-config >= 2:3.7.0
BuildRequires: python-oslo-concurrency
BuildRequires: python-oslo-db
BuildRequires: python-oslo-log
BuildRequires: python-oslo-messaging
BuildRequires: python-oslo-policy
BuildRequires: python-oslo-reports
BuildRequires: python-oslo-vmware >= 0.6.0
BuildRequires: python-glanceclient >= 1:2.0.0
BuildRequires: python-keystonemiddleware
BuildRequires: python-neutronclient
BuildRequires: python-novaclient >= 1:2.29.0
BuildRequires: python-swiftclient
BuildRequires: python-croniter
BuildRequires: python-jsonpath-rw
BuildRequires: python-jsonpath-rw-ext
BuildRequires: python-lxml
BuildRequires: python-pecan >= 1.0.0
BuildRequires: python-tooz
BuildRequires: python-werkzeug
BuildRequires: python-wsme >= 0.7
BuildRequires: python-gnocchiclient
BuildRequires: python-cinderclient >= 1.7.1
%description common
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
This package contains components common to all OpenStack
ceilometer services.
%if 0%{?with_compute}
%package compute
Summary: OpenStack ceilometer compute agent
Group: Applications/System
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires: %{name}-polling = %{epoch}:%{version}-%{release}
Requires: python-novaclient >= 1:2.29.0
Requires: python-keystoneclient >= 1:1.6.0
Requires: python-tooz
Requires: libvirt-python
%description compute
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
This package contains the ceilometer agent for
running on OpenStack compute nodes.
%endif
%if 0%{?with_central}
%package central
Summary: OpenStack ceilometer central agent
Group: Applications/System
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires: %{name}-polling = %{epoch}:%{version}-%{release}
Requires: python-novaclient >= 1:2.29.0
Requires: python-keystoneclient >= 1:1.6.0
Requires: python-glanceclient >= 1:2.0.0
Requires: python-swiftclient
Requires: python-neutronclient >= 4.2.0
Requires: python-tooz
%description central
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
This package contains the central ceilometer agent.
%endif
%package collector
Summary: OpenStack ceilometer collector
Group: Applications/System
Requires: %{name}-common = %{epoch}:%{version}-%{release}
# For compat with older provisioning tools.
# Remove when all reference the notification package explicitly
Requires: %{name}-notification
Requires: python-oslo-db
Requires: python-pymongo
%description collector
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
This package contains the ceilometer collector service
which collects metrics from the various agents.
%package notification
Summary: OpenStack ceilometer notification agent
Group: Applications/System
Requires: %{name}-common = %{epoch}:%{version}-%{release}
%description notification
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
This package contains the ceilometer notification agent
which pushes metrics to the collector service from the
various OpenStack services.
%package api
Summary: OpenStack ceilometer API service
Group: Applications/System
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires: python-keystonemiddleware >= 4.0.0
Requires: python-oslo-db >= 4.1.0
Requires: python-pymongo
Requires: python-paste-deploy
Requires: python-tooz
%description api
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
This package contains the ceilometer API service.
%package ipmi
Summary: OpenStack ceilometer ipmi agent
Group: Applications/System
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires: %{name}-polling = %{epoch}:%{version}-%{release}
Requires: python-novaclient >= 1:2.29.0
Requires: python-keystoneclient >= 1:1.6.0
Requires: python-neutronclient >= 4.2.0
Requires: python-tooz
Requires: python-oslo-rootwrap >= 2.0.0
Requires: ipmitool
%description ipmi
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
This package contains the ipmi agent to be run on OpenStack
nodes from which IPMI sensor data is to be collected directly,
by-passing Ironic's management of baremetal.
%package polling
Summary: OpenStack ceilometer polling agent
Group: Applications/System
Requires: %{name}-common = %{epoch}:%{version}-%{release}
Requires: python-cinderclient >= 1.7.1
Requires: python-novaclient >= 1:2.29.0
Requires: python-keystoneclient >= 1:1.6.0
Requires: python-glanceclient >= 1:2.0.0
Requires: python-swiftclient >= 2.2.0
Requires: libvirt-python
Requires: python-neutronclient
Requires: python-tooz
Requires: /usr/bin/systemctl
%description polling
Ceilometer aims to deliver a unique point of contact for billing systems to
acquire all counters they need to establish customer billing, across all
current and future OpenStack components. The delivery of counters must
be traceable and auditable, the counters must be easily extensible to support
new projects, and agents doing data collection should be
independent of the overall system.
This package contains the polling service.
%package -n python-ceilometer-tests
Summary: Ceilometer tests
Requires: python-ceilometer = %{epoch}:%{version}-%{release}
Requires: python-gabbi >= 1.30.0
%description -n python-ceilometer-tests
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
This package contains the Ceilometer test files.
%if 0%{?with_doc}
%package doc
Summary: Documentation for OpenStack ceilometer
Group: Documentation
# Required to build module documents
BuildRequires: python-eventlet
BuildRequires: python-sqlalchemy
BuildRequires: python-webob
BuildRequires: python-openstackdocstheme
# while not strictly required, quiets the build down when building docs.
BuildRequires: python-migrate, python-iso8601
%description doc
OpenStack ceilometer provides services to measure and
collect metrics from OpenStack components.
This package contains documentation files for ceilometer.
%endif
%prep
%autosetup -n ceilometer-%{upstream_version} -S git
find . \( -name .gitignore -o -name .placeholder \) -delete
find ceilometer -name \*.py -exec sed -i '/\/usr\/bin\/env python/{d;q}' {} +
# TODO: Have the following handle multi line entries
sed -i '/setup_requires/d; /install_requires/d; /dependency_links/d' setup.py
# Remove the requirements file so that pbr hooks don't add it
# to distutils requires_dist config
rm -rf {test-,}requirements.txt tools/{pip,test}-requires
%build
# Generate config file
PYTHONPATH=. oslo-config-generator --config-file=etc/ceilometer/ceilometer-config-generator.conf
export PBR_VERSION=%{version}
%{__python2} setup.py build
# Generate i18n files
export PBR_VERSION=%{version}
%{__python2} setup.py compile_catalog -d build/lib/%{pypi_name}/locale
# Programmatically update defaults in sample config
# which is installed at /etc/ceilometer/ceilometer.conf
# TODO: Make this more robust
# Note it only edits the first occurrence, so assumes a section ordering in sample
# and also doesn't support multi-valued variables.
while read name eq value; do
test "$name" && test "$value" || continue
sed -i "0,/^# *$name=/{s!^# *$name=.*!#$name=$value!}" etc/ceilometer/ceilometer.conf
done < %{SOURCE1}
%py2_build_wheel
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install -O1 --skip-build --root %{buildroot}
mkdir -p $RPM_BUILD_ROOT/wheels
install -m 644 dist/*.whl $RPM_BUILD_ROOT/wheels/
# Install sql migration cfg and sql files that were not installed by setup.py
install -m 644 ceilometer/storage/sqlalchemy/migrate_repo/migrate.cfg %{buildroot}%{python_sitelib}/ceilometer/storage/sqlalchemy/migrate_repo/migrate.cfg
install -m 644 ceilometer/storage/sqlalchemy/migrate_repo/versions/*.sql %{buildroot}%{python_sitelib}/ceilometer/storage/sqlalchemy/migrate_repo/versions/.
# Install non python files that were not installed by setup.py
install -m 755 -d %{buildroot}%{python_sitelib}/ceilometer/hardware/pollsters/data
install -m 644 ceilometer/hardware/pollsters/data/snmp.yaml %{buildroot}%{python_sitelib}/ceilometer/hardware/pollsters/data/snmp.yaml
# Create fake egg-info for the tempest plugin
# TODO switch to %{service} everywhere as in openstack-example.spec
%global service ceilometer
%py2_entrypoint %{service} %{service}
# docs generation requires everything to be installed first
pushd doc
%if 0%{?with_doc}
export PBR_VERSION=%{version}
%{__python2} setup.py build_sphinx -b html
# Fix hidden-file-or-dir warnings
rm -fr doc/build/html/.doctrees doc/build/html/.buildinfo
%endif
popd
# Setup directories
install -d -m 755 %{buildroot}%{_sharedstatedir}/ceilometer
install -d -m 755 %{buildroot}%{_sharedstatedir}/ceilometer/tmp
install -d -m 750 %{buildroot}%{_localstatedir}/log/ceilometer
# Install config files
install -d -m 755 %{buildroot}%{_sysconfdir}/ceilometer
install -d -m 755 %{buildroot}%{_sysconfdir}/ceilometer/rootwrap.d
install -d -m 755 %{buildroot}%{_sysconfdir}/sudoers.d
install -d -m 755 %{buildroot}%{_sysconfdir}/sysconfig
install -d -m 755 %{buildroot}%{_sysconfdir}/ceilometer/meters.d
install -p -D -m 640 %{SOURCE1} %{buildroot}%{_datadir}/ceilometer/ceilometer-dist.conf
install -p -D -m 440 %{SOURCE4} %{buildroot}%{_sysconfdir}/sudoers.d/ceilometer
install -p -D -m 640 etc/ceilometer/ceilometer.conf %{buildroot}%{_sysconfdir}/ceilometer/ceilometer.conf
install -p -D -m 640 etc/ceilometer/policy.json %{buildroot}%{_sysconfdir}/ceilometer/policy.json
install -p -D -m 640 ceilometer/pipeline/data/pipeline.yaml %{buildroot}%{_sysconfdir}/ceilometer/pipeline.yaml
install -p -D -m 640 etc/ceilometer/polling.yaml %{buildroot}%{_sysconfdir}/ceilometer/polling.yaml
install -p -D -m 640 ceilometer/pipeline/data/event_pipeline.yaml %{buildroot}%{_sysconfdir}/ceilometer/event_pipeline.yaml
install -p -D -m 640 ceilometer/pipeline/data/event_definitions.yaml %{buildroot}%{_sysconfdir}/ceilometer/event_definitions.yaml
install -p -D -m 640 etc/ceilometer/api_paste.ini %{buildroot}%{_sysconfdir}/ceilometer/api_paste.ini
install -p -D -m 640 etc/ceilometer/rootwrap.conf %{buildroot}%{_sysconfdir}/ceilometer/rootwrap.conf
install -p -D -m 640 etc/ceilometer/rootwrap.d/ipmi.filters %{buildroot}/%{_sysconfdir}/ceilometer/rootwrap.d/ipmi.filters
install -p -D -m 640 ceilometer/publisher/data/gnocchi_resources.yaml %{buildroot}%{_sysconfdir}/ceilometer/gnocchi_resources.yaml
install -p -D -m 640 ceilometer/data/meters.d/meters.yaml %{buildroot}%{_sysconfdir}/ceilometer/meters.d/meters.yaml
install -p -D -m 640 ceilometer/api/ceilometer-api.py %{buildroot}%{_datadir}/ceilometer/ceilometer-api.py
# Install initscripts for services
%if 0%{?rhel} && 0%{?rhel} <= 6
install -p -D -m 755 %{SOURCE10} %{buildroot}%{_initrddir}/%{name}-api
install -p -D -m 755 %{SOURCE11} %{buildroot}%{_initrddir}/%{name}-collector
%if 0%{?with_compute}
install -p -D -m 755 %{SOURCE12} %{buildroot}%{_initrddir}/%{name}-compute
%endif
%if 0%{?with_central}
install -p -D -m 755 %{SOURCE13} %{buildroot}%{_initrddir}/%{name}-central
%endif
install -p -D -m 755 %{SOURCE16} %{buildroot}%{_initrddir}/%{name}-notification
install -p -D -m 755 %{SOURCE17} %{buildroot}%{_initrddir}/%{name}-ipmi
install -p -D -m 755 %{SOURCE18} %{buildroot}%{_initrddir}/%{name}-polling
# Install upstart jobs examples
install -d -m 755 %{buildroot}%{_datadir}/ceilometer
install -p -m 644 %{SOURCE100} %{buildroot}%{_datadir}/ceilometer/
install -p -m 644 %{SOURCE110} %{buildroot}%{_datadir}/ceilometer/
install -p -m 644 %{SOURCE120} %{buildroot}%{_datadir}/ceilometer/
install -p -m 644 %{SOURCE130} %{buildroot}%{_datadir}/ceilometer/
install -p -m 644 %{SOURCE140} %{buildroot}%{_datadir}/ceilometer/
install -p -m 644 %{SOURCE150} %{buildroot}%{_datadir}/ceilometer/
install -p -m 644 %{SOURCE160} %{buildroot}%{_datadir}/ceilometer/
install -p -m 644 %{SOURCE170} %{buildroot}%{_datadir}/ceilometer/
install -p -m 644 %{SOURCE180} %{buildroot}%{_datadir}/ceilometer/
%else
install -p -D -m 644 %{SOURCE10} %{buildroot}%{_unitdir}/%{name}-api.service
install -p -D -m 644 %{SOURCE11} %{buildroot}%{_unitdir}/%{name}-collector.service
%if 0%{?with_compute}
install -p -D -m 644 %{SOURCE12} %{buildroot}%{_unitdir}/%{name}-compute.service
%endif
%if 0%{?with_central}
install -p -D -m 644 %{SOURCE13} %{buildroot}%{_unitdir}/%{name}-central.service
%endif
install -p -D -m 644 %{SOURCE16} %{buildroot}%{_unitdir}/%{name}-notification.service
install -p -D -m 644 %{SOURCE17} %{buildroot}%{_unitdir}/%{name}-ipmi.service
install -p -D -m 644 %{SOURCE18} %{buildroot}%{_unitdir}/%{name}-polling.service
%endif
install -p -D -m 755 %{SOURCE7} %{buildroot}%{_bindir}/ceilometer-expirer-active
install -p -D -m 755 %{SOURCE8} %{buildroot}%{_initrddir}/openstack-ceilometer-polling
mkdir -p %{buildroot}/%{_sysconfdir}/ceilometer
install -p -D -m 644 %{SOURCE9} %{buildroot}%{_sysconfdir}/ceilometer/ceilometer-polling.conf
install -p -D -m 644 %{SOURCE20} %{buildroot}%{_sysconfdir}/ceilometer/ceilometer-polling.conf.pmon
install -p -D -m 644 %{SOURCE21} %{buildroot}%{_sysconfdir}/ceilometer/ceilometer-polling-compute.conf.pmon
# Install i18n .mo files (.po and .pot are not required)
install -d -m 755 %{buildroot}%{_datadir}
rm -f %{buildroot}%{python2_sitelib}/%{pypi_name}/locale/*/LC_*/%{pypi_name}*po
rm -f %{buildroot}%{python2_sitelib}/%{pypi_name}/locale/*pot
mv %{buildroot}%{python2_sitelib}/%{pypi_name}/locale %{buildroot}%{_datadir}/locale
# Find language files
%find_lang %{pypi_name} --all-name
# Remove unneeded in production stuff
rm -f %{buildroot}/usr/share/doc/ceilometer/README*
# Remove unused files
rm -fr %{buildroot}/usr/etc
%pre common
getent group ceilometer >/dev/null || groupadd -r ceilometer --gid 166
if ! getent passwd ceilometer >/dev/null; then
# Id reservation request: https://bugzilla.redhat.com/923891
useradd -u 166 -r -g ceilometer -G ceilometer,nobody -d %{_sharedstatedir}/ceilometer -s /sbin/nologin -c "OpenStack ceilometer Daemons" ceilometer
fi
exit 0
%if 0%{?with_compute}
%post compute
%systemd_post %{name}-compute.service
%endif
%post collector
%systemd_post %{name}-collector.service
%post notification
%systemd_post %{name}-notification.service
%post api
%systemd_post %{name}-api.service
%if 0%{?with_central}
%post central
%systemd_post %{name}-central.service
%endif
%post ipmi
%systemd_post %{name}-alarm-ipmi.service
%post polling
/usr/bin/systemctl disable %{name}-polling.service
%if 0%{?with_compute}
%preun compute
%systemd_preun %{name}-compute.service
%endif
%preun collector
%systemd_preun %{name}-collector.service
%preun notification
%systemd_preun %{name}-notification.service
%preun api
%systemd_preun %{name}-api.service
%if 0%{?with_central}
%preun central
%systemd_preun %{name}-central.service
%endif
%preun ipmi
%systemd_preun %{name}-ipmi.service
%preun polling
%systemd_preun %{name}-polling.service
%if 0%{?with_compute}
%postun compute
%systemd_postun_with_restart %{name}-compute.service
%endif
%postun collector
%systemd_postun_with_restart %{name}-collector.service
%postun notification
%systemd_postun_with_restart %{name}-notification.service
%postun api
%systemd_postun_with_restart %{name}-api.service
%if 0%{?with_central}
%postun central
%systemd_postun_with_restart %{name}-central.service
%endif
%postun ipmi
%systemd_postun_with_restart %{name}-ipmi.service
%postun polling
/usr/bin/systemctl disable %{name}-polling.service
%files common -f %{pypi_name}.lang
%license LICENSE
%dir %{_sysconfdir}/ceilometer
%{_datadir}/ceilometer/ceilometer-api.*
%attr(-, root, ceilometer) %{_datadir}/ceilometer/ceilometer-dist.conf
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/ceilometer.conf
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/policy.json
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/pipeline.yaml
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/polling.yaml
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/api_paste.ini
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/gnocchi_resources.yaml
%dir %attr(0750, ceilometer, root) %{_localstatedir}/log/ceilometer
%{_bindir}/ceilometer-db-legacy-clean
%{_bindir}/ceilometer-expirer
%{_bindir}/ceilometer-send-sample
%{_bindir}/ceilometer-upgrade
%defattr(-, ceilometer, ceilometer, -)
%dir %{_sharedstatedir}/ceilometer
%dir %{_sharedstatedir}/ceilometer/tmp
%files -n python-ceilometer
%{python2_sitelib}/ceilometer
%{python2_sitelib}/ceilometer-*.egg-info
%exclude %{python2_sitelib}/ceilometer/tests
%files -n python-ceilometer-tests
%license LICENSE
%{python2_sitelib}/ceilometer/tests
%{python2_sitelib}/%{service}_tests.egg-info
%if 0%{?with_doc}
%files doc
%doc doc/build/html
%endif
%if 0%{?with_compute}
%files compute
%{_unitdir}/%{name}-compute.service
%endif
%files collector
%{_bindir}/ceilometer-collector*
%{_bindir}/ceilometer-expirer-active
%{_unitdir}/%{name}-collector.service
%files notification
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/event_pipeline.yaml
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/event_definitions.yaml
%dir %{_sysconfdir}/ceilometer/meters.d
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/meters.d/meters.yaml
%{_bindir}/ceilometer-agent-notification
%{_unitdir}/%{name}-notification.service
%files api
%{_bindir}/ceilometer-api
%{_unitdir}/%{name}-api.service
%if 0%{?with_central}
%files central
%{_unitdir}/%{name}-central.service
%endif
%files ipmi
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/rootwrap.conf
%config(noreplace) %attr(-, root, ceilometer) %{_sysconfdir}/ceilometer/rootwrap.d/ipmi.filters
%{_bindir}/ceilometer-rootwrap
%{_sysconfdir}/sudoers.d/ceilometer
%{_unitdir}/%{name}-ipmi.service
%files polling
%{_bindir}/ceilometer-polling
%{_initrddir}/openstack-ceilometer-polling
%{_sysconfdir}/ceilometer/ceilometer-polling.conf
%{_sysconfdir}/ceilometer/ceilometer-polling.conf.pmon
%{_sysconfdir}/ceilometer/ceilometer-polling-compute.conf.pmon
%{_unitdir}/%{name}-polling.service
%package wheels
Summary: %{name} wheels
%description wheels
Contains python wheels for %{name}
%files wheels
/wheels/*
%changelog
* Tue Sep 12 2017 rdo-trunk <javier.pena@redhat.com> 1:9.0.1-1
- Update to 9.0.1
* Thu Aug 24 2017 Alfredo Moralejo <amoralej@redhat.com> 1:9.0.0-1
- Update to 9.0.0

@ -1,125 +0,0 @@
#!/bin/sh
### BEGIN INIT INFO
# Provides:
# Required-Start: $remote_fs $network $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Short-Description: Ceilometer Servers
# Description: OpenStack Monitoring Service (code-named Ceilometer) server(s)
### END INIT INFO
SUFFIX=agent-compute
DESC="ceilometer-$SUFFIX"
DAEMON="/usr/bin/ceilometer-$SUFFIX"
CONFIG="/etc/ceilometer/ceilometer.conf"
PIDFILE="/var/run/ceilometer-$SUFFIX.pid"
start()
{
if [ -e $PIDFILE ]; then
PIDDIR=/proc/$(cat $PIDFILE)
if [ -d ${PIDDIR} ]; then
echo "$DESC already running."
exit 1
else
echo "Removing stale PID file $PIDFILE"
rm -f $PIDFILE
fi
fi
if [ ! -d /var/log/ceilometer ]; then
mkdir /var/log/ceilometer
fi
# load up the platform info including nodetype and subfunction
source /etc/platform/platform.conf
# We'll need special handling for controller with compute subfunction,
function=`echo "$subfunction" | cut -f 2 -d','`
if [ "$nodetype" != "compute" -a "$function" != "compute" ] ; then
logger -t $0 -p warn "exiting because this is not compute host"
exit 0
fi
echo -n "Starting $DESC..."
start-stop-daemon --start --quiet --background \
--pidfile ${PIDFILE} --make-pidfile --exec ${DAEMON} \
-- --config-file $CONFIG --log-dir=/var/log/ceilometer
if [ $? -eq 0 ]; then
echo "done."
else
echo "failed."
fi
}
stop()
{
echo -n "Stopping $DESC..."
start-stop-daemon --stop --quiet --pidfile $PIDFILE
if [ $? -eq 0 ]; then
echo "done."
else
echo "failed."
fi
rm -f $PIDFILE
}
status()
{
pid=`cat $PIDFILE 2>/dev/null`
if [ -n "$pid" ]; then
if ps -p $pid &>/dev/null ; then
echo "$DESC is running"
return
fi
fi
echo "$DESC is not running"
}
reset()
{
stop
# This is to make sure postgres is configured and running
if ! pidof postmaster > /dev/null; then
/etc/init.d/postgresql-init
/etc/init.d/postgresql start
sleep 2
fi
[ ! -d /var/log/ceilometer ] && mkdir /var/log/ceilometer
sudo -u postgres dropdb ceilometer
sudo -u postgres createdb ceilometer
ceilometer-dbsync
start
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart|force-reload|reload)
stop
start
;;
status)
status
;;
reset)
reset
;;
*)
echo "Usage: $0 {start|stop|force-reload|restart|reload|status|reset}"
exit 1
;;
esac
exit 0

@ -1,60 +0,0 @@
#!/bin/bash
#
# Wrapper script to run ceilometer-expirer when on active controller only
#
CEILOMETER_EXPIRER_INFO="/var/run/ceilometer-expirer.info"
CEILOMETER_EXPIRER_CMD="/usr/bin/nice -n 2 /usr/bin/ceilometer-expirer"
function is_active_pgserver()
{
# Determine whether we're running on the same controller as the service.
local service=postgres
local enabledactive=$(/usr/bin/sm-query service $service| grep enabled-active)
if [ "x$enabledactive" == "x" ]
then
# enabled-active not found for that service on this controller
return 1
else
# enabled-active found for that resource
return 0
fi
}
if is_active_pgserver
then
if [ ! -f ${CEILOMETER_EXPIRER_INFO} ]
then
echo skip_count=0 > ${CEILOMETER_EXPIRER_INFO}
fi
source ${CEILOMETER_EXPIRER_INFO}
sudo -u postgres psql -d sysinv -c "SELECT alarm_id, entity_instance_id from i_alarm;" | grep -P "^(?=.*100.101)(?=.*${HOSTNAME})" &>/dev/null
if [ $? -eq 0 ]
then
source /etc/platform/platform.conf
if [ "${system_type}" = "All-in-one" ]
then
source /etc/init.d/task_affinity_functions.sh
idle_core=$(get_most_idle_core)
if [ "$idle_core" -ne "0" ]
then
sh -c "exec taskset -c $idle_core ${CEILOMETER_EXPIRER_CMD}"
sed -i "/skip_count/s/=.*/=0/" ${CEILOMETER_EXPIRER_INFO}
exit 0
fi
fi
if [ "$skip_count" -lt "3" ]
then
newval=$(($skip_count+1))
sed -i "/skip_count/s/=.*/=$newval/" ${CEILOMETER_EXPIRER_INFO}
exit 0
fi
fi
eval ${CEILOMETER_EXPIRER_CMD}
sed -i "/skip_count/s/=.*/=0/" ${CEILOMETER_EXPIRER_INFO}
fi
exit 0

@ -1,138 +0,0 @@
#!/bin/sh
### BEGIN INIT INFO
# Provides:
# Required-Start: $remote_fs $network $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 3 5
# Default-Stop: 0 1 2 6
# Short-Description: Ceilometer Servers
# Description: OpenStack Monitoring Service (code-named Ceilometer) server(s)
### END INIT INFO
# Platform paths and flags
. /usr/bin/tsconfig
SUFFIX=polling
DESC="ceilometer-$SUFFIX"
DAEMON="/usr/bin/ceilometer-$SUFFIX"
CONFIG="/etc/ceilometer/ceilometer.conf"
PIDFILE="/var/run/ceilometer-$SUFFIX.pid"
COMPLETED="/etc/platform/.initial_config_complete"
start()
{
if [ ! -f $COMPLETED ]; then
echo "Waiting for $COMPLETED"
exit 0
fi
. $PLATFORM_CONF_FILE
if [[ "$nodetype" == "worker" || "$subfunction" == *"worker"* ]] ; then
if [ ! -f $VOLATILE_WORKER_CONFIG_COMPLETE ]; then
# Do not start polling until compute manifests have been applied
echo "Waiting for $VOLATILE_WORKER_CONFIG_COMPLETE"
exit 0
elif [ -f $VOLATILE_DISABLE_WORKER_SERVICES ]; then
# Do not start polling if compute services are disabled. This can
# happen during an upgrade when controller-1 is running a newer
# load than controller-0.
echo "Waiting for $VOLATILE_DISABLE_WORKER_SERVICES"
exit 0
fi
fi
if [ -e $PIDFILE ]; then
PIDDIR=/proc/$(cat $PIDFILE)
if [ -d ${PIDDIR} ]; then
echo "$DESC already running."
exit 1
else
echo "Removing stale PID file $PIDFILE"
rm -f $PIDFILE
fi
fi
if [ ! -d /var/log/ceilometer ]; then
mkdir /var/log/ceilometer
fi
echo -n "Starting $DESC..."
start-stop-daemon --start --quiet --background \
--pidfile ${PIDFILE} --make-pidfile --exec ${DAEMON} \
-- --config-file $CONFIG --log-dir=/var/log/ceilometer
if [ $? -eq 0 ]; then
echo "done."
else
echo "failed."
fi
}
stop()
{
echo -n "Stopping $DESC..."
start-stop-daemon --stop --quiet --pidfile $PIDFILE
if [ $? -eq 0 ]; then
echo "done."
else
echo "failed."
fi
rm -f $PIDFILE
}
status()
{
pid=`cat $PIDFILE 2>/dev/null`
if [ -n "$pid" ]; then
if ps -p $pid &>/dev/null ; then
echo "$DESC is running"
return
fi
fi
echo "$DESC is not running"
}
reset()
{
stop
# This is to make sure postgres is configured and running
if ! pidof postmaster > /dev/null; then
/etc/init.d/postgresql-init
/etc/init.d/postgresql start
sleep 2
fi
[ ! -d /var/log/ceilometer ] && mkdir /var/log/ceilometer
sudo -u postgres dropdb ceilometer
sudo -u postgres createdb ceilometer
ceilometer-dbsync
start
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart|force-reload|reload)
stop
start
;;
status)
status
;;
reset)
reset
;;
*)
echo "Usage: $0 {start|stop|force-reload|restart|reload|status|reset}"
exit 1
;;
esac
exit 0

@ -1,26 +0,0 @@
[process]
process = ceilometer-polling
service = openstack-ceilometer-polling
pidfile = /var/run/ceilometer-polling.pid
script = /etc/init.d/openstack-ceilometer-polling
style = lsb ; ocf or lsb
severity = minor ; minor, major, critical
restarts = 5 ; restart retries before error assertion
interval = 10 ; number of seconds to wait between restarts
debounce = 20 ; number of seconds that a process needs to remain
; running before degrade is removed and retry count
; is cleared.
startuptime = 5 ; Seconds to wait after process start before starting the debounce monitor
mode = passive ; Monitoring mode: passive (default) or active
; passive: process death monitoring (default: always)
; active : heartbeat monitoring, i.e. request / response messaging
; ignore : do not monitor or stop monitoring
subfunction = worker ; Optional label.
; Manage this process in the context of a combo host subfunction
; Choices: worker or storage.
; when specified pmond will wait for
; /var/run/.worker_config_complete or
; /var/run/.storage_config_complete
; ... before managing this process with the specified subfunction
; Excluding this label will cause this process to be managed by default on startup

@ -1,19 +0,0 @@
[process]
process = ceilometer-polling
pidfile = /var/run/ceilometer-polling.pid
script = /etc/init.d/ceilometer-polling
style = lsb ; ocf or lsb
severity = minor ; minor, major, critical
restarts = 5 ; restart retries before error assertion
interval = 10 ; number of seconds to wait between restarts
debounce = 20 ; number of seconds that a process needs to remain
; running before degrade is removed and retry count
; is cleared.
; These settings will generate a log only without attempting to restart
; pmond will put the process into an ignore state after failure.
startuptime = 5 ; Seconds to wait after process start before starting the debounce monitor
mode = passive ; Monitoring mode: passive (default) or active
; passive: process death monitoring (default: always)
; active : heartbeat monitoring, i.e. request / response messaging
; ignore : do not monitor or stop monitoring

@ -1,18 +0,0 @@
[process]
process = ceilometer-polling
service = openstack-ceilometer-polling
pidfile = /var/run/ceilometer-polling.pid
script = /etc/init.d/openstack-ceilometer-polling
style = lsb ; ocf or lsb
severity = minor ; minor, major, critical
restarts = 5 ; restart retries before error assertion
interval = 10 ; number of seconds to wait between restarts
debounce = 20 ; number of seconds that a process needs to remain
; running before degrade is removed and retry count
; is cleared.
startuptime = 5 ; Seconds to wait after process start before starting the debounce monitor
mode = passive ; Monitoring mode: passive (default) or active
; passive: process death monitoring (default: always)
; active : heartbeat monitoring, i.e. request / response messaging
; ignore : do not monitor or stop monitoring

@ -1,5 +0,0 @@
SRC_DIR="$CGCS_BASE/git/cinder"
COPY_LIST="$FILES_BASE/*"
TIS_BASE_SRCREV=90b64640126fd88e50b2f05841c393757f4faae7
TIS_PATCH_VER=GITREVCOUNT
BUILD_IS_SLOW=5

@ -1,19 +0,0 @@
[DEFAULT]
logdir = /var/log/cinder
state_path = /var/lib/cinder
lock_path = /var/lib/cinder/tmp
volumes_dir = /etc/cinder/volumes
iscsi_helper = lioadm
rootwrap_config = /etc/cinder/rootwrap.conf
auth_strategy = keystone
[database]
connection = mysql://cinder:cinder@localhost/cinder
[keystone_authtoken]
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http

@ -1,63 +0,0 @@
#!/bin/bash
#
# Wrapper script to run cinder-manage to purge deleted records on active controller only
#
CINDER_PURGE_INFO="/var/run/cinder-purge.info"
CINDER_PURGE_CMD="/usr/bin/nice -n 2 /usr/bin/cinder-manage db purge 1 >>/dev/null 2>&1"
function is_active_pgserver()
{
# Determine whether we're running on the same controller as the service.
local service=postgres
local enabledactive=$(/usr/bin/sm-query service $service| grep enabled-active)
if [ "x$enabledactive" == "x" ]
then
# enabled-active not found for that service on this controller
return 1
else
# enabled-active found for that resource
return 0
fi
}
if is_active_pgserver
then
if [ ! -f ${CINDER_PURGE_INFO} ]
then
echo delay_count=0 > ${CINDER_PURGE_INFO}
fi
source ${CINDER_PURGE_INFO}
sudo -u postgres psql -d sysinv -c "SELECT alarm_id, entity_instance_id from i_alarm;" | grep -P "^(?=.*100.101)(?=.*${HOSTNAME})" &>/dev/null
if [ $? -eq 0 ]
then
source /etc/platform/platform.conf
if [ "${system_type}" = "All-in-one" ]
then
source /etc/init.d/task_affinity_functions.sh
idle_core=$(get_most_idle_core)
if [ "$idle_core" -ne "0" ]
then
# Purge soft deleted records that are older than 1 day from cinder database.
sh -c "exec taskset -c $idle_core ${CINDER_PURGE_CMD}"
sed -i "/delay_count/s/=.*/=0/" ${CINDER_PURGE_INFO}
exit 0
fi
fi
if [ "$delay_count" -lt "3" ]
then
newval=$(($delay_count+1))
sed -i "/delay_count/s/=.*/=$newval/" ${CINDER_PURGE_INFO}
(sleep 3600; /usr/bin/cinder-purge-deleted-active) &
exit 0
fi
fi
# Purge soft deleted records that are older than 1 day from cinder database.
eval ${CINDER_PURGE_CMD}
sed -i "/delay_count/s/=.*/=0/" ${CINDER_PURGE_INFO}
fi
exit 0

@ -1,3 +0,0 @@
Defaults:cinder !requiretty
cinder ALL = (root) NOPASSWD: /usr/bin/cinder-rootwrap /etc/cinder/rootwrap.conf *

File diff suppressed because it is too large
@ -1,11 +0,0 @@
compress
/var/log/cinder/*.log {
weekly
rotate 4
missingok
compress
minsize 100k
size 10M
copytruncate
}

@ -1,18 +0,0 @@
[Unit]
Description=OpenStack Cinder API Server
After=syslog.target network.target
[Service]
Type=simple
# WRS - use root user
#User=cinder
User=root
ExecStart=/usr/bin/cinder-api --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/cinder-api.log
# WRS - Managed by sm/OCF scripts
#Restart=on-failure
#KillMode=process
PIDFile=/var/run/resource-agents/cinder-api.pid
[Install]
WantedBy=multi-user.target

@ -1,16 +0,0 @@
[Unit]
Description=OpenStack Cinder Backup Server
After=syslog.target network.target
[Service]
Type=simple
# WRS - use root user
#User=cinder
User=root
ExecStart=/usr/bin/cinder-backup --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/cinder-backup.log
# WRS - Currently not used but would be also managed by sm
#Restart=on-failure
[Install]
WantedBy=multi-user.target

@ -1,17 +0,0 @@
[Unit]
Description=OpenStack Cinder Scheduler Server
After=syslog.target network.target
[Service]
Type=simple
# WRS - use root user
#User=cinder
User=root
ExecStart=/usr/bin/cinder-scheduler --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/cinder-scheduler.log
# WRS - Managed by sm
#Restart=on-failure
Restart=on-failure
[Install]
WantedBy=multi-user.target

@ -1,19 +0,0 @@
[Unit]
Description=OpenStack Cinder Volume Server
After=syslog.target network.target
[Service]
LimitNOFILE=131072
LimitNPROC=131072
Type=simple
# WRS - use root user
#User=cinder
User=root
ExecStart=/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf --logfile /var/log/cinder/cinder-volume.log
# WRS - Managed by sm
#Restart=on-failure
#KillMode=process
[Install]
WantedBy=multi-user.target

@ -1,165 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2016 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
#
# The patching subsystem provides a patch-functions bash source file
# with useful function and variable definitions.
#
. /etc/patching/patch-functions
#
# We can now check to see what type of node we're on, if it's locked, etc,
# and act accordingly
#
#
# Declare an overall script return code
#
declare -i GLOBAL_RC=$PATCH_STATUS_OK
#
# Handle restarting Cinder services
#
# Syntax:("<service_name> <timeout> <initialize_interval> <behavior after timeout>"\)
SERVICES=("cinder-volume 30 30 kill"\
"cinder-scheduler 50 20 kill"\
"cinder-backup 30 30 kill"\
"cinder-api 30 20 kill")
# where:
# <service_name> = name of the executable file reported by ps
# <timeout> = how long to wait for the process to gracefully shut down
# <behavior after timeout> = either 'kill' the process with SIGKILL or 'leave' it running;
# the idea is to avoid leaving behind degraded processes, as it is better to have
# new ones re-spawn
# <initialize interval> = how long to wait before the new process can be considered available.
# The processes are restarted by sm by running the ocf scripts at
# /usr/lib/ocf/resource.d/openstack/cinder-*. These scripts have a very good service init
# monitoring routine. We just have to make sure that they don't hang while restarting.
# The values are taken from SM, but we don't wait for any retry.
# Note: cinder-volume timeout is set to 180 secs in sm which is too much for our controlled
# restart
function get_pid {
local service=$1
PID=$(cat /var/run/resource-agents/$service.pid)
echo "$PID"
}
if is_controller
then
# Cinder services only run on the controller
if [ ! -f $PATCH_FLAGDIR/cinder.restarted ]
then
touch $PATCH_FLAGDIR/cinder.restarted
# cinder-volume uses this to know that its new process was restarted by in-service patching
touch $PATCH_FLAGDIR/cinder.restarting
for s in "${SERVICES[@]}"; do
set -- $s
service="$1"
timeout="$2"
initialize_interval="$3"
after_timeout="$4"
new_process="false"
# Check SM to see if service is running
sm-query service $service | grep -q 'enabled-active'
if [ $? -eq 0 ]
then
loginfo "$0: Restarting $service"
# Get PID
PID=$(get_pid $service)
# Send restart signal to process
kill -s TERM $PID
# Wait up to $timeout seconds for service to gracefully recover
let -i UNTIL=$SECONDS+$timeout
while [ $UNTIL -ge $SECONDS ]
do
# Check to see if we have a new process
NEW_PID=$(get_pid $service)
if [[ "$PID" != "$NEW_PID" ]]
then
# We have a new process
new_process="true"
break
fi
# Still old process? Let's wait 5 seconds and check again
sleep 5
done
# Do a hard restart of the process if we still have the old one
NEW_PID=$(get_pid $service)
if [[ "$PID" == "$NEW_PID" ]]
then
# we have the old process still running!
if [[ "$after_timeout" == "kill" ]]
then
loginfo "$0: Old process of $service failed to gracefully terminate in $timeout, killing it!"
# kill the old process
kill -s KILL $PID
# wait for a new process to be restarted by sm
let -i UNTIL=$SECONDS+10
while [ $UNTIL -ge $SECONDS ]
do
sleep 1
# Check to see if we have a new process
NEW_PID=$(get_pid $service)
if [[ ! -z "$NEW_PID" ]] && [[ "$PID" != "$NEW_PID" ]]
then
loginfo "$0: New process of $service started"
new_process="true"
break
fi
done
fi
fi
# Wait for the new process to complete initialisation
if [[ "$new_process" == "true" ]]
then
let -i UNTIL=$SECONDS+$initialize_interval
while [ $UNTIL -ge $SECONDS ]
do
# Note: Services are restarted by sm which runs the ocf start script.
# Sm reports enabled-active only *after* those scripts return success
sm-query service $service | grep -q 'enabled-active'
if [ $? -eq 0 ]
then
loginfo "$0: New process of $service started correctly"
break
fi
sleep 1
done
fi
sm-query service $service | grep -q 'enabled-active'
if [ $? -ne 0 ]
then
# Still not running! Clear the flag and mark the RC as failed
loginfo "$0: Failed to restart $service"
rm -f $PATCH_FLAGDIR/$service.restarted
GLOBAL_RC=$PATCH_STATUS_FAILED
sm-query service $service
break
# Note: break if any process in the SERVICES list fails
fi
fi
done
fi
fi
#
# Exit the script with the overall return code
#
rm -f $PATCH_FLAGDIR/cinder.restarting
exit $GLOBAL_RC

@ -1,481 +0,0 @@
%{!?upstream_version: %global upstream_version %{version}%{?milestone}}
%global with_doc %{!?_without_doc:1}%{?_without_doc:0}
%global pypi_name cinder
# WRS: Keep service name - used by build scripts
#%global service cinder
# WRS: remove docs - for now
%global with_doc 0
%global common_desc \
OpenStack Volume (codename Cinder) provides services to manage and \
access block storage volumes for use by Virtual Machine instances.
Name: openstack-cinder
# Liberty semver reset
# https://review.openstack.org/#/q/I6a35fa0dda798fad93b804d00a46af80f08d475c,n,z
Epoch: 1
Version: 11.0.0
Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: OpenStack Volume service
License: ASL 2.0
URL: http://www.openstack.org/software/openstack-storage/
Source0: https://tarballs.openstack.org/%{pypi_name}/%{pypi_name}-%{upstream_version}.tar.gz
#
Source1: cinder-dist.conf
Source2: cinder.logrotate
# WRS: Adding pre-built config file (via: tox -egenconfig) as this is not
# getting generated correctly in our build system. Might be due to partial
# rebase env w/ mitaka+newton. We need to re-evaluate once rebase is
# complete.
Source3: cinder.conf.sample
Source10: openstack-cinder-api.service
Source11: openstack-cinder-scheduler.service
Source12: openstack-cinder-volume.service
Source13: openstack-cinder-backup.service
Source20: cinder-sudoers
Source21: restart-cinder
Source22: cinder-purge-deleted-active
BuildArch: noarch
BuildRequires: intltool
BuildRequires: python-d2to1
BuildRequires: python-openstackdocstheme
BuildRequires: python-pbr
BuildRequires: python-reno
BuildRequires: python-sphinx
BuildRequires: python2-devel
BuildRequires: python-setuptools
BuildRequires: python2-pip
BuildRequires: python2-wheel
BuildRequires: python-netaddr
BuildRequires: systemd
BuildRequires: git
BuildRequires: openstack-macros
BuildRequires: os-brick
BuildRequires: pyparsing
BuildRequires: pytz
BuildRequires: python-decorator
BuildRequires: openstack-macros
# Required to build cinder.conf
BuildRequires: python-google-api-client >= 1.4.2
BuildRequires: python-keystonemiddleware
BuildRequires: python-glanceclient >= 1:2.8.0
#BuildRequires: python-novaclient >= 1:9.0.0
BuildRequires: python-novaclient >= 2.29.0
BuildRequires: python-swiftclient >= 3.2.0
BuildRequires: python-oslo-db
BuildRequires: python-oslo-config >= 2:4.0.0
BuildRequires: python-oslo-policy
BuildRequires: python-oslo-reports
BuildRequires: python-oslotest
BuildRequires: python-oslo-utils
BuildRequires: python-oslo-versionedobjects
BuildRequires: python-oslo-vmware
BuildRequires: python-os-win
BuildRequires: python-castellan
BuildRequires: python-cryptography
BuildRequires: python-lxml
BuildRequires: python-osprofiler
BuildRequires: python-paramiko
BuildRequires: python-suds
BuildRequires: python-taskflow
BuildRequires: python-tooz
BuildRequires: python-oslo-log
BuildRequires: python-oslo-i18n
BuildRequires: python-barbicanclient
BuildRequires: python-requests
BuildRequires: python-retrying
# Required to compile translation files
BuildRequires: python-babel
# Needed for unit tests
BuildRequires: python-ddt
BuildRequires: python-fixtures
BuildRequires: python-mock
BuildRequires: python-oslotest
BuildRequires: python-subunit
BuildRequires: python-testtools
BuildRequires: python-testrepository
BuildRequires: python-testresources
BuildRequires: python-testscenarios
BuildRequires: python-os-testr
BuildRequires: python-rtslib
Requires: python-cinder = %{epoch}:%{version}-%{release}
# we dropped the patch to remove PBR for Delorean
Requires: python-pbr
# as a convenience
Requires: python-cinderclient
Requires(post): systemd
Requires(preun): systemd
Requires(postun): systemd
Requires(pre): shadow-utils
Requires: lvm2
Requires: python-osprofiler
Requires: python-rtslib
# Required for EMC VNX driver
Requires: python2-storops
%description
%{common_desc}
%package -n python-cinder
Summary: OpenStack Volume Python libraries
Group: Applications/System
Requires: sudo
Requires: qemu-img
Requires: sysfsutils
Requires: os-brick >= 1.15.2
Requires: python-paramiko >= 2.0
Requires: python-simplejson >= 2.2.0
Requires: python-castellan >= 0.7.0
Requires: python-eventlet >= 0.18.2
Requires: python-greenlet >= 0.3.2
Requires: python-iso8601 >= 0.1.11
Requires: python-lxml >= 2.3
Requires: python-stevedore >= 1.20.0
Requires: python-suds
Requires: python-tooz >= 1.47.0
Requires: python-sqlalchemy >= 1.0.10
Requires: python-migrate >= 0.11.0
Requires: python-paste-deploy
Requires: python-routes >= 2.3.1
Requires: python-webob >= 1.7.1
Requires: python-glanceclient >= 1:2.8.0
Requires: python-swiftclient >= 3.2.0
Requires: python-keystoneclient >= 3.8.0
#Requires: python-novaclient >= 1:9.0.0
Requires: python-novaclient >= 2.29.0
Requires: python-oslo-config >= 2:4.0.0
Requires: python-six >= 1.9.0
Requires: python-psutil >= 3.2.2
Requires: python-babel
Requires: python-google-api-client >= 1.4.2
Requires: python-oslo-rootwrap >= 5.0.0
Requires: python-oslo-utils >= 3.20.0
Requires: python-oslo-serialization >= 1.10.0
Requires: python-oslo-db >= 4.24.0
Requires: python-oslo-context >= 2.14.0
Requires: python-oslo-concurrency >= 3.8.0
Requires: python-oslo-middleware >= 3.27.0
Requires: python-taskflow >= 2.7.0
Requires: python-oslo-messaging >= 5.24.2
Requires: python-oslo-policy >= 1.23.0
Requires: python-oslo-reports >= 0.6.0
Requires: python-oslo-service >= 1.10.0
Requires: python-oslo-versionedobjects >= 1.19.0
Requires: iscsi-initiator-utils
Requires: python-osprofiler >= 1.4.0
Requires: python-httplib2 >= 0.7.5
Requires: python-oauth2client >= 1.5.0
Requires: python-oslo-log >= 3.22.0
Requires: python-oslo-i18n >= 2.1.0
Requires: python-barbicanclient >= 4.0.0
Requires: python-requests >= 2.10.0
Requires: python-retrying >= 1.2.3
Requires: pyparsing >= 2.0.7
Requires: pytz
Requires: python-decorator
Requires: python-enum34
Requires: python-ipaddress
Requires: python-keystonemiddleware >= 4.12.0
Requires: python-keystoneauth1 >= 3.1.0
Requires: python-oslo-privsep >= 1.9.0
Requires: python-cryptography >= 1.6
%description -n python-cinder
%{common_desc}
This package contains the cinder Python library.
%package -n python-cinder-tests
Summary: Cinder tests
Requires: openstack-cinder = %{epoch}:%{version}-%{release}
# Added test requirements
Requires: python-hacking
Requires: python-anyjson
Requires: python-coverage
Requires: python-ddt
Requires: python-fixtures
Requires: python-mock
Requires: python-mox3
Requires: python-oslotest
Requires: python-subunit
Requires: python-testtools
Requires: python-testrepository
Requires: python-testresources
Requires: python-testscenarios
Requires: python-os-testr
Requires: python-tempest
%description -n python-cinder-tests
%{common_desc}
This package contains the Cinder test files.
%if 0%{?with_doc}
%package doc
Summary: Documentation for OpenStack Volume
Group: Documentation
Requires: %{name} = %{epoch}:%{version}-%{release}
BuildRequires: graphviz
# Required to build module documents
BuildRequires: python-eventlet
BuildRequires: python-routes
BuildRequires: python-sqlalchemy
BuildRequires: python-webob
BuildRequires: python-stevedore
# while not strictly required, quiets the build down when building docs.
BuildRequires: python-migrate
BuildRequires: python-iso8601 >= 0.1.9
%description doc
%{common_desc}
This package contains documentation files for cinder.
%endif
%prep
%autosetup -n cinder-%{upstream_version} -S git
find . \( -name .gitignore -o -name .placeholder \) -delete
find cinder -name \*.py -exec sed -i '/\/usr\/bin\/env python/{d;q}' {} +
#sed -i 's/%{version}.%{milestone}/%{version}/' PKG-INFO
# Remove the requirements file so that pbr hooks don't add it
# to distutils requires_dist config
%py_req_cleanup
%build
# Generate config file
PYTHONPATH=. oslo-config-generator --config-file=cinder/config/cinder-config-generator.conf
# WRS: Put this pre-built config file in place of the generated one as it is
# not currently being generated correctly
cp %{SOURCE3} etc/cinder/cinder.conf.sample
# Build
export PBR_VERSION=%{version}
%{__python2} setup.py build
# Generate i18n files
# (amoralej) we can remove '-D cinder' once https://review.openstack.org/#/c/439501/ is merged
%{__python2} setup.py compile_catalog -d build/lib/%{pypi_name}/locale -D cinder
%py2_build_wheel
%install
export PBR_VERSION=%{version}
%{__python2} setup.py install -O1 --skip-build --root %{buildroot}
mkdir -p $RPM_BUILD_ROOT/wheels
install -m 644 dist/*.whl $RPM_BUILD_ROOT/wheels/
# Create fake egg-info for the tempest plugin
# TODO switch to %{service} everywhere as in openstack-example.spec
%global service cinder
%py2_entrypoint %{service} %{service}
# docs generation requires everything to be installed first
export PYTHONPATH="$( pwd ):$PYTHONPATH"
%if 0%{?with_doc}
%{__python2} setup.py build_sphinx --builder html
# Fix hidden-file-or-dir warnings
rm -fr doc/build/html/.buildinfo
%endif
%{__python2} setup.py build_sphinx --builder man
mkdir -p %{buildroot}%{_mandir}/man1
install -p -D -m 644 doc/build/man/*.1 %{buildroot}%{_mandir}/man1/
# Setup directories
install -d -m 755 %{buildroot}%{_sharedstatedir}/cinder
install -d -m 755 %{buildroot}%{_sharedstatedir}/cinder/tmp
install -d -m 755 %{buildroot}%{_localstatedir}/log/cinder
# Install config files
install -d -m 755 %{buildroot}%{_sysconfdir}/cinder
install -p -D -m 640 %{SOURCE1} %{buildroot}%{_datadir}/cinder/cinder-dist.conf
install -d -m 755 %{buildroot}%{_sysconfdir}/cinder/volumes
install -p -D -m 640 etc/cinder/rootwrap.conf %{buildroot}%{_sysconfdir}/cinder/rootwrap.conf
install -p -D -m 640 etc/cinder/api-paste.ini %{buildroot}%{_sysconfdir}/cinder/api-paste.ini
install -p -D -m 640 etc/cinder/policy.json %{buildroot}%{_sysconfdir}/cinder/policy.json
install -p -D -m 640 etc/cinder/cinder.conf.sample %{buildroot}%{_sysconfdir}/cinder/cinder.conf
# Install initscripts for services
install -p -D -m 644 %{SOURCE10} %{buildroot}%{_unitdir}/openstack-cinder-api.service
install -p -D -m 644 %{SOURCE11} %{buildroot}%{_unitdir}/openstack-cinder-scheduler.service
install -p -D -m 644 %{SOURCE12} %{buildroot}%{_unitdir}/openstack-cinder-volume.service
install -p -D -m 644 %{SOURCE13} %{buildroot}%{_unitdir}/openstack-cinder-backup.service
# Install sudoers
install -p -D -m 440 %{SOURCE20} %{buildroot}%{_sysconfdir}/sudoers.d/cinder
# Install pid directory
install -d -m 755 %{buildroot}%{_localstatedir}/run/cinder
# Install rootwrap files in /usr/share/cinder/rootwrap
mkdir -p %{buildroot}%{_datarootdir}/cinder/rootwrap/
install -p -D -m 644 etc/cinder/rootwrap.d/* %{buildroot}%{_datarootdir}/cinder/rootwrap/
# Symlinks to rootwrap config files
mkdir -p %{buildroot}%{_sysconfdir}/cinder/rootwrap.d
for filter in %{_datarootdir}/os-brick/rootwrap/*.filters; do
ln -s $filter %{buildroot}%{_sysconfdir}/cinder/rootwrap.d/
done
# Install i18n .mo files (.po and .pot are not required)
install -d -m 755 %{buildroot}%{_datadir}
rm -f %{buildroot}%{python2_sitelib}/%{pypi_name}/locale/*/LC_*/%{pypi_name}*po
rm -f %{buildroot}%{python2_sitelib}/%{pypi_name}/locale/*pot
mv %{buildroot}%{python2_sitelib}/%{pypi_name}/locale %{buildroot}%{_datadir}/locale
# Find language files
%find_lang %{pypi_name} --all-name
# Remove unneeded in production stuff
rm -f %{buildroot}%{_bindir}/cinder-all
rm -f %{buildroot}%{_bindir}/cinder-debug
rm -fr %{buildroot}%{python2_sitelib}/run_tests.*
rm -f %{buildroot}/usr/share/doc/cinder/README*
# FIXME(jpena): unit tests are taking too long in the current DLRN infra
# Until we have a better architecture, let's not run them when under DLRN
%if 0%{!?dlrn}
%check
OS_TEST_PATH=./cinder/tests/unit ostestr --concurrency=2
%endif
# WRS: in-service restarts
install -p -D -m 700 %{SOURCE21} %{buildroot}%{_bindir}/restart-cinder
# WRS: purge cron
install -p -D -m 755 %{SOURCE22} %{buildroot}%{_bindir}/cinder-purge-deleted-active
%pre
getent group cinder >/dev/null || groupadd -r cinder --gid 165
if ! getent passwd cinder >/dev/null; then
useradd -u 165 -r -g cinder -G cinder,nobody -d %{_sharedstatedir}/cinder -s /sbin/nologin -c "OpenStack Cinder Daemons" cinder
fi
exit 0
%post
%systemd_post openstack-cinder-volume
%systemd_post openstack-cinder-api
%systemd_post openstack-cinder-scheduler
%systemd_post openstack-cinder-backup
%preun
%systemd_preun openstack-cinder-volume
%systemd_preun openstack-cinder-api
%systemd_preun openstack-cinder-scheduler
%systemd_preun openstack-cinder-backup
%postun
%systemd_postun_with_restart openstack-cinder-volume
%systemd_postun_with_restart openstack-cinder-api
%systemd_postun_with_restart openstack-cinder-scheduler
%systemd_postun_with_restart openstack-cinder-backup
%files
%dir %{_sysconfdir}/cinder
%config(noreplace) %attr(-, root, cinder) %{_sysconfdir}/cinder/cinder.conf
%config(noreplace) %attr(-, root, cinder) %{_sysconfdir}/cinder/api-paste.ini
%config(noreplace) %attr(-, root, cinder) %{_sysconfdir}/cinder/rootwrap.conf
%config(noreplace) %attr(-, root, cinder) %{_sysconfdir}/cinder/policy.json
%config(noreplace) %{_sysconfdir}/sudoers.d/cinder
%{_sysconfdir}/cinder/rootwrap.d/
%attr(-, root, cinder) %{_datadir}/cinder/cinder-dist.conf
%dir %attr(0750, cinder, root) %{_localstatedir}/log/cinder
%dir %attr(0755, cinder, root) %{_localstatedir}/run/cinder
%dir %attr(0755, cinder, root) %{_sysconfdir}/cinder/volumes
%{_bindir}/cinder-*
%{_unitdir}/*.service
%{_datarootdir}/cinder
%{_mandir}/man1/cinder*.1.gz
#WRS: in-service patching
%{_bindir}/restart-cinder
#WRS: purge cron
%{_bindir}/cinder-purge-deleted-active
%defattr(-, cinder, cinder, -)
%dir %{_sharedstatedir}/cinder
%dir %{_sharedstatedir}/cinder/tmp
%files -n python-cinder -f %{pypi_name}.lang
%{?!_licensedir: %global license %%doc}
%license LICENSE
%{python2_sitelib}/cinder
%{python2_sitelib}/cinder-*.egg-info
%exclude %{python2_sitelib}/cinder/tests
%files -n python-cinder-tests
%license LICENSE
%{python2_sitelib}/cinder/tests
%{python2_sitelib}/%{service}_tests.egg-info
%if 0%{?with_doc}
%files doc
%doc doc/build/html
%endif
%package wheels
Summary: %{name} wheels
%description wheels
Contains python wheels for %{name}
%files wheels
/wheels/*
%changelog
* Wed Aug 30 2017 rdo-trunk <javier.pena@redhat.com> 1:11.0.0-1
- Update to 11.0.0
* Fri Aug 25 2017 Alfredo Moralejo <amoralej@redhat.com> 1:11.0.0-0.2.0rc2
- Update to 11.0.0.0rc2
* Tue Aug 22 2017 Alfredo Moralejo <amoralej@redhat.com> 1:11.0.0-0.1.0rc1
- Update to 11.0.0.0rc1

View File

@ -1 +0,0 @@
TIS_PATCH_VER=0

View File

@ -1,25 +0,0 @@
From 89c5d60116569e1446c62d652d30eb0a21130193 Mon Sep 17 00:00:00 2001
From: Daniel Badea <daniel.badea@windriver.com>
Date: Thu, 2 Nov 2017 18:26:56 +0200
Subject: [PATCH 1/2] Update package versioning for TIS format
---
SPECS/python-glance-store.spec | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/SPECS/python-glance-store.spec b/SPECS/python-glance-store.spec
index 831aba4..387a326 100644
--- a/SPECS/python-glance-store.spec
+++ b/SPECS/python-glance-store.spec
@@ -7,7 +7,7 @@
Name: python-glance-store
Version: 0.22.0
-Release: 1%{?dist}
+Release: 1%{?_tis_dist}.%{tis_patch_ver}
Summary: OpenStack Image Service Store Library
License: ASL 2.0
--
2.7.4

View File

@ -1,34 +0,0 @@
From aef63b7fcf58613c233cbe519b814f5366522299 Mon Sep 17 00:00:00 2001
From: Daniel Badea <daniel.badea@windriver.com>
Date: Thu, 2 Nov 2017 18:29:54 +0200
Subject: [PATCH 2/2] Check ceph cluster free space
---
SPECS/python-glance-store.spec | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/SPECS/python-glance-store.spec b/SPECS/python-glance-store.spec
index 387a326..b60bd97 100644
--- a/SPECS/python-glance-store.spec
+++ b/SPECS/python-glance-store.spec
@@ -13,6 +13,8 @@ Summary: OpenStack Image Service Store Library
License: ASL 2.0
URL: https://github.com/openstack/%{upstream_name}
Source0: https://tarballs.openstack.org/%{upstream_name}/%{upstream_name}-%{upstream_version}.tar.gz
+# WRS
+Patch0001: 0001-Check-ceph-cluster-free-space-before-creating-image.patch
BuildArch: noarch
@@ -84,6 +86,8 @@ Requires: python3-oslo-privsep >= 1.9.0
%prep
%setup -q -n %{upstream_name}-%{upstream_version}
+# Apply WRS patches
+%patch0001 -p1
%build
%py2_build
--
2.7.4

View File

@ -1,32 +0,0 @@
From bcdcfce2587d467d756d7493f40a41167c2b68ed Mon Sep 17 00:00:00 2001
From: Stefan Dinescu <stefan.dinescu@windriver.com>
Date: Thu, 16 Nov 2017 17:44:10 +0000
Subject: Meta Glance Driver
---
SPECS/python-glance-store.spec | 2 ++
1 file changed, 2 insertions(+)
diff --git a/SPECS/python-glance-store.spec b/SPECS/python-glance-store.spec
index b60bd97..0c0d728 100644
--- a/SPECS/python-glance-store.spec
+++ b/SPECS/python-glance-store.spec
@@ -15,6 +15,7 @@ URL: https://github.com/openstack/%{upstream_name}
Source0: https://tarballs.openstack.org/%{upstream_name}/%{upstream_name}-%{upstream_version}.tar.gz
# WRS
Patch0001: 0001-Check-ceph-cluster-free-space-before-creating-image.patch
+Patch0002: 0002-Add-glance-driver.patch
BuildArch: noarch
@@ -88,6 +89,7 @@ Requires: python3-oslo-privsep >= 1.9.0
# Apply WRS patches
%patch0001 -p1
+%patch0002 -p1
%build
%py2_build
--
2.7.4

View File

@ -1,32 +0,0 @@
From fc9b9d397b503eeff6585310d8de177b051a11e9 Mon Sep 17 00:00:00 2001
From: Elena Taivan <elena.taivan@windriver.com>
Date: Wed, 6 Jun 2018 10:02:56 +0000
Subject: [PATCH 1/1] meta Add glance schedulre greenthreads
---
SPECS/python-glance-store.spec | 2 ++
1 file changed, 2 insertions(+)
diff --git a/SPECS/python-glance-store.spec b/SPECS/python-glance-store.spec
index 0c0d728..977b2bc 100644
--- a/SPECS/python-glance-store.spec
+++ b/SPECS/python-glance-store.spec
@@ -16,6 +16,7 @@ Source0: https://tarballs.openstack.org/%{upstream_name}/%{upstream_name}
# WRS
Patch0001: 0001-Check-ceph-cluster-free-space-before-creating-image.patch
Patch0002: 0002-Add-glance-driver.patch
+Patch0003: 0003-Add-glance-schedule-greenthreads.patch
BuildArch: noarch
@@ -90,6 +91,7 @@ Requires: python3-oslo-privsep >= 1.9.0
# Apply WRS patches
%patch0001 -p1
%patch0002 -p1
+%patch0003 -p1
%build
%py2_build
--
1.8.3.1

View File

@ -1,32 +0,0 @@
From e314323f74f4b434b812baccc444a0724abe507b Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Thu, 14 Jun 2018 16:42:20 -0400
Subject: [PATCH 1/1] Add image download support to DC config
---
SPECS/python-glance-store.spec | 2 ++
1 file changed, 2 insertions(+)
diff --git a/SPECS/python-glance-store.spec b/SPECS/python-glance-store.spec
index 977b2bc..54442bb 100644
--- a/SPECS/python-glance-store.spec
+++ b/SPECS/python-glance-store.spec
@@ -17,6 +17,7 @@ Source0: https://tarballs.openstack.org/%{upstream_name}/%{upstream_name}
Patch0001: 0001-Check-ceph-cluster-free-space-before-creating-image.patch
Patch0002: 0002-Add-glance-driver.patch
Patch0003: 0003-Add-glance-schedule-greenthreads.patch
+Patch0004: 0004-Add-image-download-support-to-DC-config.patch
BuildArch: noarch
@@ -92,6 +93,7 @@ Requires: python3-oslo-privsep >= 1.9.0
%patch0001 -p1
%patch0002 -p1
%patch0003 -p1
+%patch0004 -p1
%build
%py2_build
--
2.7.4

View File

@ -1,5 +0,0 @@
0001-Update-package-versioning-for-TIS-format.patch
0002-meta-patch-Check-ceph-cluster-free-space.patch
0003-meta-patch-Glance-Driver.patch
0004-meta-Add-glance-schedulre-greenthreads.patch
0005-Add-image-download-support-to-DC-config.patch

View File

@ -1,298 +0,0 @@
From 8dbde864bbdd4302918e91ac696b0ae95f698b36 Mon Sep 17 00:00:00 2001
From: Daniel Badea <daniel.badea@windriver.com>
Date: Thu, 2 Nov 2017 21:07:24 +0200
Subject: [PATCH] Check available ceph space before creating image
---
glance_store/_drivers/rbd.py | 159 +++++++++++++++++++++++++++++-
glance_store/tests/unit/test_rbd_store.py | 17 +++-
tox.ini | 2 +-
3 files changed, 170 insertions(+), 8 deletions(-)
diff --git a/glance_store/_drivers/rbd.py b/glance_store/_drivers/rbd.py
index 7b803bc..9895472 100644
--- a/glance_store/_drivers/rbd.py
+++ b/glance_store/_drivers/rbd.py
@@ -18,11 +18,15 @@
from __future__ import absolute_import
from __future__ import with_statement
+import ast
import contextlib
import hashlib
+import json
import logging
import math
+from oslo_concurrency import lockutils
+from oslo_concurrency import processutils
from oslo_config import cfg
from oslo_utils import units
from six.moves import urllib
@@ -46,6 +50,10 @@ DEFAULT_CONFFILE = '/etc/ceph/ceph.conf'
DEFAULT_USER = None # let librados decide based on the Ceph conf file
DEFAULT_CHUNKSIZE = 8 # in MiB
DEFAULT_SNAPNAME = 'snap'
+DEFAULT_POOL_RESERVATION_FILE = '/var/run/glance-space-reservations'
+LOCK_DIR = "/tmp"
+LOCK_PREFIX = "glance_"
+LOCK_RBD_USAGE = "rbd_cluster_usage"
LOG = logging.getLogger(__name__)
@@ -344,8 +352,117 @@ class Store(driver.Store):
LOG.debug(msg)
raise exceptions.NotFound(msg)
+ def validate_available_space(self, ioctx, image_name,
+ image_size, total_space=0,
+ reserved=0, ignore=[]):
+ """
+ Checks if there is sufficient free space in the
+ ceph cluster to put the whole image in
+
+ :param image_size: the size of the new image
+ :param total_space: total cluster guaranteed space for images
+ :param reserved: space reserved for other uses
+ :param ignore: list of image names to ignore in space computation
+
+ Raises an exception if there is not enough free space
+ """
+
+ pool_name = ioctx.name
+
+ # Get free space if there is no space guarantee (e.g. no quota set)
+ if total_space == 0:
+ cmd = ('env', 'LC_ALL=C', 'ceph', 'df',
+ '--format', 'json')
+ out, err = processutils.execute(*cmd)
+ ceph_df_pools = json.loads(out).get('pools', [])
+ for pool in ceph_df_pools:
+ if pool_name == pool.get("name", "") and 'stats' in pool:
+ # Leave space to avoid cluster filling up as some
+ # other processes can write at the same time we import
+ # our image.
+ total_space = pool['stats'].get("max_avail", 0) * 0.99
+ break
+ else:
+ msg = ("Query of max available space in %(pool) failed."
+ "cmd: %(cmd)s, stdout: %(stdout)s, "
+ "stderr: %(stderr)s" %
+ {"pool": pool_name, "cmd": cmd,
+ "stdout": out, "stderr": err})
+ LOG.error(msg)
+ raise exceptions.GlanceStoreException(message=msg)
+
+ # Get used space by all images in pool
+ # NOTE: There is no librbd python API for getting real space usage
+ cmd = ('env', 'LC_ALL=C', 'rbd', 'du', '-p', pool_name,
+ '--format', 'json')
+ out, err = processutils.execute(*cmd, check_exit_code=[0, 2])
+ if out:
+ image_list = json.loads(out).get("images", [])
+ else:
+ image_list = []
+
+ # Compute occupied space
+ # NOTE: RBD images can be sparse, in this case real disk usage is
+ # lower than provisioned. All glance images real size equals
+ # provisioned space while raw cache are sparse. Moreover, delta
+ # between RAW caching provisioned and real sizes usually is high.
+ # For example, a CentOS cloud image uses approx. 900MB of real
+ # space yet provisions 8GB. Therefore, we want the real usage and
+ # not to waste space on provisioned capacity that is never used.
+ occupied_space = 0
+ image_id = ""
+ for image in image_list:
+ image_id = image.get("name", "")
+ # Process ignores
+ for img in ignore:
+ if img == image_id:
+ continue
+ # Sanitize input
+ if "used_size" not in image or "provisioned_size" not in image:
+ LOG.error("Image disk usage query failure for "
+ "image: %(id)s. cmd: %(cmd)s, "
+ "stdout: %(stdout)s, stderr: %(stderr)s" %
+ {"id": image_id, "cmd": cmd,
+ "stdout": out, "stderr": err})
+ # Get image usage
+ if "_raw" in image_id:
+ if image.get("snapshot", None) != "snap":
+ # Each image is listed twice, after import, only snapshots
+ # display 'used_size' correctly
+ continue
+ # Get raw cached images real used space
+ size = image["used_size"]
+ else:
+ if image.get("snapshot", None) == "snap":
+ # Before import, there is no snapshot and we also need
+ # reserved space during glance image creation
+ continue
+ # Get glance images provisioned space
+ size = image["provisioned_size"]
+ occupied_space += size
+ LOG.debug("Image %(id)s used RBD space is: %(used_size)s" %
+ {"id": image_id, "used_size": image_size})
+
+ # Verify if there is enough space to proceed
+ data = {"image": image_id,
+ "pool": pool_name,
+ "used": occupied_space // 2 ** 20,
+ "needed": image_size // 2 ** 20,
+ "available": (total_space - occupied_space) // 2 ** 20,
+ "reserved": reserved // 2 ** 20}
+ LOG.info("Requesting %(needed)s MB for image %(image)s in "
+ "Ceph %(pool)s pool. Used: %(used)s MB. Available: "
+ "%(available)s MB (where %(reserved)s reserved)" % data)
+ if (total_space and image_size and
+ occupied_space + image_size + reserved > total_space):
+ msg = (_('Not enough cluster free space for image %s.') %
+ image_name)
+ LOG.error(msg)
+ raise exceptions.StorageFull(message=msg)
+
+ @lockutils.synchronized(LOCK_RBD_USAGE, LOCK_PREFIX, True, LOCK_DIR)
def _create_image(self, fsid, conn, ioctx, image_name,
- size, order, context=None):
+ size, order, total_available_space, context=None):
"""
Create an rbd image. If librbd supports it,
make it a cloneable snapshot, so that copy-on-write
@@ -356,6 +473,34 @@ class Store(driver.Store):
:retval: `glance_store.rbd.StoreLocation` object
"""
librbd = rbd.RBD()
+
+ # Get space reserved by RAW Caching feature
+ # NOTE: Real space is updated on the fly while an image is added to
+ # RBD (i.e. with 'rbd import') so we will know how big an image is
+ # only after its imported. Also, due to sparse mode provisioned RBD
+ # space is higher than real usage. Therefore we need to get a better
+ # value closer to what we will have as real usage in RBD, and this
+ # has to come from raw caching itself.
+ try:
+ out = None
+ with open(DEFAULT_POOL_RESERVATION_FILE, "r") as f:
+ out = f.read()
+ data = ast.literal_eval(out)
+ reserved = data.get("reserved", 0)
+ img_under_caching = ([data["image_id"]] if
+ "image_id" in data else [])
+ except IOError:
+ # In case reservation file does not exist
+ reserved, img_under_caching = (0, [])
+ except Exception:
+ # In case of any other error ignore reservations
+ LOG.error("Failed parsing: %s" % out)
+ reserved, img_under_caching = (0, [])
+
+ self.validate_available_space(
+ ioctx, image_name, size, total_available_space,
+ reserved, img_under_caching)
+
features = conn.conf_get('rbd_default_features')
if ((features is None) or (int(features) == 0)):
features = rbd.RBD_FEATURE_LAYERING
@@ -464,9 +609,19 @@ class Store(driver.Store):
"resize-before-write for each chunk which "
"will be considerably slower than normal"))
+ ceph_quota_output = json.loads(
+ conn.mon_command(
+ json.dumps({
+ "prefix": "osd pool get-quota",
+ "pool": self.pool,
+ "format": "json-pretty"}), "")[1])
+
+ glance_ceph_quota = ceph_quota_output.get("quota_max_bytes", 0)
+
try:
loc = self._create_image(fsid, conn, ioctx, image_name,
- image_size, order)
+ image_size, order,
+ glance_ceph_quota)
except rbd.ImageExists:
msg = _('RBD image %s already exists') % image_id
raise exceptions.Duplicate(message=msg)
diff --git a/glance_store/tests/unit/test_rbd_store.py b/glance_store/tests/unit/test_rbd_store.py
index 9765aa3..34ab7b4 100644
--- a/glance_store/tests/unit/test_rbd_store.py
+++ b/glance_store/tests/unit/test_rbd_store.py
@@ -69,6 +69,9 @@ class MockRados(object):
def conf_get(self, *args, **kwargs):
pass
+ def mon_command(self, *args, **kwargs):
+ return ["{}", "{}"]
+
class MockRBD(object):
@@ -152,7 +155,7 @@ class MockRBD(object):
pass
def list(self, *args, **kwargs):
- raise NotImplementedError()
+ return []
def clone(self, *args, **kwargs):
raise NotImplementedError()
@@ -184,7 +187,8 @@ class TestStore(base.StoreBaseTest,
self.data_len = 3 * units.Ki
self.data_iter = six.BytesIO(b'*' * self.data_len)
- def test_add_w_image_size_zero(self):
+ @mock.patch.object(rbd_store.Store, 'validate_available_space')
+ def test_add_w_image_size_zero(self, validate_available_space):
"""Assert that correct size is returned even though 0 was provided."""
self.store.chunk_size = units.Ki
with mock.patch.object(rbd_store.rbd.Image, 'resize') as resize:
@@ -234,7 +238,8 @@ class TestStore(base.StoreBaseTest,
'fake_image_id', self.data_iter, self.data_len)
self.called_commands_expected = ['create']
- def test_add_with_verifier(self):
+ @mock.patch.object(rbd_store.Store, 'validate_available_space')
+ def test_add_with_verifier(self, validate_available_space):
"""Assert 'verifier.update' is called when verifier is provided."""
self.store.chunk_size = units.Ki
verifier = mock.MagicMock(name='mock_verifier')
@@ -403,7 +408,8 @@ class TestStore(base.StoreBaseTest,
pass
self.assertRaises(exceptions.BackendException, test)
- def test_create_image_conf_features(self):
+ @mock.patch.object(rbd_store.Store, 'validate_available_space')
+ def test_create_image_conf_features(self, validate_available_space):
# Tests that we use non-0 features from ceph.conf and cast to int.
fsid = 'fake'
features = '3'
@@ -413,9 +419,10 @@ class TestStore(base.StoreBaseTest,
name = '1'
size = 1024
order = 3
+ ceph_size = 0
with mock.patch.object(rbd_store.rbd.RBD, 'create') as create_mock:
location = self.store._create_image(
- fsid, conn, ioctxt, name, size, order)
+ fsid, conn, ioctxt, name, size, order, ceph_size)
self.assertEqual(fsid, location.specs['fsid'])
self.assertEqual(rbd_store.DEFAULT_POOL, location.specs['pool'])
self.assertEqual(name, location.specs['image'])
diff --git a/tox.ini b/tox.ini
index 2e5a2f8..426c024 100644
--- a/tox.ini
+++ b/tox.ini
@@ -27,7 +27,7 @@ commands =
# B101 - Use of assert detected.
# B110 - Try, Except, Pass detected.
# B303 - Use of insecure MD2, MD4, or MD5 hash function.
- bandit -r glance_store -x tests --skip B101,B110,B303
+ bandit -r glance_store -x tests --skip B101,B110,B303,B108
[testenv:bandit]
# NOTE(browne): This is required for the integration test job of the bandit
--
2.7.4
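For reference, the space check in the patch above shells out to two Ceph commands and parses their JSON output: it reads pools[].stats.max_avail from "ceph df" and images[].provisioned_size / images[].used_size from "rbd du". A minimal sketch of those calls follows; the pool name is an assumption used only for illustration.

#!/bin/bash
# Pool the glance RBD store writes to (illustrative; the driver uses self.pool).
POOL="images"
# Cluster and per-pool usage, including max_avail for each pool, as JSON.
env LC_ALL=C ceph df --format json
# Per-image provisioned and real (used) sizes for every RBD image in the pool.
env LC_ALL=C rbd du -p "$POOL" --format json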

View File

@ -1,250 +0,0 @@
From 6da11c584cab0e2ff396cc0208453a3e19b4dc2d Mon Sep 17 00:00:00 2001
From: Stefan Dinescu <stefan.dinescu@windriver.com>
Date: Fri, 17 Nov 2017 15:50:23 +0000
Subject: [PATCH 1/1] Add glance driver
---
glance_store/_drivers/glance.py | 210 ++++++++++++++++++++++++++++++++++++++++
setup.cfg | 2 +
2 files changed, 212 insertions(+)
create mode 100644 glance_store/_drivers/glance.py
diff --git a/glance_store/_drivers/glance.py b/glance_store/_drivers/glance.py
new file mode 100644
index 0000000..554a5a1
--- /dev/null
+++ b/glance_store/_drivers/glance.py
@@ -0,0 +1,210 @@
+# Copyright (c) 2013-2017 Wind River Systems, Inc.
+# SPDX-License-Identifier: Apache-2.0
+#
+#
+#
+#
+
+# vim: tabstop=4 shiftwidth=4 softtabstop=4
+
+# All Rights Reserved.
+#
+
+"""Storage backend for glance"""
+
+import contextlib
+import errno
+import hashlib
+import logging
+import math
+import os
+import socket
+import time
+
+from oslo_concurrency import processutils
+from oslo_config import cfg
+from oslo_utils import units
+
+from glance_store import capabilities
+from glance_store.common import utils
+import glance_store.driver
+from glance_store import exceptions
+from glance_store.i18n import _, _LE, _LW, _LI
+import glance_store.location
+from keystoneclient import exceptions as keystone_exc
+from keystoneclient import service_catalog as keystone_sc
+
+import keystoneauth1.loading
+import keystoneauth1.session
+
+from glanceclient import client as glance_client
+from cinderclient import exceptions as glance_exception
+
+CONF = cfg.CONF
+LOG = logging.getLogger(__name__)
+
+_GLANCE_OPTS = [
+ cfg.StrOpt('glance_endpoint_template',
+ default=None,
+ help=_("Glance Endpoint template")),
+ cfg.StrOpt('glance_catalog_info',
+ default='image:glance:internalURL',
+ help=_("Glance catalog info")),]
+
+def get_glanceclient(conf, remote_region, context=None):
+
+ glance_store = conf.glance_store
+
+ if glance_store.glance_endpoint_template:
+ url = glance_store.glance_endpoint_template % context.to_dict()
+ else:
+ info = glance_store.glance_catalog_info
+ service_type, service_name, endpoint_type = info.split(':')
+ sc = {'serviceCatalog': context.service_catalog}
+ try:
+ url = keystone_sc.ServiceCatalogV2(sc).url_for(
+ region_name=remote_region,
+ service_type=service_type,
+ service_name=service_name,
+ endpoint_type=endpoint_type)
+ except keystone_exc.EndpointNotFound:
+ reason = _("Failed to find Glance from a service catalog.")
+ raise exceptions.BadStoreConfiguration(store_name="glance",
+ reason=reason)
+
+ c = glance_client.Client('2',
+ endpoint=url,
+ token=context.auth_token)
+
+ return c
+
+
+class StoreLocation(glance_store.location.StoreLocation):
+
+ """Class describing a Glance URI."""
+
+ def process_specs(self):
+ self.scheme = self.specs.get('scheme', 'glance')
+ self.image_id = self.specs.get('image_id')
+ self.remote_region = self.specs.get('remote_region')
+
+ def get_uri(self):
+ return "glance://%s/%s" % (self.remote_region, self.image_id)
+
+ def parse_uri(self, uri):
+
+ if not uri.startswith('glance://'):
+ reason = _("URI must start with 'glance://'")
+ LOG.info(reason)
+ raise exceptions.BadStoreUri(message=reason)
+
+ self.scheme = 'glance'
+
+ sp = uri.split('/')
+
+ self.image_id = sp[-1]
+ self.remote_region = sp[-2]
+
+ if not utils.is_uuid_like(self.image_id):
+ reason = _("URI contains invalid image ID")
+ LOG.info(reason)
+ raise exceptions.BadStoreUri(message=reason)
+
+
+
+class Store(glance_store.driver.Store):
+
+ """Cinder backend store adapter."""
+
+ _CAPABILITIES = (capabilities.BitMasks.READ_ACCESS |
+ capabilities.BitMasks.DRIVER_REUSABLE)
+ OPTIONS = _GLANCE_OPTS
+ EXAMPLE_URL = "glance://<remote_region>/<image_id>"
+
+ def __init__(self, *args, **kargs):
+ super(Store, self).__init__(*args, **kargs)
+
+ def get_schemes(self):
+ return ('glance',)
+
+ def _check_context(self, context, require_tenant=False):
+
+ if context is None:
+ reason = _("Glance storage requires a context.")
+ raise exceptions.BadStoreConfiguration(store_name="glance",
+ reason=reason)
+ if context.service_catalog is None:
+ reason = _("glance storage requires a service catalog.")
+ raise exceptions.BadStoreConfiguration(store_name="glance",
+ reason=reason)
+
+
+ @capabilities.check
+ def get(self, location, offset=0, chunk_size=None, context=None):
+ """
+ Takes a `glance_store.location.Location` object that indicates
+ where to find the image file, and returns a tuple of generator
+ (for reading the image file) and image_size
+
+ :param location `glance_store.location.Location` object, supplied
+ from glance_store.location.get_location_from_uri()
+ :param offset: offset to start reading
+ :param chunk_size: size to read, or None to get all the image
+ :param context: Request context
+ :raises `glance_store.exceptions.NotFound` if image does not exist
+ """
+
+ loc = location.store_location
+ self._check_context(context)
+
+ try:
+ gc = get_glanceclient(self.conf, loc.remote_region, context)
+ img = gc.images.get(loc.image_id)
+
+ size = int(img.size/(1024*1024))
+ iterator = gc.images.data(loc.image_id)
+ return (iterator, chunk_size or size)
+ except glance_exception.NotFound:
+ reason = _("Failed to get image size due to "
+ "volume can not be found: %s") % volume.id
+ LOG.error(reason)
+ raise exceptions.NotFound(reason)
+ except glance_exception.ClientException as e:
+ msg = (_('Failed to get image %(image_id)s: %(error)s')
+ % {'image_id': loc.image_id, 'error': e})
+ LOG.error(msg)
+ raise exceptions.BackendException(msg)
+
+ def get_size(self, location, context=None):
+ """
+ Takes a `glance_store.location.Location` object that indicates
+ where to find the image file and returns the image size
+
+ :param location: `glance_store.location.Location` object, supplied
+ from glance_store.location.get_location_from_uri()
+ :raises: `glance_store.exceptions.NotFound` if image does not exist
+ :rtype int
+ """
+
+ loc = location.store_location
+
+ try:
+ self._check_context(context)
+ img = get_glanceclient(self.conf, loc.remote_region,
+ context).images.get(loc.image_id)
+ return int(img.size/(1024*1024))
+ except glance_exception.NotFound:
+ raise exceptions.NotFound(image=loc.image_id)
+ except Exception:
+ LOG.exception(_LE("Failed to get image size due to "
+ "internal error."))
+ return 0
+
+ @capabilities.check
+ def add(self, image_id, image_file, image_size, context=None,
+ verifier=None):
+ raise NotImplementedError
+
+ @capabilities.check
+ def delete(self, location, context=None):
+ raise NotImplementedError
diff --git a/setup.cfg b/setup.cfg
index b3054c4..8cc9fb7 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -32,6 +32,7 @@ glance_store.drivers =
sheepdog = glance_store._drivers.sheepdog:Store
cinder = glance_store._drivers.cinder:Store
vmware = glance_store._drivers.vmware_datastore:Store
+ glance = glance_store._drivers.glance:Store
# TESTS ONLY
no_conf = glance_store.tests.fakes:UnconfigurableStore
# Backwards compatibility
@@ -42,6 +43,7 @@ glance_store.drivers =
glance.store.sheepdog.Store = glance_store._drivers.sheepdog:Store
glance.store.cinder.Store = glance_store._drivers.cinder:Store
glance.store.vmware_datastore.Store = glance_store._drivers.vmware_datastore:Store
+ glance.store.glance.Store = glance_store._drivers.glance:Store
oslo.config.opts =
glance.store = glance_store.backend:_list_opts
console_scripts =
--
1.8.3.1

View File

@ -1,33 +0,0 @@
From 71f9d9555e0909cbc878d8852cd2c2243abd0b1c Mon Sep 17 00:00:00 2001
From: Elena Taivan <elena.taivan@windriver.com>
Date: Wed, 6 Jun 2018 09:41:42 +0000
Subject: [PATCH 1/1] Add glance schedule greenthreads
---
glance_store/_drivers/filesystem.py | 3 +++
1 file changed, 3 insertions(+)
diff --git a/glance_store/_drivers/filesystem.py b/glance_store/_drivers/filesystem.py
index 5de011d..4a26d10 100644
--- a/glance_store/_drivers/filesystem.py
+++ b/glance_store/_drivers/filesystem.py
@@ -23,6 +23,7 @@ import hashlib
import logging
import os
import stat
+import time
import jsonschema
from oslo_config import cfg
@@ -685,6 +686,8 @@ class Store(glance_store.driver.Store):
if verifier:
verifier.update(buf)
f.write(buf)
+ # Give other greenthreads a chance to schedule.
+ time.sleep(0)
except IOError as e:
if e.errno != errno.EACCES:
self._delete_partial(filepath, image_id)
--
1.8.3.1

View File

@ -1,73 +0,0 @@
From 7448d61cc5dfa9c658a739cbb2dae678971a347b Mon Sep 17 00:00:00 2001
From: Andy Ning <andy.ning@windriver.com>
Date: Thu, 14 Jun 2018 16:35:44 -0400
Subject: [PATCH 1/1] Add image download support to DC config
---
glance_store/_drivers/glance.py | 36 ++++++++++++++++++++++++++++++++++--
1 file changed, 34 insertions(+), 2 deletions(-)
diff --git a/glance_store/_drivers/glance.py b/glance_store/_drivers/glance.py
index 554a5a1..70f3e65 100644
--- a/glance_store/_drivers/glance.py
+++ b/glance_store/_drivers/glance.py
@@ -34,13 +34,23 @@ import glance_store.location
from keystoneclient import exceptions as keystone_exc
from keystoneclient import service_catalog as keystone_sc
-import keystoneauth1.loading
-import keystoneauth1.session
+import keystoneauth1.loading as loading
+import keystoneauth1.session as session
from glanceclient import client as glance_client
+from glanceclient import Client
from cinderclient import exceptions as glance_exception
CONF = cfg.CONF
+_registry_client = 'glance.registry.client'
+CONF.import_opt('use_user_token', _registry_client)
+CONF.import_opt('admin_user', _registry_client)
+CONF.import_opt('admin_password', _registry_client)
+CONF.import_opt('admin_tenant_name', _registry_client)
+CONF.import_opt('auth_url', _registry_client)
+CONF.import_opt('auth_strategy', _registry_client)
+CONF.import_opt('auth_region', _registry_client)
+
LOG = logging.getLogger(__name__)
_GLANCE_OPTS = [
@@ -51,8 +61,30 @@ _GLANCE_OPTS = [
default='image:glance:internalURL',
help=_("Glance catalog info")),]
+def get_glanceclient_dc():
+
+ loader = loading.get_plugin_loader('password')
+ auth = loader.load_from_options(
+ auth_url=CONF.auth_url,
+ username=CONF.admin_user,
+ password=CONF.admin_password,
+ user_domain_id='default',
+ project_name=CONF.admin_tenant_name,
+ project_domain_id='default')
+ auth_session = session.Session(auth=auth)
+ c = Client('2', session=auth_session)
+
+ return c
+
+
def get_glanceclient(conf, remote_region, context=None):
+ # In DC config, need to authentication against central region
+ # keystone.
+ if not CONF.use_user_token:
+ c = get_glanceclient_dc()
+ return c
+
glance_store = conf.glance_store
if glance_store.glance_endpoint_template:
--
2.7.4

View File

@ -1 +0,0 @@
mirror:Source/python-glance-store-0.22.0-1.el7.src.rpm

View File

@ -1,4 +0,0 @@
SRC_DIR="$CGCS_BASE/git/glance"
COPY_LIST="$FILES_BASE/*"
TIS_BASE_SRCREV=06af2eb5abe0332f7035a7d7c2fbfd19fbc4dae7
TIS_PATCH_VER=GITREVCOUNT

View File

@ -1,20 +0,0 @@
[DEFAULT]
debug = False
verbose = True
use_stderr = False
log_file = /var/log/glance/api.log
filesystem_store_datadir = /var/lib/glance/images/
scrubber_datadir = /var/lib/glance/scrubber
image_cache_dir = /var/lib/glance/image-cache/
[database]
connection = mysql://glance:glance@localhost/glance
[keystone_authtoken]
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
[paste_deploy]
config_file = /usr/share/glance/glance-api-dist-paste.ini

View File

@ -1,5 +0,0 @@
[DEFAULT]
debug = False
verbose = True
log_file = /var/log/glance/image-cache.log
image_cache_dir = /var/lib/glance/image-cache/

View File

@ -1,63 +0,0 @@
#!/bin/bash
#
# Wrapper script to run glance-manage to purge soft deleted rows on active controller only
#
GLANCE_PURGE_INFO="/var/run/glance-purge.info"
GLANCE_PURGE_CMD="/usr/bin/nice -n 2 /usr/bin/glance-manage db purge --age_in_days 1 --max_rows 1000000 >> /dev/null 2>&1"
function is_active_pgserver()
{
# Determine whether we're running on the same controller as the service.
local service=postgres
local enabledactive=$(/usr/bin/sm-query service $service| grep enabled-active)
if [ "x$enabledactive" == "x" ]
then
# enabled-active not found for that service on this controller
return 1
else
# enabled-active found for that resource
return 0
fi
}
if is_active_pgserver
then
if [ ! -f ${GLANCE_PURGE_INFO} ]
then
echo delay_count=0 > ${GLANCE_PURGE_INFO}
fi
source ${GLANCE_PURGE_INFO}
sudo -u postgres psql -d sysinv -c "SELECT alarm_id, entity_instance_id from i_alarm;" | grep -P "^(?=.*100.101)(?=.*${HOSTNAME})" &>/dev/null
if [ $? -eq 0 ]
then
source /etc/platform/platform.conf
if [ "${system_type}" = "All-in-one" ]
then
source /etc/init.d/task_affinity_functions.sh
idle_core=$(get_most_idle_core)
if [ "$idle_core" -ne "0" ]
then
# Purge soft deleted records that are older than 1 day from glance database.
sh -c "exec taskset -c $idle_core ${GLANCE_PURGE_CMD}"
sed -i "/delay_count/s/=.*/=0/" ${GLANCE_PURGE_INFO}
exit 0
fi
fi
if [ "$delay_count" -lt "3" ]
then
newval=$(($delay_count+1))
sed -i "/delay_count/s/=.*/=$newval/" ${GLANCE_PURGE_INFO}
(sleep 3600; /usr/bin/glance-purge-deleted-active) &
exit 0
fi
fi
# Purge soft deleted records that are older than 1 day from glance database.
eval ${GLANCE_PURGE_CMD}
sed -i "/delay_count/s/=.*/=0/" ${GLANCE_PURGE_INFO}
fi
exit 0

View File

@ -1,20 +0,0 @@
[DEFAULT]
debug = False
verbose = True
use_stderr = False
log_file = /var/log/glance/registry.log
[database]
connection = mysql://glance:glance@localhost/glance
[keystone_authtoken]
admin_tenant_name = %SERVICE_TENANT_NAME%
admin_user = %SERVICE_USER%
admin_password = %SERVICE_PASSWORD%
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
[paste_deploy]
config_file = /usr/share/glance/glance-registry-dist-paste.ini

View File

@ -1,6 +0,0 @@
[DEFAULT]
debug = False
verbose = True
log_file = /var/log/glance/scrubber.log
scrubber_datadir = /var/lib/glance/scrubber

View File

@ -1,3 +0,0 @@
Defaults:glance !requiretty
glance ALL = (root) NOPASSWD: /usr/bin/glance-rootwrap /etc/glance/rootwrap.conf *

View File

@ -1,25 +0,0 @@
# glance-swift.conf.sample
#
# This file is an example config file when
# multiple swift accounts/backing stores are enabled.
#
# Specify the reference name in []
# For each section, specify the auth_address, user and key.
#
# WARNING:
# * If any of auth_address, user or key is not specified,
# the glance-api's swift store will fail to configure
#
# [ref1]
# user = tenant:user1
# key = key1
# auth_version = 2
# auth_address = http://localhost:5000/v2.0
#
# [ref2]
# user = project_name:user_name2
# key = key2
# user_domain_id = default
# project_domain_id = default
# auth_version = 3
# auth_address = http://localhost:5000/v3

View File

@ -1,19 +0,0 @@
[Unit]
Description=OpenStack Image Service (code-named Glance) API server
After=syslog.target network.target
[Service]
LimitNOFILE=131072
LimitNPROC=131072
Type=simple
# WRS - use root user
#User=glance
User=root
ExecStart=/usr/bin/glance-api
PrivateTmp=true
# WRS - managed by sm
#Restart=on-failure
[Install]
WantedBy=multi-user.target

View File

@ -1,17 +0,0 @@
[Unit]
Description=OpenStack Image Service (code-named Glance) Registry server
After=syslog.target network.target
[Service]
Type=simple
# WRS - use root user
#User=glance
User=root
ExecStart=/usr/bin/glance-registry
PrivateTmp=true
# WRS - managed by sm
#Restart=on-failure
[Install]
WantedBy=multi-user.target

View File

@ -1,17 +0,0 @@
[Unit]
Description=OpenStack Image Service deferred image deletion service
After=syslog.target network.target
[Service]
Type=simple
# WRS - use root user
#User=glance
User=root
ExecStart=/usr/bin/glance-scrubber
PrivateTmp=true
# WRS - Not currently used - would be managed by sm
#Restart=on-failure
[Install]
WantedBy=multi-user.target

Some files were not shown because too many files have changed in this diff