Add debian package for Ceph

Add debian packaging infrastructure for
integ/ceph to build a debian package.

Test Plan: build-pkg; build-image; same contents as RPM

PASS build-pkg
PASS build-image
PASS same contents and permissions as RPM

Attention:

In order to avoid memory issues during the build,
please do one of the following:

- Developers with only 32G of RAM will need to
temporarily unmount /var/lib/sbuild/build
so that the build system uses the disk instead of tmpfs
(see the example command after this list)

OR

- update /etc/fstab to set the size for
the sbuild tmpfs filesystem in the pkgbuilder container:

tmpfs /var/lib/sbuild/build tmpfs uid=sbuild,gid=sbuild,mode=2770,size=40G 0 0
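For example, the first option comes down to running the following inside the
pkgbuilder container (a minimal sketch; prefix with sudo if not running as root):

  umount /var/lib/sbuild/build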

Note:
Build times can be long. To speed up the build,
adjust the values of MINIKUBECPUS/MINIKUBEMEMORY
in the import-stx file (tools repo) before building
the containers with stx-init-env.
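For example, assuming import-stx uses shell-style variable assignments
(the values below are only illustrative; pick sizes suited to your builder):

  export MINIKUBECPUS=6
  export MINIKUBEMEMORY=16000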

Depends-On: https://review.opendev.org/c/starlingx/tools/+/827884

Story: 2009101
Task: 44304

Signed-off-by: Leonardo Fagundes Luz Serrano <Leonardo.FagundesLuzSerrano@windriver.com>
Change-Id: Idc8ee1ebac5c973622c1c599f4a04c001bfa89a6
Leonardo Fagundes Luz Serrano 2022-01-17 18:30:04 +00:00
parent e5fac7bf46
commit 83065c5298
126 changed files with 19556 additions and 0 deletions

View File

@ -0,0 +1,120 @@
## See online installation and setup documentation at
http://ceph.com/docs/master/install/manual-deployment/
-------- -------- --------
## "systemd" requires manual activation of services:
## MON
# systemctl start ceph-mon
# systemctl enable ceph-mon
## OSD.0 (set other OSDs like this)
# systemctl start ceph-osd@0
# systemctl enable ceph-osd@0
## MDS
# systemctl start ceph-mds
# systemctl enable ceph-mds
## "ceph" meta-service (starts/stops all the above like old init script)
# systemctl start ceph
# systemctl enable ceph
The Ceph cluster name can be set in the "/etc/default/ceph" file
via the CLUSTER environment variable.
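For example, to keep the default cluster name (a one-line sketch):
  CLUSTER=ceph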
-------- -------- --------
## Upgrade procedure (0.72.2 to 0.80):
* Read "Upgrade Sequencing" in release notes:
http://ceph.com/docs/firefly/release-notes/
* Upgrade packages.
* Restart MONs.
* Restart all OSDs.
* Run `ceph osd crush tunables default`.
* (Restart MDSes).
* Consider setting the 'hashpspool' flag on your pools (new default):
ceph osd pool set {pool} hashpspool true
This changes the pool to use a new hashing algorithm for the distribution of
Placement Groups (PGs) to OSDs. This new algorithm ensures a better distribution
to all OSDs. Be aware that this change will temporarily put some of your PGs into
"misplaced" state and cause additional I/O until all PGs are moved to their new
location. See http://tracker.ceph.com/issues/4128 for the details about the new
algorithm.
Read more about tunables in
http://ceph.com/docs/master/rados/operations/crush-map/#tunables
Upgrading all OSDs and setting the correct tunables is necessary to avoid errors like:
## rbdmap errors:
libceph: mon2 192.168.0.222:6789 socket error on read
Wrong tunables may produce the following error:
libceph: mon0 192.168.0.222:6789 socket error on read
libceph: mon2 192.168.0.250:6789 feature set mismatch, my 4a042a42 < server's 2004a042a42, missing 20000000000
## MDS errors:
one or more OSDs do not support TMAP2OMAP; upgrade OSDs before starting MDS (or downgrade MDS)
See also:
http://ceph.com/docs/firefly/install/upgrading-ceph/
-------- -------- --------
Jerasure pool(s) will bump requirements to Linux 3.15 (not yet released) for
kernel CephFS and RBD clients.
-------- -------- --------
The RBD kernel driver does not support authentication, so the following settings
in "/etc/ceph/ceph.conf" may be used to relax client auth requirements:
cephx cluster require signatures = true
cephx service require signatures = false
-------- -------- --------
> How to mount CephFS using the fuse client from "/etc/fstab"?
Add (and modify) the following sample to "/etc/fstab":
mount.fuse.ceph#conf=/etc/ceph/ceph.conf,id=admin /mnt/ceph fuse _netdev,noatime,allow_other 0 0
This is equivalent to running
ceph-fuse /mnt/ceph --id=admin -o noatime,allow_other
as root.
-------- -------- --------
To avoid a known issue with the kernel FS client it is recommended to use the
'readdir_max_entries' mount option, for example:
mount -t ceph 1.2.3.4:/ /mnt/ceph -o readdir_max_entries=64
-------- -------- --------
Beware of "mlocate" scanning of OSD file systems. To avoid problems add
"/var/lib/ceph" to PRUNEPATHS in the "/etc/updatedb.conf" like in the
following example:
PRUNEPATHS="/tmp /var/spool /media /mnt /var/lib/ceph"
-------- -------- --------

View File

@ -0,0 +1,32 @@
#!/bin/sh
#
# Simple tool to calculate max parallel jobs based on
# memory of builder.
#
# Compiling MDCache.cc generally runs out of memory on a 4G builder
# with parallel=4
total_ram=$(grep MemTotal /proc/meminfo | awk '{ print $2 }')
sixtyfour_g=$((64*1024*1024))
fortyeight_g=$((48*1024*1024))
thirtytwo_g=$((32*1024*1024))
sixteen_g=$((16*1024*1024))
eight_g=$((8*1024*1024))
four_g=$((4*1024*1024))
if [ ${total_ram} -le ${four_g} ]; then
echo "--max-parallel=1"
elif [ ${total_ram} -le ${eight_g} ]; then
echo "--max-parallel=2"
elif [ ${total_ram} -le ${sixteen_g} ]; then
echo "--max-parallel=3"
elif [ ${total_ram} -le ${thirtytwo_g} ]; then
echo "--max-parallel=6"
elif [ ${total_ram} -le ${fortyeight_g} ]; then
echo "--max-parallel=8"
elif [ ${total_ram} -le ${sixtyfour_g} ]; then
echo "--max-parallel=12"
else
echo "--max-parallel=16"
fi
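For illustration, on a typical 32G builder (MemTotal a bit below 32*1024*1024 kB)
the script prints the following; the script path is a placeholder, since file
names are not shown in this excerpt:

  $ sh ./calc-max-parallel.sh
  --max-parallel=6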

File diff suppressed because it is too large.

View File

@ -0,0 +1,9 @@
var/lib/ceph/bootstrap-mds
var/lib/ceph/bootstrap-mgr
var/lib/ceph/bootstrap-osd
var/lib/ceph/bootstrap-rbd
var/lib/ceph/bootstrap-rbd-mirror
var/lib/ceph/bootstrap-rgw
var/lib/ceph/crash
var/lib/ceph/crash/posted
var/lib/ceph/tmp

View File

@ -0,0 +1 @@
README.md

View File

@ -0,0 +1,42 @@
## install from source tree
lib/systemd/system/ceph-crash.service
usr/bin/ceph-crash
usr/bin/ceph-kvstore-tool
usr/bin/ceph-run
usr/bin/crushtool
usr/bin/monmaptool
usr/bin/osdmaptool
usr/lib/*/ceph/erasure-code/*
usr/lib/*/rados-classes/*
usr/lib/ceph/ceph_common.sh
usr/sbin/ceph-create-keys
usr/share/doc/ceph/sample.ceph.conf
usr/share/man/man8/ceph-create-keys.8
usr/share/man/man8/ceph-deploy.8
usr/share/man/man8/ceph-kvstore-tool.8
usr/share/man/man8/ceph-run.8
usr/share/man/man8/crushtool.8
usr/share/man/man8/monmaptool.8
usr/share/man/man8/osdmaptool.8
usr/bin/ceph-detect-init
# if %{with stx}
etc/init.d/ceph
etc/init.d/mgr-restful-plugin
etc/init.d/ceph-init-wrapper
etc/ceph/ceph.conf.pmon
etc/ceph/ceph.conf
etc/services.d/*
usr/sbin/ceph-preshutdown.sh
lib/systemd/system/docker.service.d/starlingx-docker-override.conf
lib/systemd/system/ceph.service
lib/systemd/system/mgr-restful-plugin.service
# if %{without stx}
# usr/libexec/systemd/system-preset/50-ceph.preset
usr/sbin/ceph-disk
usr/lib/python3/dist-packages/ceph_detect_init*
usr/lib/python3/dist-packages/ceph_disk*

View File

@ -0,0 +1 @@
package-has-unnecessary-activation-of-ldconfig-trigger

View File

@ -0,0 +1,61 @@
#!/bin/sh
# vim: set noet ts=8:
# postinst script for ceph
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# The current action is to simply remove the mistakenly-added
# /etc/init/ceph.conf file; this could be done in any of these cases,
# although technically it will leave the system in a different state
# than the original install that included that file. So instead we
# only remove on "configure", since that's the only time we know we're
# successful in installing a newer package than the erroneous version.
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
[ -f "/etc/default/ceph" ] && . /etc/default/ceph
[ -z "$SERVER_USER" ] && SERVER_USER=ceph
[ -z "$SERVER_GROUP" ] && SERVER_GROUP=ceph
case "$1" in
configure)
rm -f /etc/init/ceph.conf
for DIR in `ls -1 /var/lib/ceph` ; do
if ! dpkg-statoverride --list /var/lib/ceph/$DIR >/dev/null; then
if [ -d /run/systemd/system ] && [ $DIR = 'mon' ]; then
# NOTE: upgrade file permissions for mon filesystem on
# systemd based installs only due to automatic
# restarting of ceph-mon daemon
chown -R $SERVER_USER:$SERVER_GROUP /var/lib/ceph/$DIR
else
chown $SERVER_USER:$SERVER_GROUP /var/lib/ceph/$DIR
fi
fi
done
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1,13 @@
#!/bin/sh
set -e
if [ "${1}" = "purge" ] ; then
rm -rf /var/log/ceph
fi
#DEBHELPER#
exit 0

View File

@ -0,0 +1,3 @@
etc/ceph
var/lib/ceph
var/log/ceph

View File

@ -0,0 +1,52 @@
#!/usr/bin/dh-exec --with=install
usr/share/bash-completion/completions/ceph
usr/share/bash-completion/completions/rados
usr/share/bash-completion/completions/radosgw-admin
usr/share/bash-completion/completions/rbd
lib/systemd/system/ceph.target
# %if %{with stx}
etc/init.d/rbdmap
# if %{without stx}
# lib/systemd/system/rbdmap.service
etc/default/ceph
usr/bin/ceph
usr/bin/ceph-authtool
usr/bin/ceph-conf
usr/bin/ceph-dencoder
usr/bin/ceph-rbdnamer
usr/bin/ceph-syn
usr/bin/cephfs-data-scan
usr/bin/cephfs-journal-tool
usr/bin/cephfs-table-tool
usr/bin/rados
usr/bin/radosgw-admin
usr/bin/rbd
usr/bin/rbdmap
usr/bin/rbd-replay*
usr/bin/ceph-post-file
usr/sbin/mount.ceph sbin
usr/lib/*/ceph/compressor/*
usr/lib/*/ceph/crypto/* [amd64]
usr/share/man/man8/ceph-authtool.8
usr/share/man/man8/ceph-conf.8
usr/share/man/man8/ceph-dencoder.8
usr/share/man/man8/ceph-rbdnamer.8
usr/share/man/man8/ceph-syn.8
usr/share/man/man8/ceph-post-file.8
usr/share/man/man8/ceph.8
usr/share/man/man8/mount.ceph.8
usr/share/man/man8/rados.8
usr/share/man/man8/radosgw-admin.8
usr/share/man/man8/rbd.8
usr/share/man/man8/rbdmap.8
usr/share/man/man8/rbd-replay*.8
usr/share/ceph/known_hosts_drop.ceph.com
usr/share/ceph/id_rsa_drop.ceph.com
usr/share/ceph/id_rsa_drop.ceph.com.pub
etc/ceph/rbdmap
lib/udev/rules.d/50-rbd.rules

View File

@ -0,0 +1 @@
package-has-unnecessary-activation-of-ldconfig-trigger

View File

@ -0,0 +1 @@
debian/man/ceph-crush-location.1

View File

@ -0,0 +1,140 @@
#!/bin/sh
# vim: set noet ts=8:
# postinst script for ceph-common
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
# Let the admin override these distro-specified defaults. This is NOT
# recommended!
[ -f "/etc/default/ceph" ] && . /etc/default/ceph
[ -z "$SERVER_HOME" ] && SERVER_HOME=/var/lib/ceph
[ -z "$SERVER_USER" ] && SERVER_USER=ceph
[ -z "$SERVER_NAME" ] && SERVER_NAME="Ceph storage service"
[ -z "$SERVER_GROUP" ] && SERVER_GROUP=ceph
[ -z "$SERVER_UID" ] && SERVER_UID=64045 # alloc by Debian base-passwd maintainer
[ -z "$SERVER_GID" ] && SERVER_GID=$SERVER_UID
# Groups that the user will be added to, if undefined, then none.
[ -z "$SERVER_ADDGROUP" ] && SERVER_ADDGROUP=
# Custom dpkg-maintscript-helper type function to deal with
# nested /etc/default/ceph/ceph
finish_mv_ceph_defaults() {
rm -rf "/etc/default/ceph.dpkg-backup/ceph.dpkg-remove"
[ -e "/etc/default/ceph.dpkg-backup/ceph" ] || return 0
echo "Preserving user changes to /etc/default/ceph (renamed from /etc/default/ceph/ceph)..."
if [ -f "/etc/default/ceph" ]; then
mv -f "/etc/default/ceph" "/etc/default/ceph.dpkg-new"
fi
mv -f "/etc/default/ceph.dpkg-backup/ceph" "/etc/default/ceph"
}
case "$1" in
configure)
# create user to avoid running server as root
# 1. create group if not existing
if ! getent group | grep -q "^$SERVER_GROUP:" ; then
echo -n "Adding group $SERVER_GROUP.."
addgroup --quiet --system --gid $SERVER_GID \
$SERVER_GROUP 2>/dev/null ||true
echo "..done"
fi
# 2. create user if not existing
if ! getent passwd | grep -q "^$SERVER_USER:"; then
echo -n "Adding system user $SERVER_USER.."
adduser --quiet \
--system \
--no-create-home \
--disabled-password \
--uid $SERVER_UID \
--gid $SERVER_GID \
--home $SERVER_HOME \
$SERVER_USER 2>/dev/null || true
echo "..done"
fi
# 3. adjust passwd entry
echo -n "Setting system user $SERVER_USER properties.."
usermod -c "$SERVER_NAME" \
-d $SERVER_HOME \
-g $SERVER_GROUP \
$SERVER_USER
# 4. unlock $SERVER_USER in case it is locked from a previous uninstall
if [ -f /etc/shadow ]; then
usermod -U -e '' $SERVER_USER
else
usermod -U $SERVER_USER
fi
echo "..done"
# 5. adjust file and directory permissions
if ! dpkg-statoverride --list $SERVER_HOME >/dev/null
then
chown $SERVER_USER:$SERVER_GROUP $SERVER_HOME
chmod u=rwx,g=rx,o= $SERVER_HOME
fi
if ! dpkg-statoverride --list /var/log/ceph >/dev/null
then
chown -R $SERVER_USER:$SERVER_GROUP /var/log/ceph
# members of group ceph can log here, but cannot remove
# others' files. non-members cannot read any logs.
chmod u=rwx,g=rwxs,o=t /var/log/ceph
fi
# 6. fix /var/run/ceph
if [ -d /var/run/ceph ]; then
echo -n "Fixing /var/run/ceph ownership.."
chown $SERVER_USER:$SERVER_GROUP /var/run/ceph
echo "..done"
fi
# create /run/ceph. fail softly if systemd isn't present or
# something.
[ -x /bin/systemd-tmpfiles ] && systemd-tmpfiles --create || true
# Complete renames of /etc/default/ceph
if [ -n "$2" ] &&
dpkg --compare-versions -- "$2" le-nl 10.2.1-0ubuntu1; then
finish_mv_ceph_defaults
# Preserve dpkg-backup directory if it still contains
# any file
if ! ls -1qA "/etc/default/ceph.dpkg-backup" | grep -q . ; then
rm -rf "/etc/default/ceph.dpkg-backup"
fi
fi
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0
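The dpkg-statoverride checks above mean that locally registered overrides take
precedence over the ownership and permissions set by this script. An
administrator could register such an override with a command like the
following (illustrative only, not part of this packaging):

  dpkg-statoverride --update --add ceph ceph 0750 /var/lib/ceph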

View File

@ -0,0 +1,74 @@
#!/bin/sh
# postrm script for ceph-common
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
# * <postrm> `remove'
# * <postrm> `purge'
# * <old-postrm> `upgrade' <new-version>
# * <new-postrm> `failed-upgrade' <old-version>
# * <new-postrm> `abort-install'
# * <new-postrm> `abort-install' <old-version>
# * <new-postrm> `abort-upgrade' <old-version>
# * <disappearer's-postrm> `disappear' <overwriter>
# <overwriter-version>
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
# Custom dpkg-maintscript-helper type function to deal with
# nested /etc/default/ceph/ceph
abort_mv_ceph_defaults() {
if [ -e "/etc/default/ceph.dpkg-backup/ceph.dpkg-remove" ]; then
echo "Reinstalling /etc/default/ceph/ceph that was moved away"
mv "/etc/default/ceph.dpkg-backup" "/etc/default/ceph"
mv "/etc/default/ceph/ceph.dpkg-remove" "/etc/default/ceph/ceph"
fi
}
case "$1" in
remove)
;;
purge)
[ -f "/etc/default/ceph" ] && . /etc/default/ceph
[ -z "$SERVER_USER" ] && SERVER_USER=ceph
rm -rf /var/log/ceph
rm -rf /etc/ceph
if [ -f /etc/shadow ]; then
usermod -L -e 1 $SERVER_USER
else
usermod -L $SERVER_USER
fi
;;
abort-install|abort-upgrade)
if [ -n "$2" ] &&
dpkg --compare-versions -- "$2" le-nl 10.2.1-0ubuntu1; then
abort_mv_ceph_defaults
fi
;;
upgrade|failed-upgrade|disappear)
;;
*)
echo "postrm called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1,29 @@
#!/bin/sh
set -e
# Custom dpkg-maintscript-helper type function to deal with
# nested /etc/default/ceph/ceph
prepare_mv_ceph_defaults() {
local md5sum old_md5sum
md5sum="$(md5sum "/etc/default/ceph/ceph" | sed -e 's/ .*//')"
old_md5sum="$(dpkg-query -W -f='${Conffiles}' "ceph-common" | \
sed -n -e "\'^ /etc/default/ceph/ceph ' { s/ obsolete$//; s/.* //; p }")"
if [ "$md5sum" = "$old_md5sum" ]; then
mv -f "/etc/default/ceph/ceph" "/etc/default/ceph/ceph.dpkg-remove"
mv -f "/etc/default/ceph" "/etc/default/ceph.dpkg-backup"
fi
}
case "$1" in
upgrade|install)
if [ -d /etc/default/ceph ] && [ -n "$2" ] &&
dpkg --compare-versions -- "$2" le-nl 10.2.1-0ubuntu1; then
prepare_mv_ceph_defaults
fi
;;
esac
#DEBHELPER#
exit 0

View File

@ -0,0 +1,56 @@
#!/usr/bin/env bash
#
# rbdmap Ceph RBD Mapping
#
# chkconfig: 2345 20 80
# description: Ceph RBD Mapping
### BEGIN INIT INFO
# Provides: rbdmap
# Required-Start: $network $remote_fs
# Required-Stop: $network $remote_fs
# Should-Start: ceph
# Should-Stop: ceph
# X-Start-Before: $x-display-manager
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Ceph RBD Mapping
# Description: Ceph RBD Mapping
### END INIT INFO
RBDMAPFILE="/etc/ceph/rbdmap"
if [ -e /lib/lsb/init-functions ]; then
. /lib/lsb/init-functions
fi
case "$1" in
start)
rbdmap device map
;;
stop)
rbdmap device unmap
;;
restart|force-reload)
$0 stop
$0 start
;;
reload)
rbdmap device map
;;
status)
rbd device list
;;
*)
echo "Usage: rbdmap {start|stop|restart|force-reload|reload|status}"
exit 1
;;
esac
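For reference, the RBDMAPFILE read above ("/etc/ceph/rbdmap") lists one image
per line as "pool/image" followed by map options; a sketch (pool, image and
keyring path are placeholders):

  # <pool>/<image>    <map options>
  rbd/myimage         id=admin,keyring=/etc/ceph/ceph.client.admin.keyring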

View File

@ -0,0 +1,4 @@
usr/bin/cephfs
usr/sbin/mount.ceph sbin
usr/share/man/man8/cephfs.8
usr/share/man/man8/mount.ceph.8

View File

@ -0,0 +1,4 @@
lib/systemd/system/ceph-fuse*
usr/bin/ceph-fuse
usr/sbin/mount.fuse.ceph sbin
usr/share/man/man8/ceph-fuse.8

View File

@ -0,0 +1 @@
debian/man/mount.fuse.ceph.8

View File

@ -0,0 +1,3 @@
etc/grafana/dashboards/ceph-dashboard
etc/grafana/dashboards
etc/grafana

View File

@ -0,0 +1,3 @@
etc/grafana/dashboards/ceph-dashboard/*
monitoring/grafana/dashboards/README
monitoring/grafana/README.md

View File

@ -0,0 +1 @@
var/lib/ceph/mds

View File

@ -0,0 +1,3 @@
lib/systemd/system/ceph-mds*
usr/bin/ceph-mds
usr/share/man/man8/ceph-mds.8

View File

@ -0,0 +1,47 @@
#!/bin/sh
# vim: set noet ts=8:
# postinst script for ceph-mds
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
[ -f "/etc/default/ceph" ] && . /etc/default/ceph
[ -z "$SERVER_USER" ] && SERVER_USER=ceph
[ -z "$SERVER_GROUP" ] && SERVER_GROUP=ceph
case "$1" in
configure)
if ! dpkg-statoverride --list /var/lib/ceph/mds >/dev/null;
then
chown $SERVER_USER:$SERVER_GROUP /var/lib/ceph/mds
fi
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1 @@
usr/share/ceph/mgr/dashboard

View File

@ -0,0 +1,43 @@
#!/bin/sh
# vim: set noet ts=8:
# postinst script for ceph-mgr-dashboard
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
case "$1" in
configure)
# attempt to load the plugin if the mgr is running
deb-systemd-invoke try-restart ceph-mgr.target
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1 @@
usr/share/ceph/mgr/diskprediction_cloud

View File

@ -0,0 +1,43 @@
#!/bin/sh
# vim: set noet ts=8:
# postinst script for ceph-mgr-diskprediction-cloud
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
case "$1" in
configure)
# attempt to load the plugin if the mgr is running
deb-systemd-invoke try-restart ceph-mgr.target
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1 @@
usr/share/ceph/mgr/diskprediction_local

View File

@ -0,0 +1,43 @@
#!/bin/sh
# vim: set noet ts=8:
# postinst script for ceph-mgr-diskprediction-local
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
case "$1" in
configure)
# attempt to load the plugin if the mgr is running
deb-systemd-invoke try-restart ceph-mgr.target
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1 @@
usr/share/ceph/mgr/k8sevents

View File

@ -0,0 +1,43 @@
#!/bin/sh
# vim: set noet ts=8:
# postinst script for ceph-mgr-k8sevents
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
case "$1" in
configure)
# attempt to load the plugin if the mgr is running
deb-systemd-invoke try-restart ceph-mgr.target
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1 @@
usr/share/ceph/mgr/rook

View File

@ -0,0 +1,43 @@
#!/bin/sh
# vim: set noet ts=8:
# postinst script for ceph-mgr-rook
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
case "$1" in
configure)
# attempt to load the plugin if the mgr is running
deb-systemd-invoke try-restart ceph-mgr.target
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1 @@
usr/share/ceph/mgr/ssh

View File

@ -0,0 +1,43 @@
#!/bin/sh
# vim: set noet ts=8:
# postinst script for ceph-mgr-ssh
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
case "$1" in
configure)
# attempt to load the plugin if the mgr is running
deb-systemd-invoke try-restart ceph-mgr.target
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1 @@
var/lib/ceph/mgr

View File

@ -0,0 +1,32 @@
lib/systemd/system/ceph-mgr*
usr/bin/ceph-mgr
usr/share/ceph/mgr/alerts
usr/share/ceph/mgr/ansible
usr/share/ceph/mgr/balancer
usr/share/ceph/mgr/crash
usr/share/ceph/mgr/deepsea
usr/share/ceph/mgr/devicehealth
# Not included in RPM.
# usr/share/ceph/mgr/influx
usr/share/ceph/mgr/insights
usr/share/ceph/mgr/iostat
usr/share/ceph/mgr/localpool
usr/share/ceph/mgr/mgr_module.*
usr/share/ceph/mgr/mgr_util.*
usr/share/ceph/mgr/orchestrator.*
usr/share/ceph/mgr/orchestrator_cli
usr/share/ceph/mgr/osd_perf_query
usr/share/ceph/mgr/pg_autoscaler
usr/share/ceph/mgr/progress
usr/share/ceph/mgr/prometheus
usr/share/ceph/mgr/rbd_support
usr/share/ceph/mgr/restful
usr/share/ceph/mgr/selftest
usr/share/ceph/mgr/status
usr/share/ceph/mgr/telegraf
usr/share/ceph/mgr/telemetry
usr/share/ceph/mgr/test_orchestrator
usr/share/ceph/mgr/volumes
usr/share/ceph/mgr/zabbix

View File

@ -0,0 +1,51 @@
#!/bin/sh
# vim: set noet ts=8:
# postinst script for ceph-mgr
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
[ -f "/etc/default/ceph" ] && . /etc/default/ceph
[ -z "$SERVER_USER" ] && SERVER_USER=ceph
[ -z "$SERVER_GROUP" ] && SERVER_GROUP=ceph
case "$1" in
configure)
[ -x /sbin/start ] && start ceph-mgr-all || :
if ! dpkg-statoverride --list /var/lib/ceph/mgr >/dev/null
then
chown $SERVER_USER:$SERVER_GROUP /var/lib/ceph/mgr
fi
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1 @@
var/lib/ceph/mon

View File

@ -0,0 +1,7 @@
# %if %{with stx}
# %exclude %{_unitdir}/ceph-mon*
# lib/systemd/system/ceph-mon*
usr/bin/ceph-mon
usr/bin/ceph-monstore-tool
usr/share/man/man8/ceph-mon.8

View File

@ -0,0 +1,46 @@
#!/bin/bash
# vim: set noet ts=8:
# postinst script for ceph-mon
#
# see: dh_installdeb(1)
set -e
# summary of how this script can be called:
#
# postinst configure <most-recently-configured-version>
# old-postinst abort-upgrade <new-version>
# conflictor's-postinst abort-remove in-favour <package> <new-version>
# postinst abort-remove
# deconfigured's-postinst abort-deconfigure in-favour <failed-install-package> <version> [<removing conflicting-package> <version>]
#
# for details, see http://www.debian.org/doc/debian-policy/ or
# the debian-policy package
[ -f "/etc/default/ceph" ] && . /etc/default/ceph
[ -z "$SERVER_USER" ] && SERVER_USER=ceph
[ -z "$SERVER_GROUP" ] && SERVER_GROUP=ceph
case "$1" in
configure)
:
;;
abort-upgrade|abort-remove|abort-deconfigure)
:
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac
# dh_installdeb will replace this with shell code automatically
# generated by other debhelper scripts.
#DEBHELPER#
exit 0

View File

@ -0,0 +1,2 @@
var/lib/ceph/osd
lib/udev/rules.d

View File

@ -0,0 +1,31 @@
debian/udev/* lib/udev/rules.d
etc/sudoers.d/ceph-osd-smartctl
etc/sysctl.d/30-ceph-osd.conf
# %if %{without stx}
# lib/systemd/system/ceph-osd*
# lib/systemd/system/ceph-volume@.service
usr/bin/ceph-bluestore-tool
usr/bin/ceph-clsinfo
usr/bin/ceph-objectstore-tool
usr/bin/ceph-osd
usr/bin/ceph-osdomap-tool
usr/lib/ceph/ceph-osd-prestart.sh
usr/lib/python*/dist-packages/ceph_volume-*
usr/lib/python*/dist-packages/ceph_volume/*
usr/sbin/ceph-volume
usr/sbin/ceph-volume-systemd
usr/share/man/man8/ceph-bluestore-tool.8
usr/share/man/man8/ceph-clsinfo.8
usr/share/man/man8/ceph-osd.8
usr/share/man/man8/ceph-volume-systemd.8
usr/share/man/man8/ceph-volume.8
# if %{with stx}
usr/sbin/ceph-manage-journal
lib/udev/rules.d/60-ceph-by-parttypeuuid.rules
# %if %{without stx}
# lib/udev/rules.d/95-ceph-osd.rules

View File

@ -0,0 +1 @@
usr/lib/ocf/resource.d/ceph/*

View File

@ -0,0 +1,180 @@
ceph (10.2.5-1) unstable; urgency=medium
## Upgrades from Debian Jessie
Online upgrades from Ceph versions prior to Hammer (0.94.x) are not
supported by upstream. As Debian Jessie ships Ceph Firefly (0.80.x), an
online upgrade from Jessie to Stretch is not possible. You have to first
shut down all Ceph daemons on all nodes, upgrade everything to the new
version, and start all daemons again.
Ceph daemons are not automatically restarted on upgrade to minimize
disruption. You have to manually restart them after the upgrade.
-- Gaudenz Steinlin <gaudenz@debian.org> Sun, 08 Jan 2017 14:57:35 +0100
ceph (9.2.0-1) experimental; urgency=medium
## systemd Enablement
For all distributions that support systemd (Debian Jessie 8.x,
Ubuntu >= 16.04), Ceph daemons are now managed using upstream provided
systemd files instead of the legacy sysvinit scripts or distro provided
systemd files. For example:
systemctl start ceph.target # start all daemons
systemctl status ceph-osd@12 # check status of osd.12
To upgrade existing deployments that use the older systemd service
configurations (Ubuntu >= 15.04, Debian >= Jessie), you need to switch
to using the new ceph-mon@ service:
systemctl stop ceph-mon
systemctl disable ceph-mon
systemctl start ceph-mon@`hostname`
systemctl enable ceph-mon@`hostname`
and also enable the ceph target post upgrade:
systemctl enable ceph.target
The main notable distro that is *not* using systemd is Ubuntu 14.04
(The next Ubuntu LTS, 16.04, will use systemd instead of upstart).
## Ceph daemons no longer run as root
Ceph daemons now run as user and group 'ceph' by default. The
ceph user has a static UID assigned by Debian to ensure consistency
across servers within a Ceph deployment.
If your systems already have a ceph user, upgrading the package will cause
problems. We suggest you first remove or rename the existing 'ceph' user
and 'ceph' group before upgrading.
When upgrading, administrators have two options:
1. Add the following line to 'ceph.conf' on all hosts:
setuser match path = /var/lib/ceph/$type/$cluster-$id
This will make the Ceph daemons run as root (i.e., not drop
privileges and switch to user ceph) if the daemon's data
directory is still owned by root. Newly deployed daemons will
be created with data owned by user ceph and will run with
reduced privileges, but upgraded daemons will continue to run as
root.
2. Fix the data ownership during the upgrade. This is the
preferred option, but it is more work and can be very time
consuming. The process for each host is to:
1. Upgrade the ceph package. This creates the ceph user and group. For
example:
apt-get install ceph
NOTE: the permissions on /var/lib/ceph/mon will be set to ceph:ceph
as part of the package upgrade process on existing *systemd*
based installations; the ceph-mon systemd service will be
automatically restarted as part of the upgrade. All other
filesystem permissions on systemd based installs will
remain unmodified by the upgrade.
2. Stop the daemon(s):
systemctl stop ceph-osd@* # debian, ubuntu >= 15.04
stop ceph-all # ubuntu 14.04
3. Fix the ownership:
chown -R ceph:ceph /var/lib/ceph
4. Restart the daemon(s):
start ceph-all # ubuntu 14.04
systemctl start ceph.target # debian, ubuntu >= 15.04
Alternatively, the same process can be done with a single daemon
type, for example by stopping only monitors and chowning only
'/var/lib/ceph/osd'.
## KeyValueStore OSD on-disk format changes
The on-disk format for the experimental KeyValueStore OSD backend has
changed. You will need to remove any OSDs using that backend before you
upgrade any test clusters that use it.
## Deprecated commands
'ceph scrub', 'ceph compact' and 'ceph sync force' are now DEPRECATED.
Users should instead use 'ceph mon scrub', 'ceph mon compact' and
'ceph mon sync force'.
## Full pool behaviour
When a pool quota is reached, librados operations now block indefinitely,
the same way they do when the cluster fills up. (Previously they would
return -ENOSPC). By default, a full cluster or pool will now block. If
your librados application can handle ENOSPC or EDQUOT errors gracefully,
you can get error returns instead by using the new librados
OPERATION_FULL_TRY flag.
-- James Page <james.page@ubuntu.com> Mon, 30 Nov 2015 09:23:09 +0000
ceph (0.80.9-2) unstable; urgency=medium
## CRUSH fixes in 0.80.9
The 0.80.9 point release fixes several issues with CRUSH that trigger excessive
data migration when adjusting OSD weights. These are most obvious when a very
small weight change (e.g., a change from 0 to .01) triggers a large amount of
movement, but the same set of bugs can also lead to excessive (though less
noticeable) movement in other cases.
However, because the bug may already have affected your cluster, fixing it
may trigger movement back to the more correct location. For this reason, you
must manually opt-in to the fixed behavior.
In order to set the new tunable to correct the behavior:
ceph osd crush set-tunable straw_calc_version 1
Note that this change will have no immediate effect. However, from this
point forward, any straw bucket in your CRUSH map that is adjusted will get
non-buggy internal weights, and that transition may trigger some rebalancing.
You can estimate how much rebalancing will eventually be necessary on your
cluster with:
ceph osd getcrushmap -o /tmp/cm
crushtool -i /tmp/cm --num-rep 3 --test --show-mappings > /tmp/a 2>&1
crushtool -i /tmp/cm --set-straw-calc-version 1 -o /tmp/cm2
crushtool -i /tmp/cm2 --reweight -o /tmp/cm2
crushtool -i /tmp/cm2 --num-rep 3 --test --show-mappings > /tmp/b 2>&1
wc -l /tmp/a # num total mappings
diff -u /tmp/a /tmp/b | grep -c ^+ # num changed mappings
Divide the number of changed mappings by the total number of mappings in
/tmp/a to estimate the fraction that will move. We've found that most
clusters are under 10%.
You can force all of this rebalancing to happen at once with:
ceph osd crush reweight-all
Otherwise, it will happen at some unknown point in the future when
CRUSH weights are next adjusted.
## Mapping rbd devices with rbdmap on systemd systems
If you have set up rbd mappings in /etc/ceph/rbdmap and corresponding mounts
in /etc/fstab, things might break with systemd, because systemd waits for the
rbd device to appear before the legacy rbdmap init script has a chance to run
and drops into emergency mode if it times out.
This can be fixed by adding the nofail option in /etc/fstab to all rbd-backed
mount points. With this, systemd does not wait for the device and proceeds
with the boot process. After rbdmap has mapped the device, systemd detects
the new device and mounts the file system.
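For example, an rbd-backed mount with nofail might look like this in
/etc/fstab (pool/image name and filesystem type are placeholders; the
/dev/rbd/<pool>/<image> path is created by the udev rules shipped with
this package):

  /dev/rbd/rbd/myimage  /mnt/myimage  ext4  defaults,noatime,nofail  0 0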
-- Gaudenz Steinlin <gaudenz@debian.org> Mon, 04 May 2015 22:49:48 +0200

View File

@ -0,0 +1 @@
empty-binary-package

View File

@ -0,0 +1,2 @@
usr/bin/cephfs-shell
usr/lib/python3*/dist-packages/cephfs_shell-*.egg-info

View File

@ -0,0 +1,883 @@
ceph (14.2.22-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Tue, 29 Jun 2021 22:09:07 +0000
ceph (14.2.21-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Thu, 13 May 2021 17:23:05 +0000
ceph (14.2.20-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Mon, 19 Apr 2021 14:11:13 +0000
ceph (14.2.19-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Tue, 30 Mar 2021 16:19:15 +0000
ceph (14.2.18-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Mon, 15 Mar 2021 17:46:19 +0000
ceph (14.2.17-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Thu, 11 Mar 2021 17:07:30 +0000
ceph (14.2.16-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Wed, 16 Dec 2020 17:34:57 +0000
ceph (14.2.15-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Mon, 23 Nov 2020 18:30:13 +0000
ceph (14.2.14-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Tue, 17 Nov 2020 18:10:08 +0000
ceph (14.2.13-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Fri, 30 Oct 2020 14:54:35 +0000
ceph (14.2.12-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Mon, 19 Oct 2020 20:19:19 +0000
ceph (14.2.11-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Mon, 10 Aug 2020 20:15:20 +0000
ceph (14.2.10-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Thu, 25 Jun 2020 17:32:29 +0000
ceph (14.2.9-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Thu, 09 Apr 2020 16:17:27 +0000
ceph (14.2.8-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Mon, 02 Mar 2020 17:49:19 +0000
ceph (14.2.7-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Fri, 31 Jan 2020 17:07:50 +0000
ceph (14.2.6-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Wed, 08 Jan 2020 18:36:52 +0000
ceph (14.2.5-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Fri, 06 Dec 2019 16:42:32 +0000
ceph (14.2.4-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Fri, 13 Sep 2019 14:07:41 -0400
ceph (14.2.3-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Tue, 03 Sep 2019 13:19:56 +0000
ceph (14.2.2-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Wed, 17 Jul 2019 15:12:34 +0000
ceph (14.2.1-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Thu, 25 Apr 2019 18:15:46 +0000
ceph (14.2.0-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Mon, 18 Mar 2019 10:08:27 +0000
ceph (14.1.1-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Mon, 11 Mar 2019 16:42:54 +0000
ceph (14.1.0-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Fri, 22 Feb 2019 18:07:06 +0000
ceph (13.1.0-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Thu, 03 May 2018 17:57:32 +0000
ceph (12.1.2-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Tue, 01 Aug 2017 17:55:37 +0000
ceph (12.1.1-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Mon, 17 Jul 2017 16:55:59 +0000
ceph (12.1.0-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Thu, 22 Jun 2017 15:43:47 +0000
ceph (12.0.3-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Tue, 16 May 2017 12:42:53 +0000
ceph (12.0.2-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Thu, 20 Apr 2017 19:59:57 +0000
ceph (12.0.1-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Fri, 24 Mar 2017 15:47:57 +0000
ceph (12.0.0-1) stable; urgency=medium
* New upstream release
-- Ceph Release Team <ceph-maintainers@ceph.com> Wed, 08 Feb 2017 13:57:30 +0000
ceph (11.1.0-1) stable; urgency=medium
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Mon, 12 Dec 2016 18:27:51 +0000
ceph (11.0.2-1) stable; urgency=medium
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Mon, 17 Oct 2016 11:16:49 +0000
ceph (11.0.1-1) stable; urgency=medium
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Tue, 11 Oct 2016 16:27:56 +0000
ceph (11.0.0-1) stable; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Tue, 28 Jun 2016 11:41:16 -0400
ceph (10.2.0-1) stable; urgency=medium
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Wed, 20 Apr 2016 11:29:47 +0000
ceph (10.1.2-1) stable; urgency=medium
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Tue, 12 Apr 2016 17:42:55 +0000
ceph (10.1.1-1) stable; urgency=medium
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Wed, 06 Apr 2016 00:45:18 +0000
ceph (10.1.0-1) stable; urgency=medium
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Thu, 24 Mar 2016 10:53:47 +0000
ceph (10.0.5) stable; urgency=low
* New upstream release (just fixing changelog)
-- Sage Weil <sage@newdream.net> Fri, 11 Mar 2016 12:04:26 -0500
ceph (10.0.4) stable; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Thu, 03 Mar 2016 13:34:18 -0500
ceph (10.0.3) stable; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Mon, 08 Feb 2016 17:10:25 -0500
ceph (10.0.2-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Wed, 13 Jan 2016 16:22:26 +0000
ceph (10.0.1-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Mon, 14 Dec 2015 23:48:54 +0000
ceph (10.0.0-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Mon, 16 Nov 2015 21:41:53 +0000
ceph (9.2.0-1) stable; urgency=low
* New upstream release
-- Jenkins Build Slave User <jenkins-build@jenkins-slave-wheezy.localdomain> Tue, 03 Nov 2015 16:58:32 +0000
ceph (9.1.0-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Tue, 13 Oct 2015 05:56:36 -0700
ceph (9.0.3-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Fri, 21 Aug 2015 12:46:31 -0700
ceph (9.0.2-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Tue, 14 Jul 2015 13:10:31 -0700
ceph (9.0.1-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Fri, 05 Jun 2015 10:59:02 -0700
ceph (9.0.0-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Mon, 04 May 2015 12:32:58 -0700
ceph (0.94-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Tue, 07 Apr 2015 10:05:40 -0700
ceph (0.93-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Fri, 27 Feb 2015 09:52:53 -0800
ceph (0.92-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Mon, 02 Feb 2015 10:35:27 -0800
ceph (0.91-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Tue, 13 Jan 2015 12:10:22 -0800
ceph (0.90-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Fri, 19 Dec 2014 06:56:22 -0800
ceph (0.89-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Wed, 03 Dec 2014 08:18:33 -0800
ceph (0.88-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <adeza@redhat.com> Tue, 11 Nov 2014 09:33:12 -0800
ceph (0.87-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <alfredo.deza@inktank.com> Wed, 29 Oct 2014 11:03:55 -0700
ceph (0.86-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <alfredo.deza@inktank.com> Tue, 07 Oct 2014 06:20:21 -0700
ceph (0.85-1) stable; urgency=low
* Development release
-- Alfredo Deza <alfredo.deza@inktank.com> Mon, 08 Sep 2014 06:31:31 -0700
ceph (0.84-1) stable; urgency=low
* Development release
-- Alfredo Deza <alfredo.deza@inktank.com> Mon, 18 Aug 2014 09:02:20 -0700
ceph (0.83-1) stable; urgency=low
* Development release
-- Alfredo Deza <alfredo.deza@inktank.com> Tue, 29 Jul 2014 13:42:53 -0700
ceph (0.82-1) stable; urgency=low
* Development release
-- Alfredo Deza <alfredo.deza@inktank.com> Wed, 25 Jun 2014 16:47:51 +0000
ceph (0.81-1) stable; urgency=low
* Development release
-- Alfredo Deza <alfredo.deza@inktank.com> Mon, 02 Jun 2014 18:37:27 +0000
ceph (0.80-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <alfredo.deza@inktank.com> Tue, 06 May 2014 14:03:27 +0000
ceph (0.80-rc1-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <alfredo.deza@inktank.com> Tue, 22 Apr 2014 21:21:44 +0000
ceph (0.79-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <alfredo.deza@inktank.com> Mon, 07 Apr 2014 16:48:36 +0000
ceph (0.78-1) stable; urgency=low
* New upstream release
-- Alfredo Deza <alfredo.deza@inktank.com> Fri, 21 Mar 2014 22:05:12 +0000
ceph (0.77-1) stable; urgency=low
* New upstream release
-- Ken Dreyer <ken.dreyer@inktank.com> Wed, 19 Feb 2014 22:54:06 +0000
ceph (0.76-1) stable; urgency=low
* New upstream release
-- Ken Dreyer <kdreyer@jenkins.front.sepia.ceph.com> Mon, 03 Feb 2014 18:14:59 +0000
ceph (0.75-1) stable; urgency=low
* New upstream release
-- Ken Dreyer <kdreyer@jenkins.front.sepia.ceph.com> Mon, 13 Jan 2014 21:05:07 +0000
ceph (0.74-1) stable; urgency=low
* New upstream release
-- Gary Lowell <glowell@jenkins.front.sepia.ceph.com> Mon, 30 Dec 2013 21:02:35 +0000
ceph (0.73-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 10 Dec 2013 04:55:06 +0000
ceph (0.72-1) stable; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Thu, 07 Nov 2013 20:25:18 +0000
ceph (0.72-rc1-1) stable; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Wed, 30 Oct 2013 00:44:25 +0000
ceph (0.71-1) stable; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Thu, 17 Oct 2013 09:19:02 +0000
ceph (0.70-1) stable; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Fri, 04 Oct 2013 20:11:51 +0000
ceph (0.69-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Wed, 18 Sep 2013 01:39:47 +0000
ceph (0.68-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 03 Sep 2013 16:10:11 -0700
ceph (0.67-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 13 Aug 2013 10:44:30 -0700
ceph (0.67-rc3-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 30 Jul 2013 14:37:40 -0700
ceph (0.67-rc2-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Wed, 24 Jul 2013 16:18:33 -0700
ceph (0.67-rc1-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Mon, 22 Jul 2013 11:57:01 -0700
ceph (0.66-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Mon, 08 Jul 2013 15:44:45 -0700
ceph (0.65-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 25 Jun 2013 09:19:14 -0700
ceph (0.64-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Wed, 12 Jun 2013 09:53:54 -0700
ceph (0.63-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 28 May 2013 13:57:53 -0700
ceph (0.62) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 14 May 2013 09:08:21 -0700
ceph (0.61-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Mon, 06 May 2013 13:18:43 -0700
ceph (0.60-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Mon, 01 Apr 2013 12:22:30 -0700
ceph (0.59-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 19 Mar 2013 22:26:37 -0700
ceph (0.58-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Mon, 04 Mar 2013 15:17:58 -0800
ceph (0.57-1) quantal; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 19 Feb 2013 10:06:39 -0800
ceph (0.56-1) quantal; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Mon, 31 Dec 2012 17:08:45 -0800
ceph (0.55.1-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Wed, 12 Dec 2012 16:24:13 -0800
ceph (0.55-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Mon, 03 Dec 2012 19:08:14 -0800
ceph (0.54-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 13 Nov 2012 13:17:19 -0800
ceph (0.53-1) precise; urgency=low
* New upstream release
-- Gary Lowell <gary.lowell@inktank.com> Tue, 16 Oct 2012 17:40:46 +0000
ceph (0.52-1) precise; urgency=low
* New upstream release
-- Ubuntu <gary.lowell@inktank.com> Thu, 27 Sep 2012 16:16:52 +0000
ceph (0.51-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Sat, 25 Aug 2012 15:58:23 -0700
ceph (0.50-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Mon, 13 Aug 2012 09:44:40 -0700
ceph (0.49-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Fri, 20 Jul 2012 23:26:43 -0700
ceph (0.48argonaut-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Sat, 30 Jun 2012 14:49:30 -0700
ceph (0.47.3-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Wed, 20 Jun 2012 10:57:03 -0700
ceph (0.47.2-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Wed, 23 May 2012 09:00:43 -0700
ceph (0.47.1-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Mon, 21 May 2012 14:28:30 -0700
ceph (0.47-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Sun, 20 May 2012 15:16:03 -0700
ceph (0.46-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Sun, 29 Apr 2012 21:21:01 -0700
ceph (0.45-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Tue, 10 Apr 2012 10:41:57 -0700
ceph (0.44.2-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Thu, 05 Apr 2012 14:54:17 -0700
ceph (0.44.1-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Tue, 27 Mar 2012 13:02:00 -0700
ceph (0.44-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Sun, 18 Mar 2012 12:03:38 -0700
ceph (0.43-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Fri, 02 Mar 2012 08:53:10 -0800
ceph (0.42.2-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Fri, 24 Feb 2012 12:59:38 -0800
ceph (0.42.1-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Thu, 23 Feb 2012 18:46:23 -0800
ceph (0.42-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Sun, 19 Feb 2012 15:30:20 -0800
ceph (0.41-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Fri, 27 Jan 2012 10:42:11 -0800
ceph (0.40-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Fri, 13 Jan 2012 08:36:02 -0800
ceph (0.39-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Fri, 02 Dec 2011 09:01:20 -0800
ceph (0.38-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Thu, 10 Nov 2011 15:06:44 -0800
ceph (0.37-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Mon, 17 Oct 2011 08:35:42 -0700
ceph (0.36-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Fri, 30 Sep 2011 09:29:29 -0700
ceph (0.35-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Wed, 21 Sep 2011 09:36:03 -0700
ceph (0.34-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Fri, 26 Aug 2011 21:48:35 -0700
ceph (0.33-1) experimental; urgency=low
* New upstream release.
-- Sage Weil <sage@newdream.net> Mon, 15 Aug 2011 16:42:07 -0700
ceph (0.32-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Fri, 29 Jul 2011 21:42:08 -0700
ceph (0.30-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Mon, 27 Jun 2011 20:06:06 -0700
ceph (0.29.1-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Thu, 16 Jun 2011 13:10:47 -0700
ceph (0.29-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Mon, 06 Jun 2011 09:59:25 -0700
ceph (0.28.2-1) experimental; urgency=low
* New upstream release.
-- Sage Weil <sage@newdream.net> Sat, 28 May 2011 09:14:17 -0700
ceph (0.28.1-1) experimental; urgency=low
* New upstream release.
-- Sage Weil <sage@newdream.net> Mon, 23 May 2011 21:11:30 -0700
ceph (0.28-1) experimental; urgency=low
* New upstream release.
-- Sage Weil <sage@newdream.net> Tue, 17 May 2011 18:03:11 -0700
ceph (0.27.1-1) experimental; urgency=low
* New upstream release.
-- Sage Weil <sage@newdream.net> Thu, 05 May 2011 13:42:06 -0700
ceph (0.27-1) experimental; urgency=low
* New upstream release.
-- Sage Weil <sage@newdream.net> Fri, 22 Apr 2011 16:51:49 -0700
ceph (0.26-1) experimental; urgency=low
* New upstream release.
* Make Ceph Linux only and build on all Linux archs (closes: #614890),
but only build-depend google-perftools on x86 and x64 archs only.
* Correct section of libcrush1, librados1, librbd1 and libceph1 to libs.
* Make Ceph cross buildable (closes: #618939), thanks to Hector Oron.
* Disable libatomic-ops on ARMv4t (armel) archs to prevent FTBFS
(closes: #615235), thanks go to Hector Oron again.
* Rename librados1{,-dbg,-dev} packages to librados2{,-dbg,-dev} ones;
conflict with and replace the former ones.
-- Laszlo Boszormenyi (GCS) <gcs@debian.hu> Fri, 01 Apr 2011 16:28:11 +0100
ceph (0.25.2-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Sun, 20 Mar 2011 21:07:38 -0700
ceph (0.25.1-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Mon, 14 Mar 2011 14:43:47 -0700
ceph (0.25-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Fri, 04 Mar 2011 14:39:54 -0800
ceph (0.24.3-1) experimental; urgency=low
* New upstream release
-- Sage Weil <sage@newdream.net> Thu, 10 Feb 2011 09:14:00 -0800
ceph (0.24.2-1) experimental; urgency=low
* New upstream release.
-- Sage Weil <sage@newdream.net> Mon, 24 Jan 2011 11:02:24 -0800
ceph (0.24.1-1) experimental; urgency=low
* New upstream release.
-- Sage Weil <sage@newdream.net> Fri, 07 Jan 2011 16:49:48 -0800
ceph (0.24-1) experimental; urgency=low
* New upstream release.
-- Laszlo Boszormenyi (GCS) <gcs@debian.hu> Wed, 01 Dec 2010 09:26:25 -0800
ceph (0.23.1-1) experimental; urgency=low
* Initial release (Closes: #506040)
-- Sage Weil <sage@newdream.net> Sun, 21 Nov 2010 15:22:21 -0800

View File

@ -0,0 +1,32 @@
configure
src/rocksdb/util/build_version.cc
src/pybind/*.pyc
src/test/pybind/*.pyc
src/rapidjson/thirdparty/gtest/googlemock/msvc/2005/gmock.sln
src/rapidjson/thirdparty/gtest/googlemock/msvc/2005/gmock.vcproj
src/rapidjson/thirdparty/gtest/googlemock/msvc/2005/gmock_config.vsprops
src/rapidjson/thirdparty/gtest/googlemock/msvc/2005/gmock_main.vcproj
src/rapidjson/thirdparty/gtest/googlemock/msvc/2005/gmock_test.vcproj
src/rapidjson/thirdparty/gtest/googlemock/msvc/2010/gmock.sln
src/rapidjson/thirdparty/gtest/googlemock/msvc/2010/gmock.vcxproj
src/rapidjson/thirdparty/gtest/googlemock/msvc/2010/gmock_config.props
src/rapidjson/thirdparty/gtest/googlemock/msvc/2010/gmock_main.vcxproj
src/rapidjson/thirdparty/gtest/googlemock/msvc/2010/gmock_test.vcxproj
src/rapidjson/thirdparty/gtest/googletest/codegear/gtest.cbproj
src/rapidjson/thirdparty/gtest/googletest/codegear/gtest.groupproj
src/rapidjson/thirdparty/gtest/googletest/codegear/gtest_all.cc
src/rapidjson/thirdparty/gtest/googletest/codegear/gtest_link.cc
src/rapidjson/thirdparty/gtest/googletest/codegear/gtest_main.cbproj
src/rapidjson/thirdparty/gtest/googletest/codegear/gtest_unittest.cbproj
src/rapidjson/thirdparty/gtest/googletest/msvc/gtest-md.sln
src/rapidjson/thirdparty/gtest/googletest/msvc/gtest-md.vcproj
src/rapidjson/thirdparty/gtest/googletest/msvc/gtest.sln
src/rapidjson/thirdparty/gtest/googletest/msvc/gtest.vcproj
src/rapidjson/thirdparty/gtest/googletest/msvc/gtest_main-md.vcproj
src/rapidjson/thirdparty/gtest/googletest/msvc/gtest_main.vcproj
src/rapidjson/thirdparty/gtest/googletest/msvc/gtest_prod_test-md.vcproj
src/rapidjson/thirdparty/gtest/googletest/msvc/gtest_prod_test.vcproj
src/rapidjson/thirdparty/gtest/googletest/msvc/gtest_unittest-md.vcproj
src/rapidjson/thirdparty/gtest/googletest/msvc/gtest_unittest.vcproj
debian/ceph-common.logrotate
debian/radosgw.init

@ -0,0 +1 @@
10

@ -0,0 +1,849 @@
Source: ceph
Section: admin
Priority: optional
Maintainer: Ceph Packaging Team <team+ceph@tracker.debian.org>
Uploaders:
James Page <jamespage@debian.org>,
Gaudenz Steinlin <gaudenz@debian.org>,
Bernd Zeimetz <bzed@debian.org>,
Thomas Goirand <zigo@debian.org>,
Build-Depends:
cmake,
cython3,
debhelper (>= 10~),
default-jdk,
dh-exec,
dh-python,
dpkg-dev (>= 1.16.1~),
gperf,
javahelper,
junit4,
libaio-dev,
libbabeltrace-ctf-dev,
libbabeltrace-dev,
libblkid-dev (>= 2.17),
libboost-atomic-dev (>= 1.67.0),
libboost-chrono-dev (>= 1.67.0),
libboost-context-dev (>= 1.67.0) [!s390x !mips64el !ia64 !m68k !ppc64 !riscv64 !sh4 !sparc64 !x32 !alpha],
libboost-coroutine-dev (>= 1.67.0) [!s390x !mips64el !ia64 !m68k !ppc64 !riscv64 !sh4 !sparc64 !x32 !alpha],
libboost-date-time-dev (>= 1.67.0),
libboost-iostreams-dev (>= 1.67.0),
libboost-program-options-dev (>= 1.67.0),
libboost-python-dev (>= 1.67.0),
libboost-random-dev (>= 1.67.0),
libboost-regex-dev (>= 1.67.0),
libboost-system-dev (>= 1.67.0),
libboost-thread-dev (>= 1.67.0),
libbz2-dev,
libcap-ng-dev,
libcunit1-dev,
libcurl4-gnutls-dev,
libedit-dev,
libexpat1-dev,
libfuse-dev,
libgoogle-perftools-dev [i386 amd64 powerpc armhf arm64 ppc64el],
libibverbs-dev,
libkeyutils-dev,
libldap2-dev,
libleveldb-dev,
liblz4-dev (>= 0.0~r131),
libncurses-dev,
libnl-3-dev,
libnl-genl-3-dev,
libnss3-dev,
liboath-dev,
librabbitmq-dev,
librdkafka-dev,
librdmacm-dev,
libsnappy-dev,
libssl-dev,
libtool,
libudev-dev,
libxml2-dev,
lsb-release,
pkg-config,
python3-cherrypy3,
python3-dev,
python3-pecan,
python3-setuptools,
python3-sphinx,
tox,
uuid-runtime,
valgrind [amd64 armhf i386 powerpc],
virtualenv,
xfslibs-dev,
yasm [amd64],
zlib1g-dev,
Build-Conflicts:
libcrypto++-dev,
Standards-Version: 4.2.1
Vcs-Git: https://salsa.debian.org/ceph-team/ceph.git
Vcs-Browser: https://salsa.debian.org/ceph-team/ceph
Homepage: http://ceph.com/
Package: ceph
Architecture: linux-any
Depends:
ceph-mgr (= ${binary:Version}),
ceph-mon (= ${binary:Version}),
ceph-osd (= ${binary:Version}),
${misc:Depends},
Suggests:
ceph-mds (= ${binary:Version}),
Description: distributed storage and file system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
Package: ceph-base
Architecture: linux-any
Depends:
binutils,
ceph-common (= ${binary:Version}),
cryptsetup-bin | cryptsetup,
gdisk,
hdparm | sdparm,
parted,
uuid-runtime,
xfsprogs,
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Pre-Depends:
${misc:Pre-Depends},
Breaks:
ceph (<< 10.2.2-0ubuntu2~),
ceph-common (<< 9.2.0-0~),
ceph-test (<< 12.2.8+dfsg1-1~),
python-ceph (<< 0.94.1-1~),
Replaces:
ceph (<< 12.2.8+dfsg1-1~),
ceph-common (<< 9.2.0-0~),
ceph-test (<< 12.2.8+dfsg1-1~),
python-ceph (<< 0.94.1-1~),
Recommends:
ceph-mds (= ${binary:Version}),
chrony | time-daemon | ntp,
librados2 (= ${binary:Version}),
librbd1 (= ${binary:Version}),
Suggests:
btrfs-tools,
logrotate,
Description: common ceph daemon libraries and management tools
Ceph is a distributed storage system designed to provide excellent
performance, reliability, and scalability.
.
This package contains the libraries and management tools that are common among
the Ceph server daemons (ceph-mon, ceph-mgr, ceph-osd, ceph-mds). These tools
are necessary for creating, running, and administering a Ceph storage cluster.
Package: ceph-common
Architecture: linux-any
Depends:
librbd1 (= ${binary:Version}),
python3-cephfs (= ${binary:Version}),
python3-prettytable,
python3-rados (= ${binary:Version}),
python3-rbd (= ${binary:Version}),
python3-requests,
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Conflicts:
ceph-client-tools,
Breaks:
ceph (<< 9.2.0-0~),
ceph-base (<< 12.2.10+dfsg1-1~),
ceph-fs-common (<< 12.2.10+dfsg1-1~),
ceph-mds (<< 14.2.5-3~),
ceph-test (<< 9.2.0-0~),
librbd1 (<< 0.94.1-1~),
python-ceph (<< 0.94.1-1~),
radosgw (<< 12.0.3-0~),
Replaces:
ceph (<< 9.2.0-0~),
ceph-client-tools,
ceph-fs-common (<< 12.2.8+dfsg1-1~),
ceph-mds (<< 14.2.5-3~),
ceph-test (<< 9.2.0-1~),
librbd1 (<< 0.94.1-1~),
python-ceph (<< 0.94.1-1~),
radosgw (<< 12.0.3-0~),
Suggests:
ceph,
ceph-mds,
Description: common utilities to mount and interact with a ceph storage cluster
Ceph is a distributed storage and file system designed to provide
excellent performance, reliability, and scalability. This is a collection
of common tools that allow one to interact with and administer a Ceph cluster.
Package: ceph-fuse
Architecture: amd64
Depends:
python3,
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Recommends:
fuse,
Description: FUSE-based client for the Ceph distributed file system
Ceph is a distributed network file system designed to provide
excellent performance, reliability, and scalability. This is a
FUSE-based client that allows one to mount a Ceph file system without
root privileges.
.
Because the FUSE-based client has certain inherent performance
limitations, it is recommended that the native Linux kernel client
be used if possible. If it is not practical to load a kernel module
(insufficient privileges, older kernel, etc.), then the FUSE client will
do.
Package: ceph-mds
Architecture: linux-any
Depends:
ceph,
${misc:Depends},
${shlibs:Depends},
Recommends:
ceph-common,
ceph-fuse,
libcephfs2,
Breaks:
ceph (<< 0.67.3-1),
Replaces:
ceph (<< 0.67.3-1),
Description: metadata server for the ceph distributed file system
Ceph is a distributed storage and network file system designed to
provide excellent performance, reliability, and scalability.
.
This package contains the metadata server daemon, which is used to
create a distributed file system on top of the ceph storage cluster.
Package: ceph-mgr
Architecture: linux-any
Depends:
ceph-base (= ${binary:Version}),
python3-bcrypt,
python3-cherrypy3,
python3-jwt,
python3-openssl,
python3-pecan,
python3-werkzeug,
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Replaces:
ceph (<< 0.93-417),
Breaks:
ceph (<< 0.93-417),
Suggests:
ceph-mgr-dashboard,
ceph-mgr-diskprediction-cloud,
ceph-mgr-diskprediction-local,
ceph-mgr-rook,
ceph-mgr-ssh,
Description: manager for the ceph distributed file system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the manager daemon, which is used to expose high
level management and monitoring functionality.
Package: ceph-mgr-dashboard
Architecture: all
Depends:
ceph-mgr (>= ${binary:Version}),
python3-bcrypt,
python3-cherrypy3,
python3-distutils,
python3-jwt,
python3-openssl,
python3-routes,
python3-werkzeug,
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Description: dashboard plugin for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package provides a ceph-mgr plugin, providing a web-based
application to monitor and manage many aspects of a Ceph cluster and
related components.
.
See the Dashboard documentation at http://docs.ceph.com/ for details
and a detailed feature overview.
Package: ceph-mgr-diskprediction-cloud
Architecture: all
Depends:
ceph-mgr (>= ${binary:Version}),
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Description: diskprediction-cloud plugin for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the diskprediction_cloud plugin for the ceph-mgr
daemon, which helps predict disk failures.
Package: ceph-mgr-diskprediction-local
Architecture: all
Depends:
ceph-mgr (>= ${binary:Version}),
python3-numpy,
python3-scipy,
python3-sklearn,
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Description: diskprediction-local plugin for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the diskprediction_local plugin for the ceph-mgr
daemon, which helps predict disk failures.
Package: ceph-mgr-k8sevents
Architecture: all
Depends:
ceph-mgr (>= ${binary:Version}),
python3-kubernetes,
${misc:Depends},
${python3:Depends},
Description: kubernetes events plugin for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the k8sevents plugin, to allow ceph-mgr to send
ceph related events to the kubernetes events API, and track all events
that occur within the rook-ceph namespace.
Package: ceph-mgr-rook
Architecture: all
Depends:
ceph-mgr (>= ${binary:Version}),
python3-six,
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Description: rook plugin for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the rook plugin for ceph-mgr's orchestration
functionality, to allow ceph-mgr to install and configure ceph using
Rook.
Package: ceph-mgr-ssh
Architecture: all
Depends:
ceph-mgr (>= ${binary:Version}),
python3-six,
${misc:Depends},
${python3:Depends},
Description: ssh orchestrator plugin for ceph-mgr
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the SSH plugin for ceph-mgr's orchestration
functionality, to allow ceph-mgr to perform orchestration functions
over a standard SSH connection.
Package: ceph-mon
Architecture: linux-any
Depends:
ceph-base (= ${binary:Version}),
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Replaces:
ceph (<< 10.2.2-0ubuntu2~),
Breaks:
ceph (<< 10.2.2-0ubuntu2~),
Description: monitor server for the ceph storage system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the cluster monitor daemon for the Ceph storage
system. One or more instances of ceph-mon form a Paxos part-time parliament
cluster that provides extremely reliable and durable storage of cluster
membership, configuration, and state.
Package: ceph-osd
Architecture: linux-any
Depends:
ceph-base (= ${binary:Version}),
lvm2,
smartmontools (>= 7.0),
sudo,
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Suggests:
nvme-cli,
Pre-Depends:
ceph-common (= ${binary:Version}),
Replaces:
ceph (<< 10.2.2-0ubuntu2~),
ceph-test (<< 12.2.8+dfsg1-1~),
Breaks:
ceph (<< 10.2.2-0ubuntu2~),
ceph-test (<< 12.2.8+dfsg1-1~),
Description: OSD server for the ceph storage system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains the Object Storage Daemon for the Ceph storage system.
It is responsible for storing objects on a local file system
and providing access to them over the network.
Package: ceph-resource-agents
Architecture: all
Priority: optional
Recommends:
pacemaker,
Depends:
ceph (>= ${binary:Version}),
resource-agents,
${misc:Depends},
Description: OCF-compliant resource agents for Ceph
Ceph is a distributed storage and network file system designed to provide
excellent performance, reliability, and scalability.
.
This package contains the resource agents (RAs) which integrate
Ceph with OCF-compliant cluster resource managers,
such as Pacemaker.
Package: cephfs-shell
Architecture: all
Depends:
${misc:Depends},
${python3:Depends},
Description: interactive shell for the Ceph distributed file system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage. This is an interactive tool that
allows accessing a Ceph file system without mounting it by providing
a nice pseudo-shell which works like an FTP client.
.
This package contains a CLI for interacting with the CephFS.
Package: libcephfs-dev
Architecture: linux-any
Section: libdevel
Depends:
libcephfs2 (= ${binary:Version}),
${misc:Depends},
Conflicts:
libceph-dev,
libceph1-dev,
libcephfs2-dev,
Replaces:
libceph-dev,
libceph1-dev,
libcephfs2-dev,
Description: Ceph distributed file system client library (development files)
Ceph is a distributed network file system designed to provide
excellent performance, reliability, and scalability. This is a
shared library allowing applications to access a Ceph distributed
file system via a POSIX-like interface.
.
This package contains development files needed for building applications that
link against libcephfs2.
Package: libcephfs-java
Architecture: all
Section: java
Depends:
libcephfs-jni (>= ${binary:Version}),
${java:Depends},
${misc:Depends},
Description: Java library for the Ceph File System
Ceph is a distributed storage system designed to provide excellent
performance, reliability, and scalability.
.
This package contains the Java library for interacting with the Ceph
File System.
Package: libcephfs-jni
Architecture: linux-any
Section: libs
Depends:
libcephfs2 (= ${binary:Version}),
${misc:Depends},
${shlibs:Depends},
Description: Java Native Interface library for CephFS Java bindings
Ceph is a distributed storage system designed to provide excellent
performance, reliability, and scalability.
.
This package contains the Java Native Interface library for interacting
with the Ceph File System.
Package: libcephfs2
Architecture: linux-any
Section: libs
Conflicts:
libceph,
libceph1,
libcephfs,
Replaces:
libceph,
libceph1,
libcephfs,
Depends:
${misc:Depends},
${shlibs:Depends},
Pre-Depends:
${misc:Pre-Depends},
Description: Ceph distributed file system client library
Ceph is a distributed network file system designed to provide
excellent performance, reliability, and scalability. This is a
shared library allowing applications to access a Ceph distributed
file system via a POSIX-like interface.
Package: librados-dev
Architecture: linux-any
Section: libdevel
Depends:
librados2 (= ${binary:Version}),
${misc:Depends},
${shlibs:Depends},
Conflicts:
librados1-dev,
librados2-dev,
Replaces:
librados1-dev,
librados2-dev,
Description: RADOS distributed object store client library (development files)
RADOS is a reliable, autonomic distributed object storage cluster
developed as part of the Ceph distributed storage system. This is a
shared library allowing applications to access the distributed object
store using a simple file-like interface.
.
This package contains development files needed
for building applications that link against librados2.
Package: librados2
Architecture: linux-any
Section: libs
Conflicts:
librados,
librados1,
Replaces:
librados,
librados1,
Depends:
${misc:Depends},
${shlibs:Depends},
Pre-Depends:
${misc:Pre-Depends},
Description: RADOS distributed object store client library
RADOS is a reliable, autonomic distributed object storage cluster
developed as part of the Ceph distributed storage system. This is a
shared library allowing applications to access the distributed object
store using a simple file-like interface.
Package: libradospp-dev
Architecture: linux-any
Section: libdevel
Depends:
librados-dev (= ${binary:Version}),
${misc:Depends},
${shlibs:Depends},
Description: RADOS distributed object store client C++ library (development files)
RADOS is a reliable, autonomic distributed object storage cluster
developed as part of the Ceph distributed storage system. This is a
shared library allowing applications to access the distributed object
store using a simple file-like interface.
.
This package contains development files needed for building C++ applications that
link against librados.
Package: libradosstriper-dev
Architecture: linux-any
Section: libdevel
Depends:
libradosstriper1 (= ${binary:Version}),
${misc:Depends},
Description: RADOS striping interface (development files)
libradosstriper is a striping interface built on top of the rados
library, allowing one to stripe bigger objects onto several standard
rados objects using an interface very similar to the rados one.
.
This package contains development files needed for building applications that
link against libradosstriper.
Package: libradosstriper1
Architecture: linux-any
Section: libs
Depends:
librados2 (= ${binary:Version}),
${misc:Depends},
${shlibs:Depends},
Description: RADOS striping interface
A striping interface built on top of the rados library, allowing one
to stripe bigger objects onto several standard rados objects using
an interface very similar to the rados one.
Package: librbd-dev
Architecture: linux-any
Section: libdevel
Depends:
librados-dev,
librbd1 (= ${binary:Version}),
${misc:Depends},
Conflicts:
librbd1-dev,
Replaces:
librbd1-dev,
Description: RADOS block device client library (development files)
RBD is a block device striped across multiple distributed objects
in RADOS, a reliable, autonomic distributed object storage cluster
developed as part of the Ceph distributed storage system. This is a
shared library allowing applications to manage these block devices.
.
This package contains development files needed for building applications that
link against librbd1.
Package: librbd1
Architecture: linux-any
Section: libs
Depends:
librados2 (= ${binary:Version}),
${misc:Depends},
${shlibs:Depends},
Pre-Depends:
${misc:Pre-Depends},
Description: RADOS block device client library
RBD is a block device striped across multiple distributed objects
in RADOS, a reliable, autonomic distributed object storage cluster
developed as part of the Ceph distributed storage system. This is a
shared library allowing applications to manage these block devices.
Package: librgw-dev
Architecture: linux-any
Section: libdevel
Depends:
librados-dev (= ${binary:Version}),
librgw2 (= ${binary:Version}),
${misc:Depends},
Description: RADOS Gateway client library (development files)
RADOS is a distributed object store used by the Ceph distributed
storage system. This package provides a REST gateway to the
object store that aims to implement a superset of Amazon's S3
service.
.
This package contains development files needed for building applications
that link against librgw2.
Package: librgw2
Architecture: linux-any
Section: libs
Depends:
librados2 (= ${binary:Version}),
${misc:Depends},
${shlibs:Depends},
Description: RADOS Gateway client library
RADOS is a distributed object store used by the Ceph distributed
storage system. This package provides a REST gateway to the
object store that aims to implement a superset of Amazon's S3
service.
.
This package contains the library interface and headers only.
Package: python3-ceph
Architecture: all
Section: python
Depends:
python3-cephfs (<< ${source:Version}.1~),
python3-cephfs (>= ${source:Version}),
python3-rados (<< ${source:Version}.1~),
python3-rados (>= ${source:Version}),
python3-rbd (<< ${source:Version}.1~),
python3-rbd (>= ${source:Version}),
python3-rgw (<< ${source:Version}.1~),
python3-rgw (>= ${source:Version}),
${misc:Depends},
Description: Meta-package for all Python 3.x modules for the Ceph libraries
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package is a metapackage for all Ceph Python 3.x bindings.
Package: python3-ceph-argparse
Architecture: linux-any
Section: python
Depends:
${misc:Depends},
${python3:Depends},
Breaks:
ceph-common (<< 14.2.1-0~),
Replaces:
ceph-common (<< 14.2.1-0~),
Description: Python 3 utility libraries for Ceph CLI
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains types and routines for Python 3 used by the
Ceph CLI as well as the RESTful interface.
Package: python3-cephfs
Architecture: linux-any
Section: python
Depends:
libcephfs2 (= ${binary:Version}),
python3-ceph-argparse (= ${binary:Version}),
python3-rados (= ${binary:Version}),
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Description: Python 3 libraries for the Ceph libcephfs library
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains Python 3 libraries for interacting with Ceph's
CephFS file system client library.
Package: python3-rados
Architecture: linux-any
Section: python
Depends:
librados2 (= ${binary:Version}),
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Description: Python 3 libraries for the Ceph librados library
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains Python 3 libraries for interacting with Ceph's
RADOS object storage.
Package: python3-rbd
Architecture: linux-any
Section: python
Depends:
librbd1 (>= ${binary:Version}),
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Description: Python 3 libraries for the Ceph librbd library
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains Python 3 libraries for interacting with Ceph's
RBD block device library.
Package: python3-rgw
Architecture: linux-any
Section: python
Depends:
librgw2 (>= ${binary:Version}),
python3-rados (= ${binary:Version}),
${misc:Depends},
${python3:Depends},
${shlibs:Depends},
Description: Python 3 libraries for the Ceph librgw library
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage.
.
This package contains Python 3 libraries for interacting with Ceph's
RGW library.
Package: rados-objclass-dev
Architecture: linux-any
Section: libdevel
Depends:
librados-dev (= ${binary:Version}),
${misc:Depends},
Description: RADOS object class development kit
This package contains development files needed for building
RADOS object class plugins.
Package: radosgw
Architecture: linux-any
Depends:
ceph-common (= ${binary:Version}),
librgw2 (= ${binary:Version}),
${misc:Depends},
${shlibs:Depends},
Suggests:
logrotate,
Description: REST gateway for RADOS distributed object store
RADOS is a distributed object store used by the Ceph distributed
storage system. This package provides a REST gateway to the
object store that aims to implement a superset of Amazon's S3
service as well as the OpenStack Object Storage ("Swift") API.
.
This package contains the proxy daemon and related tools only.
Package: rbd-fuse
Architecture: linux-any
Depends:
${misc:Depends},
${shlibs:Depends},
Recommends:
fuse,
Description: FUSE-based rbd client for the Ceph distributed file system
Ceph is a distributed network file system designed to provide
excellent performance, reliability, and scalability. This is a
FUSE-based client that allows one to map Ceph rbd images as files.
Package: rbd-mirror
Architecture: linux-any
Depends:
ceph-common (= ${binary:Version}),
librados2 (= ${binary:Version}),
${misc:Depends},
${shlibs:Depends},
Description: Ceph daemon for mirroring RBD images
Ceph is a distributed storage system designed to provide excellent
performance, reliability, and scalability.
.
This package provides a daemon for mirroring RBD images between
Ceph clusters, streaming changes asynchronously.
Package: rbd-nbd
Architecture: linux-any
Depends:
${misc:Depends},
${shlibs:Depends},
Description: NBD-based rbd client for the Ceph distributed file system
Ceph is a massively scalable, open-source, distributed
storage system that runs on commodity hardware and delivers object,
block and file system storage. This is an
NBD-based client that allows one to map Ceph rbd images as local
block devices.
# Added in from centos/ceph.spec
Package: ceph-grafana-dashboards
Architecture: linux-any
Depends:
${misc:Depends},
${shlibs:Depends},
Description: Set of Grafana dashboards for monitoring purposes
This package provides a set of Grafana dashboards for monitoring of
Ceph clusters. The dashboards require a Prometheus server that
collects data from the Ceph Manager "prometheus" module and the
Prometheus "node_exporter". The dashboards are designed to be
integrated with the Ceph Manager Dashboard web UI.
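Each "Package:" stanza above is built into a separate binary package. A quick way to list the declared binary packages for a sanity check (a minimal sketch; assumes debhelper and dctrl-tools are installed and the commands are run from the top of the source tree):

## Let debhelper enumerate the binary packages declared in debian/control
# dh_listpackages

## Alternatively, print the Package field of every stanza
# grep-dctrl -n -s Package '' debian/control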

@ -0,0 +1,977 @@
Format-Specification: http://anonscm.debian.org/viewvc/dep/web/deps/dep5/copyright-format.xml?revision=279&view=markup
Name: ceph
Maintainer: Sage Weil <sage@newdream.net>
Source: http://ceph.com/
Files: *
Copyright: 2004-2014 Sage Weil <sage@newdream.net>
2004-2014 Inktank <info@inktank.com>
Inktank, Inc
Inktank Storage, Inc.
2012-2014 Red Hat <contact@redhat.com>
2013-2014 Cloudwatt <libre.licensing@cloudwatt.com>
2013 CohortFS, LLC
2004-2011 Dreamhost
2013 eNovance SAS <licensing@enovance.com>
2014 Adam Crume <adamcrume@gmail.com>
2012 Florian Haas, hastexo
2010 Greg Farnum <gregf@hq.newdream.net>
2014 John Spray <john.spray@inktank.com>
2004-2012 New Dream Network
2014 Sebastien Ponce <sebastien.ponce@cern.ch>
2011 Stanislav Sedov <stas@FreeBSD.org>
2013-2014 UnitedStack <haomai@unitedstack.com>
2011 Wido den Hollander <wido@widodh.nl>
License: LGPL2.1 (see COPYING-LGPL2.1)
Files: cmake/modules/FindLTTngUST.cmake
Copyright:
Copyright 2016 Kitware, Inc.
Copyright 2016 Philippe Proulx <pproulx@efficios.com>
License: BSD 3-clause
Files: doc/*
Copyright: (c) 2010-2012 New Dream Network and contributors
License: Creative Commons Attribution Share Alike 3.0 (CC-BY-SA-3.0)
Files: bin/git-archive-all.sh
License: GPL3
Files: src/mount/canonicalize.c
Copyright: Copyright (C) 1993 Rick Sladkey <jrs@world.std.com>
License: LGPL2 or later (see COPYING-GPL2)
Files: src/os/btrfs_ioctl.h
Copyright: Copyright (C) 2007 Oracle. All rights reserved.
License: GPL2 (see COPYING-GPL2)
Files: src/include/ceph_hash.cc
Copyright: None
License: Public domain
Files: src/common/bloom_filter.hpp
Copyright: Copyright (C) 2000 Arash Partow <arash@partow.net>
License: Boost Software License, Version 1.0
Files: src/common/crc32c_intel*:
Copyright:
Copyright 2012-2013 Intel Corporation All Rights Reserved.
License: BSD 3-clause
Files: src/common/sctp_crc32.c:
Copyright:
Copyright (c) 2001-2007, by Cisco Systems, Inc. All rights reserved.
Copyright (c) 2004-2006 Intel Corporation - All Rights Reserved
License:
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
a) Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
b) Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the distribution.
c) Neither the name of Cisco Systems, Inc. nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
Files: src/json_spirit
Copyright:
Copyright John W. Wilkinson 2007 - 2011
License:
The MIT License
Copyright (c) 2007 - 2010 John W. Wilkinson
Permission is hereby granted, free of charge, to any person
obtaining a copy of this software and associated documentation
files (the "Software"), to deal in the Software without
restriction, including without limitation the rights to use,
copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following
conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
Files: src/test/common/Throttle.cc src/test/filestore/chain_xattr.cc
Copyright: Copyright (C) 2013 Cloudwatt <libre.licensing@cloudwatt.com>
License: LGPL2.1 or later
Files: src/osd/ErasureCodePluginJerasure/*.{c,h}
Copyright: Copyright (c) 2011, James S. Plank <plank@cs.utk.edu>
License:
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
- Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
- Neither the name of the University of Tennessee nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS
OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
Files: qa/workunits/erasure-code/jquery.js
Copyright: 2012 jQuery Foundation and other contributors
License: MIT
Files: qa/workunits/erasure-code/jquery.{flot.categories,flot}.js
Copyright: 2007-2014 IOLA and Ole Laursen.
License: MIT
Files: src/include/timegm.h
Copyright: Howard Hinnant
2010-2011 Vicente J. Botet Escriba
License: Boost Software License, Version 1.0
Files: src/pybind/mgr/diskprediction_local/models/*
Copyright: None
License: Public domain
Files: src/ceph-volume/plugin/zfs/*
Copyright: 2018, Willem Jan Withagen
License: BSD 3-clause
Files: src/test/perf_local.cc
Copyright:
(c) 2011-2014 Stanford University
(c) 2011 Facebook
License:
The MIT License
Comment: -----------------------------------------------
Content above is taken from upstream's COPYING file.
Unfortunately it is incomplete. Debian/Ubuntu packaging
findings/additions are below.
-------------------------------------------------------
Files: src/erasure-code/jerasure/ErasureCode*
src/erasure-code/ErasureCode*
src/erasure-code/isa/*
src/include/str_map.h
src/test/common/test_str_map.cc
src/test/erasure-code/*
src/test/rgw/test_rgw_manifest.cc
Copyright: 2014 CERN/Switzerland
2013-2014 Cloudwatt <libre.licensing@cloudwatt.com>
2014 Red Hat <contact@redhat.com>
2013 eNovance SAS <licensing@enovance.com>
License: LGPL-2.1+
Files: src/erasure-code/isa/isa-l/erasure_code/*
Copyright: 2011-2014 Intel Corporation
License: BSD-3-clause
Files: src/rocksdb/*
Copyright: 2004-2013 Facebook, Inc.
2011 The LevelDB Authors
2009 Google Inc.
License: BSD-3-clause
Comment:
Additional Grant of Patent Rights
.
“Software” means the rocksdb software distributed by Facebook, Inc.
.
Facebook hereby grants you a perpetual, worldwide, royalty-free,
non-exclusive, irrevocable (subject to the termination provision below)
license under any rights in any patent claims owned by Facebook, to make,
have made, use, sell, offer to sell, import, and otherwise transfer the
Software. For avoidance of doubt, no license is granted under Facebook's
rights in any patent claims that are infringed by (i) modifications to the
Software made by you or a third party, or (ii) the Software in combination
with any software or other technology provided by you or a third party.
.
The license granted hereunder will terminate, automatically and without
notice, for anyone that makes any claim (including by filing any lawsuit,
assertion or other action) alleging (a) direct, indirect, or contributory
infringement or inducement to infringe any patent: (i) by Facebook or any
of its subsidiaries or affiliates, whether or not such claim is related
to the Software, (ii) by any party if such claim arises in whole or in
part from any software, product or service of Facebook or any of its
subsidiaries or affiliates, whether or not such claim is related to the
Software, or (iii) by any party relating to the Software; or (b) that
any right in any patent claim of Facebook is invalid or unenforceable.
Files: src/rocksdb/util/xxhash.*
Copyright: 2012-2014, Yann Collet.
License: BSD-2-clause
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
.
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Files: src/mount/canonicalize.c
src/test/common/test_config.cc
src/test/crush/TestCrushWrapper.cc
src/test/common/Throttle.cc
src/test/objectstore/chain_xattr.cc
src/test/mon/mon-test-helpers.sh
src/test/objectstore/chain_xattr.cc
src/test/osd/osd-test-helpers.sh
src/ceph-disk
src/stop.sh
Copyright: 1993 Rick Sladkey <jrs@world.std.com>
2013 Inktank <info@inktank.com>
2013-2014 Cloudwatt <libre.licensing@cloudwatt.com>
License: LGPL-2+
Files: src/os/btrfs_ioctl.h
src/test/mon/PGMap.cc
Copyright: 2007 Oracle. All rights reserved.
2014 Inktank <info@inktank.com>
License: GPL-2
Files: src/common/ceph_hash.cc
Copyright: 1995-1997 Robert J. Jenkins Jr.
License: public-domain
This file uses Robert Jenkin's hash function as detailed at:
.
http://burtleburtle.net/bob/hash/evahash.html
.
This is in the public domain.
Files: src/common/bloom_filter.hpp
Copyright: 2000 Arash Partow
License: Boost-Software-License-1.0
Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by
this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
Software, and to permit third-parties to whom the Software is furnished to
do so, all subject to the following:
.
The copyright notices in the Software and this entire statement,
including the above license grant, this restriction and the following
disclaimer, must be included in all copies of the Software, in whole or
in part, and all derivative works of the Software, unless such copies
or derivative works are solely in the form of machine-executable object
code generated by a source language processor.
.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
Files: src/common/crc32c_intel*
Copyright: 2012-2013 Intel Corporation All Rights Reserved.
License: BSD-3-clause
Files: src/common/sctp_crc32.c
Copyright: 2001-2007, by Cisco Systems, Inc. All rights reserved,
2004-2006 Intel Corporation - All Rights Reserved
License: BSD-3-clause
Files: src/erasure-code/jerasure/gf-complete/*/*
Copyright: 2013 James S. Plank
Ethan L. Miller
Kevin M. Greenan
Benjamin A. Arnold
John A. Burnum
Adam W. Disney
Allen C. McBride
License: BSD-3-clause
Comment:
https://bitbucket.org/jimplank/gf-complete
Files: src/erasure-code/jerasure/jerasure/*/*
Copyright: 2011-2013 James S. Plank <plank@cs.utk.edu>
2013 Kevin Greenan
License: BSD-3-clause
Files: src/gtest/*
Copyright: 2008, Google Inc.
License: BSD-3-clause
Files: src/civetweb/*
Copyright: 2004-2013 Sergey Lyubka
2013-2014 the Civetweb developers
License: Expat
Files: src/json_spirit/*
Copyright: 2007-2011, John W. Wilkinson
License: Expat
Files: src/java/native/ScopedLocalRef.h
src/java/native/JniConstants.*
Copyright: 2010 The Android Open Source Project
License: Apache-2.0
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
.
http://www.apache.org/licenses/LICENSE-2.0
.
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
.
The complete text of the Apache License, Version 2.0
can be found in "/usr/share/common-licenses/Apache-2.0".
Files: src/libs3/*
Copyright: 2008 Bryan Ischo <bryan@ischo.com>
License: GPL-3/OpenSSL
libs3 is free software: you can redistribute it and/or modify it under the
terms of the GNU General Public License as published by the Free Software
Foundation, version 3 of the License.
.
In addition, as a special exception, the copyright holders give
permission to link the code of this library and its programs with the
OpenSSL library, and distribute linked combinations including the two.
.
libs3 is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more
details.
.
The complete text of the GNU General Public License version 3
can be found in `/usr/share/common-licenses/GPL-3' file.
Files: src/mount/mtab.c
Copyright: util-linux-ng AUTHORS
License: GPL-2+
Comment:
"mount/fstab.c" from line 559:
https://git.kernel.org/cgit/utils/util-linux/util-linux.git/tree/mount-deprecated/fstab.c?h=v2.22#n559
https://git.kernel.org/cgit/utils/util-linux/util-linux.git/tree/README.licensing
Files: src/test/librbd/fsx.c
Copyright: 1991, NeXT Computer, Inc.
License: APSL-2.0
The contents of this file constitute Original Code as defined in and
are subject to the Apple Public Source License Version 2.0 (the
"License"). You may not use this file except in compliance with the
License. Please obtain a copy of the License at
http://www.opensource.apple.com/apsl/ and read it before using this file.
.
This Original Code and all software distributed under the License are
distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESS OR IMPLIED, AND APPLE HEREBY DISCLAIMS ALL SUCH WARRANTIES,
INCLUDING WITHOUT LIMITATION, ANY WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT. Please see the
License for the specific language governing rights and limitations
under the License.
Comment:
http://codemonkey.org.uk/projects/fsx/
http://codemonkey.org.uk/projects/fsx/fsx-macosforge/fsx.c
Files: man/*
debian/man/*
Copyright: 2010-2014, Inktank Storage, Inc. and contributors.
License: CC-BY-SA-3.0
Files: debian/missing-sources/bootstrap.js
Copyright: 2011-2015 Twitter, Inc
License: MIT
Files: debian/missing-sources/two.js
Copyright: 2012 - 2017 jonobr1 / http://jonobr1.com
License: MIT
Files: debian/*
Copyright: 2010 Sage Weil <sage@newdream.net>
2010 Canonical, Ltd.
2011-2013 László Böszörményi (GCS) <gcs@debian.org>
2013-2014 James Page <james.page@ubuntu.com>
2014 Dmitry Smirnov <onlyjob@debian.org>
2019 Bernd Zeimetz <bzed@debian.org>
License: LGPL-2.1
License: GPL-2
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
.
On Debian systems, the complete text of the GNU General Public License
version 2 can be found in `/usr/share/common-licenses/GPL-2' file.
License: GPL-2+
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 2 of the License, or
(at your option) any later version.
.
This package is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
.
On Debian systems, the complete text of the GNU General Public License
version 2 can be found in `/usr/share/common-licenses/GPL-2'.
License: LGPL-2.1
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License version 2.1 as published by the Free Software Foundation.
.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
.
On Debian systems, the complete text of the GNU Lesser General
Public License can be found in `/usr/share/common-licenses/LGPL-2.1'.
License: LGPL-2.1+
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License as published by the Free Software Foundation; either
version 2.1 of the License, or (at your option) any later version.
.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
.
On Debian systems, the complete text of the GNU Lesser General
Public License can be found in `/usr/share/common-licenses/LGPL-2.1'.
License: LGPL-2+
This library is free software; you can redistribute it and/or
modify it under the terms of the GNU Lesser General Public
License version 2 (or later) as published by the Free Software Foundation.
.
This library is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
Lesser General Public License for more details.
.
You should have received a copy of the GNU Lesser General Public
License along with this library; if not, write to the Free Software
Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
.
On Debian systems, the complete text of the GNU Lesser General
Public License 2 can be found in `/usr/share/common-licenses/LGPL-2'.
License: Expat
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
.
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
Comment:
This license also known as "MIT" however FSF consider "MIT" labelling
ambiguous and copyright-format specification recommend to label such license
as "Expat".
License: CC-BY-SA-3.0
Creative Commons Attribution-ShareAlike 3.0 Unported
CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE
LEGAL SERVICES. DISTRIBUTION OF THIS LICENSE DOES NOT CREATE AN
ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION
ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE
INFORMATION PROVIDED, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM
ITS USE.
License
THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE
COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY
COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS
AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.
BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE
TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY
BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS
CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND
CONDITIONS.
1. Definitions
a. "Adaptation" means a work based upon the Work, or upon the Work and
other pre-existing works, such as a translation, adaptation, derivative
work, arrangement of music or other alterations of a literary or
artistic work, or phonogram or performance and includes cinematographic
adaptations or any other form in which the Work may be recast,
transformed, or adapted including in any form recognizably derived from
the original, except that a work that constitutes a Collection will not
be considered an Adaptation for the purpose of this License. For the
avoidance of doubt, where the Work is a musical work, performance or
phonogram, the synchronization of the Work in timed-relation with a
moving image ("synching") will be considered an Adaptation for the
purpose of this License.
b. "Collection" means a collection of literary or artistic works, such
as encyclopedias and anthologies, or performances, phonograms or
broadcasts, or other works or subject matter other than works listed in
Section 1(f) below, which, by reason of the selection and arrangement of
their contents, constitute intellectual creations, in which the Work is
included in its entirety in unmodified form along with one or more other
contributions, each constituting separate and independent works in
themselves, which together are assembled into a collective whole. A work
that constitutes a Collection will not be considered an Adaptation (as
defined below) for the purposes of this License.
c. "Creative Commons Compatible License" means a license that is listed
at http://creativecommons.org/compatiblelicenses that has been approved
by Creative Commons as being essentially equivalent to this License,
including, at a minimum, because that license: (i) contains terms that
have the same purpose, meaning and effect as the License Elements of
this License; and, (ii) explicitly permits the relicensing of
adaptations of works made available under that license under this
License or a Creative Commons jurisdiction license with the same License
Elements as this License.
d. "Distribute" means to make available to the public the original and
copies of the Work or Adaptation, as appropriate, through sale or other
transfer of ownership.
e. "License Elements" means the following high-level license attributes
as selected by Licensor and indicated in the title of this License:
Attribution, ShareAlike.
f. "Licensor" means the individual, individuals, entity or entities that
offer(s) the Work under the terms of this License.
g. "Original Author" means, in the case of a literary or artistic work,
the individual, individuals, entity or entities who created the Work or
if no individual or entity can be identified, the publisher; and in
addition (i) in the case of a performance the actors, singers,
musicians, dancers, and other persons who act, sing, deliver, declaim,
play in, interpret or otherwise perform literary or artistic works or
expressions of folklore; (ii) in the case of a phonogram the producer
being the person or legal entity who first fixes the sounds of a
performance or other sounds; and, (iii) in the case of broadcasts, the
organization that transmits the broadcast.
h. "Work" means the literary and/or artistic work offered under the
terms of this License including without limitation any production in the
literary, scientific and artistic domain, whatever may be the mode or
form of its expression including digital form, such as a book, pamphlet
and other writing; a lecture, address, sermon or other work of the same
nature; a dramatic or dramatico-musical work; a choreographic work or
entertainment in dumb show; a musical composition with or without words;
a cinematographic work to which are assimilated works expressed by a
process analogous to cinematography; a work of drawing, painting,
architecture, sculpture, engraving or lithography; a photographic work
to which are assimilated works expressed by a process analogous to
photography; a work of applied art; an illustration, map, plan, sketch
or three-dimensional work relative to geography, topography,
architecture or science; a performance; a broadcast; a phonogram; a
compilation of data to the extent it is protected as a copyrightable
work; or a work performed by a variety or circus performer to the extent
it is not otherwise considered a literary or artistic work.
i. "You" means an individual or entity exercising rights under this
License who has not previously violated the terms of this License with
respect to the Work, or who has received express permission from the
Licensor to exercise rights under this License despite a previous
violation.
j. "Publicly Perform" means to perform public recitations of the Work
and to communicate to the public those public recitations, by any means
or process, including by wire or wireless means or public digital
performances; to make available to the public Works in such a way that
members of the public may access these Works from a place and at a place
individually chosen by them; to perform the Work to the public by any
means or process and the communication to the public of the performances
of the Work, including by public digital performance; to broadcast and
rebroadcast the Work by any means including signs, sounds or images.
k. "Reproduce" means to make copies of the Work by any means including
without limitation by sound or visual recordings and the right of
fixation and reproducing fixations of the Work, including storage of a
protected performance or phonogram in digital form or other electronic
medium.
2. Fair Dealing Rights. Nothing in this License is intended to reduce,
limit, or restrict any uses free from copyright or rights arising from
limitations or exceptions that are provided for in connection with the
copyright protection under copyright law or other applicable laws.
3. License Grant. Subject to the terms and conditions of this License,
Licensor hereby grants You a worldwide, royalty-free, non-exclusive,
perpetual (for the duration of the applicable copyright) license to
exercise the rights in the Work as stated below:
a. to Reproduce the Work, to incorporate the Work into one or more
Collections, and to Reproduce the Work as incorporated in the
Collections;
b. to create and Reproduce Adaptations provided that any such
Adaptation, including any translation in any medium, takes reasonable
steps to clearly label, demarcate or otherwise identify that changes
were made to the original Work. For example, a translation could be
marked "The original work was translated from English to Spanish," or a
modification could indicate "The original work has been modified.";
c. to Distribute and Publicly Perform the Work including as incorporated
in Collections; and,
d. to Distribute and Publicly Perform Adaptations.
e. For the avoidance of doubt:
i. Non-waivable Compulsory License Schemes. In those jurisdictions in
which the right to collect royalties through any statutory or compulsory
licensing scheme cannot be waived, the Licensor reserves the exclusive
right to collect such royalties for any exercise by You of the rights
granted under this License;
ii. Waivable Compulsory License Schemes. In those jurisdictions in which
the right to collect royalties through any statutory or compulsory
licensing scheme can be waived, the Licensor waives the exclusive right
to collect such royalties for any exercise by You of the rights granted
under this License; and,
iii. Voluntary License Schemes. The Licensor waives the right to collect
royalties, whether individually or, in the event that the Licensor is a
member of a collecting society that administers voluntary licensing
schemes, via that society, from any exercise by You of the rights
granted under this License.
The above rights may be exercised in all media and formats whether now
known or hereafter devised. The above rights include the right to make
such modifications as are technically necessary to exercise the rights
in other media and formats. Subject to Section 8(f), all rights not
expressly granted by Licensor are hereby reserved.
4. Restrictions. The license granted in Section 3 above is expressly
made subject to and limited by the following restrictions:
a. You may Distribute or Publicly Perform the Work only under the terms
of this License. You must include a copy of, or the Uniform Resource
Identifier (URI) for, this License with every copy of the Work You
Distribute or Publicly Perform. You may not offer or impose any terms on
the Work that restrict the terms of this License or the ability of the
recipient of the Work to exercise the rights granted to that recipient
under the terms of the License. You may not sublicense the Work. You
must keep intact all notices that refer to this License and to the
disclaimer of warranties with every copy of the Work You Distribute or
Publicly Perform. When You Distribute or Publicly Perform the Work, You
may not impose any effective technological measures on the Work that
restrict the ability of a recipient of the Work from You to exercise the
rights granted to that recipient under the terms of the License. This
Section 4(a) applies to the Work as incorporated in a Collection, but
this does not require the Collection apart from the Work itself to be
made subject to the terms of this License. If You create a Collection,
upon notice from any Licensor You must, to the extent practicable,
remove from the Collection any credit as required by Section 4(c), as
requested. If You create an Adaptation, upon notice from any Licensor
You must, to the extent practicable, remove from the Adaptation any
credit as required by Section 4(c), as requested.
b. You may Distribute or Publicly Perform an Adaptation only under the
terms of: (i) this License; (ii) a later version of this License with
the same License Elements as this License; (iii) a Creative Commons
jurisdiction license (either this or a later license version) that
contains the same License Elements as this License (e.g.,
Attribution-ShareAlike 3.0 US)); (iv) a Creative Commons Compatible
License. If you license the Adaptation under one of the licenses
mentioned in (iv), you must comply with the terms of that license. If
you license the Adaptation under the terms of any of the licenses
mentioned in (i), (ii) or (iii) (the "Applicable License"), you must
comply with the terms of the Applicable License generally and the
following provisions: (I) You must include a copy of, or the URI for,
the Applicable License with every copy of each Adaptation You Distribute
or Publicly Perform; (II) You may not offer or impose any terms on the
Adaptation that restrict the terms of the Applicable License or the
ability of the recipient of the Adaptation to exercise the rights
granted to that recipient under the terms of the Applicable License;
(III) You must keep intact all notices that refer to the Applicable
License and to the disclaimer of warranties with every copy of the Work
as included in the Adaptation You Distribute or Publicly Perform; (IV)
when You Distribute or Publicly Perform the Adaptation, You may not
impose any effective technological measures on the Adaptation that
restrict the ability of a recipient of the Adaptation from You to
exercise the rights granted to that recipient under the terms of the
Applicable License. This Section 4(b) applies to the Adaptation as
incorporated in a Collection, but this does not require the Collection
apart from the Adaptation itself to be made subject to the terms of the
Applicable License.
c. If You Distribute, or Publicly Perform the Work or any Adaptations or
Collections, You must, unless a request has been made pursuant to
Section 4(a), keep intact all copyright notices for the Work and
provide, reasonable to the medium or means You are utilizing: (i) the
name of the Original Author (or pseudonym, if applicable) if supplied,
and/or if the Original Author and/or Licensor designate another party or
parties (e.g., a sponsor institute, publishing entity, journal) for
attribution ("Attribution Parties") in Licensor's copyright notice,
terms of service or by other reasonable means, the name of such party or
parties; (ii) the title of the Work if supplied; (iii) to the extent
reasonably practicable, the URI, if any, that Licensor specifies to be
associated with the Work, unless such URI does not refer to the
copyright notice or licensing information for the Work; and (iv) ,
consistent with Ssection 3(b), in the case of an Adaptation, a credit
identifying the use of the Work in the Adaptation (e.g., "French
translation of the Work by Original Author," or "Screenplay based on
original Work by Original Author"). The credit required by this Section
4(c) may be implemented in any reasonable manner; provided, however,
that in the case of a Adaptation or Collection, at a minimum such credit
will appear, if a credit for all contributing authors of the Adaptation
or Collection appears, then as part of these credits and in a manner at
least as prominent as the credits for the other contributing authors.
For the avoidance of doubt, You may only use the credit required by this
Section for the purpose of attribution in the manner set out above and,
by exercising Your rights under this License, You may not implicitly or
explicitly assert or imply any connection with, sponsorship or
endorsement by the Original Author, Licensor and/or Attribution Parties,
as appropriate, of You or Your use of the Work, without the separate,
express prior written permission of the Original Author, Licensor and/or
Attribution Parties.
d. Except as otherwise agreed in writing by the Licensor or as may be
otherwise permitted by applicable law, if You Reproduce, Distribute or
Publicly Perform the Work either by itself or as part of any Adaptations
or Collections, You must not distort, mutilate, modify or take other
derogatory action in relation to the Work which would be prejudicial to
the Original Author's honor or reputation. Licensor agrees that in those
jurisdictions (e.g. Japan), in which any exercise of the right granted
in Section 3(b) of this License (the right to make Adaptations) would be
deemed to be a distortion, mutilation, modification or other derogatory
action prejudicial to the Original Author's honor and reputation, the
Licensor will waive or not assert, as appropriate, this Section, to the
fullest extent permitted by the applicable national law, to enable You
to reasonably exercise Your right under Section 3(b) of this License
(right to make Adaptations) but not otherwise.
5. Representations, Warranties and Disclaimer
UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING, LICENSOR
OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE,
INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTIBILITY,
FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF
LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OF ABSENCE OF ERRORS,
WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE
EXCLUSION OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE
LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR
ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES
ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS
BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
7. Termination
a. This License and the rights granted hereunder will terminate
automatically upon any breach by You of the terms of this License.
Individuals or entities who have received Adaptations or Collections
from You under this License, however, will not have their licenses
terminated provided such individuals or entities remain in full
compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will
survive any termination of this License.
b. Subject to the above terms and conditions, the license granted here
is perpetual (for the duration of the applicable copyright in the Work).
Notwithstanding the above, Licensor reserves the right to release the
Work under different license terms or to stop distributing the Work at
any time; provided, however that any such election will not serve to
withdraw this License (or any other license that has been, or is
required to be, granted under the terms of this License), and this
License will continue in full force and effect unless terminated as
stated above.
8. Miscellaneous
a. Each time You Distribute or Publicly Perform the Work or a
Collection, the Licensor offers to the recipient a license to the Work
on the same terms and conditions as the license granted to You under
this License.
b. Each time You Distribute or Publicly Perform an Adaptation, Licensor
offers to the recipient a license to the original Work on the same terms
and conditions as the license granted to You under this License.
c. If any provision of this License is invalid or unenforceable under
applicable law, it shall not affect the validity or enforceability of
the remainder of the terms of this License, and without further action
by the parties to this agreement, such provision shall be reformed to
the minimum extent necessary to make such provision valid and
enforceable.
d. No term or provision of this License shall be deemed waived and no
breach consented to unless such waiver or consent shall be in writing
and signed by the party to be charged with such waiver or consent.
e. This License constitutes the entire agreement between the parties
with respect to the Work licensed here. There are no understandings,
agreements or representations with respect to the Work not specified
here. Licensor shall not be bound by any additional provisions that may
appear in any communication from You. This License may not be modified
without the mutual written agreement of the Licensor and You.
f. The rights granted under, and the subject matter referenced, in this
License were drafted utilizing the terminology of the Berne Convention
for the Protection of Literary and Artistic Works (as amended on
September 28, 1979), the Rome Convention of 1961, the WIPO Copyright
Treaty of 1996, the WIPO Performances and Phonograms Treaty of 1996 and
the Universal Copyright Convention (as revised on July 24, 1971). These
rights and subject matter take effect in the relevant jurisdiction in
which the License terms are sought to be enforced according to the
corresponding provisions of the implementation of those treaty
provisions in the applicable national law. If the standard suite of
rights granted under applicable copyright law includes additional rights
not granted under this License, such additional rights are deemed to be
included in the License; this License is not intended to restrict the
license of any rights under applicable law.
Creative Commons Notice
Creative Commons is not a party to this License, and makes no warranty
whatsoever in connection with the Work. Creative Commons will not be
liable to You or any party on any legal theory for any damages
whatsoever, including without limitation any general, special,
incidental or consequential damages arising in connection to this
license. Notwithstanding the foregoing two (2) sentences, if Creative
Commons has expressly identified itself as the Licensor hereunder, it
shall have all rights and obligations of Licensor.
Except for the limited purpose of indicating to the public that the Work
is licensed under the CCPL, Creative Commons does not authorize the use
by either party of the trademark "Creative Commons" or any related
trademark or logo of Creative Commons without the prior written consent
of Creative Commons. Any permitted use will be in compliance with
Creative Commons' then-current trademark usage guidelines, as may be
published on its website or otherwise made available upon request from
time to time. For the avoidance of doubt, this trademark restriction
does not form part of the License.
Creative Commons may be contacted at http://creativecommons.org/.
License: BSD-3-clause
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
.
1. Redistributions of source code must retain the above
copyright notice, this list of conditions and the following
disclaimer.
.
2. Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials
provided with the distribution.
.
3. Neither the name of the copyright holder nor the names of
its contributors may be used to endorse or promote products
derived from this software without specific prior written
permission.
.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT,
INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@ -0,0 +1,12 @@
#!/bin/sh
#/lib/systemd/system-sleep/ceph
case $1 in
pre)
/bin/systemctl stop ceph
;;
post)
/bin/systemctl start ceph
;;
esac
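For context, systemd-sleep runs every executable in /lib/systemd/system-sleep/ with two arguments: "pre" or "post" followed by the sleep operation, which is what the case statement above keys on. A minimal sketch of the invocations (the second argument shown is one possible value):
# before the system suspends, systemd-sleep calls:
/lib/systemd/system-sleep/ceph pre suspend
# after resume it calls:
/lib/systemd/system-sleep/ceph post suspend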

View File

@ -0,0 +1,9 @@
[Unit]
Description=Create Ceph client.admin key when possible
PartOf=ceph-mon.service
[Service]
Environment=CLUSTER=ceph
Environment=CONFIG=/etc/ceph/ceph.conf
EnvironmentFile=-/etc/default/ceph
ExecStart=/usr/sbin/ceph-create-keys --cluster ${CLUSTER} --id %H

View File

@ -0,0 +1,16 @@
[Unit]
Description=Ceph metadata server daemon (MDS)
Documentation=man:ceph-mds
After=network-online.target nss-lookup.target
Wants=network-online.target nss-lookup.target
PartOf=ceph.target
[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
EnvironmentFile=-/etc/default/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-mds -f --cluster ${CLUSTER} --id %H --setuser ceph --setgroup ceph
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,22 @@
[Unit]
Description=Ceph cluster monitor daemon
Documentation=man:ceph-mon
After=network-online.target local-fs.target ceph-create-keys.service
Wants=network-online.target local-fs.target ceph-create-keys.service
PartOf=ceph.target
[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
EnvironmentFile=-/etc/default/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %H --setuser ceph --setgroup ceph
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=30
TasksMax=infinity
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,22 @@
[Unit]
Description=Ceph object storage daemon (OSD)
Documentation=man:ceph-osd
After=network-online.target
Wants=network-online.target
PartOf=ceph.service
RequiresMountsFor=/var/lib/ceph/osd/ceph-%i
[Service]
Environment=CLUSTER=ceph
Environment=CONFIG=/etc/ceph/ceph.conf
EnvironmentFile=-/etc/default/ceph
ExecStartPre=-/bin/sh -c '${osd_prestart_sh}' -- %i
ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --id %i --cluster ${CLUSTER}
ExecStart=/usr/bin/ceph-osd --id %i --foreground --cluster ${CLUSTER} -c ${CONFIG}
ExecStopPost=-/bin/sh -c '${osd_poststop_sh}' -- %i
LimitNOFILE=327680
Restart=on-failure
RestartSec=30
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,2 @@
usr/include/cephfs/*.h
usr/lib/*/libcephfs.so

View File

@ -0,0 +1 @@
debian/tmp/usr/share/java/libcephfs.jar

View File

@ -0,0 +1 @@
usr/lib/*/libcephfs_jni.so* usr/lib/jni

View File

@ -0,0 +1,2 @@
custom-library-search-path usr/lib/jni/libcephfs_jni.so.1.0.0 /usr/lib/jvm/default-java/lib
custom-library-search-path usr/lib/jni/libcephfs_jni.so.1.0.0 /usr/lib/jvm/default-java/lib/server

View File

@ -0,0 +1 @@
usr/lib/*/libcephfs.so.*

View File

@ -0,0 +1,5 @@
usr/bin/librados-config
usr/include/rados/librados.h
usr/include/rados/rados_types.h
usr/lib/*/librados.so
usr/share/man/man8/librados-config.8

View File

@ -0,0 +1,2 @@
usr/lib/*/ceph/libceph-common.so*
usr/lib/*/librados.so.*

View File

@ -0,0 +1,8 @@
usr/include/rados/buffer.h
usr/include/rados/buffer_fwd.h
usr/include/rados/crc32c.h
usr/include/rados/inline_memory.h
usr/include/rados/librados.hpp
usr/include/rados/librados_fwd.hpp
usr/include/rados/page.h
usr/include/rados/rados_types.hpp

View File

@ -0,0 +1,3 @@
usr/include/radosstriper/libradosstriper.h
usr/include/radosstriper/libradosstriper.hpp
usr/lib/*/libradosstriper.so

View File

@ -0,0 +1 @@
usr/lib/*/libradosstriper.so.*

View File

@ -0,0 +1,4 @@
usr/include/rbd/features.h
usr/include/rbd/librbd.h
usr/include/rbd/librbd.hpp
usr/lib/*/librbd.so

View File

@ -0,0 +1 @@
usr/lib/*/librbd.so.*

View File

@ -0,0 +1,5 @@
usr/include/rados/librgw.h
usr/include/rados/librgw_admin_user.h
usr/include/rados/rgw_file.h
usr/lib/*/librgw.so
usr/lib/*/librgw_admin_user.so

View File

@ -0,0 +1,2 @@
usr/lib/*/librgw.so.*
usr/lib/*/librgw_admin_user.so.*

View File

@ -0,0 +1,24 @@
.TH ceph-crush-location "1" "April 2014" "ceph-crush-location" "User Commands"
.SH NAME
ceph-crush-location \- get CRUSH location
.SH DESCRIPTION
Generate a CRUSH location for the given entity.
The CRUSH location consists of a list of key=value pairs, separated
by spaces, all on a single line. This describes where in the CRUSH
hierarchy this entity should be placed.
.SH OPTIONS
.TP 4
\fB\-\-cluster\fR <clustername>
name of the cluster (see /etc/ceph/$cluster.conf)
.TP 4
\fB\-\-type\fR <osd|mds|client>
daemon/entity type
.TP 4
\fB\-\-id\fR <id>
id (osd number, mds name, client name)
.SH SEE ALSO
.TP
\fBceph-conf\fP(8)
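As an illustration of the single-line key=value output described above, a hypothetical invocation for OSD 0 could look as follows (the host name and bucket layout are assumptions, not defaults):
# ask for the CRUSH location of osd.0 in the "ceph" cluster
ceph-crush-location --cluster ceph --type osd --id 0
# example output (one line of key=value pairs):
# host=storage-0 root=default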

View File

@ -0,0 +1,30 @@
.TH mount.fuse.ceph "8" "March 2014" "ceph-fuse" "User Commands"
.SH NAME
mount.fuse.ceph \- wrapper around ceph-fuse
.SH DESCRIPTION
Helper to mount ceph-fuse from /etc/fstab. To use, add an entry like:
.nf
# DEVICE PATH TYPE OPTIONS
mount.fuse.ceph#conf=/etc/ceph/ceph.conf,id=admin /mnt/ceph fuse _netdev,noatime,allow_other 0 0
mount.fuse.ceph#conf=/etc/ceph/foo.conf,id=myuser /mnt/ceph2 fuse _netdev,noatime,allow_other 0 0
.fi
where the device field is a comma-separated list of options to pass on
the command line. The examples above specify that ceph-fuse will
authenticate as client.admin and client.myuser (respectively), and the
second example also sets the "conf" option to "/etc/ceph/foo.conf" via
the ceph-fuse command line. Any valid ceph-fuse option can be passed in
this way.
.SH OPTIONS
.TP 4
\fB\-\-conf\fR
path to the ceph configuration file, usually "/etc/ceph/ceph.conf"
.TP 4
\fB\-\-id\fR
user name
.SH SEE ALSO
.TP
\fBceph-fuse\fP(8)
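Once such an fstab entry exists, the filesystem can be mounted by mount point alone; a minimal sketch, assuming the first example entry above:
# mount(8) finds the entry in /etc/fstab and dispatches to mount.fuse.ceph
mount /mnt/ceph
# unmount as usual when done
umount /mnt/ceph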

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -0,0 +1,2 @@
usr/lib/python3*/dist-packages/ceph_argparse.py
usr/lib/python3*/dist-packages/ceph_daemon.py

View File

@ -0,0 +1,3 @@
usr/lib/python3*/dist-packages/ceph_volume_client.py
usr/lib/python3*/dist-packages/cephfs-*.egg-info
usr/lib/python3*/dist-packages/cephfs.cpython*.so

View File

@ -0,0 +1,2 @@
usr/lib/python3*/dist-packages/rados-*.egg-info
usr/lib/python3*/dist-packages/rados.cpython*.so

View File

@ -0,0 +1,2 @@
usr/lib/python3*/dist-packages/rbd-*.egg-info
usr/lib/python3*/dist-packages/rbd.cpython*.so

View File

@ -0,0 +1,2 @@
usr/lib/python3*/dist-packages/rgw-*.egg-info
usr/lib/python3*/dist-packages/rgw.cpython*.so

View File

@ -0,0 +1 @@
usr/include/rados/objclass.h

View File

@ -0,0 +1,4 @@
var/lib/ceph/radosgw
# %if %{with stx}
var/log/radosgw

View File

@ -0,0 +1,17 @@
# %if %{without stx}
# lib/systemd/system/ceph-radosgw*
usr/bin/radosgw
usr/bin/radosgw-es
usr/bin/radosgw-object-expirer
usr/bin/radosgw-token
usr/share/man/man8/radosgw.8
usr/bin/rgw-gap-list
usr/bin/rgw-gap-list-comparator
# %if %{with stx}
etc/init.d/ceph-radosgw
usr/bin/ceph-diff-sorted
usr/bin/rgw-orphan-list

View File

@ -0,0 +1,17 @@
#!/bin/sh
set -e
if [ "${1}" = "configure" ] ; then
[ -f "/etc/default/ceph" ] && . /etc/default/ceph
[ -z "$SERVER_USER" ] && SERVER_USER=ceph
[ -z "$SERVER_GROUP" ] && SERVER_GROUP=ceph
if ! dpkg-statoverride --list /var/lib/ceph/radosgw >/dev/null; then
chown $SERVER_USER:$SERVER_GROUP /var/lib/ceph/radosgw
fi
fi
#DEBHELPER#
exit 0
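The postinst above only adjusts ownership of /var/lib/ceph/radosgw when no dpkg-statoverride entry exists for that path, so locally configured ownership survives upgrades. A hedged sketch of registering such an override (user, group and mode here are examples, not packaged defaults):
# record a permanent ownership/mode override; the postinst will then leave the path alone
dpkg-statoverride --update --add ceph ceph 0750 /var/lib/ceph/radosgw
# show the override currently in effect, if any
dpkg-statoverride --list /var/lib/ceph/radosgw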

View File

@ -0,0 +1,22 @@
#!/bin/sh
# vim: set noet ts=8:
set -e
case "$1" in
remove)
invoke-rc.d radosgw stop || {
RESULT=$?
if [ $RESULT != 100 ]; then
exit $RESULT
fi
}
;;
*)
;;
esac
#DEBHELPER#
exit 0

View File

@ -0,0 +1,2 @@
usr/bin/rbd-fuse
usr/share/man/man8/rbd-fuse.8

View File

@ -0,0 +1,5 @@
# %if %{without stx}
# lib/systemd/system/ceph-rbd-mirror*
usr/bin/rbd-mirror
usr/share/man/man8/rbd-mirror.8

View File

@ -0,0 +1,2 @@
usr/bin/rbd-nbd
usr/share/man/man8/rbd-nbd.8

View File

@ -0,0 +1 @@
usr/bin/rest-bench

ceph/ceph/debian/deb_folder/rules Executable file
View File

@ -0,0 +1,289 @@
#!/usr/bin/make -f
# -*- makefile -*-
# export DH_VERBOSE=1
# Additional files
SOURCE1 := ceph.sh
SOURCE2 := mgr-restful-plugin.py
SOURCE3 := ceph.conf.pmon
SOURCE4 := ceph-init-wrapper.sh
SOURCE5 := ceph.conf
SOURCE6 := ceph-manage-journal.py
SOURCE7 := ceph.service
SOURCE8 := mgr-restful-plugin.service
SOURCE9 := ceph-preshutdown.sh
SOURCE10 := starlingx-docker-override.conf
# Paths
export DESTDIR = $(CURDIR)/debian/tmp
export INITDIR = etc/init.d
export LIBEXECDIR = usr/lib
export SBINDIR = usr/sbin
export SYSCONFDIR = etc
export UDEVRULESDIR = lib/udev/rules.d
export UNITDIR = lib/systemd/system
export JAVA_HOME=/usr/lib/jvm/default-java
## Set JAVAC to prevent FTBFS due to incorrect use of 'gcj' if found (see "m4/ac_prog_javac.m4").
export JAVAC=javac
DEB_HOST_ARCH_BITS ?= $(shell dpkg-architecture -qDEB_HOST_ARCH_BITS)
export DEB_BUILD_ARCH ?= $(shell dpkg-architecture -qDEB_BUILD_ARCH)
export DEB_HOST_ARCH ?= $(shell dpkg-architecture -qDEB_HOST_ARCH)
# support ccache for faster build
# cmake uses /usr/bin/c*
ifeq (yes,$(findstring yes,$(shell test -L /usr/lib/ccache/c++ && test -L /usr/lib/ccache/cc && echo -n yes)))
extraopts += -DWITH_CCACHE=ON
endif
# try to save even more memory on some architectures
# see #849657 for hints.
# Reduce size of debug symbols to fix FTBFS due to the
# 2GB/3GB address space limits on 32bit
ifeq (32,$(DEB_HOST_ARCH_BITS))
export DEB_CFLAGS_MAINT_APPEND = -g1
export DEB_CXXFLAGS_MAINT_APPEND = -g1
endif
# we don't have NEON on armel.
ifeq ($(DEB_HOST_ARCH),armel)
extraopts += -DHAVE_ARM_NEON=0
endif
# disable ceph-dencoder on 32bit except i386 to avoid g++ oom
ifneq (,$(filter $(DEB_HOST_ARCH), armel armhf hppa m68k mips mipsel powerpc sh4 x32))
extraopts += -DDISABLE_DENCODER=1
endif
ifeq ($(shell dpkg-vendor --is Ubuntu && echo yes) $(DEB_HOST_ARCH), yes i386)
skip_packages = -Nceph -Nceph-base -Nceph-mds -Nceph-mgr -Nceph-mon -Nceph-osd
endif
# minimise needless linking and link to libatomic
# The latter is needed because long long atomic operations are not directly
# supported by all processor architectures
export DEB_LDFLAGS_MAINT_APPEND= -Wl,--as-needed -latomic
# Enable hardening
export DEB_BUILD_MAINT_OPTIONS = hardening=+all
# STX CONFIG
extraopts += -DCEPH_SYSTEMD_ENV_DIR=/etc/default
extraopts += -DCMAKE_BUILD_TYPE=Release
extraopts += -DCMAKE_INSTALL_INITCEPH=/$(INITDIR)
extraopts += -DCMAKE_INSTALL_LIBEXECDIR=/$(LIBEXECDIR)
extraopts += -DCMAKE_INSTALL_SYSCONFDIR=/$(SYSCONFDIR)
extraopts += -DCMAKE_INSTALL_SYSTEMD_SERVICEDIR=/$(UNITDIR)
extraopts += -DMGR_PYTHON_VERSION=3
extraopts += -DWITH_BABELTRACE=OFF
extraopts += -DWITH_CEPHFS=ON
extraopts += -DWITH_CEPHFS_JAVA=ON
extraopts += -DWITH_CEPHFS_SHELL=ON
extraopts += -DWITH_CEPH_TEST_PACKAGE=OFF
extraopts += -DWITH_CLIENT=ON
extraopts += -DWITH_COVERAGE=OFF
extraopts += -DWITH_CRYPTOPP=OFF
extraopts += -DWITH_CYTHON=ON
extraopts += -DWITH_DEBUG=OFF
extraopts += -DWITH_EMBEDDED=OFF
extraopts += -DWITH_EVENTFD=ON
extraopts += -DWITH_FUSE=ON
extraopts += -DWITH_GITVERSION=ON
extraopts += -DWITH_GRAFANA=ON
extraopts += -DWITH_JEMALLOC=OFF
extraopts += -DWITH_KINETIC=OFF
extraopts += -DWITH_LIBAIO=ON
extraopts += -DWITH_LIBATOMIC_OPS=ON
extraopts += -DWITH_LIBROCKSDB=OFF
extraopts += -DWITH_LIBXFS=ON
extraopts += -DWITH_LIBZFS=OFF
extraopts += -DWITH_LTTNG=OFF
extraopts += -DWITH_MAKE_CHECK=OFF
extraopts += -DWITH_MAN_PAGES=OFF
extraopts += -DWITH_MDS=ON
extraopts += -DWITH_MGR_DASHBOARD_FRONTEND=OFF
extraopts += -DWITH_MON=ON
extraopts += -DWITH_NSS=ON
extraopts += -DWITH_OCF=ON
extraopts += -DWITH_OPENLDAP=ON
extraopts += -DWITH_OSD=ON
extraopts += -DWITH_PGREFDEBUGGING=OFF
extraopts += -DWITH_PROFILER=OFF
extraopts += -DWITH_PYTHON2=OFF
extraopts += -DWITH_PYTHON3=ON
extraopts += -DWITH_RADOS=ON
extraopts += -DWITH_RADOSGW=ON
extraopts += -DWITH_RADOSSTRIPER=ON
extraopts += -DWITH_RBD=ON
extraopts += -DWITH_SEASTAR=OFF
extraopts += -DWITH_SELINUX=OFF
extraopts += -DWITH_SERVER=ON
# Disable SPDK as it generates a build which is not compatible
# with older CPUs which are still supported by Ubuntu.
extraopts += -DWITH_SPDK=OFF
extraopts += -DWITH_SUBMAN=OFF
extraopts += -DWITH_SYSTEMD=ON
extraopts += -DWITH_SYSTEM_BOOST=ON
extraopts += -DWITH_TCMALLOC=ON
extraopts += -DWITH_TESTS=OFF
extraopts += -DWITH_VALGRIND=OFF
extraopts += -DWITH_XIO=OFF
ifneq (,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
NUMJOBS = $(patsubst parallel=%,%,$(filter parallel=%,$(DEB_BUILD_OPTIONS)))
extraopts += -DBOOST_J=$(NUMJOBS)
endif
ifneq (,$(filter $(DEB_HOST_ARCH),s390x mips64el ia64 m68k ppc64 riscv64 sh4 sparc64 x32 alpha))
# beast depends on libboost_{context,coroutine} which is not supported on these architectures
extraopts += -DWITH_BOOST_CONTEXT=OFF
else
extraopts += -DWITH_BOOST_CONTEXT=ON
endif
MAX_PARALLEL ?= $(shell ./debian/calc-max-parallel.sh)
%:
dh $@ --buildsystem=cmake --with javahelper,python3 $(MAX_PARALLEL)
override_dh_auto_configure:
env | sort
dh_auto_configure --buildsystem=cmake -- $(extraopts)
override_dh_auto_install:
dh_auto_install --buildsystem=cmake --destdir=$(DESTDIR)
if [ ! -f $(DESTDIR)/usr/bin/ceph-dencoder ]; then \
cp debian/workarounds/ceph-dencoder-oom $(DESTDIR)/usr/bin/ceph-dencoder ;\
chmod 755 $(DESTDIR)/usr/bin/ceph-dencoder ;\
fi
# udev rules
install -d -m 755 $(DESTDIR)/$(UDEVRULESDIR)/
install -D -m 644 udev/50-rbd.rules $(DESTDIR)/$(UDEVRULESDIR)/
install -D -m 640 udev/60-ceph-by-parttypeuuid.rules $(DESTDIR)/$(UDEVRULESDIR)/
# if %{without stx}
# install -D -m 644 udev/95-ceph-osd.rules $(DESTDIR)/$(UDEVRULESDIR)/
# sudoers.d
install -m 0440 -D sudoers.d/ceph-osd-smartctl $(DESTDIR)/etc/sudoers.d/ceph-osd-smartctl
install -D -m 640 src/etc-rbdmap $(DESTDIR)/etc/ceph/rbdmap
install -D -m 644 etc/sysctl/90-ceph-osd.conf $(DESTDIR)/etc/sysctl.d/30-ceph-osd.conf
# NOTE: ensure that any versioned erasure coding test code is dropped
# from the package install - package ships unversioned modules.
rm -f $(CURDIR)/debian/tmp/usr/lib/*/ceph/erasure-code/libec_*.so.*
find $(CURDIR)/debian/tmp/usr/lib/*/ceph/erasure-code -type l -delete || :
# if %{with stx}
install -d -m 750 $(DESTDIR)/${SYSCONFDIR}/services.d/controller/
install -d -m 750 $(DESTDIR)/${SYSCONFDIR}/services.d/storage/
install -d -m 750 $(DESTDIR)/${SYSCONFDIR}/services.d/worker/
mkdir -p $(DESTDIR)/${INITDIR}/
mkdir -p $(DESTDIR)/${SYSCONFDIR}/ceph/
mkdir -p $(DESTDIR)/${UNITDIR}/
install -D -m 750 ${SOURCE1} $(DESTDIR)/${SYSCONFDIR}/services.d/controller/
install -D -m 750 ${SOURCE1} $(DESTDIR)/${SYSCONFDIR}/services.d/storage/
install -D -m 750 ${SOURCE1} $(DESTDIR)/${SYSCONFDIR}/services.d/worker/
install -D -m 750 ${SOURCE2} $(DESTDIR)/${INITDIR}/mgr-restful-plugin
install -D -m 750 ${SOURCE3} $(DESTDIR)/${SYSCONFDIR}/ceph/
install -D -m 750 ${SOURCE4} $(DESTDIR)/${INITDIR}/ceph-init-wrapper
install -D -m 640 ${SOURCE5} $(DESTDIR)/${SYSCONFDIR}/ceph/
install -D -m 700 ${SOURCE6} $(DESTDIR)/${SBINDIR}/ceph-manage-journal
install -D -m 644 ${SOURCE7} $(DESTDIR)/${UNITDIR}/ceph.service
install -D -m 644 ${SOURCE8} $(DESTDIR)/${UNITDIR}/mgr-restful-plugin.service
install -D -m 700 ${SOURCE9} $(DESTDIR)/${SBINDIR}/ceph-preshutdown.sh
install -D -m 644 ${SOURCE10} $(DESTDIR)/${UNITDIR}/docker.service.d/starlingx-docker-override.conf
install -m 750 src/init-radosgw $(DESTDIR)/${INITDIR}/ceph-radosgw
sed -i '/### END INIT INFO/a SYSTEMCTL_SKIP_REDIRECT=1' $(DESTDIR)/${INITDIR}/ceph-radosgw
install -m 750 src/init-rbdmap $(DESTDIR)/${INITDIR}/rbdmap
install -d -m 750 $(DESTDIR)/var/log/radosgw
# if %{without stx}
# install -m 0644 -D systemd/50-ceph.preset $(DESTDIR)/${LIBEXECDIR}/systemd/system-preset/50-ceph.preset
# doc/changelog is a directory, which confuses dh_installchangelogs
override_dh_installchangelogs:
dh_installchangelogs --exclude doc/changelog
override_dh_installlogrotate:
cp src/logrotate.conf debian/ceph-common.logrotate
dh_installlogrotate -pceph-common
override_dh_installinit:
cp src/init-radosgw debian/radosgw.init
dh_installinit --no-start
dh_installinit -pceph-common --name=rbdmap --no-start
dh_installinit -pceph-base --name ceph --no-start
# install the systemd stuff manually since we have funny service names
# and need to update the paths in all of the files post install
# systemd:ceph-common
install -d -m0755 debian/ceph-common/usr/lib/tmpfiles.d
# if %{without stx}
# install -m 0644 -D systemd/ceph.tmpfiles.d debian/ceph-common/usr/lib/tmpfiles.d/ceph.conf
# NOTE(jamespage): Install previous ceph-mon service from packaging for upgrades
# Excluded, as per "files mon" section in ceph.spec for when %{with stx} is on.
# install -d -m0755 debian/ceph-mon/lib/systemd/system
# install -m0644 debian/lib-systemd/system/ceph-mon.service debian/ceph-mon/lib/systemd/system
# Ensure Debian/Ubuntu specific systemd units are NOT automatically enabled and started
# Enable systemd targets only
dh_systemd_enable -Xceph-mon.service -Xceph-osd.service -X ceph-mds.service
# Start systemd targets only
dh_systemd_start --no-stop-on-upgrade --no-restart-after-upgrade
override_dh_systemd_enable:
# systemd enable done as part of dh_installinit
override_dh_systemd_start:
# systemd start done as part of dh_installinit
override_dh_makeshlibs:
# exclude jni libraries in libcephfs-jni to avoid pointless ldconfig
# calls in maintainer scripts; exclude private erasure-code plugins.
dh_makeshlibs -V -X/usr/lib/jni -X/usr/lib/$(DEB_HOST_MULTIARCH)/ceph/erasure-code
override_dh_auto_test:
# do not run tests
override_dh_shlibdeps:
dh_shlibdeps -a --exclude=erasure-code --exclude=rados-classes --exclude=compressor
override_dh_python3:
for binding in rados cephfs rbd rgw; do \
dh_python3 -p python3-$$binding --shebang=/usr/bin/python3; \
done
dh_python3 -p python3-ceph-argparse --shebang=/usr/bin/python3
dh_python3 -p ceph-common --shebang=/usr/bin/python3
dh_python3 -p ceph-base --shebang=/usr/bin/python3
dh_python3 -p ceph-osd --shebang=/usr/bin/python3
dh_python3 -p ceph-mgr --shebang=/usr/bin/python3
dh_python3 -p cephfs-shell --shebang=/usr/bin/python3
override_dh_builddeb:
dh_builddeb ${skip_packages}
override_dh_gencontrol:
dh_gencontrol ${skip_packages}
override_dh_fixperms:
dh_fixperms \
-Xceph.sh \
-Xmgr-restful-plugin \
-Xceph.conf.pmon \
-Xceph-init-wrapper \
-Xceph.conf \
-Xceph-manage-journal \
-Xceph.service \
-Xmgr-restful-plugin.service \
-Xceph-preshutdown.sh \
-Xstarlingx-docker-override.conf \
-Xceph-radosgw \
-Xrbdmap \
-Xradosgw \
-X60-ceph-by-parttypeuuid.rules \
-Xceph-osd-smartctl
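The parallel-build handling in the rules above reads DEB_BUILD_OPTIONS and forwards the job count to the build (for example as -DBOOST_J), so parallelism can be set from the build driver. A minimal sketch of a local binary-only build, assuming build dependencies are already installed:
# build binary packages with 8 parallel jobs; debian/rules picks the value up from DEB_BUILD_OPTIONS
DEB_BUILD_OPTIONS="parallel=8" dpkg-buildpackage -us -uc -b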

View File

@ -0,0 +1,16 @@
license-problem-json-evil src/rapidjson/license.txt
wayward-symbolic-link-target-in-source .git -> ../../../../.repo/projects/cgcs-root/stx/git/ceph.git
source-is-missing qa/workunits/erasure-code/jquery.flot.js line length is 3134 characters (>512)
source-is-missing src/civetweb/examples/_obsolete/docroot/jquery.js line length is 517 characters (>512)
source-is-missing src/civetweb/src/third_party/duktape-1.5.2/src-separate/duk_initjs_min.js
source-is-missing src/civetweb/src/third_party/duktape-1.8.0/src-separate/duk_initjs_min.js
source-is-missing src/civetweb/test/ajax/jquery.js line length is 32404 characters (>512)
source-contains-prebuilt-windows-binary ceph-erasure-code-corpus/v0.86-310/plugin=isa stripe-width=10000 technique=reed_sol_van k=7 m=3/8
source-contains-prebuilt-windows-binary ceph-erasure-code-corpus/v0.86-310/plugin=isa stripe-width=10000 technique=reed_sol_van k=7 m=4/8
source-contains-prebuilt-windows-binary ceph-erasure-code-corpus/v0.86-310/plugin=isa stripe-width=10000 technique=reed_sol_van k=7 m=5/8
source-contains-prebuilt-windows-binary ceph-erasure-code-corpus/v0.92-988/plugin=isa stripe-width=10000 technique=reed_sol_van k=7 m=3/8
source-contains-prebuilt-windows-binary ceph-erasure-code-corpus/v0.92-988/plugin=isa stripe-width=10000 technique=reed_sol_van k=7 m=4/8
source-contains-prebuilt-windows-binary ceph-object-corpus/archive/0.80-rc1-35-g4812150/objects/MOSDPGCreate/a47ae3cd3ba843424dedce0c05308bb5
source-contains-prebuilt-windows-binary ceph-object-corpus/archive/11.2.0-280-g34e04de/objects/bluestore_extent_ref_map_t::record_t/e800965f13053b760af9612e91333fe4
source-contains-prebuilt-windows-binary ceph-object-corpus/archive/11.2.0-280-g34e04de/objects/inodeno_t/d849f83485b1a2c4343515901334f02e
source-contains-prebuilt-windows-binary ceph-object-corpus/archive/11.2.0-280-g34e04de/objects/utime_t/00a9a0a9e1f88f5bb148fd729330db51

View File

@ -0,0 +1 @@
3.0 (quilt)

View File

@ -0,0 +1,11 @@
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googlemock/msvc/20\d\d/gmock\.sln"
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googlemock/msvc/20\d\d/gmock.*vcproj"
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googlemock/msvc/20\d\d/gmock.*vsprops"
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googlemock/msvc/20\d\d/gmock.*vcxproj"
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googlemock/msvc/20\d\d/gmock_config.props"
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googletest/codegear/gtest.*\.cbproj"
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googletest/codegear/gtest_all\.cc"
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googletest/codegear/gtest_link\.cc"
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googletest/codegear/gtest\.groupproj"
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googletest/msvc/gtest.*\.vcproj"
extend-diff-ignore = ".*src/rapidjson/thirdparty/gtest/googletest/msvc/gtest.*\.sln"

View File

@ -0,0 +1,31 @@
#!/bin/sh
# autopkgtest check: Build and run a program against librados2 to
# validate that headers are installed and libraries exist
set -e
WORKDIR=$(mktemp -d)
trap "rm -rf $WORKDIR" 0 INT QUIT ABRT PIPE TERM
cd $WORKDIR
cat <<EOF > radostest.c
#include <rados/librados.h>
int
main(void)
{
int err;
rados_t cluster;
err = rados_create(&cluster, NULL);
if (err < 0) {
return (1);
}
return(0);
}
EOF
gcc -o radostest radostest.c -lrados
echo "build: OK"
[ -x radostest ]
./radostest
echo "run: OK"

View File

@ -0,0 +1,24 @@
#!/bin/sh
# autopkgtest check: Build and run a program against librbd1 to
# validate that headers are installed and libraries exist
set -e
WORKDIR=$(mktemp -d)
trap "rm -rf $WORKDIR" 0 INT QUIT ABRT PIPE TERM
cd $WORKDIR
cat <<EOF > rbdtest.c
#include <rbd/librbd.h>
int
main(void)
{
return(0);
}
EOF
gcc -o rbdtest rbdtest.c -lrbd
echo "build: OK"
[ -x rbdtest ]
./rbdtest
echo "run: OK"

View File

@ -0,0 +1,11 @@
#!/bin/bash
set -e
CLIENTS=('ceph')
for client in "${CLIENTS[@]}"; do
echo -n "Testing client $client: "
$client -v > /dev/null 2>&1
echo "OK"
done

View File

@ -0,0 +1,9 @@
Tests: ceph-client build-rados build-rbd python-ceph
Depends:
build-essential,
ceph-common,
librados-dev,
librbd-dev,
python3-rados,
python3-rbd,
Restrictions: needs-root
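The declared tests install the packages listed in Depends and require root. A hedged sketch of running them locally with autopkgtest's plain "null" runner (assumes the listed dependencies are already installed on the host):
# run the autopkgtests from the unpacked source tree on the local host
sudo autopkgtest --no-built-binaries . -- null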

View File

@ -0,0 +1,7 @@
#!/usr/bin/python3
# Test that rbd and rados can be imported OK
import rbd
import rados
print("python-ceph: OK")

Some files were not shown because too many files have changed in this diff.