Create layer-specific aptly binary repositories

In the current state of the build environment, all layers -- common,
flock, distro, compiler, etc. -- share the same aptly binary repository:
`deb-local-binary`.

While this works for the vast majority of scenarios, there is one
specific scenario that the build environment does not currently support,
but which would be of great value to us.

Consider the following example:

```
Imagine that our goal is to build `python-barbicanclient`.

This particular package is required for both the platform, StarlingX,
and the containerized application, StarlingX OpenStack. However, for the
platform, this package needs to be built in the `Victoria` version,
while for the application, in the `Antelope` version.

If we compare the dependencies of each version by looking at their
respective `control` files, we can see, for example, that `Victoria`
lists `python3-keystoneauth1` as one of the dependencies *without*
specifying version constraints [1]:

* python3-keystoneauth1,

This means that, if we attempt to build `python-barbicanclient` for the
platform, it will use whatever version is available in
`deb-local-binary`. In this case, since `base-bullseye.lst` -- from the
`common` layer -- specifies version `4.2.1-2`, that would be the version
used.

On the other hand, `Antelope` lists this same dependency with a specific
version [2]:

* python3-keystoneauth1 (>= 5.1.1),

This means that, if we attempt to build `python-barbicanclient` for the
application, it will *fail*, because there is no `python3-keystoneauth1`
available in `deb-local-binary` that matches the criteria.

However, even if we had *both* binaries in `deb-local-binary`, there
would be another problem:

If we were to build `python-barbicanclient` for the platform, because
`Victoria` sets no version constraint on the `python3-keystoneauth1`
dependency, the most recent version available would be used. The package
would then be built from the source code of the `Victoria` version, but
against dependencies of the `Antelope` version. This would certainly
cause problems at runtime.

This is only a problem because *different layers use the same aptly
binary repository*.
```
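The failure described in the example can be sketched in a few lines of
Python. This is a deliberately simplified model: it compares upstream
version tuples only and ignores Debian epochs, `~` ordering, and
revision comparison, all of which the real resolver honors; the
`parse_version` and `satisfies` helpers are illustrative, not part of
the build tools.

```python
# Simplified sketch of Debian-style dependency resolution. Illustrative
# only: real resolution follows dpkg's full comparison rules (epochs,
# '~' ordering, revision comparison), which are ignored here.

def parse_version(ver):
    # "4.2.1-2" -> (4, 2, 1); the Debian revision after '-' is dropped.
    upstream = ver.split('-')[0]
    return tuple(int(p) for p in upstream.split('.'))

def satisfies(available, constraint):
    # `constraint` is e.g. ">= 5.1.1", or None for an unversioned
    # dependency such as Victoria's `python3-keystoneauth1`.
    if constraint is None:
        return True
    op, wanted = constraint.split()
    a, w = parse_version(available), parse_version(wanted)
    cmp = (a > w) - (a < w)
    return {'>=': cmp >= 0, '<=': cmp <= 0, '=': cmp == 0,
            '>>': cmp > 0, '<<': cmp < 0}[op]

# deb-local-binary only offers 4.2.1-2 (from base-bullseye.lst):
AVAILABLE = '4.2.1-2'

# Victoria (unversioned): resolvable, so the build proceeds.
print(satisfies(AVAILABLE, None))        # True

# Antelope (>= 5.1.1): no candidate matches, so the build fails.
print(satisfies(AVAILABLE, '>= 5.1.1'))  # False
```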

Therefore, to avoid having to create a patch for each package built on
top of wrong dependencies -- when multiple versions of the same package
are available -- this change proposes the use of *layer-specific aptly
binary repositories* in addition to the existing `deb-local-binary`.

The `deb-local-binary` will still exist for the `common` layer, but
every other layer will have its own aptly binary repository, e.g.:

  * deb-local-binary-flock;
  * deb-local-binary-distro;
  * deb-local-binary-compiler;
  * deb-local-binary-containers;
  * deb-local-binary-openstack;
  * deb-local-binary-<layer>.
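The naming rule mirrors the `_get_layer_binaries_repository` helper
added by this change. A minimal sketch (the standalone function name
here is illustrative):

```python
REPO_BIN = 'deb-local-binary'

def layer_binary_repository(layer):
    # The `common` layer keeps the original repository; every other
    # layer gets its own `deb-local-binary-<layer>` repository.
    if layer == 'common':
        return REPO_BIN
    return f'{REPO_BIN}-{layer.lower()}'

print(layer_binary_repository('common'))     # deb-local-binary
print(layer_binary_repository('openstack'))  # deb-local-binary-openstack
```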

Regardless of the package and/or layer built, `deb-local-binary` will
continue to be present in the `sources.list`. However, if the package is
from the `openstack` layer, for example, the corresponding repository --
`deb-local-binary-openstack` -- will also be added to the `sources.list`,
so that the build can access dependencies that are exclusively relevant
to that layer.
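In `sources.list` terms, the rule above amounts to something like the
following sketch. The `deb <url><repo> bullseye main` entry format
matches the one the scripts generate; the helper name and deploy URL
are illustrative stand-ins.

```python
DEPLOY_URL = 'http://repomgr.example/'  # stands in for REPOMGR_DEPLOY_URL

def sources_list_entries(layer):
    # `deb-local-binary` (the `common` repository) is always listed;
    # any non-common layer additionally gets its own binary repository.
    repos = ['deb-local-binary']
    if layer != 'common':
        repos.append(f'deb-local-binary-{layer}')
    return [f'deb {DEPLOY_URL}{repo} bullseye main' for repo in repos]

for entry in sources_list_entries('openstack'):
    print(entry)
# deb http://repomgr.example/deb-local-binary bullseye main
# deb http://repomgr.example/deb-local-binary-openstack bullseye main
```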

This control, in particular, is implemented in the `Depends-On` change.

[1] https://salsa.debian.org/openstack-team/clients/python-barbicanclient/-/blob/debian/5.0.1-2/debian/control#L20
[2] https://salsa.debian.org/openstack-team/clients/python-barbicanclient/-/blob/debian/5.5.0-1/debian/control#L20

Test Plan:
PASS - Run `downloader` -- and its layer variants -- successfully:
       * downloader -l compiler
       * downloader -l containers
       * downloader -l distro
       * downloader -l flock
       * downloader -l openstack
PASS - Verify that multiple binary repositories were created:
       * repo_manage.py list
PASS - Run `build-pkgs -c -a --refresh_chroots` -- and its layer
       variants -- successfully:
       * build-pkgs -c -l compiler --refresh_chroots
       * build-pkgs -c -l containers --refresh_chroots
       * build-pkgs -c -l distro --refresh_chroots
       * build-pkgs -c -l flock --refresh_chroots
       * build-pkgs -c -l openstack --refresh_chroots
PASS - Run `build-image` successfully

Story: 2010797
Task: 48697

Depends-On: https://review.opendev.org/c/starlingx/tools/+/896770

Change-Id: I496cceeab084112b7b8e27024ead96e8da0c6a11
Signed-off-by: Luan Nunes Utimura <LuanNunes.Utimura@windriver.com>
(cherry picked from commit f953c4a671)
This commit is contained in:
Luan Nunes Utimura 2023-08-29 13:13:11 -03:00 committed by Lucas de Ataides Barreto
parent 22174f5a72
commit 01f8933531
4 changed files with 236 additions and 75 deletions


@ -41,6 +41,11 @@ DEB_CONFIG_DIR = 'stx-tools/debian-mirror-tools/config/'
PKG_LIST_DIR = os.path.join(os.environ.get('MY_REPO_ROOT_DIR'), DEB_CONFIG_DIR)
CERT_FILE = 'cgcs-root/public-keys/TiBoot.crt'
CERT_PATH = os.path.join(os.environ.get('MY_REPO_ROOT_DIR'), CERT_FILE)
IMAGE_LAYERS_FILE = 'cgcs-root/build-tools/stx/image-layers.conf'
IMAGE_LAYERS_PATH = os.path.join(
os.environ.get('MY_REPO_ROOT_DIR'),
IMAGE_LAYERS_FILE
)
img_pkgs = []
kernel_type = 'std'
stx_std_kernel = 'linux-image-5.10.0-6-amd64-unsigned'
@ -49,15 +54,20 @@ WAIT_TIME_BEFORE_CHECKING_LOG = 2
# The max timeout value to wait LAT to output the build log
MAX_WAIT_LAT_TIME = 300
pkg_version_mapping = {}
binary_repositories = []
logger = logging.getLogger('build-image')
utils.set_logger(logger)
def merge_local_repos(repomgr):
logger.debug('Calls repo manager to create/update the snapshot %s which is merged from local repositories', REPO_ALL)
# REPO_BUILD is higher priority than REPO_BINARY for repomgr to select package
# The build repository (deb-local-build) has a higher priority than
# the binary repositories (deb-local-binary-*) for `repomgr` to
# select packages.
try:
pubname = repomgr.merge(REPO_ALL, ','.join([REPO_BUILD, REPO_BINARY]))
pubname = repomgr.merge(REPO_ALL, ','.join([REPO_BUILD, *binary_repositories]))
except Exception as e:
logger.error(str(e))
logger.error('Exception when repo_manager creates/updates snapshot %s', REPO_ALL)
@ -316,7 +326,7 @@ def check_base_os_binaries(repomgr):
'does not exist']))
return False
results = verify_pkgs_in_repo(repomgr, REPO_BINARY, base_bins_list)
results = verify_pkgs_in_repo(repomgr, binary_repositories, base_bins_list)
if results:
logger.error("====OS binaries checking fail:")
for deb in results:
@ -336,7 +346,7 @@ def check_stx_binaries(repomgr, btype='std'):
# Assume no such list here means ok
return True
results = verify_pkgs_in_repo(repomgr, REPO_BINARY, stx_bins_list)
results = verify_pkgs_in_repo(repomgr, binary_repositories, stx_bins_list)
if results:
logger.error("====STX binaries checking fail:")
for deb in results:
@ -355,7 +365,7 @@ def check_stx_patched(repomgr, btype='std'):
'does not exist']))
return False
results = verify_pkgs_in_repo(repomgr, REPO_BUILD, stx_patched_list)
results = verify_pkgs_in_repo(repomgr, [REPO_BUILD], stx_patched_list)
if results:
logger.error("====STX patched packages checking fail:")
for deb in results:
@ -366,7 +376,16 @@ def check_stx_patched(repomgr, btype='std'):
return True
def verify_pkgs_in_repo(repomgr, repo_name, pkg_list_path):
def verify_pkgs_in_repo(repomgr, repo_names, pkg_list_path):
"""Verify if packages exist in one (or more) repositories.
:param repomgr: A RepoMgr instance.
:param repo_names: The list of repositories to query.
:param pkg_list_path: The path to the file listing the packages to be
checked.
:returns: list -- The list of packages that could not be found.
"""
failed_pkgs = []
with open(pkg_list_path, 'r') as flist:
lines = list(line for line in (lpkg.strip() for lpkg in flist) if line)
@ -376,23 +395,45 @@ def verify_pkgs_in_repo(repomgr, repo_name, pkg_list_path):
continue
pname_parts = pkg.split()
name = pname_parts[0]
if len(pname_parts) > 1:
version = pname_parts[1]
pkg_name = ''.join([name, '_', version])
if repomgr.search_pkg(repo_name, name, version):
img_pkgs.append(''.join([name, '=', version]))
logger.debug(''.join(['Found package:name=', name,
' version=', version]))
found = False
for i, repo_name in enumerate(repo_names):
if len(pname_parts) > 1:
version = pname_parts[1]
pkg_name = ''.join([name, '_', version])
if repomgr.search_pkg(repo_name, name, version):
found = True
if repo_name != REPO_BUILD:
if name not in pkg_version_mapping:
pkg_version_mapping[name] = [version]
else:
if version not in pkg_version_mapping[name]:
failed_pkgs.append(pkg_name)
logger.error(
f"Multiple versions found for `{name}`: "
f"{pkg_version_mapping[name]}"
)
img_pkgs.append(''.join([name, '=', version]))
logger.debug(''.join(['Found package:name=', name,
' version=', version]))
# If after processing the last repository the package was
# still not found, mark it as missing.
if not found and i == len(repo_names) - 1:
logger.debug(' '.join([pkg_name,
'is missing in local binary repo']))
failed_pkgs.append(pkg_name)
else:
logger.debug(' '.join([pkg_name,
'is missing in local binary repo']))
failed_pkgs.append(pkg_name)
else:
if repomgr.search_pkg(repo_name, name, None, True):
img_pkgs.append(name)
logger.debug(''.join(['Found package with name:', name]))
else:
failed_pkgs.append(name)
if repomgr.search_pkg(repo_name, name, None, True):
found = True
img_pkgs.append(name)
logger.debug(''.join(['Found package with name:', name]))
# If after processing the last repository the package was
# still not found, mark it as missing.
if not found and i == len(repo_names) - 1:
failed_pkgs.append(name)
return failed_pkgs
@ -501,9 +542,51 @@ def sign_iso_dev(img_yaml):
logger.info("Image signed %s", real_iso_file)
def get_binary_repositories(config: str):
# The binary repository of the `common` layer is always present.
repositories = [REPO_BINARY]
layers = []
logger.info(f"Processing config file `{config}`...")
try:
with open(config, "r") as f:
layers = f.readlines()
except IOError as e:
logger.error(f"Unable to process config file `{config}`.")
logger.error(str(e))
sys.exit(1)
for layer in layers:
# Ignore the line if it's a comment or whitespace.
if not layer.strip() or layer.startswith("#"):
continue
# Check if it's a valid layer.
layer = layer.strip().lower()
if layer in ALL_LAYERS:
repository = f"{REPO_BINARY}-{layer}"
repositories.append(repository)
logger.info(
f"Added binary repository for layer `{layer}`: {repository}"
)
else:
logger.error(
f"Unable to add binary repository for layer `{layer}`. "
f"The layer must be one of {ALL_LAYERS}."
)
sys.exit(1)
logger.info("Processing complete.")
return repositories
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="build-image helper")
parser = argparse.ArgumentParser(
description="build-image helper",
formatter_class=argparse.ArgumentDefaultsHelpFormatter
)
kernel_types = parser.add_mutually_exclusive_group()
kernel_types.add_argument('--std', help="build standard image",
action='store_true')
@ -516,6 +599,15 @@ if __name__ == "__main__":
default=False, action='store_true')
parser.add_argument('--no-sign', action='store_true',
default=False, help="Don't sign ISO at the end")
parser.add_argument(
'--image-layers-file',
help=(
"The absolute path of the configuration file that lists the "
"layers that contribute binaries to the ISO"
),
type=str,
default=IMAGE_LAYERS_PATH
)
args = parser.parse_args()
if args.rt:
kernel_type = 'rt'
@ -529,8 +621,29 @@ if __name__ == "__main__":
repo_manager = repo_manage.RepoMgr('aptly', os.environ.get('REPOMGR_URL'),
'/tmp/', os.environ.get('REPOMGR_ORIGIN'),
rmg_logger)
# Upload build repository (deb-local-build) to `aptly`
# and create a repository URL for it.
repo_manager.upload_pkg(REPO_BUILD, None)
repo_manager.upload_pkg(REPO_BINARY, None)
build_repository_url = "deb {}{} bullseye main".format(
os.environ.get("REPOMGR_DEPLOY_URL"),
REPO_BUILD
)
# Get binary repositories that contribute binaries to the ISO.
binary_repositories = get_binary_repositories(args.image_layers_file)
binary_repositories_urls = []
# Upload binary repositories (deb-local-binary-*) to `aptly`
# and create repository URLs for them.
for binary_repository in binary_repositories:
repo_manager.upload_pkg(binary_repository, None)
binary_repositories_urls.append(
"deb {}{} bullseye main".format(
os.environ.get("REPOMGR_DEPLOY_URL"),
binary_repository
)
)
logger.info("\n")
logger.info("=====Build Image start ......")
@ -577,15 +690,14 @@ if __name__ == "__main__":
if update_debootstrap_mirror(lat_initramfs_yaml):
logger.debug("Debootstrap switches to mirror %s in %s", REPO_ALL, lat_initramfs_yaml)
binary_repo_url = ''.join(['deb ',
os.environ.get('REPOMGR_DEPLOY_URL'),
REPO_BINARY, ' bullseye main'])
build_repo_url = ''.join(['deb ',
os.environ.get('REPOMGR_DEPLOY_URL'),
REPO_BUILD, ' bullseye main'])
for yaml_file in (lat_yaml, lat_initramfs_yaml):
if not feed_lat_src_repos(yaml_file, [binary_repo_url, build_repo_url]):
if not feed_lat_src_repos(
yaml_file,
[
*binary_repositories_urls,
build_repository_url
]
):
logger.error(' '.join(['Failed to set local repos to', yaml_file]))
sys.exit(1)
else:


@ -1094,7 +1094,7 @@ class BuildController():
logger.info("Successfully uploaded source %s to repository %s", dsc, repo_name)
return True
def req_add_task(self, pkg_dir, dsc, build_type, snapshot_index):
def req_add_task(self, pkg_dir, dsc, build_type, snapshot_index, layer):
status = 'fail'
chroot = None
# For serial build and parallel build, the pkg_jobs should have different value
@ -1111,6 +1111,7 @@ class BuildController():
req_params['run_tests'] = self.attrs['run_tests']
req_params['jobs'] = str(pkg_jobs)
req_params['snapshot_idx'] = snapshot_index
req_params['layer'] = layer
try:
resp = requests.post(BUILDER_URL + 'addtask', json=req_params)
@ -1491,7 +1492,7 @@ class BuildController():
self.publish_repo(REPO_BUILD, snapshot_idx)
# Requires the remote pkgbuilder to add build task
logger.info("To Require to add build task for %s with snapshot %s", pkg_name, snapshot_idx)
(status, chroot) = self.req_add_task(pkg_dir, dsc_path, build_type, snapshot_idx)
(status, chroot) = self.req_add_task(pkg_dir, dsc_path, build_type, snapshot_idx, layer)
if 'fail' in status:
if chroot and 'ServerError' in chroot:
self.req_stop_task()


@ -115,7 +115,7 @@ def get_all_binary_list(distro=STX_DEFAULT_DISTRO, layers=None, build_types=None
"""
Return all binary packages listed in base-bullseye.lst, os-std.lst,os-rt.lst
"""
bin_list = []
layer_binaries = {}
stx_config = os.path.join(os.environ.get('MY_REPO_ROOT_DIR'),
'stx-tools/debian-mirror-tools/config/debian')
@ -128,6 +128,8 @@ def get_all_binary_list(distro=STX_DEFAULT_DISTRO, layers=None, build_types=None
layers = ALL_LAYERS
for layer in layers:
if layer not in layer_binaries:
layer_binaries[layer] = []
search_dir = os.path.join(stx_config, layer)
all_build_types = discovery.get_layer_build_types(distro=distro, layer=layer)
if not all_build_types:
@ -145,16 +147,27 @@ def get_all_binary_list(distro=STX_DEFAULT_DISTRO, layers=None, build_types=None
pattern=''.join(['os-',build_type,'.lst'])
for root, dirs, files in os.walk(search_dir):
for f in fnmatch.filter(files, pattern):
bin_list.append(os.path.join(root, f))
layer_binaries[layer].append(os.path.join(root, f))
logger.info(
f"Binary lists for layer `{layer}`: "
f"{layer_binaries[layer]}"
)
search_dir = os.path.join(stx_config, 'common')
pattern='base-*.lst'
if "common" not in layer_binaries:
layer_binaries["common"] = []
for root, dirs, files in os.walk(search_dir):
for f in fnmatch.filter(files, pattern):
bin_list.append(os.path.join(root, f))
layer_binaries["common"].append(os.path.join(root, f))
logger.info("bin_list=%s" % bin_list)
return bin_list
logger.info(
f"Binary lists for layer `common`: "
f"{layer_binaries['common']}"
)
return layer_binaries
class BaseDownloader():
@ -215,27 +228,35 @@ class BaseDownloader():
class DebDownloader(BaseDownloader):
def __init__(self, arch, _dl_dir, force, _bin_lists):
def __init__(self, arch, _dl_dir, force, _layer_binaries):
super(DebDownloader, self).__init__(arch, _dl_dir, force)
self.need_download = []
self.downloaded = []
self.need_upload = []
self.bin_lists = _bin_lists
self.layer_binaries = _layer_binaries
self.apt_cache = apt.cache.Cache()
def _get_layer_binaries_repository(self, layer: str):
repo = REPO_BIN
if layer != "common":
repo = f"{REPO_BIN}-{layer.lower()}"
return repo
def create_binary_repo(self):
if not self.repomgr:
logger.error("The repo manager is not created")
return False
try:
self.repomgr.upload_pkg(REPO_BIN, None)
except Exception as e:
logger.error(str(e))
logger.error("Failed to create repository %s", REPO_BIN)
return False
for layer in self.layer_binaries:
repo = self._get_layer_binaries_repository(layer)
try:
self.repomgr.upload_pkg(repo, None)
except Exception as e:
logger.error(str(e))
logger.error("Failed to create repository %s", repo)
return False
logger.info("Successfully created repository %s", REPO_BIN)
logger.info("Successfully created repository %s", repo)
return True
def download(self, _name, _version, url=None):
@ -282,21 +303,24 @@ class DebDownloader(BaseDownloader):
return None
def reports(self):
try:
self.repomgr.deploy_repo(REPO_BIN)
except Exception as e:
logger.error(str(e))
logger.error("Failed to publish repository %s", REPO_BIN)
return
for layer in self.layer_binaries:
repo = self._get_layer_binaries_repository(layer)
try:
self.repomgr.deploy_repo(repo)
except Exception as e:
logger.error(str(e))
logger.error("Failed to publish repository %s", repo)
return
if self.layer_binaries[layer]:
logger.info(f"[{layer}] Binary list:")
for bin_list in self.layer_binaries[layer]:
logger.info(bin_list)
if len(self.bin_lists):
logger.info("All binary lists are:")
for blist in self.bin_lists:
logger.info(blist)
logger.info("Show result for binary download:")
return super(DebDownloader, self).reports()
def download_list(self, list_file):
def download_list(self, repo, list_file):
if not os.path.exists(list_file):
return
@ -343,7 +367,7 @@ class DebDownloader(BaseDownloader):
# should be defined in the package list file with ':'
self.need_download.append([pkg_name + '_' + pkg_name_array[1], url])
previously_uploaded = self.repomgr.list_pkgs(REPO_BIN)
previously_uploaded = self.repomgr.list_pkgs(repo)
logger.info(' '.join(['previously_uploaded', str(previously_uploaded)]))
for debs in self.need_upload:
deb = debs[0]
@ -351,22 +375,22 @@ class DebDownloader(BaseDownloader):
deb_path = os.path.join(stx_bin_mirror, deb)
# Search the package with the "epoch" in aptly repo
if previously_uploaded and deb_fver in previously_uploaded:
logger.info("%s has already been uploaded to %s, skip", deb_path, REPO_BIN)
logger.info("%s has already been uploaded to %s, skip", deb_path, repo)
continue
try:
debnames = deb.split('_')
del_ret = self.repomgr.delete_pkg(REPO_BIN, debnames[0], 'binary', None)
del_ret = self.repomgr.delete_pkg(repo, debnames[0], 'binary', None)
logger.debug("Only need uploading: Tried to delete the old %s, ret %d", debnames[0], del_ret)
upload_ret = self.repomgr.upload_pkg(REPO_BIN, deb_path, deploy=False)
upload_ret = self.repomgr.upload_pkg(repo, deb_path, deploy=False)
except Exception as e:
logger.error(str(e))
logger.error("Exception on uploading %s to %s", deb_path, REPO_BIN)
logger.error("Exception on uploading %s to %s", deb_path, repo)
sys.exit(1)
else:
if upload_ret:
logger.debug("%s is uploaded to %s", deb_path, REPO_BIN)
logger.debug("%s is uploaded to %s", deb_path, repo)
else:
logger.error("Failed to upload %s to %s", deb_path, REPO_BIN)
logger.error("Failed to upload %s to %s", deb_path, repo)
break
self.need_upload.clear()
@ -382,18 +406,18 @@ class DebDownloader(BaseDownloader):
deb_ver = debnames[1].split(":")[-1]
self.dl_success.append('_'.join([debnames[0], deb_ver]))
try:
del_ret = self.repomgr.delete_pkg(REPO_BIN, debnames[0], 'binary', None)
del_ret = self.repomgr.delete_pkg(repo, debnames[0], 'binary', None)
logger.debug("Tried to delete the old %s, ret %d", debnames[0], del_ret)
upload_ret = self.repomgr.upload_pkg(REPO_BIN, ret, deploy=False)
upload_ret = self.repomgr.upload_pkg(repo, ret, deploy=False)
except Exception as e:
logger.error(str(e))
logger.error("Exception on uploading %s to %s", deb_path, REPO_BIN)
logger.error("Exception on uploading %s to %s", deb_path, repo)
sys.exit(1)
else:
if upload_ret:
logger.info(''.join([debnames[0], '_', debnames[1], ' is uploaded to ', REPO_BIN]))
logger.info(''.join([debnames[0], '_', debnames[1], ' is uploaded to ', repo]))
else:
logger.error(''.join([debnames[0], '_', debnames[1], ' fail to upload to ', REPO_BIN]))
logger.error(''.join([debnames[0], '_', debnames[1], ' fail to upload to ', repo]))
break
else:
self.dl_failed.append(deb)
@ -406,10 +430,16 @@ class DebDownloader(BaseDownloader):
+ <layer>/os-rt.lst
"""
super(DebDownloader, self).clean()
if len(self.bin_lists):
for bin_list in self.bin_lists:
self.download_list(bin_list)
else:
empty = True
for layer in self.layer_binaries:
repo = self._get_layer_binaries_repository(layer)
if self.layer_binaries[layer]:
for bin_list in self.layer_binaries[layer]:
empty = False
self.download_list(repo, bin_list)
if empty:
logger.error("There are no lists of binary packages found")
sys.exit(1)


@ -0,0 +1,18 @@
# This configuration file contains the list of layers that should be taken
# into account by the `build-image` script.
#
# The rationale behind this list is that not all layers -- and their
# respective binaries -- are needed to create an ISO.
#
# Layers like `containers` and `openstack`, for example, list binaries that
# are only relevant for building container images (via the
# `build-stx-base.sh` and `build-stx-images.sh` scripts).
#
# Therefore, only layers that matter for the ISO creation process are listed
# here.
#
# Note: The `common` layer -- despite not being listed -- is always considered.
compiler
distro
flock