.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2019 Samsung Electronics Co., Ltd.

OOM ONAP Offline Installer Package Build Guide
=============================================================
This document describes the procedure for building the offline installer packages. It is supposed to be executed on a server with internet connectivity and downloads all artifacts required for ONAP deployment based on our static lists. A separate build server is preferred for this procedure.

The procedure was fully tested on RHEL 7.6, which is the tested target platform; however, with small adaptations it should also be applicable to other platforms.
Some discrepancies for CentOS 7.6 are described below as well.
Part 1. Prerequisites
---------------------

We assume that the procedure is executed on a RHEL 7.6 server with \~300G of disc space, 16G+ RAM and internet connectivity.
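A quick sanity check of these assumptions can be done before starting. The snippet below is only an illustrative sketch (the mount point and thresholds are assumptions, adjust them to your environment):

::

    # verify available disc space (the build needs roughly 300G)
    # /tmp is only an example - check whichever filesystem you build on
    df -h /tmp

    # verify available memory (16G+ recommended)
    free -g

    # verify internet connectivity, e.g. towards the gerrit server used later in this guide
    curl -sI https://gerrit.onap.org > /dev/null && echo "connectivity OK"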
Some additional software packages are required by the ONAP offline platform build tooling. In order to install them,
the following repositories have to be configured on the RHEL 7.6 platform.

.. note:: All commands stated in this guide are meant to be run in a root shell.
::

    subscription-manager register --username <rhel licence name> --password <password> --auto-attach

    # required by special centos docker recommended by ONAP
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

    # required by docker dependencies i.e. docker-selinux
    subscription-manager repos --enable=rhel-7-server-extras-rpms

    # epel is required by npm within blob build
    rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
On CentOS 7.6 the following repositories have to be configured instead:

::

    # required by special centos docker recommended by ONAP
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

    # enable epel repo for npm and jq
    yum install -y epel-release
Subsequent steps are the same on both platforms:

::

    # install following packages
    yum install -y docker-ce-18.09.5 git createrepo expect nodejs npm jq
    yum install -y python36 python36-pip

    # docker daemon must be running on host
    systemctl start docker
Then it is necessary to clone all installer and build related repositories and prepare the directory structure.

::

    # prepare the onap build directory structure
    git clone https://gerrit.onap.org/r/oom/offline-installer onap-offline
    cd onap-offline

    # install required pip packages for build and download scripts
    pip3 install -r ./build/requirements.txt
    pip3 install -r ./build/download/requirements.txt
Part 2. Download artifacts for offline installer
------------------------------------------------
It is possible to generate an up-to-date list of docker images using docker-images-collector.sh (helm is required) from the cloned OOM directory,
based on the enabled subsystems.

The commit number the list was created from is written at the beginning of the generated list - the same commit number should be used
in Part 4. Packages preparation.

The following example will create the list at the default path:
::

    # clone the OOM repository
    git clone https://gerrit.onap.org/r/oom -b <branch> --recurse-submodules /tmp/oom

.. note:: Replace <branch> with the branch you want to build.

::

    # the docker-images-collector.sh script uses the oom/kubernetes/onap/resources/overrides/onap-all.yaml file to find out
    # which subsystems are enabled. By default all subsystems are enabled there. Modify the file if you want to drop some subsystems.

    # run the collector providing the path to the project
    ./build/creating_data/docker-images-collector.sh /tmp/oom/kubernetes/onap
For the other options, check the usage of the script.
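Because the commit number is recorded at the beginning of the generated list (see above), it can be read back for later use in Part 4. The path below assumes the collector's default output location; check the script output for the actual file if yours differs:

::

    # print the first lines of the generated list to see the OOM commit it was created from
    # (default collector output path assumed - adjust if different)
    head -n 3 ./build/data_lists/onap_docker_images.list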
.. note:: Skip this step if you already have all the necessary resources and continue with Part 3. Populate local nexus.

An RPM repository containing packages to be installed on all nodes needs to be created:
::

    # run the docker container with the -d parameter specifying the destination directory for the RPM packages
    ./onap-offline/build/create_repo.sh -d $(pwd)
.. note:: If the script fails due to a permissions issue, it could be a problem with SELinux. It can be fixed by running:

::

    # change the security context of the directory
    chcon -Rt svirt_sandbox_file_t $(pwd)
It's possible to download the rest of the artifacts in a single ./download.py execution. We recently improved the reliability of the download scripts,
so one might try the following command to download most of the required artifacts in a single shot.
::

    # the following arguments are provided
    # all data lists are taken from the ./build/data_lists/ folder
    # all resources will be stored in the expected folder structure within the ../resources folder

    ./build/download/download.py --docker ./build/data_lists/infra_docker_images.list ../resources/offline_data/docker_images_infra \
    --http ./build/data_lists/infra_bin_utils.list ../resources/downloads

    # the following docker images do not necessarily need to be stored under resources as they are loaded into the repository in the next part
    # if the second argument for --docker is not present, images are just pulled and cached
    # Warning: the script must be run twice separately, for more details run download.py --help
    ./build/download/download.py --docker ./build/data_lists/rke_docker_images.list \
    --docker ./build/data_lists/k8s_docker_images.list \
    --docker ./build/data_lists/onap_docker_images.list
This concludes the SW download part required for creating the ONAP offline platform.
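As an optional sanity check (not part of the original procedure), the downloaded content can be inspected; the ../resources path matches the destination folders used in the commands above:

::

    # optional: check the size and layout of the downloaded resources
    du -sh ../resources
    ls ../resources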
Part 3. Populate local nexus
----------------------------
Prerequisites:

- All data lists and resources which are to be pushed to the local nexus repository are available
- The following ports are not occupied by another service: 80, 8081, 8082, 10001
- There is no docker container called "nexus" (a quick pre-check sketch is shown below)
.. note:: In case you skipped Part 2 for the artifacts download, please ensure that the ONAP docker images are cached and that a copy of the resources data is untarred in *./onap-offline/../resources/*
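A minimal sketch of how the port and container prerequisites could be verified, assuming ss and docker are available on the build server (illustrative only, not part of the original procedure):

::

    # check that none of the required ports are already in use
    ss -tln | grep -E ':(80|8081|8082|10001) ' || echo "ports are free"

    # check that no container named "nexus" exists
    docker ps -a --filter name=nexus --format '{{.Names}}'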
::

    # the whole nexus blob data will be created by running the build_nexus_blob.sh script
    ./onap-offline/build/build_nexus_blob.sh
It will load the listed docker images, run the Nexus, and configure it as npm, pypi
and docker repositories. Then it will push all listed docker images to those repositories. After all is done, the repository container is stopped.
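To confirm that the script has finished and the Nexus container was stopped as described, a simple check (an illustrative assumption, not part of the original procedure) could be:

::

    # no running container named "nexus" should be listed
    docker ps --filter name=nexus --format '{{.Names}}: {{.Status}}'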
.. note:: In the current release scope we aim to maintain just a single example set of data lists; tags used in previous releases are not needed. The data lists also cover the latest versions verified by us, although users are allowed to build data lists on their own.
Part 4. Packages preparation
----------------------------
The ONAP offline deliverable consists of 3 packages:
+-----------------------+------------------------------------------------------------------------------+
| Package               | Description                                                                  |
+=======================+==============================================================================+
| sw_package.tar        | Contains installation software and configuration for infrastructure and ONAP |
+-----------------------+------------------------------------------------------------------------------+
| resources_package.tar | Contains all input files needed to deploy infrastructure and ONAP           |
+-----------------------+------------------------------------------------------------------------------+
| aux_package.tar       | Contains auxiliary input files that can be added to ONAP                     |
+-----------------------+------------------------------------------------------------------------------+
All packages can be created using the build/package.py script. Besides archiving the files gathered in the previous steps, the script also builds the docker images used on the infra server.

From the onap-offline directory run:
::

    ./build/package.py <helm charts repo> --build-version <version> --application-repository_reference <commit/tag/branch> --output-dir <target_dir> --resources-directory <target_dir>

For example:

::

    ./build/package.py https://gerrit.onap.org/r/oom --application-repository_reference <branch> --output-dir /tmp/packages --resources-directory /tmp/resources
.. note:: Replace <branch> with the branch you want to build.
In the target directory you should find the following tar files:

::

    sw_package.tar
    resources_package.tar
    aux_package.tar
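As a final illustrative check (the path is taken from the example above, adjust it if you used a different output directory), verify that the packages were produced:

::

    # example output directory from the package.py invocation above
    ls -lh /tmp/packages/*.tar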