.. _integration-installation:

Integration Environment Installation
------------------------------------

ONAP is deployed on top of Kubernetes through the OOM installer.
Kubernetes can be installed on bare metal or on different environments such as
OpenStack (private or public cloud), Azure, AWS, etc.

The integration team maintains a Heat template to install ONAP on OpenStack.
This template creates the needed resources (VMs, networks, security groups,
...) to support an HA Kubernetes cluster and then a full ONAP installation.

Sample OpenStack RC (credential) files, environment files, and deployment
scripts are provided; they correspond to the files used on the Wind River
environment. This environment is used by the integration team to validate
the installation, perform tests, and troubleshoot.

If you intend to deploy your own environment, these files can be used as a
reference, but they must be adapted to your context.

- Heat template files: https://git.onap.org/integration/tree/deployment/heat/onap-rke?h=guilin
- Sample OpenStack RC file: https://git.onap.org/integration/tree/deployment/heat/onap-rke/env/windriver/Integration-SB-00-openrc?h=guilin
- Sample environment file: https://git.onap.org/integration/tree/deployment/heat/onap-rke/env/windriver/onap-oom.env?h=guilin
- Deployment script: https://git.onap.org/integration/tree/deployment/heat/onap-rke/scripts/deploy.sh?h=guilin

Heat Template Description
~~~~~~~~~~~~~~~~~~~~~~~~~

The ONAP Integration Project provides a sample Heat template that
fully automates the deployment of ONAP using OOM; see
:ref:`OOM <onap-oom:oom_quickstart_guide>` for details.

The ONAP OOM Heat template deploys the entire ONAP platform. It spins
up an HA-enabled Kubernetes cluster and deploys ONAP using OOM onto
this cluster, which consists of:

- 1 shared NFS server (called the Rancher VM for legacy reasons)
- 3 orch VMs for the Kubernetes HA controller and etcd roles
- 12 k8s VMs for the Kubernetes HA worker roles

See the OOM documentation for details.

Using the Wind River lab configuration as an example, here is what
you need to do to deploy ONAP:

.. code-block:: bash

   git clone https://git.onap.org/integration
   cd integration/deployment/heat/onap-rke/
   source ./env/windriver/Integration-SB-00-openrc
   ./scripts/deploy.sh ./env/windriver/onap-oom.env

Environment and RC files
~~~~~~~~~~~~~~~~~~~~~~~~

Before deploying ONAP to your own environment, you must customize the
environment and RC files. Make a copy of the sample RC and environment
files shown above and customize the values for your specific OpenStack
environment.

The environment file contains a block called ``integration_override_yaml``.
The content of this block is used by OOM to override some of the
installation parameters used in its helm charts.

This file may deal with:

* Cloud adaptation (the defined flavors, available images)
* Proxies (apt, docker, etc.)
* Pre-defined resources for use cases (networks, tenant references)
* Performance tuning (initialization timers)
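
As an illustration, an override block inside the environment file might look
like the sketch below. The keys and values shown are placeholders chosen for
this example, not the authoritative OOM chart parameters; consult the sample
onap-oom.env file linked above for the real structure.

```yaml
# Illustrative sketch only -- keys and values are placeholders.
parameters:
  integration_override_yaml: |
    global:
      repository: nexus3.onap.org:10001        # example: docker registry/proxy
    robot:
      openStackPublicNetId: "<your-public-net-id>"   # example: pre-defined network
```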

Performance tuning reflects the adaptation to the hardware at a given time.
As the lab evolves, these timers must be adjusted accordingly.

Be sure to customize the necessary values within this block to match your
OpenStack environment as well.

**Notes on select parameters**

.. code-block:: yaml

   apt_proxy: 10.12.5.2:8000
   docker_proxy: 10.12.5.2:5000

   rancher_vm_flavor: m1.large
   k8s_vm_flavor: m1.xlarge
   etcd_vm_flavor: m1.medium # not currently used
   orch_vm_flavor: m1.medium

   helm_deploy_delay: 2.5m

It is recommended that you set up an apt proxy and a docker proxy
local to your lab. If you do not wish to use such proxies, you can
set the apt_proxy and docker_proxy parameters to the empty string "".
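
To make the proxy handling concrete, here is a small sketch of how an
apt_proxy value could be rendered into an apt configuration line on the VMs.
The variable names and the logic are illustrative, not taken from the Heat
template; only the sample address comes from the parameters above.

```shell
# Render apt_proxy into an apt configuration line; an empty value
# produces no proxy line at all. Names here are illustrative.
APT_PROXY="10.12.5.2:8000"

if [ -n "$APT_PROXY" ]; then
    APT_CONF="Acquire::http::Proxy \"http://${APT_PROXY}\";"
else
    APT_CONF=""   # apt_proxy: "" means the proxy is disabled
fi
echo "$APT_CONF"
```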

rancher_vm_flavor needs to have 8 GB of RAM.
k8s_vm_flavor needs to have at least 16 GB of RAM.
orch_vm_flavor needs to have 4 GB of RAM.

By default the template assumes that you have already imported a
keypair named "onap_key" into your OpenStack environment. If the
desired keypair has a different name, change the key_name parameter.

The helm_deploy_delay parameter introduces a delay between the
deployments of each ONAP helm subchart to help alleviate system load or
contention issues caused by trying to spin up too many pods
simultaneously. The value of this parameter is passed to the Linux
"sleep" command. Adjust this parameter based on the performance and
load characteristics of your OpenStack environment.
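
The mechanism can be sketched as follows. The subchart names are examples,
the helm command is elided, and the delay is shortened so the sketch runs
quickly; a real deployment would use a value such as 2.5m.

```shell
# Sketch of a per-subchart delay loop; not the actual deploy script.
HELM_DEPLOY_DELAY="1s"        # real deployments use values such as 2.5m

for chart in aai so sdnc; do
    echo "deploying subchart: $chart"
    # helm upgrade --install "$chart" ...   # actual helm command elided
    sleep "$HELM_DEPLOY_DELAY"              # value is passed verbatim to sleep
done
```

Because the value goes straight to ``sleep``, any suffix GNU sleep accepts
(s, m, h) is valid.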

Exploring the Rancher VM
~~~~~~~~~~~~~~~~~~~~~~~~

The Rancher VM that is spun up by this Heat template serves the
following roles:

- Hosts the /dockerdata-nfs/ NFS export shared by all the k8s VMs for
  persistent volumes
- Clones the oom repo into /root/oom
- Clones the integration repo into /root/integration
- Creates the helm override file at /root/integration-override.yaml
- Deploys ONAP using helm and OOM