.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2018 Amdocs, Bell Canada

.. _Curated applications for Kubernetes: https://github.com/kubernetes/charts
.. _Services: https://kubernetes.io/docs/concepts/services-networking/service/
.. _ReplicaSet: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
.. _StatefulSet: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
.. _Helm Documentation: https://docs.helm.sh/helm/
.. _Helm: https://docs.helm.sh/
.. _Kubernetes: https://Kubernetes.io/
.. _Kubernetes LoadBalancer: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer

The ONAP Operations Manager (OOM) provides the ability to manage the entire
life-cycle of an ONAP installation, from the initial deployment to final
decommissioning. This guide provides instructions for users of ONAP to use
the Kubernetes_/Helm_ system as a complete ONAP management system.

This guide provides many examples of Helm command line operations. For a
complete description of these commands please refer to the
`Helm Documentation`_.

.. figure:: oomLogoV2-medium.png
   :align: right

The following sections describe the life-cycle operations:

- Deploy_ - with built-in component dependency management
- Configure_ - unified configuration across all ONAP components
- Monitor_ - real-time health monitoring feeding to a Consul UI and Kubernetes
- Heal_ - failed ONAP containers are recreated automatically
- Scale_ - cluster ONAP services to enable seamless scaling
- Upgrade_ - change-out containers or configuration with little or no service
  impact
- Delete_ - cleanup individual containers or entire deployments

.. figure:: oomLogoV2-Deploy.png
   :align: right

Deploy
======

The OOM team, with assistance from the ONAP project teams, has built a
comprehensive set of Helm charts, yaml files very similar to TOSCA files, that
describe the composition of each of the ONAP components and the relationship
within and between components. Using this model Helm is able to deploy all of
ONAP with a few simple commands.

Pre-requisites
--------------
Your environment must have both the Kubernetes `kubectl` and Helm setup as a
deployment tool.

**Install Kubectl**

Enter the following to install kubectl (on Ubuntu, there are slight differences
on other O/Ss), the Kubernetes command line interface used to manage a
Kubernetes cluster::

  > curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.10/bin/linux/amd64/kubectl
  > chmod +x ./kubectl
  > sudo mv ./kubectl /usr/local/bin/kubectl

Paste kubectl config from Rancher (see the :ref:`cloud-setup-guide-label` for
alternative Kubernetes environment setups) into the `~/.kube/config` file.

Verify that the Kubernetes config is correct::

  > kubectl get pods --all-namespaces

At this point you should see six Kubernetes pods running.

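If the pods are not listed, first confirm that kubectl can actually reach your
cluster with these standard commands::

  > kubectl cluster-info
  > kubectl get nodes
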
**Install Helm**

Helm is used by OOM for package and configuration management. To install Helm,
enter the following::

  > wget http://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
  > tar -zxvf helm-v2.9.1-linux-amd64.tar.gz
  > sudo mv linux-amd64/helm /usr/local/bin/helm

Verify the Helm version with::

  > helm version

Install the Helm Tiller application and initialize with::

  > helm init

**Install the Helm Repo**

Once kubectl and Helm are setup, one needs to setup a local Helm server to
serve up the ONAP charts::

  > helm install osn/onap

.. note::
  The osn repo is not currently available so creation of a local repository is
  required.

Helm is able to use charts served up from a repository and comes setup with a
default CNCF provided `Curated applications for Kubernetes`_ repository called
stable which should be removed to avoid confusion::

  > helm repo remove stable

.. To setup the Open Source Networking Nexus repository for helm enter::
..  > helm repo add osn 'https://nexus3.onap.org:10001/helm/helm-repo-in-nexus/master/'

To prepare your system for an installation of ONAP, you'll need to::

  > git clone -b casablanca http://gerrit.onap.org/r/oom
  > cd oom/kubernetes

To setup a local Helm server to serve up the ONAP charts::

  > helm init
  > helm serve &

Note the port number that is listed and use it in the Helm repo add as
follows::

  > helm repo add local http://127.0.0.1:8879

To get a list of all of the available Helm chart repositories::

  > helm repo list
  NAME    URL
  local   http://127.0.0.1:8879

Then build your local Helm repository::

  > make all

The Helm search command reads through all of the repositories configured on the
system, and looks for matches::

  > helm search -l
  NAME          VERSION    DESCRIPTION
  local/appc    2.0.0      Application Controller
  local/clamp   2.0.0      ONAP Clamp
  local/common  2.0.0      Common templates for inclusion in other charts
  local/onap    2.0.0      Open Network Automation Platform (ONAP)
  local/robot   2.0.0      A helm Chart for kubernetes-ONAP Robot
  local/so      2.0.0      ONAP Service Orchestrator

In any case, setup of the Helm repository is a one time activity.

Next, install the Helm plugins required to deploy the ONAP Casablanca
release::

  > cp -R helm/plugins/ ~/.helm

Once the repo is setup, installation of ONAP can be done with a single helm
command::

  > helm deploy development local/onap --namespace onap

This will install ONAP from a local repository in a 'development' Helm release.
As described below, to override the default configuration values provided by
OOM, an environment file can be provided on the command line as follows::

  > helm deploy development local/onap --namespace onap -f overrides.yaml

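The override file follows the same hierarchical structure as the charts'
values.yaml files. A minimal sketch (the keys shown here are illustrative
only; see the configuration section below for the authoritative list)::

  # overrides.yaml - example only
  global:
    repository: nexus3.onap.org:10001
  so:
    enabled: true
    replicaCount: 2
  sdnc:
    enabled: false
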
To get a summary of the status of all of the pods (containers) running in your
deployment::

  > kubectl get pods --all-namespaces -o=wide

.. note::
  The Kubernetes namespace concept allows for multiple instances of a component
  (such as all of ONAP) to co-exist with other components in the same
  Kubernetes cluster by isolating them entirely. Namespaces share only the
  hosts that form the cluster thus providing isolation between production and
  development systems as an example. The OOM deployment of ONAP in Beijing is
  now done within a single Kubernetes namespace whereas in Amsterdam a
  namespace was created for each of the ONAP components.

.. note::
  The Helm `--name` option refers to a release name and not a Kubernetes
  namespace.

To install a specific version of a single ONAP component (`so` in this example)
with the given release name enter::

  > helm deploy so onap/so --version 3.0.1

To display details of a specific resource or group of resources type::

  > kubectl describe pod so-1071802958-6twbl

where the pod identifier refers to the auto-generated pod identifier.

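Similarly, the logs of the same pod can be streamed while diagnosing an issue
(the pod name below is the auto-generated identifier from the example above)::

  > kubectl logs -f -n onap so-1071802958-6twbl
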
.. figure:: oomLogoV2-Configure.png
   :align: right

Configure
=========

Each project within ONAP has its own configuration data generally consisting
of: environment variables, configuration files, and database initial values.
Many technologies are used across the projects resulting in significant
operational complexity and an inability to apply global parameters across the
entire ONAP deployment. OOM solves this problem by introducing a common
configuration technology, Helm charts, that provide a hierarchical
configuration with the ability to override values with higher level charts or
command line options.

The structure of the configuration of ONAP is shown in the following diagram.
Note that key/value pairs of a parent will always take precedence over those
of a child. Also note that values set on the command line have the highest
precedence of all.

.. graphviz::

   digraph config {
      {
         node     [shape=folder]
         oValues  [label="values.yaml"]
         demo     [label="onap-demo.yaml"]
         prod     [label="onap-production.yaml"]
         oReq     [label="requirements.yaml"]
         soValues [label="values.yaml"]
         soReq    [label="requirements.yaml"]
         mdValues [label="values.yaml"]
      }
      {
         oResources [label="resources"]
      }
      onap -> oValues
      onap -> oReq
      onap -> oResources
      oResources -> environments
      environments -> demo
      environments -> prod
      onap -> so
      so -> soValues
      so -> soReq
      so -> mariadb
      mariadb -> mdValues
   }

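Because command line values take the highest precedence, a setting from an
environment file can be selectively overridden at deploy time. A sketch, using
the onap-development.yaml file discussed below::

  > helm deploy development local/onap --namespace onap \
      -f onap-development.yaml --set so.replicaCount=2
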
The top level onap/values.yaml file contains the values required to be set
before deploying ONAP. Here are the contents of this file:

.. include:: ../kubernetes/onap/values.yaml
   :code: yaml

One may wish to create a value file that is specific to a given deployment such
that it can be differentiated from other deployments. For example, a
onap-development.yaml file may create a minimal environment for development
while onap-production.yaml might describe a production deployment that operates
independently of the developer version.

For example, if the production OpenStack instance was different from a
developer's instance, the onap-production.yaml file may contain a different
value for the vnfDeployment/openstack/oam_network_cidr key as shown below::

  apps: consul msb mso message-router sdnc vid robot portal policy appc aai
    sdc dcaegen2 log cli multicloud clamp vnfsdk aaf kube2msb
  dataRootDir: /dockerdata-nfs

  # docker repositories
  repositories:
    onap: nexus3.onap.org:10001
    filebeat: docker.elastic.co

  # vnf deployment environment
  vnfDeployment:
    openstack:
      ubuntu_14_image: "Ubuntu_14.04.5_LTS"
      public_net_id: "e8f51956-00dd-4425-af36-045716781ffc"
      oam_network_id: "d4769dfb-c9e4-4f72-b3d6-1d18f4ac4ee6"
      oam_subnet_id: "191f7580-acf6-4c2b-8ec0-ba7d99b3bc4e"
      oam_network_cidr: "192.168.30.0/24"

To deploy ONAP with this environment file, enter::

  > helm deploy local/onap -n casablanca -f environments/onap-production.yaml

.. include:: environments_onap_demo.yaml
   :code: yaml

When deploying all of ONAP, a requirements.yaml file controls which ONAP
components are included and at what version. Here is an excerpt of this
file::

  # Referencing a named repo called 'local'.
  # Can add this repo by running commands like:
  # > helm repo add local http://127.0.0.1:8879

  - name: so
    repository: '@local'
    version: ~2.0.0
    condition: so.enabled

The ~ operator in the `so` version value indicates that the latest "2.X.X"
version of `so` shall be used, thus allowing for minor upgrades that don't
impact the so API; hence, version 2.0.1 will be installed in this case.

The onap/resources/environment/onap-dev.yaml file (see the excerpt below)
enables fine grained control over which components are included as part of
this deployment. By changing this `so` line to `enabled: false` the `so`
component will not be deployed. If this change is part of an upgrade the
existing `so` component will be shut down. Other `so` parameters and even `so`
child values can be modified; for example the `so`'s `liveness` probe could be
disabled (which is not recommended as this change would disable auto-healing
of `so`)::

  #################################################################
  # Global configuration overrides.
  #
  # These overrides will affect all helm charts (ie. applications)
  # that are listed below and are 'enabled'.
  #################################################################
  global:
    repository: nexus3.onap.org:10001

  #################################################################
  # Enable/disable and configure helm charts (ie. applications)
  # to customize the ONAP deployment.
  #################################################################
  so: # Service Orchestrator
    enabled: true

    replicaCount: 1

    liveness:
      # necessary to disable liveness probe when setting breakpoints
      # in debugger so K8s doesn't restart unresponsive container
      enabled: true

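The same kind of change can be made without editing the file; for example, a
sketch of disabling the `so` liveness probe from the command line at deploy
time::

  > helm deploy development local/onap --namespace onap \
      --set so.liveness.enabled=false
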
Accessing the ONAP Portal using OOM and a Kubernetes Cluster
------------------------------------------------------------

The ONAP deployment created by OOM operates in a private IP network that isn't
publicly accessible (i.e. OpenStack VMs with private internal network) which
blocks access to the ONAP Portal. To enable direct access to this Portal from
a user's own environment (a laptop etc.) the portal application's port 8989 is
exposed through a `Kubernetes LoadBalancer`_ object.

Typically, to be able to access the Kubernetes nodes publicly a public address
is assigned. In OpenStack this is a floating IP address.

When the `portal-app` chart is deployed a Kubernetes service is created that
instantiates a load balancer. The LB chooses the private interface of one of
the nodes as in the example below (10.0.0.4 is private to the K8s cluster
only). Then to be able to access the portal on port 8989 from outside the K8s
& OpenStack environment, the user needs to assign/get the floating IP address
that corresponds to the private IP as follows::

  > kubectl -n onap get services|grep "portal-app"
  portal-app  LoadBalancer  10.43.142.201  10.0.0.4  8989:30215/TCP,8006:30213/TCP,8010:30214/TCP  1d  app=portal-app,release=dev

In this example, use the 10.0.0.4 private address as a key to find the
corresponding public address, which in this example is 10.12.6.155. If you're
using OpenStack you'll do the lookup with the horizon GUI or the OpenStack CLI
for your tenant (openstack server list). That IP is then used in your
`/etc/hosts` to map the fixed DNS aliases required by the ONAP Portal as shown
below::

  10.12.6.155 portal.api.simpledemo.onap.org
  10.12.6.155 vid.api.simpledemo.onap.org
  10.12.6.155 sdc.api.fe.simpledemo.onap.org
  10.12.6.155 sdc.workflow.plugin.simpledemo.onap.org
  10.12.6.155 sdc.dcae.plugin.simpledemo.onap.org
  10.12.6.155 portal-sdk.simpledemo.onap.org
  10.12.6.155 policy.api.simpledemo.onap.org
  10.12.6.155 aai.api.sparky.simpledemo.onap.org
  10.12.6.155 cli.api.simpledemo.onap.org
  10.12.6.155 msb.api.discovery.simpledemo.onap.org
  10.12.6.155 msb.api.simpledemo.onap.org
  10.12.6.155 clamp.api.simpledemo.onap.org
  10.12.6.155 so.api.simpledemo.onap.org

Ensure you've disabled any proxy settings in the browser you are using to
access the portal and then simply access the new ssl-encrypted URL:
https://portal.api.simpledemo.onap.org:30225/ONAPPORTAL/login.htm

.. note::
  Using the HTTPS based Portal URL the browser needs to be configured to
  accept unsecure credentials.
  Additionally, when opening an application inside the Portal, the browser
  might block the content, which requires disabling the blocking and reloading
  of the page.

Besides the ONAP Portal the components can deliver additional user interfaces;
please check the component-specific documentation.

.. note::

   | Alternatives Considered:

   -  Kubernetes port forwarding was considered but discarded as it would
      require the end user to run a script that opens up port forwarding
      tunnels to each of the pods that provides a portal application widget.

   -  Reverting to a VNC server similar to what was deployed in the Amsterdam
      release was also considered but there were many issues with resolution,
      lack of volume mount, /etc/hosts dynamic update, and file upload that
      were a tall order to solve in time for the Beijing release.

   | Observations:

   -  If you are not using floating IPs in your Kubernetes deployment and
      directly attaching a public IP address (i.e. by using your public
      provider network) to your K8S Node VMs' network interface, then the
      output of 'kubectl -n onap get services | grep "portal-app"' will show
      your public IP instead of the private network's IP. Therefore, you can
      grab this public IP directly (as compared to trying to find the floating
      IP first) and map this IP in /etc/hosts.

.. figure:: oomLogoV2-Monitor.png
   :align: right

Monitor
=======

All highly available systems include at least one facility to monitor the
health of components within the system. Such health monitors are often used as
inputs to distributed coordination systems (such as etcd, zookeeper, or
consul) and monitoring systems (such as nagios or zabbix). OOM provides two
mechanisms to monitor the real-time health of an ONAP deployment:

- a Consul GUI for a human operator or downstream monitoring systems and
  Kubernetes liveness probes that enable automatic healing of failed
  containers, and
- a set of liveness probes which feed into the Kubernetes manager which
  are described in the Heal section.

Within ONAP, Consul is the monitoring system of choice and is deployed by OOM
in two parts:

- a three-way, centralized Consul server cluster is deployed as a highly
  available monitor of all of the ONAP components, and
- a number of Consul agents.

The Consul server provides a user interface that allows a user to graphically
view the current health status of all of the ONAP components for which agents
have been created - a sample from the ONAP Integration labs follows:

.. figure:: consulHealth.png
   :align: center

To see the real-time health of a deployment go to:
``http://<kubernetes IP>:30270/ui/`` where a GUI much like the one above will
be found.

.. figure:: oomLogoV2-Heal.png
   :align: right

Heal
====

The ONAP deployment is defined by Helm charts as mentioned earlier. These Helm
charts are also used to implement automatic recoverability of ONAP components
when individual components fail. Once ONAP is deployed, a "liveness" probe
starts checking the health of the components after a specified startup time.

Should a liveness probe indicate a failed container it will be terminated and
a replacement will be started in its place - containers are ephemeral. Should
the deployment specification indicate that there are one or more dependencies
to this container or component (for example a dependency on a database) the
dependency will be satisfied before the replacement container/component is
started. This mechanism ensures that, after a failure, all of the ONAP
components restart successfully.

To test healing, the following command can be used to delete a pod::

  > kubectl delete pod [pod name] -n [pod namespace]

One could then use the following command to monitor the pods and observe the
pod being terminated and the service being automatically healed with the
creation of a replacement pod::

  > kubectl get pods --all-namespaces -o=wide

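Alternatively, kubectl can stream these state changes as they happen by adding
its watch flag, avoiding the need to re-run the command::

  > kubectl get pods --all-namespaces -o=wide -w
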
.. figure:: oomLogoV2-Scale.png
   :align: right

Scale
=====

Many of the ONAP components are horizontally scalable which allows them to
adapt to expected offered load. During the Beijing release scaling is static,
that is during deployment or upgrade a cluster size is defined and this
cluster will be maintained even in the presence of faults. The parameter that
controls the cluster size of a given component is found in the values.yaml
file for that component. Here is an excerpt that shows this parameter::

  # default number of instances
  replicaCount: 1

In order to change the size of a cluster, an operator could use a helm upgrade
(described in detail in the next section) as follows::

  > helm upgrade --set replicaCount=3 onap/so/mariadb

The ONAP components use Kubernetes provided facilities to build clustered,
highly available systems including: Services_ with load-balancers,
ReplicaSet_, and StatefulSet_. Some of the open-source projects used by the
ONAP components directly support clustered configurations, for example ODL and
MariaDB Galera.

The Kubernetes Services_ abstraction is used to provide a consistent access
point for each of the ONAP components, independent of the pod or container
architecture of that component. For example, SDN-C uses OpenDaylight
clustering with a default cluster size of three but uses a Kubernetes service
to abstract this cluster from the other ONAP components such that the cluster
could change size and this change would be isolated from the other ONAP
components by the load-balancer implemented in the ODL service abstraction.

A ReplicaSet_ is a construct that is used to describe the desired state of the
cluster. For example 'replicas: 3' indicates to Kubernetes that a cluster of 3
instances is the desired state. Should one of the members of the cluster fail,
a new member will be automatically started to replace it.

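A minimal sketch of such a desired-state specification (the component name,
labels and image below are illustrative only)::

  apiVersion: apps/v1
  kind: ReplicaSet
  metadata:
    name: example-component
  spec:
    replicas: 3            # desired cluster size, maintained by Kubernetes
    selector:
      matchLabels:
        app: example-component
    template:
      metadata:
        labels:
          app: example-component
      spec:
        containers:
        - name: example-component
          image: nexus3.onap.org:10001/onap/example-image:latest
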
Some of the ONAP components may need a more deterministic deployment; for
example to enable intra-cluster communication. For these applications the
component can be deployed as a Kubernetes StatefulSet_ which will maintain a
persistent identifier for the pods and thus a stable network id for the pods.
For example: the pod names might be web-0, web-1, web-{N-1} for N 'web' pods
with corresponding DNS entries such that intra service communication is simple
even if the pods are physically distributed across multiple nodes. An example
of how these capabilities can be used is described in the Running Consul on
Kubernetes tutorial.

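For instance, assuming a StatefulSet named 'web' governed by a headless
service named 'nginx' in the 'onap' namespace, the stable per-pod DNS names
follow the standard Kubernetes pattern of
pod-name.service-name.namespace.svc.cluster.local::

  web-0.nginx.onap.svc.cluster.local
  web-1.nginx.onap.svc.cluster.local
  web-2.nginx.onap.svc.cluster.local
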
.. figure:: oomLogoV2-Upgrade.png
   :align: right

Upgrade
=======

Helm has built-in capabilities to enable the upgrade of pods without causing a
loss of the service being provided by that pod or pods (if configured as a
cluster). As described in the OOM Developer's Guide, ONAP components provide
an abstracted 'service' end point with the pods or containers providing this
service hidden from other ONAP components by a load balancer. This capability
is used during upgrades to allow a pod with a new image to be added to the
service before removing the pod with the old image. This 'make before break'
capability ensures minimal downtime.

Prior to doing an upgrade, determine the status of the deployed charts::

  > helm list
  NAME  REVISION  UPDATED                   STATUS    CHART     NAMESPACE
  so    1         Mon Feb  5 10:05:22 2018  DEPLOYED  so-2.0.1  default

When upgrading a cluster a parameter controls the minimum size of the cluster
during the upgrade while another parameter controls the maximum number of
nodes in the cluster. For example, SDNC configured as a 3-way ODL cluster
might require that during the upgrade no fewer than 2 pods are available at
all times to provide service while no more than 5 pods are ever deployed
across the two versions at any one time to avoid depleting the cluster of
resources. In this scenario, the SDNC cluster would start with 3 old pods then
Kubernetes may add a new pod (3 old, 1 new), delete one old (2 old, 1 new),
add two new pods (2 old, 3 new) and finally delete the 2 old pods (3 new).
During this sequence the constraints of the minimum of two pods and maximum of
five would be maintained while providing service the whole time.

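For a component deployed as a Kubernetes Deployment, these bounds are
expressed with the maxUnavailable and maxSurge fields of the rolling update
strategy; a sketch consistent with the SDNC example above::

  spec:
    replicas: 3
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 1   # never fewer than 2 of the 3 pods in service
        maxSurge: 2         # never more than 5 pods across both versions
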
Initiation of an upgrade is triggered by changes in the Helm charts. For
example, if the image specified for one of the pods in the SDNC deployment
specification were to change (i.e. point to a new Docker image in the nexus3
repository - commonly through the change of a deployment variable), the
sequence of events described in the previous paragraph would be initiated.

For example, to upgrade a container by changing configuration, specifically an
environment value::

  > helm deploy casablanca onap/so --version 2.0.1 --set enableDebug=true

Issuing this command will result in the appropriate container being stopped by
Kubernetes and replaced with a new container with the new environment value.

To upgrade a component to a new version with a new configuration file enter::

  > helm deploy casablanca onap/so --version 2.0.2 -f environments/demo.yaml

To fetch release history enter::

  > helm history so
  REVISION  UPDATED                   STATUS      CHART     DESCRIPTION
  1         Mon Feb  5 10:05:22 2018  SUPERSEDED  so-2.0.1  Install complete
  2         Mon Feb  5 10:10:55 2018  DEPLOYED    so-2.0.2  Upgrade complete

Unfortunately, not all upgrades are successful. In recognition of this the
lineup of pods within an ONAP deployment is tagged such that an administrator
may force the ONAP deployment back to the previously tagged configuration or
to a specific configuration, say to jump back two steps if an incompatibility
between two ONAP components is discovered after the two individual upgrades
succeeded.

This rollback functionality gives the administrator confidence that in the
unfortunate circumstance of a failed upgrade the system can be rapidly brought
back to a known good state. This process of rolling upgrades while under
service is illustrated in this short YouTube video showing a Zero Downtime
Upgrade of a web application while under a 10 million transaction per second
load.

For example, to roll back to the previous system revision enter::

  > helm rollback so 1
  > helm history so
  REVISION  UPDATED                   STATUS      CHART     DESCRIPTION
  1         Mon Feb  5 10:05:22 2018  SUPERSEDED  so-2.0.1  Install complete
  2         Mon Feb  5 10:10:55 2018  SUPERSEDED  so-2.0.2  Upgrade complete
  3         Mon Feb  5 10:14:32 2018  DEPLOYED    so-2.0.1  Rollback to 1

.. note::
  The description field can be overridden to document actions taken or include
  tracking numbers.

Many of the ONAP components contain their own databases which are used to
record configuration or state information. The schemas of these databases may
change from version to version in such a way that data stored within the
database needs to be migrated between versions. If such a migration script is
available it can be invoked during the upgrade (or rollback) by Container
Lifecycle Hooks. Two such hooks are available, PostStart and PreStop, which
containers can access by registering a handler against one or both. Note that
it is the responsibility of the ONAP component owners to implement the hook
handlers - which could be a shell script or a call to a specific container
HTTP endpoint - following the guidelines listed on the Kubernetes site.
Lifecycle hooks are not restricted to database migration or even upgrades but
can be used anywhere specific operations need to be taken during lifecycle
operations.

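A sketch of what such hook registrations look like in a container
specification (the migration script and quiesce endpoint are hypothetical
placeholders that a component owner would supply)::

  containers:
  - name: example-db
    image: nexus3.onap.org:10001/onap/example-image:latest
    lifecycle:
      postStart:
        exec:
          # hypothetical schema migration script baked into the image
          command: ["/bin/sh", "-c", "/opt/db/migrate-schema.sh"]
      preStop:
        httpGet:
          # hypothetical quiesce endpoint served by the container
          path: /quiesce
          port: 8080
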
OOM uses the Helm K8S package manager to deploy ONAP components. Each
component is arranged in a packaging format called a chart - a collection of
files that describe a set of k8s resources. Helm allows for rolling upgrades
of the ONAP component deployed. To upgrade a component Helm release you will
need an updated Helm chart. The chart might have modified, deleted or added
values, deployment yamls, and more. To get the release name use::

  > helm ls

To easily upgrade the release use::

  > helm upgrade [RELEASE] [CHART]

To roll back to a previous release version use::

  > helm rollback [flags] [RELEASE] [REVISION]

For example, to upgrade the onap-so helm release to the latest SO container
release v1.1.2:

- Edit the `so` values.yaml which is part of the chart
- Change "so: nexus3.onap.org:10001/openecomp/so:v1.1.1" to
  "so: nexus3.onap.org:10001/openecomp/so:v1.1.2"
- From the chart location run::

    > helm upgrade onap-so

The previous so pod will be terminated and a new so pod with an updated so
container will be created.

.. figure:: oomLogoV2-Delete.png
   :align: right

Delete
======

Existing deployments can be partially or fully removed once they are no longer
needed. To minimize errors it is recommended that before deleting components
from a running deployment the operator perform a 'dry-run' to display exactly
what will happen with a given command prior to actually deleting anything.
For example::

  > helm undeploy casablanca --dry-run

will display the outcome of deleting the 'casablanca' release from the
deployment.

To completely delete a release and remove it from the internal store enter::

  > helm undeploy casablanca --purge

One can also remove individual components from a deployment by changing the
ONAP configuration values. For example, to remove `so` from a running
deployment enter::

  > helm undeploy casablanca-so --purge

will remove `so` as the configuration indicates it's no longer part of the
deployment. This might be useful if one wanted to replace just `so` by
installing a custom version.

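A minimal sketch of such a replacement, assuming the customized `so` chart has
been rebuilt into the local repository first (the per-component make target is
an assumption based on the oom/kubernetes Makefile)::

  > helm undeploy casablanca-so --purge
  > make so
  > helm deploy casablanca local/onap --namespace onap
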