1 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
2 .. http://creativecommons.org/licenses/by/4.0
3 .. Copyright 2018 Amdocs, Bell Canada
6 .. _Curated applications for Kubernetes: https://github.com/kubernetes/charts
7 .. _Services: https://kubernetes.io/docs/concepts/services-networking/service/
8 .. _ReplicaSet: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/
9 .. _StatefulSet: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
10 .. _Helm Documentation: https://docs.helm.sh/helm/
11 .. _Helm: https://docs.helm.sh/
12 .. _Kubernetes: https://Kubernetes.io/
13 .. _Kubernetes LoadBalancer: https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer
The ONAP Operations Manager (OOM) provides the ability to manage the entire
20 life-cycle of an ONAP installation, from the initial deployment to final
21 decommissioning. This guide provides instructions for users of ONAP to
22 use the Kubernetes_/Helm_ system as a complete ONAP management system.
24 This guide provides many examples of Helm command line operations. For a
complete description of these commands, please refer to the `Helm
28 .. figure:: oomLogoV2-medium.png
31 The following sections describe the life-cycle operations:
33 - Deploy_ - with built-in component dependency management
34 - Configure_ - unified configuration across all ONAP components
35 - Monitor_ - real-time health monitoring feeding to a Consul UI and Kubernetes
- Heal_ - failed ONAP containers are recreated automatically
37 - Scale_ - cluster ONAP services to enable seamless scaling
- Upgrade_ - change out containers or configuration with little or no service impact
39 - Delete_ - cleanup individual containers or entire deployments
41 .. figure:: oomLogoV2-Deploy.png
The OOM team, with assistance from the ONAP project teams, has built a
comprehensive set of Helm charts - yaml files very similar to TOSCA files - that
describe the composition of each of the ONAP components and the relationships
50 within and between components. Using this model Helm is able to deploy all of
51 ONAP with a few simple commands.
Your environment must have both the Kubernetes `kubectl` client and Helm set up; this is a one time activity.
Enter the following to install kubectl, the Kubernetes command line interface used to manage a Kubernetes cluster (the commands shown are for Ubuntu; there are slight differences on other O/Ss)::
61 > curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.10/bin/linux/amd64/kubectl
63 > sudo mv ./kubectl /usr/local/bin/kubectl
Paste kubectl config from Rancher (see the :ref:`cloud-setup-guide-label` for alternative Kubernetes environment setups) into the `~/.kube/config` file.
68 Verify that the Kubernetes config is correct::
70 > kubectl get pods --all-namespaces
72 At this point you should see six Kubernetes pods running.
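
To further confirm that kubectl can reach the cluster, the node list and the
client/server versions can also be checked with standard kubectl commands::

  > kubectl get nodes
  > kubectl version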
76 Helm is used by OOM for package and configuration management. To install Helm, enter the following::
78 > wget http://storage.googleapis.com/kubernetes-helm/helm-v2.9.1-linux-amd64.tar.gz
79 > tar -zxvf helm-v2.9.1-linux-amd64.tar.gz
80 > sudo mv linux-amd64/helm /usr/local/bin/helm
82 Verify the Helm version with::
86 Install the Helm Tiller application and initialize with::
Once kubectl and Helm are set up, one needs to set up a local Helm server to serve up the ONAP charts::
94 > helm install osn/onap
97 The osn repo is not currently available so creation of a local repository is
Helm is able to use charts served up from a repository and comes set up with a
default CNCF provided `Curated applications for Kubernetes`_ repository called
stable, which should be removed to avoid confusion::
104 > helm repo remove stable
106 .. To setup the Open Source Networking Nexus repository for helm enter::
107 .. > helm repo add osn 'https://nexus3.onap.org:10001/helm/helm-repo-in-nexus/master/'
109 To prepare your system for an installation of ONAP, you'll need to::
111 > git clone -b beijing http://gerrit.onap.org/r/oom
To set up a local Helm server to serve up the ONAP charts::
Note the port number that is listed and use it in the `helm repo add` command as follows::
122 > helm repo add local http://127.0.0.1:8879
124 To get a list of all of the available Helm chart repositories::
128 local http://127.0.0.1:8879
130 Then build your local Helm repository::
The Helm search command reads through all of the repositories configured on the
system and looks for matches::
138 NAME VERSION DESCRIPTION
139 local/appc 2.0.0 Application Controller
140 local/clamp 2.0.0 ONAP Clamp
141 local/common 2.0.0 Common templates for inclusion in other charts
142 local/onap 2.0.0 Open Network Automation Platform (ONAP)
143 local/robot 2.0.0 A helm Chart for kubernetes-ONAP Robot
144 local/so 2.0.0 ONAP Service Orchestrator
146 In any case, setup of the Helm repository is a one time activity.
Once the repo is set up, installation of ONAP can be done with a single command::
150 > helm install local/onap --name development
152 This will install ONAP from a local repository in a 'development' Helm release.
153 As described below, to override the default configuration values provided by
154 OOM, an environment file can be provided on the command line as follows::
156 > helm install local/onap --name development -f onap-development.yaml
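
Individual configuration values can also be overridden directly on the command
line with the `--set` option; such values take precedence over those provided
in an environment file. For example, to install ONAP without the `so` component
(the same override used in the Delete section below)::

  > helm install local/onap --name development --set so.enabled=false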
158 To get a summary of the status of all of the pods (containers) running in your
161 > kubectl get pods --all-namespaces -o=wide
164 The Kubernetes namespace concept allows for multiple instances of a component
165 (such as all of ONAP) to co-exist with other components in the same
166 Kubernetes cluster by isolating them entirely. Namespaces share only the
167 hosts that form the cluster thus providing isolation between production and
168 development systems as an example. The OOM deployment of ONAP in Beijing is
now done within a single Kubernetes namespace, whereas in Amsterdam a namespace
170 was created for each of the ONAP components.
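
For example, assuming the deployment uses the `onap` namespace (as in the
portal example later in this guide), all of the ONAP pods can be listed from
that single namespace with::

  > kubectl get pods -n onap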
173 The Helm `--name` option refers to a release name and not a Kubernetes namespace.
176 To install a specific version of a single ONAP component (`so` in this example)
177 with the given name enter::
179 > helm install onap/so --version 2.0.1 -n so
181 To display details of a specific resource or group of resources type::
183 > kubectl describe pod so-1071802958-6twbl
185 where the pod identifier refers to the auto-generated pod identifier.
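
Similarly, kubectl can retrieve the logs of a container within a pod, which is
often useful when diagnosing startup problems; the pod name below is the same
illustrative auto-generated identifier, and the `onap` namespace is assumed::

  > kubectl logs so-1071802958-6twbl -n onap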
187 .. figure:: oomLogoV2-Configure.png
193 Each project within ONAP has its own configuration data generally consisting
194 of: environment variables, configuration files, and database initial values.
195 Many technologies are used across the projects resulting in significant
196 operational complexity and an inability to apply global parameters across the
197 entire ONAP deployment. OOM solves this problem by introducing a common
198 configuration technology, Helm charts, that provide a hierarchical
199 configuration with the ability to override values with higher
200 level charts or command line options.
202 The structure of the configuration of ONAP is shown in the following diagram.
203 Note that key/value pairs of a parent will always take precedence over those
204 of a child. Also note that values set on the command line have the highest
212 oValues [label="values.yaml"]
213 demo [label="onap-demo.yaml"]
214 prod [label="onap-production.yaml"]
215 oReq [label="requirements.yaml"]
216 soValues [label="values.yaml"]
217 soReq [label="requirements.yaml"]
218 mdValues [label="values.yaml"]
221 oResources [label="resources"]
225 oResources -> environments
238 The top level onap/values.yaml file contains the values required to be set
before deploying ONAP. Here are the contents of this file:
241 .. include:: onap_values.yaml
244 One may wish to create a value file that is specific to a given deployment such
that it can be differentiated from other deployments. For example, an
246 onap-development.yaml file may create a minimal environment for development
247 while onap-production.yaml might describe a production deployment that operates
248 independently of the developer version.
250 For example, if the production OpenStack instance was different from a
251 developer's instance, the onap-production.yaml file may contain a different
252 value for the vnfDeployment/openstack/oam_network_cidr key as shown below.
258 apps: consul msb mso message-router sdnc vid robot portal policy appc aai
259 sdc dcaegen2 log cli multicloud clamp vnfsdk aaf kube2msb
260 dataRootDir: /dockerdata-nfs
262 # docker repositories
264 onap: nexus3.onap.org:10001
267 filebeat: docker.elastic.co
272 # vnf deployment environment
275 ubuntu_14_image: "Ubuntu_14.04.5_LTS"
276 public_net_id: "e8f51956-00dd-4425-af36-045716781ffc"
277 oam_network_id: "d4769dfb-c9e4-4f72-b3d6-1d18f4ac4ee6"
278 oam_subnet_id: "191f7580-acf6-4c2b-8ec0-ba7d99b3bc4e"
279 oam_network_cidr: "192.168.30.0/24"
283 To deploy ONAP with this environment file, enter::
285 > helm install local/onap -n beijing -f environments/onap-production.yaml
287 .. include:: environments_onap_demo.yaml
When deploying all of ONAP, a requirements.yaml file controls which ONAP
components, and which versions of them, are included. Here is an excerpt of this
296 # Referencing a named repo called 'local'.
297 # Can add this repo by running commands like:
299 # > helm repo add local http://127.0.0.1:8879
305 condition: so.enabled
The ~ operator in the `so` version value indicates that the latest "2.X.X"
version of `so` shall be used, thus allowing for minor upgrades that don't
impact the so API; hence, version 2.0.1 will be installed
The onap/resources/environment/onap-dev.yaml file (see the excerpt below)
enables fine grained control over which components are included as part of this
deployment. By changing this `so` line to `enabled: false` the `so` component
will not be deployed. If this change is part of an upgrade the existing `so`
component will be shut down. Other `so` parameters and even `so` child values
can be modified; for example, the `so` `liveness` probe could be disabled
(which is not recommended as this change would disable auto-healing of `so`).
323 #################################################################
324 # Global configuration overrides.
326 # These overrides will affect all helm charts (ie. applications)
327 # that are listed below and are 'enabled'.
328 #################################################################
332 #################################################################
333 # Enable/disable and configure helm charts (ie. applications)
334 # to customize the ONAP deployment.
335 #################################################################
339 so: # Service Orchestrator
345 # necessary to disable liveness probe when setting breakpoints
346 # in debugger so K8s doesn't restart unresponsive container
351 Accessing the ONAP Portal using OOM and a Kubernetes Cluster
352 ------------------------------------------------------------
354 The ONAP deployment created by OOM operates in a private IP network that isn't
publicly accessible (i.e. OpenStack VMs with a private internal network), which
blocks access to the ONAP Portal. To enable direct access to this Portal from a
357 user's own environment (a laptop etc.) the portal application's port 8989 is
358 exposed through a `Kubernetes LoadBalancer`_ object.
Typically, to be able to access the Kubernetes nodes publicly, a public address
is assigned. In OpenStack this is a floating IP address.
363 When the `portal-app` chart is deployed a Kubernetes service is created that
364 instantiates a load balancer. The LB chooses the private interface of one of
365 the nodes as in the example below (10.0.0.4 is private to the K8s cluster only).
366 Then to be able to access the portal on port 8989 from outside the K8s &
367 Openstack environment, the user needs to assign/get the floating IP address that
368 corresponds to the private IP as follows::
370 > kubectl -n onap get services|grep "portal-app"
371 portal-app LoadBalancer 10.43.142.201 10.0.0.4 8989:30215/TCP,8006:30213/TCP,8010:30214/TCP 1d app=portal-app,release=dev
In this example, use the 10.0.0.4 private address as a key to find the
corresponding public address, which in this example is 10.12.6.155. If you're
using OpenStack you'll do the lookup with the Horizon GUI or the OpenStack CLI
for your tenant (openstack server list). That IP is then used in your
`/etc/hosts` to map the fixed DNS aliases required by the ONAP Portal as shown
381 10.12.6.155 portal.api.simpledemo.onap.org
382 10.12.6.155 vid.api.simpledemo.onap.org
383 10.12.6.155 sdc.api.fe.simpledemo.onap.org
384 10.12.6.155 portal-sdk.simpledemo.onap.org
385 10.12.6.155 policy.api.simpledemo.onap.org
386 10.12.6.155 aai.api.sparky.simpledemo.onap.org
387 10.12.6.155 cli.api.simpledemo.onap.org
388 10.12.6.155 msb.api.discovery.simpledemo.onap.org
Ensure you've disabled any proxy settings in the browser you are using to
access the portal and then simply access the familiar URL:
392 http://portal.api.simpledemo.onap.org:8989/ONAPPORTAL/login.htm
397 | Alternatives Considered:
399 - Kubernetes port forwarding was considered but discarded as it would require
400 the end user to run a script that opens up port forwarding tunnels to each of
401 the pods that provides a portal application widget.
403 - Reverting to a VNC server similar to what was deployed in the Amsterdam
404 release was also considered but there were many issues with resolution, lack
405 of volume mount, /etc/hosts dynamic update, file upload that were a tall order
406 to solve in time for the Beijing release.
410 - If you are not using floating IPs in your Kubernetes deployment and directly attaching
411 a public IP address (i.e. by using your public provider network) to your K8S Node
412 VMs' network interface, then the output of 'kubectl -n onap get services | grep "portal-app"'
413 will show your public IP instead of the private network's IP. Therefore,
414 you can grab this public IP directly (as compared to trying to find the floating
415 IP first) and map this IP in /etc/hosts.
417 .. figure:: oomLogoV2-Monitor.png
423 All highly available systems include at least one facility to monitor the
424 health of components within the system. Such health monitors are often used as
425 inputs to distributed coordination systems (such as etcd, zookeeper, or consul)
and monitoring systems (such as nagios or zabbix). OOM provides two mechanisms
427 to monitor the real-time health of an ONAP deployment:
429 - a Consul GUI for a human operator or downstream monitoring systems and
430 Kubernetes liveness probes that enable automatic healing of failed
432 - a set of liveness probes which feed into the Kubernetes manager which
433 are described in the Heal section.
Within ONAP, Consul is the monitoring system of choice and is deployed by OOM in two parts:
437 - a three-way, centralized Consul server cluster is deployed as a highly
438 available monitor of all of the ONAP components, and
439 - a number of Consul agents.
441 The Consul server provides a user interface that allows a user to graphically
442 view the current health status of all of the ONAP components for which agents
443 have been created - a sample from the ONAP Integration labs follows:
445 .. figure:: consulHealth.png
448 To see the real-time health of a deployment go to: http://<kubernetes IP>:30270/ui/
449 where a GUI much like the following will be found:
452 .. figure:: oomLogoV2-Heal.png
458 The ONAP deployment is defined by Helm charts as mentioned earlier. These Helm
459 charts are also used to implement automatic recoverability of ONAP components
460 when individual components fail. Once ONAP is deployed, a "liveness" probe
461 starts checking the health of the components after a specified startup time.
463 Should a liveness probe indicate a failed container it will be terminated and a
464 replacement will be started in its place - containers are ephemeral. Should the
465 deployment specification indicate that there are one or more dependencies to
466 this container or component (for example a dependency on a database) the
467 dependency will be satisfied before the replacement container/component is
468 started. This mechanism ensures that, after a failure, all of the ONAP
469 components restart successfully.
471 To test healing, the following command can be used to delete a pod::
473 > kubectl delete pod [pod name] -n [pod namespace]
475 One could then use the following command to monitor the pods and observe the
476 pod being terminated and the service being automatically healed with the
477 creation of a replacement pod::
479 > kubectl get pods --all-namespaces -o=wide
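
Adding the `--watch` (or `-w`) flag to this command makes kubectl stream
updates continuously, so the termination of the old pod and the creation of its
replacement are easy to observe::

  > kubectl get pods --all-namespaces -o=wide --watch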
481 .. figure:: oomLogoV2-Scale.png
487 Many of the ONAP components are horizontally scalable which allows them to
488 adapt to expected offered load. During the Beijing release scaling is static,
489 that is during deployment or upgrade a cluster size is defined and this cluster
490 will be maintained even in the presence of faults. The parameter that controls
491 the cluster size of a given component is found in the values.yaml file for that
492 component. Here is an excerpt that shows this parameter:
496 # default number of instances
499 In order to change the size of a cluster, an operator could use a helm upgrade
500 (described in detail in the next section) as follows::
502 > helm upgrade --set replicaCount=3 onap/so/mariadb
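
Once the upgrade completes, the additional instances can be confirmed by
listing the pods of the affected component, for example::

  > kubectl get pods --all-namespaces | grep mariadb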
504 The ONAP components use Kubernetes provided facilities to build clustered,
505 highly available systems including: Services_ with load-balancers, ReplicaSet_,
506 and StatefulSet_. Some of the open-source projects used by the ONAP components
507 directly support clustered configurations, for example ODL and MariaDB Galera.
The Kubernetes Services_ abstraction is used to provide a consistent access
point for each of the ONAP components, independent of the pod or container
architecture of that component. For example, SDN-C uses OpenDaylight clustering
with a default cluster size of three but uses a Kubernetes service to abstract
this cluster from the other ONAP components, such that the cluster could change
size and this change is isolated from the other ONAP components by the
load-balancer implemented in the ODL service
518 A ReplicaSet_ is a construct that is used to describe the desired state of the
519 cluster. For example 'replicas: 3' indicates to Kubernetes that a cluster of 3
520 instances is the desired state. Should one of the members of the cluster fail,
521 a new member will be automatically started to replace it.
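
The ReplicaSets created on behalf of the ONAP components, along with their
desired and current instance counts, can be listed with::

  > kubectl get replicasets --all-namespaces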
Some of the ONAP components may need a more deterministic deployment; for
524 example to enable intra-cluster communication. For these applications the
525 component can be deployed as a Kubernetes StatefulSet_ which will maintain a
526 persistent identifier for the pods and thus a stable network id for the pods.
527 For example: the pod names might be web-0, web-1, web-{N-1} for N 'web' pods
528 with corresponding DNS entries such that intra service communication is simple
529 even if the pods are physically distributed across multiple nodes. An example
530 of how these capabilities can be used is described in the Running Consul on
533 .. figure:: oomLogoV2-Upgrade.png
539 Helm has built-in capabilities to enable the upgrade of pods without causing a
540 loss of the service being provided by that pod or pods (if configured as a
541 cluster). As described in the OOM Developer's Guide, ONAP components provide
542 an abstracted 'service' end point with the pods or containers providing this
543 service hidden from other ONAP components by a load balancer. This capability
544 is used during upgrades to allow a pod with a new image to be added to the
545 service before removing the pod with the old image. This 'make before break'
546 capability ensures minimal downtime.
Prior to doing an upgrade, determine the status of the deployed charts::
551 NAME REVISION UPDATED STATUS CHART NAMESPACE
552 so 1 Mon Feb 5 10:05:22 2018 DEPLOYED so-2.0.1 default
554 When upgrading a cluster a parameter controls the minimum size of the cluster
555 during the upgrade while another parameter controls the maximum number of nodes
in the cluster. For example, SDNC configured as a 3-way ODL cluster might
557 require that during the upgrade no fewer than 2 pods are available at all times
558 to provide service while no more than 5 pods are ever deployed across the two
559 versions at any one time to avoid depleting the cluster of resources. In this
560 scenario, the SDNC cluster would start with 3 old pods then Kubernetes may add
561 a new pod (3 old, 1 new), delete one old (2 old, 1 new), add two new pods (2
562 old, 3 new) and finally delete the 2 old pods (3 new). During this sequence
563 the constraints of the minimum of two pods and maximum of five would be
564 maintained while providing service the whole time.
An upgrade is triggered by changes in the Helm charts. For
567 example, if the image specified for one of the pods in the SDNC deployment
568 specification were to change (i.e. point to a new Docker image in the nexus3
569 repository - commonly through the change of a deployment variable), the
570 sequence of events described in the previous paragraph would be initiated.
572 For example, to upgrade a container by changing configuration, specifically an
575 > helm upgrade beijing onap/so --version 2.0.1 --set enableDebug=true
577 Issuing this command will result in the appropriate container being stopped by
578 Kubernetes and replaced with a new container with the new environment value.
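
As with any Helm operation, the result can be confirmed by reviewing the
release list and the running pods::

  > helm list
  > kubectl get pods --all-namespaces -o=wide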
580 To upgrade a component to a new version with a new configuration file enter::
582 > helm upgrade beijing onap/so --version 2.0.2 -f environments/demo.yaml
584 To fetch release history enter::
587 REVISION UPDATED STATUS CHART DESCRIPTION
588 1 Mon Feb 5 10:05:22 2018 SUPERSEDED so-2.0.1 Install complete
589 2 Mon Feb 5 10:10:55 2018 DEPLOYED so-2.0.2 Upgrade complete
Unfortunately, not all upgrades are successful. In recognition of this, the
592 lineup of pods within an ONAP deployment is tagged such that an administrator
593 may force the ONAP deployment back to the previously tagged configuration or to
594 a specific configuration, say to jump back two steps if an incompatibility
595 between two ONAP components is discovered after the two individual upgrades
598 This rollback functionality gives the administrator confidence that in the
599 unfortunate circumstance of a failed upgrade the system can be rapidly brought
600 back to a known good state. This process of rolling upgrades while under
601 service is illustrated in this short YouTube video showing a Zero Downtime
Upgrade of a web application while under a 10 million transactions per second
For example, to roll back to the previous system revision enter::
610 REVISION UPDATED STATUS CHART DESCRIPTION
611 1 Mon Feb 5 10:05:22 2018 SUPERSEDED so-2.0.1 Install complete
612 2 Mon Feb 5 10:10:55 2018 SUPERSEDED so-2.0.2 Upgrade complete
613 3 Mon Feb 5 10:14:32 2018 DEPLOYED so-2.0.1 Rollback to 1
617 The description field can be overridden to document actions taken or include
620 Many of the ONAP components contain their own databases which are used to
621 record configuration or state information. The schemas of these databases may
622 change from version to version in such a way that data stored within the
623 database needs to be migrated between versions. If such a migration script is
624 available it can be invoked during the upgrade (or rollback) by Container
625 Lifecycle Hooks. Two such hooks are available, PostStart and PreStop, which
626 containers can access by registering a handler against one or both. Note that
627 it is the responsibility of the ONAP component owners to implement the hook
628 handlers - which could be a shell script or a call to a specific container HTTP
629 endpoint - following the guidelines listed on the Kubernetes site. Lifecycle
630 hooks are not restricted to database migration or even upgrades but can be used
631 anywhere specific operations need to be taken during lifecycle operations.
OOM uses the Helm K8S package manager to deploy ONAP components. Each component is
634 arranged in a packaging format called a chart - a collection of files that
635 describe a set of k8s resources. Helm allows for rolling upgrades of the ONAP
636 component deployed. To upgrade a component Helm release you will need an
637 updated Helm chart. The chart might have modified, deleted or added values,
638 deployment yamls, and more. To get the release name use::
642 To easily upgrade the release use::
644 > helm upgrade [RELEASE] [CHART]
646 To roll back to a previous release version use::
648 > helm rollback [flags] [RELEASE] [REVISION]
650 For example, to upgrade the onap-so helm release to the latest SO container
653 - Edit so values.yaml which is part of the chart
654 - Change "so: nexus3.onap.org:10001/openecomp/so:v1.1.1" to
655 "so: nexus3.onap.org:10001/openecomp/so:v1.1.2"
656 - From the chart location run::
658 > helm upgrade onap-so
660 The previous so pod will be terminated and a new so pod with an updated so
661 container will be created.
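
The new revision created by this upgrade can be confirmed in the release
history (the release is named onap-so in this example)::

  > helm history onap-so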
663 .. figure:: oomLogoV2-Delete.png
669 Existing deployments can be partially or fully removed once they are no longer
670 needed. To minimize errors it is recommended that before deleting components
671 from a running deployment the operator perform a 'dry-run' to display exactly
672 what will happen with a given command prior to actually deleting anything. For
675 > helm delete --dry-run beijing
677 will display the outcome of deleting the 'beijing' release from the deployment.
678 To completely delete a release and remove it from the internal store enter::
680 > helm delete --purge beijing
682 One can also remove individual components from a deployment by changing the
683 ONAP configuration values. For example, to remove `so` from a running
686 > helm upgrade beijing osn/onap --set so.enabled=false
688 will remove `so` as the configuration indicates it's no longer part of the
deployment. This might be useful if one wanted to replace just `so` by
690 installing a custom version.
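
For example, assuming a custom `so` chart has been built and added to the local
repository, it could then be installed on its own (the release name and version
shown here are for illustration only)::

  > helm install local/so --name so-custom --version 2.0.2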