.. This work is licensed under a Creative Commons Attribution 4.0 International License.

The ONAP Operations Manager (OOM) is responsible for life-cycle
management of the ONAP platform itself; components such as MSO, SDNC,
etc. It is not responsible for the management of services, VNFs or
infrastructure instantiated by ONAP or used by ONAP to host such
services or VNFs. OOM uses the open-source Kubernetes container
management system to manage the Docker containers that compose ONAP,
where the containers are hosted either directly on bare-metal servers
or on VMs hosted by a 3rd party management system. OOM ensures that
ONAP is easily deployable and maintainable throughout its life cycle
while using hardware resources efficiently. There are two deployment
options for OOM:

- A minimal deployment where single instances of the ONAP components
  are instantiated with no resource reservations, and

- | A production deployment where ONAP components are deployed with
    redundancy and anti-affinity rules such that single faults do not
    interrupt ONAP operation.
  | When deployed as containers directly on bare-metal, the minimal
    deployment option requires a single host (32GB memory with 12
    vCPUs); however, further optimization should allow this deployment
    to target a laptop computer. Production deployments will require
    more resources as determined by anti-affinity and geo-redundancy
    requirements.

OOM deployments of ONAP provide many benefits:

- Life-cycle Management - Kubernetes is a comprehensive system for
  managing the life-cycle of containerized applications. Its use as a
  platform manager will ease the deployment of ONAP, provide fault
  tolerance and horizontal scalability, and enable seamless upgrades.

- Hardware Efficiency - ONAP can be deployed on a single host using
  less than 32GB of memory. As opposed to VMs, which require a guest
  operating system to be deployed along with the application,
  containers provide similar application encapsulation with neither
  the computing, memory and storage overhead nor the associated
  long-term support costs of those guest operating systems. An
  informal goal of the project is to be able to create a development
  deployment of ONAP that can be hosted on a single laptop computer.

- Rapid Deployment - With locally cached images, ONAP can be deployed
  from scratch in 7 minutes. Eliminating the guest operating system
  results in containers coming into service much faster than a VM
  equivalent. This advantage can be particularly useful for ONAP,
  where rapid reaction to inevitable failures will be critical in
  production environments.

- Portability - OOM takes advantage of Kubernetes' ability to be
  hosted on multiple hosted cloud solutions like Google Compute
  Engine, AWS EC2, Microsoft Azure, CenturyLink Cloud, IBM Bluemix
  and more.

- Minimal Impact - As ONAP is already deployed with Docker
  containers, minimal changes are required to the components
  themselves when deployed with OOM.

- Platform Deployment - Automated deployment/un-deployment of ONAP
  instance(s); automated deployment/un-deployment of individual
  platform components using docker containers & kubernetes

- Platform Monitoring & Healing - Monitor platform state, platform
  health checks, fault tolerance and self-healing using docker
  containers & kubernetes

- Platform Scaling - Platform horizontal scalability through the use
  of docker containers & kubernetes

- Platform Upgrades - Platform upgrades using docker containers &
  kubernetes

- Platform Configurations - Manage overall platform component
  configurations using docker containers & kubernetes

- | Platform Migrations - Manage migration of platform components
    using docker containers & kubernetes
  | Please note that the ONAP Operations Manager does not provide
    support for containerization of services or VNFs that are managed
    by ONAP; OOM orchestrates the life-cycle of the ONAP platform
    components themselves.

Linux containers allow an application and all of its operating system
dependencies to be packaged and deployed as a single unit without
including a guest operating system, as is done with virtual machines.
The most popular container solution
is \ `Docker <https://www.docker.com/>`__, which provides tools for
container management like the Docker Host (dockerd) which can create,
run, stop, move, or delete a container. Docker has a very popular
registry of container images that can be used by any Docker system;
however, in the ONAP context, Docker images are built by the standard
CI/CD flow and stored
in \ `Nexus <https://nexus.onap.org/#welcome>`__ repositories. OOM uses
the "standard" ONAP docker containers and three new ones created
specifically for OOM.

Containers are isolated from each other primarily via namespaces
within the Linux kernel without the need for multiple guest operating
systems. As such, multiple containers can be deployed with little
overhead, such that all of ONAP can be deployed on a single host.
With some optimization of the ONAP components (e.g. elimination of
redundant database instances) it may be possible to deploy ONAP on a
single laptop computer.

Life Cycle Management via Kubernetes
====================================

As with the VNFs deployed by ONAP, the components of ONAP have their
own life-cycle where the components are created, run, healed, scaled,
stopped and deleted. These life-cycle operations are managed by
the \ `Kubernetes <https://kubernetes.io/>`__ container management
system, which maintains the desired state of the container system as
described by one or more deployment descriptors - similar in concept
to OpenStack HEAT Orchestration Templates. The following sections
describe the fundamental objects managed by Kubernetes, the network
these components use to communicate with each other and other
entities outside of ONAP, and the templates that describe the
configuration and desired state of the ONAP components.

ONAP Components to Kubernetes Object Relationships
--------------------------------------------------

Kubernetes deployments consist of multiple objects:

- nodes - a worker machine - either physical or virtual - that hosts
  multiple containers managed by kubernetes.

- services - an abstraction of a logical set of pods that provide a
  micro-service.

- pods - one or more (but typically one) container(s) that provide
  specific application functionality.

- persistent volumes - one or more permanent volumes need to be
  established to hold non-ephemeral configuration and state data.

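On a live cluster, each of these object types can be inspected with kubectl. The sketch below only builds and prints the inspection commands (so no cluster is needed to run it); on a real deployment you would execute each printed command:

```shell
# Build (but do not execute) the kubectl commands used to inspect each
# of the four Kubernetes object types described above.
cmds=""
for obj in nodes services pods persistentvolumes; do
  cmds="${cmds}kubectl get ${obj} --all-namespaces
"
done
printf '%s' "$cmds"
```

Running the printed commands against a configured cluster lists the worker machines, service abstractions, pods and persistent volumes across all namespaces.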
The relationship between these objects is shown in the following
figure:

.. figure:: ../kubernetes_objects.png

OOM uses these kubernetes objects as described in the following
sections.

Nodes
~~~~~

OOM works with both physical and virtual worker machines.

- Virtual Machine Deployments - If ONAP is to be deployed onto a set
  of virtual machines, the creation of the VMs is outside of the
  scope of OOM and could be done in many ways, such as:

  - manually, for example by a user using the OpenStack Horizon GUI
    or on `AWS
    EC2 <https://wiki.onap.org/display/DW/ONAP+on+AWS#ONAPonAWS-Option0:DeployOOMKubernetestoaspotVM>`__,

  - automatically, for example with the use of an OpenStack Heat
    Orchestration Template which builds an ONAP stack, or

  - orchestrated, for example with Cloudify creating the VMs from a
    TOSCA template and controlling their life cycle for the life of
    the ONAP deployment.

- Physical Machine Deployments - If ONAP is to be deployed onto
  physical machines there are several options, but the recommendation
  is to use
  `Rancher <http://rancher.com/docs/rancher/v1.6/en/quick-start-guide/>`__
  along with `Helm <https://github.com/kubernetes/helm/releases>`__ to
  associate hosts with a kubernetes cluster.

Pods
~~~~

A group of containers with shared storage and networking can be
grouped together into a kubernetes pod. All of the containers within
a pod are co-located and co-scheduled so they operate as a single
unit. Within the ONAP Amsterdam release, pods are mapped one-to-one
to docker containers, although this may change in the future. As
explained in the Services section below, the use of pods within each
ONAP component is abstracted from other ONAP components.

Services
~~~~~~~~

OOM uses the kubernetes service abstraction to provide a consistent
access point for each of the ONAP components, independent of the pod
or container architecture of that component. For example, the SDNC
component may introduce OpenDaylight clustering at some point and
change the number of pods in this component to three or more, but
this change will be isolated from the other ONAP components by the
service abstraction. A service can include a load balancer on its
ingress to distribute traffic between the pods and even react to
dynamic changes in the number of pods if they are part of a replica
set (see the MSO example below for a brief explanation of replica
sets).

Persistent Volumes
~~~~~~~~~~~~~~~~~~

As pods and containers are ephemeral, any data that must be persisted
across pod restart events needs to be stored outside of the pod in
persistent volume(s). Kubernetes supports a wide variety of types of
persistent volumes such as: Fibre Channel, NFS, iSCSI, CephFS, and
GlusterFS (for a full list look
`here <https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes>`__),
so there are many options as to how storage is configured when
deploying ONAP.

OOM Networking with Kubernetes
------------------------------

- Ports - Flattening the containers also exposes port conflicts
  between the containers, which need to be resolved.

Within the namespaces are kubernetes services that provide external
connectivity to pods that host Docker containers. The following is a
list of the namespaces and the services within:

- model-loader-service

- search-data-service

- dcae-collector-common-event

- dcae-collector-dmaapbc

- onap-message-router

Note that services listed in \ *italics* are local to the namespace
itself and are not accessible from outside of the namespace.

Kubernetes Deployment Specifications for ONAP
---------------------------------------------

Each of the ONAP components is deployed as described in a deployment
specification. This specification documents key parameters and
dependencies between the pods of an ONAP component such that
kubernetes is able to repeatably start up the component. The
component artifacts are stored in the oom/kubernetes directory of
the \ `ONAP
gerrit <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes;h=4597d09dbce86d7543174924322435c30cb5b0ee;hb=refs/heads/master>`__.
The mso project is a relatively simple example, so let's start there.

Within
the \ `oom/kubernetes/templates/mso <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/templates/mso;h=d8b778a16381d6695f635c14b9dcab72fb9fcfcd;hb=refs/heads/master>`__ directory,
one will find four files in yaml format:

- `all-services.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/all-services.yaml;hb=refs/heads/master>`__

- `db-deployment.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/db-deployment.yaml;hb=refs/heads/master>`__

- `mso-deployment.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/mso-deployment.yaml;hb=refs/heads/master>`__

- `mso-pv-pvc.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob_plain;f=kubernetes/mso/templates/mso-pv-pvc.yaml;hb=refs/heads/master>`__

The db-deployment.yaml file describes the deployment of the database
component of mso. Here are the contents:

**db-deployment.yaml**::

  apiVersion: extensions/v1beta1
  namespace: "{{ .Values.nsPrefix }}-mso"
  image: {{ .Values.image.mariadb }}
  imagePullPolicy: {{ .Values.pullPolicy }}
  - name: MYSQL_ROOT_PASSWORD
  - name: MARIADB_MAJOR
  - name: MARIADB_VERSION
    value: "10.1.11+maria-1~jessie"
  - mountPath: /etc/localtime
  - mountPath: /etc/mysql/conf.d
    name: mso-mariadb-conf
  - mountPath: /docker-entrypoint-initdb.d
    name: mso-mariadb-docker-entrypoint-initdb
  - mountPath: /var/lib/mysql
    name: mso-mariadb-data
  - containerPort: 3306
  initialDelaySeconds: 5
  - name: mso-mariadb-conf
    path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mariadb/conf.d
  - name: mso-mariadb-docker-entrypoint-initdb
    path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mariadb/docker-entrypoint-initdb.d
  - name: mso-mariadb-data
    persistentVolumeClaim:
  - name: "{{ .Values.nsPrefix }}-docker-registry-key"

The first part of the yaml file simply states that this is a
deployment specification for a mariadb pod.

The spec section starts off with 'replicas: 1', which states that
only one 'replica' will be used here. If one were to change the
number of replicas to 3, for example, kubernetes would attempt to
ensure that three replicas of this pod are operational at all times.
One can see that in a clustered environment the number of replicas
should probably be more than 1, but for simple deployments 1 is
sufficient.

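Besides editing the replica count in the descriptor, the count can also be changed imperatively with kubectl. This sketch only constructs and prints the command (the deployment and namespace names are illustrative assumptions, not values from the mso files):

```shell
# Construct (but do not run) a kubectl scale command for a deployment;
# 'mariadb' and the 'onap-mso' namespace are illustrative names.
ns_prefix="onap"
replicas=3
scale_cmd="kubectl scale deployment mariadb --replicas=${replicas} -n ${ns_prefix}-mso"
echo "${scale_cmd}"
```

On a live cluster, running the printed command would make kubernetes converge toward three operational replicas of the pod.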
The selector label is a grouping primitive of kubernetes, but this
simple example doesn't exercise its full capabilities.

The template/spec section is where the key information required to
start the container is specified:

- image: is a reference to the location of the docker image in nexus3

- name: is the name of the docker image

- env: is a section that supports the creation of operating system
  environment variables within the container; these are specified as
  a set of key/value pairs. For example, MYSQL\_ROOT\_PASSWORD is
  defined here.

- volumeMounts: allow for the creation of custom mount points

- ports: define the networking ports that will be opened on the
  container. Note that further on, in the all-services.yaml file,
  ports that are defined here can be exposed outside of an ONAP
  component's namespace by creating a 'nodePort' - a mechanism used
  to resolve port conflicts.

- readinessProbe: is the mechanism kubernetes uses to determine the
  state of the container.

- volumes: a location to define volumes required by the container, in
  this case configuration and initialization information.

- imagePullSecrets: a key to access the nexus3 repo when pulling
  docker images.

As one might imagine, the mso-deployment.yaml file describes the
deployment artifacts of the mso application. Here are the contents:

**mso-deployment.yaml**::

  apiVersion: extensions/v1beta1
  namespace: "{{ .Values.nsPrefix }}-mso"
  pod.beta.kubernetes.io/init-containers: '[
    "fieldPath": "metadata.namespace"
    "image": "{{ .Values.image.readiness }}",
    "imagePullPolicy": "{{ .Values.pullPolicy }}",
    "name": "mso-readiness"
  - /docker-files/scripts/start-jboss-server.sh
  image: {{ .Values.image.mso }}
  imagePullPolicy: {{ .Values.pullPolicy }}
  - mountPath: /etc/localtime
  - mountPath: /docker-files
    name: mso-docker-files
  - containerPort: 3904
  - containerPort: 3905
  - containerPort: 8080
  - containerPort: 9990
  - containerPort: 8787
  initialDelaySeconds: 5
  path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mso
  - name: mso-docker-files
    path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/docker-files
  - name: "{{ .Values.nsPrefix }}-docker-registry-key"

Much like the db deployment specification, the first and last parts
of this yaml file describe meta-data, replicas, images, volumes, etc.
The template section has an important new piece of functionality
though: a deployment specification for a new "initialization"
container. The entire purpose of the init-container is to allow
dependencies to be resolved in an orderly manner such that the entire
ONAP system comes up every time. Once the dependencies are met and
the init-container's job is complete, this container will terminate.
Therefore, when OOM starts up ONAP, one is able to see a number of
init-containers start and then disappear as the system stabilizes.
Note that more than one init-container may be specified, each
completing before the next starts, if complex startup relationships
need to be specified.

In this particular init-container, the command '/root/ready.py' will
be executed to determine when mariadb is ready, but this could be a
simple bash script. The image/name section describes where and how to
get the docker image for the init-container.

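The wait-for-dependency behaviour of such an init-container can be approximated by a simple retry loop. In this sketch the probe is a stub that succeeds on its third call, standing in for a real readiness check like ready.py:

```shell
# Simulated dependency probe: fails twice, then reports ready.
attempt=0
probe() {
  attempt=$((attempt + 1))
  [ "${attempt}" -ge 3 ]   # stand-in for a real readiness check
}

# Retry loop of the kind an init-container runs before the main
# container is allowed to start.
tries=0
until probe; do
  tries=$((tries + 1))
  [ "${tries}" -lt 10 ] || { echo "gave up"; exit 1; }
  # sleep 5   # a real loop would pause between probes
done
echo "dependency ready after ${attempt} attempts"
```

Once the loop exits, the init-container terminates and kubernetes proceeds to start the dependent container, which is exactly the behaviour described above.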
To ensure that data isn't lost when an ephemeral container undergoes
life-cycle events (like being restarted), non-volatile or persistent
volumes can be attached to the service. The following pv-pvc.yaml
file defines a 2 GB persistent volume and the claim used by the
mariadb container:

**mso-pv-pvc.yaml**::

  kind: PersistentVolume
  name: "{{ .Values.nsPrefix }}-mso-db"
  namespace: "{{ .Values.nsPrefix }}-mso"
  name: "{{ .Values.nsPrefix }}-mso-db"
  persistentVolumeReclaimPolicy: Retain
  path: /dockerdata-nfs/{{ .Values.nsPrefix }}/mso/mariadb/data
  kind: PersistentVolumeClaim
  namespace: "{{ .Values.nsPrefix }}-mso"
  name: "{{ .Values.nsPrefix }}-mso-db"

The last of the four files is the all-services.yaml file, which
defines the kubernetes service(s) that will be exposed in this
namespace. Here are the contents of the file:

**all-services.yaml**::

  namespace: "{{ .Values.nsPrefix }}-mso"
  nodePort: {{ .Values.nodePortPrefix }}52
  namespace: "{{ .Values.nsPrefix }}-mso"
  msb.onap.org/service-info: '[
    "url": "/ecomp/mso/infra",
    "serviceName": "so-deprecated",
    "url": "/ecomp/mso/infra",
    "path":"/ecomp/mso/infra"
  nodePort: {{ .Values.nodePortPrefix }}23
  nodePort: {{ .Values.nodePortPrefix }}25
  nodePort: {{ .Values.nodePortPrefix }}24
  nodePort: {{ .Values.nodePortPrefix }}22
  nodePort: {{ .Values.nodePortPrefix }}50

First of all, note that this file really contains two service
specifications in a single file: the mariadb service and the mso
service. In some circumstances it may be possible to hide some of the
complexity of the containers/pods by hiding them behind a single
service.

The mariadb service specification is quite simple; other than the
name, the only section of interest is the nodePort specification.
When containers require exposing ports to the world outside of a
kubernetes namespace, there is a potential for port conflict. To
resolve this potential port conflict, kubernetes uses the concept of
a nodePort that is mapped one-to-one with a port within the
namespace. In this case the port 3306 (which was defined in the
db-deployment.yaml file) is mapped to 30252 externally, thus avoiding
the conflict that would have arisen from deploying multiple mariadb
containers.

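The 30252 value follows from the template fragment `{{ .Values.nodePortPrefix }}52` in all-services.yaml: the deployment-wide prefix (302 in the sample configuration, an assumption here) is concatenated with a per-service two-digit suffix. A sketch of the arithmetic:

```shell
# nodePort construction: external port = nodePortPrefix + 2-digit suffix.
node_port_prefix="302"   # assumed default, set via .Values.nodePortPrefix
mariadb_suffix="52"      # from '{{ .Values.nodePortPrefix }}52'
node_port="${node_port_prefix}${mariadb_suffix}"
echo "container port 3306 -> external nodePort ${node_port}"
```

Each service in the deployment gets a distinct suffix (52, 23, 25, 24, 22, 50 in this file), so all external ports land in one non-conflicting range per deployment.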
The mso service definition is largely the same as the mariadb
service, with the exception that the ports are named.

Customizing Deployment Specifications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For each ONAP component deployed by OOM, a set of deployment
specifications is required. Fortunately, there are many examples to
use as references, such as the previous
'`mso <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/mso;h=d8b778a16381d6695f635c14b9dcab72fb9fcfcd;hb=refs/heads/master>`__'
example, as well as
`aai <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/aai;h=243ff90da714459a07fa33023e6655f5d036bfcd;hb=refs/heads/master>`__,
`appc <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/appc;h=d34eaca8a17fc28033a491d3b71aaa1e25673f9e;hb=refs/heads/master>`__,
`message-router <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/message-router;h=51fcb23fb7fbbfab277721483d01c6e3f98ca2cc;hb=refs/heads/master>`__,
`policy <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/policy;h=8c29597b23876ea2ae17dbf747f4ab1e3b955dd9;hb=refs/heads/master>`__,
`portal <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/portal;h=371db03ddef92703daa699014e8c1c9623f7994d;hb=refs/heads/master>`__,
`robot <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/robot;h=46445652d43d93dc599c5108f5c10b303a3c777b;hb=refs/heads/master>`__,
`sdc <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/sdc;h=1d59f7b5944d4604491e72d0b6def0ff3f10ba4d;hb=refs/heads/master>`__,
`sdnc <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/sdnc;h=dbaab2ebd62190edcf489b5a5f1f52992847a73a;hb=refs/heads/master>`__ and
`vid <https://gerrit.onap.org/r/gitweb?p=oom.git;a=tree;f=kubernetes/vid;h=e91788c8504f2da12c086e802e1e7e8648418c66;hb=refs/heads/master>`__.
If your component isn't already deployed by OOM, you can create your
own set of deployment specifications that can be easily added to OOM.

Development Deployments
~~~~~~~~~~~~~~~~~~~~~~~

For the Amsterdam release, the deployment specifications represent a
simple simplex deployment of ONAP that may not have the robustness
typically required of a full operational deployment. Follow-on
releases will enhance these deployment specifications as follows:

- Load Balancers - kubernetes has built-in support for user-defined
  or simple 'ingress' load balancers at the service layer to hide the
  complexity of multi-pod deployments from other components.

- Horizontal Scaling - replica sets can be used to dynamically scale
  the number of pods behind a service to match the offered load.

- Stateless Pods - using concepts such as DBaaS (database as a
  service), database technologies could be removed (where
  appropriate) from the services, thus moving to the 'cattle' model
  so common in cloud deployments.

Kubernetes Under-Cloud Deployments
==================================

The automated ONAP deployment depends on a fully functional
kubernetes environment being available prior to ONAP installation.
Fortunately, kubernetes is supported on a wide variety of systems
such as Google Compute Engine, `AWS
EC2 <https://wiki.onap.org/display/DW/ONAP+on+AWS#ONAPonAWS-Option0:DeployOOMKubernetestoaspotVM>`__,
Microsoft Azure, CenturyLink Cloud, IBM Bluemix and more. If you're
setting up your own kubernetes environment, please refer to \ `ONAP on
Kubernetes <https://wiki.onap.org/display/DW/ONAP+on+Kubernetes>`__
for a walk-through of how to set this environment up on several
platforms.

ONAP 'OneClick' Deployment Walk-through
=======================================

Once a kubernetes environment is available and the deployment
artifacts have been customized for your location, ONAP is ready to be
installed.

The first step is to set up
the \ `/oom/kubernetes/config/onap-parameters.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/onap-parameters.yaml;h=7ddaf4d4c3dccf2fad515265f0da9c31ec0e64b1;hb=refs/heads/master>`__ file
with key-value pairs specific to your OpenStack environment. There is
a \ `sample <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/onap-parameters-sample.yaml;h=3a74beddbbf7f9f9ec8e5a6abaecb7cb238bd519;hb=refs/heads/master>`__\  that
may help you out, or even be usable directly if you don't intend to
actually use OpenStack resources. Here are the contents of this file:

**onap-parameters-sample.yaml**::

  NEXUS_HTTP_REPO: https://nexus.onap.org/content/sites/raw
  NEXUS_DOCKER_REPO: nexus3.onap.org:10001
  NEXUS_USERNAME: docker
  NEXUS_PASSWORD: docker

  OPENSTACK_PUBLIC_NET_ID: "af6880a2-3173-430a-aaa2-6229df57ee15"
  OPENSTACK_PUBLIC_NET_NAME: "vlan200_net_ext"
  # Could be reduced, it needs 15 IPs for the DCAE VMs
  OPENSTACK_OAM_NETWORK_CIDR: "10.0.0.0/16"

  OPENSTACK_USERNAME: "nso"
  OPENSTACK_API_KEY: "Password"
  OPENSTACK_TENANT_NAME: "nso-rancher"
  OPENSTACK_TENANT_ID: "5c59f02201d54a9af1f2207f7be2c1"
  OPENSTACK_REGION: "RegionOne"
  OPENSTACK_API_VERSION: "v2.0"
  OPENSTACK_KEYSTONE_URL: "http://10.1.1.2:5000"
  OPENSTACK_SERVICE_TENANT_NAME: "service"

  OPENSTACK_FLAVOUR_SMALL: "m1.small"
  OPENSTACK_FLAVOUR_MEDIUM: "m1.medium"
  OPENSTACK_FLAVOUR_LARGE: "m1.large"

  OPENSTACK_UBUNTU_14_IMAGE: "trusty"
  OPENSTACK_UBUNTU_16_IMAGE: "xenial"
  OPENSTACK_CENTOS_7_IMAGE: "centos-7"

  # Do not change unless you know what you're doing
  DEMO_ARTIFACTS_VERSION: "1.1.1"

  # Whether or not to deploy DCAE
  # If set to false, all the parameters below can be left empty or removed
  # If set to false, also update ../dcaegen2/values.yaml disableDcae value to true,
  # this is to avoid deploying the DCAE deployments and services.
  DCAE_IP_ADDR: "10.0.4.1"

  # Do not change unless you know what you're doing
  DCAE_DOCKER_VERSION: v1.1.1
  DCAE_VM_BASE_NAME: "dcae"

  # Can be the same as OPENSTACK_KEYSTONE_URL/OPENSTACK_API_VERSION
  DCAE_KEYSTONE_URL: "http://10.195.4.2:5000/v2.0"

  # The private key needs to be in a specific format so it's formatted properly
  # when ending up in the DCAE HEAT stack. The best way is to do the following:
  # - copy paste your key
  # - surround it with quotes
  # - add \n at the end of each line
  # - escape the result using https://www.freeformatter.com/java-dotnet-escape.html#ad-output
  OPENSTACK_KEY_NAME: "onap_key"
  OPENSTACK_PUB_KEY: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQAAABAQC7G5MqLJvkchuD/YGS/lUlTXXkPqdBLz8AhF/Dosln4YpVg9oD2X2fH2Nxs6Gz0wjB6w1pIqQm7ypz3kk2920PiRV2W1L0/mTF/9Wmi9ReVJzkC6VoBxL20MhRi0dx/Wxg4vmbAT4NGk+8ufqA45oFB6l0bQIdtmjzZH/WZFVB+rc1CtX6Ia0hrMyeLbzLM7IzLdVeb411hxumsQ1N0L4dQWY0E1SeynS2azQNU61Kbxjmm4b89Kw/y9iNW9GdFUodOFWbhK8XU/duSLS+NpoQ/kPJXuBzgPFCy6B7DCJhqZ20j0oXGPqZzXcKApZUJdgeLGML3q4DyiNkXAP4okaN Generated-by-Nova"
  OPENSTACK_PRIVATE_KEY: \"-----BEGIN RSA PRIVATE KEY-----\\n\r\nMIIEpQIBACAQEAuxuTKiyb5HIbg/2Bkv5VJU115D6nQS8/AIRfw6LJZ+GKVYPa\\n\r\nA9l9nx9jcbOhs9MIwesNaSKkJu8qc95JNvdtD4kVdltS9P5kxf/VpovUXlSc5Aul\\n\r\naAcS9tDIUYtHcf1sYOL5mwE+DRpPvLn6gOOaBQepdG0CHbZo82R/1mRVQfq3NQrV\\n\r\n+iGtIazMni28yzOyMy3VXm+NdYcbprENTdC+HUFmNBNUnsp0tms0DVOtSm8Y5puG\\n\r\n/PSsP8vYjVvRnRVKHThVm4SvF1P3bki0vjaaEP5DyV7gc4DxQsugewwiYamdtI9K\\n\r\nFxj6mc13CgKWVCXYHixjC96uA8ojZFwD+KJGjQIDAQABAoIBAG5sLqTEINhoMy7p\\n\r\nLFAowu050qp6A1En5eGTPcUCTCR/aZlgMAj3kPiYmKKgpXyvvcpbwtVaOVA083Pg\\n\r\nKotC6F0zxLPN355wh96GRnt8qD9nZhP7f4luK1X4D1B4hxiRvCVNros453rqHUa+\\n\r\n50SrjdkMFYh9ULNiVHvXws4u9lXx81K+M+FzIcf5GT8Cm9PSG0JiwGG2rmwv++fp\\n\r\nJDH3Z2k+B940ox6RLvoh68CXNYolSnWQ/GI0+o1nv2uncRE9wuAhnVN4JmvWw/zR\\n\r\nqA7k305LgfbeJrma6dE4GOZo5cVbUcVKTD+rilCE13DCYx0yCEhxmDBMizNb83nH\\n\r\nge5AXI0CgYEA3oRVKnTBUSLrLK0ft5LJRz91aaxMUemzCqoQBpM7kaaGSf+gg2Z7\\n\r\nBTRp4fyLrYKyXACZGAXjhw2SVsTjntVACA+pIJQNim4vUNo03hcDVraxUMggvsJx\\n\r\nSKnwDe4zpGbIo7VEJVBgUhWccHKbBo0dB26VOic8xtUI/pDWeR9ryEMCgYEA10M6\\n\r\nrgFhvb4fleS0bzMe+Yv7YsbvEWeHDEgO060n050eIpcvrDtpnc4ag1BFKy9MSqnY\\n\r\n4VUIjIWI9i5Gq7rwxahduJfH/MgjagLtSmvIXA2uYni7unOKarqq75Nko9NG93b7\\n\r\np0nRKxFMm2hCVL7/gy6KzEuLkUhtok8+HOc3cO8CgYEAt/Fs9cvOguP6xNPYhEgz\\n\r\nW1J6HQDxlkU6XHZ5CPZtJ9og6MsIRZdR2tuZK9c5IBYKm0NjSxiTHfF6J4BbKdHf\\n\r\nPMq1ZNj+2JB9TLkVOwKLIAOmUMEfUJIsU4UnjFx9FEpjUfFmg/INrc1vpQUYYjIE\\n\r\n7T/c3FXTSAqThNz2buoqj0ECgYEAx9TiWXxw5vrjSXw5wG0dmR3I7aatcmPAK7eN\\n\r\nBBZfvYPC4Oum1uWEo3kchcBzpaZP1ZQdAPm2aPTh8198PZnaQDOPZXiJr/F/Zr92\\n\r\n1zp9km8k7scTxv/RhEjrvGIA8FCHNd1fuqm9IpT5n99GjHOOsZH4SbTryKALHr0f\\n\r\ndSd0AUMCgYEAi36u1D0Ht40WgGHp+T8AVaYHnXvx+IlH2EXqMDwwv0aINOcHfsUG\\n\r\nG7OrxyJAVaEgwtxgskS7LLp9ANvccyI+F9KLZbBoe2aYcCHjWdtvnc9bJUUs+ERk\\n\r\nJpJwR9NyQ5iObsnAEebILOLP+4yLGAxBz18ZvTRrSz1To456+EO+E+k=\\n\r\n-----END RSA PRIVATE KEY-----\\n\"

  # This setting allows one to configure the /etc/resolv.conf nameserver resolution for all the DCAE VMs.
  # In the HEAT setup, it's meant to be a list, as the HEAT setup deploys a DNS Server VM in addition to DNS Designate
  # and this DNS Server is set up to forward requests to the DNS Designate backend when it cannot resolve, hence the
  # DNS_FORWARDER config here. The DCAE Bootstrap requires both inputs; even though they are now similar, we have to
  # pass them both.
  # ATTENTION: the assumption is made that the DNS Designate backend is configured to forward requests to a public DNS (e.g. 8.8.8.8)
  # Put the IP of the DNS Designate backend (e.g. the OpenStack IP supporting DNS Designate)
  DNS_LIST: "10.195.1.1"
  DNS_FORWARDER: "10.195.1.1"

  # Do not change - Public DNS - not used but required by the DCAE bootstrap container
  EXTERNAL_DNS: "8.8.8.8"

  # Proxy DNS Designate is only supported for the windriver-multicloud adapter (limitation from DCAE)
  # Set to true if you wish to use it (e.g. Integration lab)
  DNSAAS_PROXY_ENABLE: "false"

  # Possibility to have DNS Designate installed in another OpenStack; if not, provide the same
  # values as the OPENSTACK_* ones above.
  DNSAAS_REGION: "RegionOne"
  DNSAAS_KEYSTONE_URL: "http://10.1.1.2:5000/v2.0"
  DNSAAS_TENANT_NAME: "nso-rancher"
  DNSAAS_USERNAME: "nso"
  DNSAAS_PASSWORD: "Password"

  # DNS domain for the DCAE VMs
  DCAE_DOMAIN: "dcaeg2.onap.org"

Note that these values are required, or the following steps will
fail.

In order to be able to support multiple ONAP instances within a
single kubernetes environment, a configuration set is required. The
`createConfig.sh <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/createConfig.sh;h=f226ccae47ca6de15c1da49be4b8b6de974895ed;hb=refs/heads/master>`__
script is used to do this.

**createConfig.sh**::

  > ./createConfig.sh -n onapTrial

The
script \ `createAll.bash <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/oneclick/createAll.bash;h=5e5f2dc76ea7739452e757282e750638b4e3e1de;hb=refs/heads/master>`__ is
used to create an ONAP deployment with kubernetes. It has two primary
functions:

- Creating the namespaces used to encapsulate the ONAP components,
  and

- Creating the services, pods and containers within each of these
  namespaces that provide the core functionality of ONAP.

**createAll.bash**::

  > ./createAll.bash -n onapTrial

Namespaces provide isolation between ONAP components, as ONAP release
1.0 contains duplicate applications (e.g. mariadb) and duplicate port
usage. As such, createAll.bash requires the user to enter a namespace
prefix string that can be used to separate multiple deployments of
onap. The result will be a set of 10 namespaces (e.g. onapTrial-sdc,
onapTrial-aai, onapTrial-mso, onapTrial-message-router,
onapTrial-robot, onapTrial-vid, onapTrial-sdnc, onapTrial-portal,
onapTrial-policy, onapTrial-appc) being created within the kubernetes
environment. A prerequisite pod,
config-init (\ `pod-config-init.yaml <https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/config/pod-config-init.yaml;h=b1285ce21d61815c082f6d6aa3c43d00561811c7;hb=refs/heads/master>`__),
may need editing to match your environment and deployment into the
default namespace before running createAll.bash.

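The ten namespace names are simply the chosen prefix joined to each component name; a sketch of the naming scheme createAll.bash applies:

```shell
# Derive the per-component namespace names from the deployment prefix,
# mirroring the 10 namespaces listed above for the 'onapTrial' prefix.
prefix="onapTrial"
namespaces=""
for comp in sdc aai mso message-router robot vid sdnc portal policy appc; do
  namespaces="${namespaces}${prefix}-${comp} "
done
echo "${namespaces}"
```

Choosing a different prefix (e.g. `onapProd`) therefore yields a fully disjoint set of namespaces, which is what allows multiple ONAP deployments to coexist in one cluster.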
The \ `Microservices Bus
Project <https://wiki.onap.org/pages/viewpage.action?pageId=3246982>`__ provides
facilities to integrate micro-services into ONAP and therefore needs
to integrate into OOM - primarily through Consul, which is the
backend of MSB service discovery. The following is a brief
description of how this integration will be done:

A registrator is used to push the service endpoint info to MSB
service discovery:

- The needed service endpoint info is put into the kubernetes yaml
  file as an annotation, including service name, protocol, version,
  visual range, LB method, IP, port, etc.

- OOM deploys/starts/restarts/scales in/scales out/upgrades ONAP
  components.

- The registrator watches kubernetes events.

- When an ONAP component instance has been started/destroyed by OOM,
  the registrator gets the notification from kubernetes.

- The registrator parses the service endpoint info from the
  annotation and registers/updates/unregisters it with MSB service
  discovery.

- The MSB API Gateway uses the service endpoint info for service
  routing.

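To make the first step concrete, the endpoint annotation is a small JSON document attached to the service. The sketch below builds an illustrative value; the field values here are assumptions modelled on the msb.onap.org/service-info keys visible in the all-services.yaml excerpt, not the exact production annotation:

```shell
# Build an illustrative msb.onap.org/service-info annotation value;
# serviceName/version/protocol/visualRange values are assumptions.
service_info='[
  {
    "serviceName": "so",
    "version": "v1",
    "url": "/ecomp/mso/infra",
    "protocol": "REST",
    "visualRange": "1"
  }
]'
echo "msb.onap.org/service-info: ${service_info}"
```

The registrator reads exactly this kind of annotation from the kubernetes event stream and forwards the endpoint record to Consul for MSB service discovery.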
Details of the registration service API can be found
at \ `Microservice Bus API
Documentation <https://wiki.onap.org/display/DW/Microservice+Bus+API+Documentation>`__.

How to define the service endpoints using annotations is described in
`ONAP Services
List#OOMIntegration <https://wiki.onap.org/display/DW/ONAP+Services+List#ONAPServicesList-OOMIntegration>`__.

A preliminary view of the OOM-MSB integration is as follows:

.. figure:: ../MSB-OOM-Diagram.png

A message sequence chart of the registration process:

.. figure:: ../MSB-OOM-MSC.png

MSB Usage Instructions
----------------------

MSB provides kubernetes charts in OOM, so it can be spun up by the
OOM oneclick command.

Please note that the kubernetes authentication token must be set at
*kubernetes/kube2msb/values.yaml* so that the kube2msb registrator
can get access to watch the kubernetes events and get service
annotations via the kubernetes APIs. The token can be found in the
kubectl configuration file *~/.kube/config*.

MSB and kube2msb can be spun up together with all the ONAP
components, or separately using the following commands.

**Start MSB services**::

  createAll.bash -n onap -a msb

**Start kube2msb registrator**::

  createAll.bash -n onap -a kube2msb

More details can be found here: `MSB
installation <http://onap.readthedocs.io/en/latest/submodules/msb/apigateway.git/docs/platform/installation.html>`__.

FAQ (Frequently Asked Questions)
================================

Does OOM enable the deployment of VNFs on containers?

- No. OOM provides a mechanism to instantiate and manage the ONAP
  components themselves with containers but does not provide a
  Multi-VIM capability such that VNFs can be deployed into
  containers. The Multi VIM/Cloud Project may provide this
  functionality at some point.

Configuration Parameters
========================

Configuration parameters that are specific to the ONAP deployment,
for example hard-coded IP addresses, are parameterized and stored in
an OOM-specific set of configuration files.

More information about ONAP configuration can be found in the
Configuration Management documentation.

References
==========

- Docker - http://docker.com

- Kubernetes - http://kubernetes.io

- Helm - https://helm.sh