+++ /dev/null
-.. This work is licensed under a Creative Commons Attribution 4.0
-.. International License.
-.. http://creativecommons.org/licenses/by/4.0
-.. Copyright 2018-2020 Amdocs, Bell Canada, Orange, Samsung
-
-.. Links
-.. _HELM Best Practices Guide: https://docs.helm.sh/chart_best_practices/#requirements
-.. _kubectl Cheat Sheet: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
-.. _Kubernetes documentation for emptyDir: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
-.. _Docker DevOps: https://lf-onap.atlassian.net/wiki/spaces/DW/pages/16239251/Docker+DevOps#Docker-Build
-.. _http://cd.onap.info:30223/mso/logging/debug: http://cd.onap.info:30223/mso/logging/debug
-.. _README.md: https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/README.md
-
-.. figure:: images/oom_logo/oomLogoV2-medium.png
- :align: right
-
-.. _onap-on-kubernetes-with-rancher:
-
-ONAP on HA Kubernetes Cluster
-#############################
-
-This guide provides instructions on how to set up a Highly-Available
-Kubernetes Cluster. For this, we are hosting our cluster on OpenStack VMs and
-using the Rancher Kubernetes Engine (RKE) to deploy and manage our Kubernetes
-Cluster.
-
-.. contents::
- :depth: 1
- :local:
-..
-
-The result at the end of this tutorial will be:
-
-#. Creation of a Key Pair to use with Open Stack and RKE
-
-#. Creation of OpenStack VMs to host Kubernetes Control Plane
-
-#. Creation of OpenStack VMs to host Kubernetes Workers
-
-#. Installation and configuration of RKE to setup an HA Kubernetes
-
-#. Installation and configuration of kubectl
-
-#. Installation and configuration of Helm
-
-#. Creation of an NFS Server to be used by ONAP as shared persistence
-
-There are many ways one can execute the above steps, including automating the
-creation of the OpenStack VMs through the use of HEAT. To better illustrate
-the steps involved, we have captured the manual creation of such an
-environment using the ONAP Wind River Open Lab.
-
-Create Key Pair
-===============
-A Key Pair is required to access the created OpenStack VMs and will be used by
-RKE to configure the VMs for Kubernetes.
-
-Use an existing key pair, import one or create a new one to assign.
-
-.. image:: images/keys/key_pair_1.png
-
-.. Note::
- If you're creating a new Key Pair, be sure to create a local copy of the
- Private Key through the use of "Copy Private Key to Clipboard".
-
-For the purpose of this guide, we will assume a new local key called "onap-key"
-has been downloaded and is copied into **~/.ssh/**, from which it can be
-referenced.
-
-Example::
-
- > mv onap-key ~/.ssh
-
- > chmod 600 ~/.ssh/onap-key
-
-
-Create Network
-==============
-
-An internal network is required in order to deploy our VMs that will host
-Kubernetes.
-
-.. image:: images/network/network_1.png
-
-.. image:: images/network/network_2.png
-
-.. image:: images/network/network_3.png
-
-.. Note::
- It's better to have one network per deployment and obviously the name of this
- network should be unique.
-
-Now we need to create a router to attach this network to the outside:
-
-.. image:: images/network/network_4.png
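-
-If you prefer the OpenStack CLI over Horizon, the same network, subnet and
-router can be created with commands along these lines (all names and the
-subnet range are examples only)::
-
- > openstack network create onap-network
-
- > openstack subnet create --network onap-network --subnet-range 10.0.0.0/16 onap-subnet
-
- > openstack router create onap-router
-
- > openstack router set --external-gateway external onap-router
-
- > openstack router add subnet onap-router onap-subnet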
-
-Create Security Group
-=====================
-
-A specific security group is also required.
-
-.. image:: images/sg/sg_1.png
-
-Then click on `manage rules` of the newly created security group, and finally
-click on `Add Rule` to create the following rule:
-
-.. image:: images/sg/sg_2.png
-
-.. Note::
- This security group is overly permissive; a more restrictive security group
- will be proposed in a future version.
-
-Create Kubernetes Control Plane VMs
-===================================
-
-The following instructions describe how to create 3 OpenStack VMs to host the
-Highly-Available Kubernetes Control Plane.
-ONAP workloads will not be scheduled on these Control Plane nodes.
-
-Launch new VM instances
------------------------
-
-.. image:: images/cp_vms/control_plane_1.png
-
-Select Ubuntu 18.04 as base image
----------------------------------
-Select "No" for "Create New Volume"
-
-.. image:: images/cp_vms/control_plane_2.png
-
-Select Flavor
--------------
-The recommended flavor is at least 4 vCPUs and 8 GB of RAM.
-
-.. image:: images/cp_vms/control_plane_3.png
-
-Networking
-----------
-
-Use the created network:
-
-.. image:: images/cp_vms/control_plane_4.png
-
-Security Groups
----------------
-
-Use the created security group:
-
-.. image:: images/cp_vms/control_plane_5.png
-
-Key Pair
---------
-Assign the key pair that was created/selected previously (e.g. onap_key).
-
-.. image:: images/cp_vms/control_plane_6.png
-
-Apply customization script for Control Plane VMs
-------------------------------------------------
-
-Click :download:`openstack-k8s-controlnode.sh <shell/openstack-k8s-controlnode.sh>`
-to download the script.
-
-.. literalinclude:: shell/openstack-k8s-controlnode.sh
- :language: bash
-
-This customization script will:
-
-* update ubuntu
-* install docker
-
-.. image:: images/cp_vms/control_plane_7.png
-
-Launch Instance
----------------
-
-.. image:: images/cp_vms/control_plane_8.png
-
-
-
-Create Kubernetes Worker VMs
-============================
-The following instructions describe how to create OpenStack VMs to host the
-Highly-Available Kubernetes Workers. ONAP workloads will only be scheduled on
-these nodes.
-
-Launch new VM instances
------------------------
-
-The number and size of Worker VMs is dependent on the size of the ONAP
-deployment. By default, all ONAP applications are deployed. It's possible to
-customize the deployment and enable a subset of the ONAP applications. For the
-purpose of this guide, however, we will deploy 12 Kubernetes Workers that have
-been sized to handle the entire ONAP application workload.
-
-.. image:: images/wk_vms/worker_1.png
-
-Select Ubuntu 18.04 as base image
----------------------------------
-Select "No" on "Create New Volume"
-
-.. image:: images/wk_vms/worker_2.png
-
-Select Flavor
--------------
-The size of the Kubernetes hosts depends on the size of the ONAP deployment
-being installed.
-
-If a small subset of ONAP applications is being deployed
-(e.g. for testing purposes), then 16GB or 32GB may be sufficient.
-
-.. image:: images/wk_vms/worker_3.png
-
-Networking
------------
-
-.. image:: images/wk_vms/worker_4.png
-
-Security Group
----------------
-
-.. image:: images/wk_vms/worker_5.png
-
-Key Pair
---------
-Assign the key pair that was created/selected previously (e.g. onap_key).
-
-.. image:: images/wk_vms/worker_6.png
-
-Apply customization script for Kubernetes VM(s)
------------------------------------------------
-
-Click :download:`openstack-k8s-workernode.sh <shell/openstack-k8s-workernode.sh>` to
-download the script.
-
-.. literalinclude:: shell/openstack-k8s-workernode.sh
- :language: bash
-
-This customization script will:
-
-* update ubuntu
-* install docker
-* install nfs common
-
-
-Launch Instance
----------------
-
-.. image:: images/wk_vms/worker_7.png
-
-
-
-
-Assign Floating IP addresses
-----------------------------
-Assign Floating IPs to all Control Plane and Worker VMs.
-These addresses provide external access to the VMs and will be used by RKE
-to configure Kubernetes onto the VMs.
-
-Repeat the following for each VM previously created:
-
-.. image:: images/floating_ips/floating_1.png
-
-Resulting floating IP assignments in this example.
-
-.. image:: images/floating_ips/floating_2.png
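-
-If you prefer the OpenStack CLI, a floating IP can be allocated and attached
-to a VM as in the following sketch (the external network and server names are
-examples only)::
-
- > openstack floating ip create external
-
- > openstack server add floating ip onap-control-1 <floating-ip>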
-
-
-
-
-Configure Rancher Kubernetes Engine (RKE)
-=========================================
-
-Install RKE
------------
-Download and install RKE on a VM, desktop or laptop.
-Binaries can be found here for Linux and Mac: https://github.com/rancher/rke/releases/tag/v1.0.6
-
-.. note::
- There are several ways to install RKE. Further parts of this documentation
- assume that you have the rke command available.
- If you don't know how to install RKE, you may follow the steps below:
-
- * chmod +x ./rke_linux-amd64
- * sudo mv ./rke_linux-amd64 /usr/local/bin/rke
-
-RKE requires a *cluster.yml* as input. An example file is shown below that
-describes a Kubernetes cluster that will be mapped onto the OpenStack VMs
-created earlier in this guide.
-
-Click :download:`cluster.yml <yaml/cluster.yml>` to download the
-configuration file.
-
-.. literalinclude:: yaml/cluster.yml
- :language: yaml
-
-Prepare cluster.yml
--------------------
-Before this configuration file can be used, the external **address**
-and the **internal_address** must be mapped for each control and worker node
-in this file.
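-
-For example, a single control plane node entry might look like the following
-(the addresses shown are illustrative and must match your own VMs)::
-
-  nodes:
-  - address: 10.12.6.85         # floating IP, used by RKE to reach the VM
-    internal_address: 10.0.0.8  # private OpenStack network IP
-    port: "22"
-    role: [controlplane, etcd]
-    hostname_override: "onap-control-1"
-    user: ubuntu
-    ssh_key_path: "~/.ssh/onap-key"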
-
-Run RKE
--------
-From within the same directory as the cluster.yml file, simply execute::
-
- > rke up
-
-The output will look something like::
-
- INFO[0000] Initiating Kubernetes cluster
- INFO[0000] [certificates] Generating admin certificates and kubeconfig
- INFO[0000] Successfully Deployed state file at [./cluster.rkestate]
- INFO[0000] Building Kubernetes cluster
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.82]
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.249]
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.74]
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.85]
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.238]
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.89]
- INFO[0000] [dialer] Setup tunnel for host [10.12.5.11]
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.90]
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.244]
- INFO[0000] [dialer] Setup tunnel for host [10.12.5.165]
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.126]
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.111]
- INFO[0000] [dialer] Setup tunnel for host [10.12.5.160]
- INFO[0000] [dialer] Setup tunnel for host [10.12.5.191]
- INFO[0000] [dialer] Setup tunnel for host [10.12.6.195]
- INFO[0002] [network] Deploying port listener containers
- INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.85]
- INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.89]
- INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.90]
- INFO[0011] [network] Successfully pulled image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.89]
- . . . .
- INFO[0309] [addons] Setting up Metrics Server
- INFO[0309] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
- INFO[0309] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
- INFO[0309] [addons] Executing deploy job rke-metrics-addon
- INFO[0315] [addons] Metrics Server deployed successfully
- INFO[0315] [ingress] Setting up nginx ingress controller
- INFO[0315] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
- INFO[0316] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
- INFO[0316] [addons] Executing deploy job rke-ingress-controller
- INFO[0322] [ingress] ingress controller nginx deployed successfully
- INFO[0322] [addons] Setting up user addons
- INFO[0322] [addons] no user addons defined
- INFO[0322] Finished building Kubernetes cluster successfully
-
-Install Kubectl
-===============
-
-Download and install kubectl. Binaries can be found here for Linux and Mac:
-
-https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/linux/amd64/kubectl
-https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/darwin/amd64/kubectl
-
-You only need to install kubectl where you'll launch Kubernetes commands. This
-can be any machine of the Kubernetes cluster or a machine that has IP access
-to the APIs.
-Usually, we use the first controller as it also has access to internal
-Kubernetes services, which can be convenient.
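-
-One way to install the downloaded binary on Linux (a sketch; adjust the file
-name if you downloaded the Mac binary)::
-
- > chmod +x ./kubectl
-
- > sudo mv ./kubectl /usr/local/bin/kubectl
-
- > kubectl version --client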
-
-Validate deployment
--------------------
-
-::
-
- > mkdir -p ~/.kube
-
- > cp kube_config_cluster.yml ~/.kube/config.onap
-
- > export KUBECONFIG=~/.kube/config.onap
-
- > kubectl config use-context onap
-
- > kubectl get nodes -o=wide
-
-::
-
- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
- onap-control-1 Ready controlplane,etcd 3h53m v1.15.2 10.0.0.8 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-control-2 Ready controlplane,etcd 3h53m v1.15.2 10.0.0.11 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-control-3 Ready controlplane,etcd 3h53m v1.15.2 10.0.0.12 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-1 Ready worker 3h53m v1.15.2 10.0.0.14 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-10 Ready worker 3h53m v1.15.2 10.0.0.16 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-11 Ready worker 3h53m v1.15.2 10.0.0.18 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-12 Ready worker 3h53m v1.15.2 10.0.0.7 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-2 Ready worker 3h53m v1.15.2 10.0.0.26 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-3 Ready worker 3h53m v1.15.2 10.0.0.5 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-4 Ready worker 3h53m v1.15.2 10.0.0.6 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-5 Ready worker 3h53m v1.15.2 10.0.0.9 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-6 Ready worker 3h53m v1.15.2 10.0.0.17 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-7 Ready worker 3h53m v1.15.2 10.0.0.20 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-8 Ready worker 3h53m v1.15.2 10.0.0.10 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
- onap-k8s-9 Ready worker 3h53m v1.15.2 10.0.0.4 <none> Ubuntu 18.04 LTS 4.15.0-22-generic docker://18.9.5
-
-
-Install Helm
-============
-
-Example Helm client install on Linux::
-
- > wget https://get.helm.sh/helm-v2.16.6-linux-amd64.tar.gz
-
- > tar -zxvf helm-v2.16.6-linux-amd64.tar.gz
-
- > sudo mv linux-amd64/helm /usr/local/bin/helm
-
-Initialize Kubernetes Cluster for use by Helm
----------------------------------------------
-
-::
-
- > kubectl -n kube-system create serviceaccount tiller
-
- > kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
-
- > helm init --service-account tiller
-
- > kubectl -n kube-system rollout status deploy/tiller-deploy
-
-
-
-Setting up an NFS share for Multinode Kubernetes Clusters
-=========================================================
-Deploying applications to a Kubernetes cluster requires Kubernetes nodes to
-share a common, distributed filesystem. In this tutorial, we will set up an
-NFS Master, and configure all Worker nodes of the Kubernetes cluster to play
-the role of NFS slaves.
-
-It is recommended that a separate VM, outside of the Kubernetes cluster, be
-used. This is to ensure that the NFS Master does not compete for resources
-with the Kubernetes Control Plane or Worker Nodes.
-
-
-Launch new NFS Server VM instance
----------------------------------
-.. image:: images/nfs_server/nfs_server_1.png
-
-Select Ubuntu 18.04 as base image
----------------------------------
-Select "No" on "Create New Volume"
-
-.. image:: images/nfs_server/nfs_server_2.png
-
-Select Flavor
--------------
-
-.. image:: images/nfs_server/nfs_server_3.png
-
-Networking
------------
-
-.. image:: images/nfs_server/nfs_server_4.png
-
-Security Group
----------------
-
-.. image:: images/nfs_server/nfs_server_5.png
-
-Key Pair
---------
-Assign the key pair that was created/selected previously (e.g. onap_key).
-
-.. image:: images/nfs_server/nfs_server_6.png
-
-Apply customization script for NFS Server VM
---------------------------------------------
-
-Click :download:`openstack-nfs-server.sh <shell/openstack-nfs-server.sh>` to download
-the script.
-
-.. literalinclude:: shell/openstack-nfs-server.sh
- :language: bash
-
-This customization script will:
-
-* update ubuntu
-* install nfs server
-
-
-Launch Instance
----------------
-
-.. image:: images/nfs_server/nfs_server_7.png
-
-
-
-Assign Floating IP addresses
-----------------------------
-
-.. image:: images/nfs_server/nfs_server_8.png
-
-Resulting floating IP assignments in this example.
-
-.. image:: images/nfs_server/nfs_server_9.png
-
-
-To properly set up an NFS share on Master and Slave nodes, the user can run the
-scripts below.
-
-Click :download:`master_nfs_node.sh <shell/master_nfs_node.sh>` to download the
-script.
-
-.. literalinclude:: shell/master_nfs_node.sh
- :language: bash
-
-Click :download:`slave_nfs_node.sh <shell/slave_nfs_node.sh>` to download the script.
-
-.. literalinclude:: shell/slave_nfs_node.sh
- :language: bash
-
-The master_nfs_node.sh script runs on the NFS Master node and needs the list
-of NFS Slave nodes as input, e.g.::
-
- > sudo ./master_nfs_node.sh node1_ip node2_ip ... nodeN_ip
-
-The slave_nfs_node.sh script runs on each NFS Slave node and needs the IP of
-the NFS Master node as input, e.g.::
-
- > sudo ./slave_nfs_node.sh master_node_ip
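-
-A quick way to check the result (a sketch, assuming the scripts export and
-mount the shared directory) is to list the exports from the NFS Master and
-verify the mount on each slave::
-
- > showmount -e master_node_ip
-
- > df -h | grep nfs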
-
-
-ONAP Deployment via OOM
-=======================
-Now that Kubernetes and Helm are installed and configured, you can prepare to
-deploy ONAP. Follow the instructions in the README.md_ or look at the official
-documentation to get started:
-
-- :ref:`quick-start-label` - deploy ONAP on an existing cloud
-- :ref:`user-guide-label` - a guide for operators of an ONAP instance
Pre-requisites
--------------
-Your environment must have the Kubernetes `kubectl` with Strimzi Apache Kafka, Cert-Manager
-and Helm setup as a one time activity.
+Your environment must have the Kubernetes `kubectl` with Strimzi Apache Kafka,
+Cert-Manager and Helm set up as a one-time activity.
Install Kubectl
~~~~~~~~~~~~~~~
The top level onap/values.yaml file contains the values required to be set
before deploying ONAP. Here are the contents of this file:
-.. include:: ../kubernetes/onap/values.yaml
+.. include:: ../../kubernetes/onap/values.yaml
:code: yaml
One may wish to create a value file that is specific to a given deployment such
--- /dev/null
+#################################################################
+# Global configuration overrides.
+#
+# These overrides will affect all helm charts (ie. applications)
+# that are listed below and are 'enabled'.
+#################################################################
+global:
+ # Change to an unused port prefix range to prevent port conflicts
+ # with other instances running within the same k8s cluster
+ nodePortPrefix: 302
+
+ # image repositories
+ repository: nexus3.onap.org:10001
+ repositorySecret: eyJuZXh1czMub25hcC5vcmc6MTAwMDEiOnsidXNlcm5hbWUiOiJkb2NrZXIiLCJwYXNzd29yZCI6ImRvY2tlciIsImVtYWlsIjoiQCIsImF1dGgiOiJaRzlqYTJWeU9tUnZZMnRsY2c9PSJ9fQ==
+ # readiness check
+ readinessImage: onap/oom/readiness:6.0.3
+ # logging agent
+ loggingRepository: docker.elastic.co
+
+ # image pull policy
+ pullPolicy: IfNotPresent
+
+ # override default mount path root directory
+ # referenced by persistent volumes and log files
+ persistence:
+ mountPath: /dockerdata
+
+ # flag to enable debugging - application support required
+ debugEnabled: true
+
+#################################################################
+# Enable/disable and configure helm charts (ie. applications)
+# to customize the ONAP deployment.
+#################################################################
+aai:
+ enabled: false
+cli:
+ enabled: false
+cps:
+ enabled: false
+dcaegen2:
+ enabled: false
+message-router:
+ enabled: false
+msb:
+ enabled: false
+multicloud:
+ enabled: false
+policy:
+ enabled: false
+robot: # Robot Health Check
+ enabled: true
+sdc:
+ enabled: false
+sdnc:
+ enabled: false
+so: # Service Orchestrator
+ enabled: true
+
+ replicaCount: 1
+
+ liveness:
+ # necessary to disable liveness probe when setting breakpoints
+ # in debugger so K8s doesn't restart unresponsive container
+ enabled: true
+
+ # so server configuration
+ config:
+ # message router configuration
+ dmaapTopic: "AUTO"
+ # openstack configuration
+ openStackUserName: "vnf_user"
+ openStackRegion: "RegionOne"
+ openStackKeyStoneUrl: "http://1.2.3.4:5000"
+ openStackServiceTenantName: "service"
+ openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
+
+ # configure embedded mariadb
+ mariadb:
+ config:
+ mariadbRootPassword: password
+uui:
+ enabled: false
+vfc:
+ enabled: false
+vnfsdk:
+ enabled: false
sphinxcontrib-spelling
sphinxcontrib-plantuml
sphinx_toolbox>=3.2.0
-six
\ No newline at end of file
+six
for ONAP component interfaces.
Usually the Ingress is accessed via a LoadBalancer IP (<ingress-IP>),
which is used as central address.
-All APIs/UIs are provided via separate URLs which are routed to the component service.
+All APIs/UIs are provided via separate URLs which are routed to the component
+service.
To use these URLs they need to be resolvable via DNS or via /etc/hosts.
-The domain name is usually defined in the `global` section of the ONAP helm-charts,
-`virtualhost.baseurl` (here "simpledemo.onap.org") whereas the hostname of
-the service (e.g. "sdc-fe-ui") is defined in the component's chart.
+The domain name is usually defined in the `global` section of the ONAP
+helm-charts, `virtualhost.baseurl` (here "simpledemo.onap.org") whereas the
+hostname of the service (e.g. "sdc-fe-ui") is defined in the component's chart.
-.. code-block:: none
+.. code-block:: bash
<ingress-IP> kiali.simpledemo.onap.org
<ingress-IP> cds-ui.simpledemo.onap.org
In the development setup OOM operates in a private IP network that isn't
publicly accessible (i.e. OpenStack VMs with private internal network) which
blocks access to the ONAP User Interfaces.
-To enable direct access to a service from a user's own environment (a laptop etc.)
-the application's internal port is exposed through a `Kubernetes NodePort`_ or
-`Kubernetes LoadBalancer`_ object.
+To enable direct access to a service from a user's own environment (a laptop
+etc.) the application's internal port is exposed through a
+`Kubernetes NodePort`_ or `Kubernetes LoadBalancer`_ object.
Typically, to be able to access the Kubernetes nodes publicly a public address
is assigned. In OpenStack this is a floating IP address.
Most ONAP applications use the `NodePort` as predefined `service:type`,
which allows access to the service through the IP address of each
Kubernetes node.
-When using the `Loadbalancer` as `service:type` `Kubernetes LoadBalancer`_ object
-which gets a separate IP address.
+When using `LoadBalancer` as the `service:type`, a `Kubernetes LoadBalancer`_
+object is created, which gets a separate IP address.
When e.g. the `sdc-fe` chart is deployed a Kubernetes service is created that
instantiates a load balancer. The LB chooses the private interface of one of
-the nodes as in the example below (10.0.0.4 is private to the K8s cluster only).
+the nodes as in the example below (10.0.0.4 is private to the K8s cluster
+only).
Then to be able to access the portal on port 8989 from outside the K8s &
-OpenStack environment, the user needs to assign/get the floating IP address that
-corresponds to the private IP as follows::
+OpenStack environment, the user needs to assign/get the floating IP address
+that corresponds to the private IP as follows::
> kubectl -n onap get services|grep "sdc-fe"
sdc-fe LoadBalancer 10.43.142.201 10.0.0.4 8181:30207/TCP
-
In this example, use the 10.0.0.4 private address as a key to find the
corresponding public address which in this example is 10.12.6.155. If you're
using OpenStack you'll do the lookup with the horizon GUI or the OpenStack CLI
-for your tenant (openstack server list). That IP is then used in your
+for your tenant (openstack server list). That IP is then used in your
`/etc/hosts` to map the fixed DNS aliases required by the ONAP Portal as shown
below::
| Alternatives Considered:
- Kubernetes port forwarding was considered but discarded as it would
- require the end user to run a script that opens up port forwarding tunnels
- to each of the pods that provides a portal application widget.
+ require the end user to run a script that opens up port forwarding
+ tunnels to each of the pods that provides a portal application widget.
- Reverting to a VNC server similar to what was deployed in the Amsterdam
release was also considered but there were many issues with resolution,
Observations:
- If you are not using floating IPs in your Kubernetes deployment and
- directly attaching a public IP address (i.e. by using your public provider
- network) to your K8S Node VMs' network interface, then the output of
- 'kubectl -n onap get services | grep "portal-app"'
+ directly attaching a public IP address (i.e. by using your public
+ provider network) to your K8S Node VMs' network interface, then the
+ output of 'kubectl -n onap get services | grep "portal-app"'
will show your public IP instead of the private network's IP. Therefore,
you can grab this public IP directly (as compared to trying to find the
floating IP first) and map this IP in /etc/hosts.
:widths: 20,20,20,20,20
:header-rows: 1
-
This table retrieves information from the ONAP deployment using the following
Kubernetes command:
.. code-block:: bash
kubectl get svc -n onap -o go-template='{{range .items}}{{range.spec.ports}}{{if .nodePort}}{{.nodePort}}{{.}}{{"\n"}}{{end}}{{end}}{{end}}'
-
OOM Custom Overrides
####################
-The OOM `helm deploy`_ plugin requires deployment configuration as input, usually in the form of override yaml files.
-These input files determine what ONAP components get deployed, and the configuration of the OOM deployment.
+The OOM `helm deploy`_ plugin requires deployment configuration as input,
+usually in the form of override yaml files.
+These input files determine what ONAP components get deployed, and the
+configuration of the OOM deployment.
Other helm config options like `--set log.enabled=true|false` are available.
-See the `helm deploy`_ plugin usage section for more detail, or it the plugin has already been installed, execute the following::
+See the `helm deploy`_ plugin usage section for more detail, or if the plugin
+has already been installed, execute the following::
> helm deploy --help
ServiceMesh settings:
-- enabled: true → enables ServiceMesh functionality in the ONAP Namespace (Istio: enables Sidecar deployment)
+- enabled: true → enables ServiceMesh functionality in the ONAP Namespace
+ (Istio: enables Sidecar deployment)
- tls: true → enables mTLS encryption in Sidecar communication
- engine: istio → sets the SM engine (currently only Istio is supported)
- aafEnabled: false → disables AAF usage for TLS interfaces
- tlsEnabled: false → disables creation of TLS in component services
- cmpv2Enabled: false → disable cmpv2 feature
-- msbEnabled: false → MSB is not used in Istio setup (Open, if all components are MSB independend)
+- msbEnabled: false → MSB is not used in Istio setup (Open, if all components
+ are MSB independent)
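+
+As an illustration only, such settings are typically carried in a deployment
+override file; assuming they live under the `global` section (as in the
+standard ONAP values.yaml), a sketch could look like:
+
+.. code-block:: yaml
+
+  global:
+    serviceMesh:
+      enabled: true
+      tls: true
+      engine: "istio"
+    aafEnabled: false
+    tlsEnabled: false
+    cmpv2Enabled: false
+    msbEnabled: false
+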
Ingress settings:
-- enabled: true → enables Ingress using: Nginx (when SM disabled), Istio IngressGateway (when SM enabled)
+- enabled: true → enables Ingress using: Nginx (when SM disabled), Istio
+ IngressGateway (when SM enabled)
- enable_all: true → enables Ingress configuration in each component
- provider: "..." → sets the Ingress provider (ingress, istio, gw-api)
-- ingressClass: "" → Ingress class (only for provider "ingress"): e.g. nginx, traefik
-- ingressSelector: "" → Selector (only for provider "istio") to match with the ingress pod label "istio=ingress"
-- commonGateway: "" → optional: common used Gateway (for Istio, GW-API) and http(s) listener names
-- virtualhost.baseurl: "simpledemo.onap.org" → sets globally the URL for all Interfaces set by the components,
- resulting in e.g. "aai-api.simpledemo.onap.org", can be overwritten in the component via: ingress.baseurlOverride
-- virtualhost.preaddr: "pre-" → sets globally a prefix for the Application name for all Interfaces set by the components,
- resulting in e.g. "pre-aai-api.simpledemo.onap.org", can be overwritten in the component via: ingress.preaddrOverride
-- virtualhost.postaddr: "-post" → sets globally a postfix for the Application name for all Interfaces set by the components,
- resulting in e.g. "aai-api-post.simpledemo.onap.org", can be overwritten in the component via: ingress.postaddrOverride
-- config.ssl: redirect → sets in the Ingress globally the redirection of all Interfaces from http (port 80) to https (port 443)
-- config.tls.secret: "..." → (optional) overrides the default selfsigned SSL certificate with a certificate stored in the specified secret
-- namespace: istio-ingress → (optional) overrides the namespace of the ingress gateway which is used for the created SSL certificate
+- ingressClass: "" → Ingress class (only for provider "ingress"): e.g. nginx,
+ traefik
+- ingressSelector: "" → Selector (only for provider "istio") to match with the
+ ingress pod label "istio=ingress"
+- commonGateway: "" → optional: common used Gateway (for Istio, GW-API) and
+ http(s) listener names
+- virtualhost.baseurl: "simpledemo.onap.org" → sets globally the URL for all
+ Interfaces set by the components, resulting in e.g.
+ "aai-api.simpledemo.onap.org", can be overwritten in the component via:
+ ingress.baseurlOverride
+- virtualhost.preaddr: "pre-" → sets globally a prefix for the Application name
+ for all Interfaces set by the components, resulting in e.g.
+ "pre-aai-api.simpledemo.onap.org", can be overwritten in the component via:
+ ingress.preaddrOverride
+- virtualhost.postaddr: "-post" → sets globally a postfix for the Application
+ name for all Interfaces set by the components, resulting in e.g.
+ "aai-api-post.simpledemo.onap.org", can be overwritten in the component via:
+ ingress.postaddrOverride
+- config.ssl: redirect → sets in the Ingress globally the redirection of all
+ Interfaces from http (port 80) to https (port 443)
+- config.tls.secret: "..." → (optional) overrides the default selfsigned SSL
+ certificate with a certificate stored in the specified secret
+- namespace: istio-ingress → (optional) overrides the namespace of the ingress
+ gateway which is used for the created SSL certificate
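+
+Again purely as an illustration, and assuming these keys sit under
+`global.ingress` in the override file, a corresponding sketch might be:
+
+.. code-block:: yaml
+
+  global:
+    ingress:
+      enabled: true
+      enable_all: true
+      provider: "istio"
+      ingressSelector: "ingress"
+      virtualhost:
+        baseurl: "simpledemo.onap.org"
+      config:
+        ssl: "redirect"
+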
.. note::
For the Ingress setup example override files (`onap-all-ingress-istio.yaml`, `onap-all-ingress-gatewayapi.yaml`)
.. figure:: ../../resources/images/oom_logo/oomLogoV2-medium.png
:align: right
-ONAP OOM supports several options for the deployment of ONAP using it's helm charts.
+ONAP OOM supports several options for the deployment of ONAP using its helm
+charts.
* :ref:`oom_helm_release_repo_deploy`
* :ref:`oom_helm_testing_repo_deploy`
| :ref:`Set up your base platform<oom_base_setup_guide>`
-Each deployment method can be customized to deploy a subset of ONAP component applications.
+Each deployment method can be customized to deploy a subset of ONAP component
+applications.
See the :ref:`oom_customize_overrides` section for more details.
OOM Developer Testing Deployment
================================
-Developing and testing changes to the existing OOM project can be done locally by setting up some additional
-tools to host the updated helm charts.
+Developing and testing changes to the existing OOM project can be done locally
+by setting up some additional tools to host the updated helm charts.
**Step 1.** Clone the OOM repository from ONAP gerrit::
**Step 3.** Install Chartmuseum
-Chart museum is required to host the helm charts locally when deploying in a development environment::
+Chartmuseum is required to host the helm charts locally when deploying in a
+development environment::
> curl https://raw.githubusercontent.com/helm/chartmuseum/main/scripts/get-chartmuseum | bash
> chartmuseum --storage local --storage-local-rootdir ~/helm3-storage -port 8879 &
-Note the port number that is listed and use it in the Helm repo add as follows::
+Note the port number that is listed and use it in the Helm repo add as
+follows::
> helm repo add local http://127.0.0.1:8879
To customize what applications are deployed, see the :ref:`oom_customize_overrides` section for details on providing your own custom overrides yaml file.
-- To deploy a release, execute the following, substituting the <version> tag with your preferred release (ie. 13.0.0)::
+- To deploy a release, execute the following, substituting the <version> tag with
+ your preferred release (ie. 13.0.0)::
> helm deploy dev onap-release/onap --namespace onap --create-namespace --set global.masterPassword=myAwesomePasswordThatINeedToChange --version <version> -f oom/kubernetes/onap/resources/overrides/onap-all.yaml
OOM Helm Testing Deployment
===========================
-ONAP hosts the OOM `testing` helm charts in it's `ONAP helm testing repository`_.
+ONAP hosts the OOM `testing` helm charts in its
+`ONAP helm testing repository`_.
This helm repo contains:
* The `latest` charts built from the head of the `OOM`_ project's master
- branch, tagged with the version number of the current development cycle (ie. 12.0.0).
+ branch, tagged with the version number of the current development cycle
+ (ie. 15.0.0).
Add the OOM testing repo & Deploy
- --container-name
- so-mariadb
env:
- ...
\ No newline at end of file
+ ...
persistent volume should be used to store all data that needs to be persisted
over the re-creation of a container. Persistent volumes have been created for
the database components of each of the projects and the same technique can be
-used for all persistent state information.
\ No newline at end of file
+used for all persistent state information.
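+
+As a generic illustration (not one of the actual OOM chart templates), a
+persistent volume claim for such a database component could look like:
+
+.. code-block:: yaml
+
+  apiVersion: v1
+  kind: PersistentVolumeClaim
+  metadata:
+    name: mariadb-galera-data   # hypothetical claim name
+  spec:
+    accessModes:
+      - ReadWriteOnce
+    resources:
+      requests:
+        storage: 2Gi
+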
Charts are created as files laid out in a particular directory tree, then they
can be packaged into versioned archives to be deployed. There is a public
-archive of `Helm Charts`_ on ArtifactHUB that includes many technologies applicable
-to ONAP. Some of these charts have been used in ONAP and all of the ONAP charts
-have been created following the guidelines provided.
+archive of `Helm Charts`_ on ArtifactHUB that includes many technologies
+applicable to ONAP. Some of these charts have been used in ONAP and all of the
+ONAP charts have been created following the guidelines provided.
An example structure of the OOM common helm charts is shown below:
:local:
..
-For additional platform add-ons, see the :ref:`oom_base_optional_addons` section.
+For additional platform add-ons, see the :ref:`oom_base_optional_addons`
+section.
Install & configure kubectl
---------------------------
-The Kubernetes command line interface used to manage a Kubernetes cluster needs to be installed
-and configured to run as non root.
+The Kubernetes command line interface used to manage a Kubernetes cluster needs
+to be installed and configured to run as non root.
-For additional information regarding kubectl installation and configuration see the `kubectl installation guide`_
+For additional information regarding kubectl installation and configuration see
+the `kubectl installation guide`_
-To install kubectl, execute the following, replacing the <recommended-kubectl-version> with the version defined
-in the :ref:`versions_table` table::
+To install kubectl, execute the following, replacing the
+<recommended-kubectl-version> with the version defined in the
+:ref:`versions_table` table::
> curl -LO https://dl.k8s.io/release/v<recommended-kubectl-version>/bin/linux/amd64/kubectl
Install & configure helm
------------------------
-Helm is used for package and configuration management of the relevant helm charts.
-For additional information, see the `helm installation guide`_
+Helm is used for package and configuration management of the relevant helm
+charts. For additional information, see the `helm installation guide`_
-To install helm, execute the following, replacing the <recommended-helm-version> with the version defined
-in the :ref:`versions_table` table::
+To install helm, execute the following, replacing the
+<recommended-helm-version> with the version defined in the
+:ref:`versions_table` table::
> wget https://get.helm.sh/helm-v<recommended-helm-version>-linux-amd64.tar.gz
> helm version
-Helm's default CNCF provided `Curated applications for Kubernetes`_ repository called
-*stable* can be removed to avoid confusion::
+Helm's default CNCF provided `Curated applications for Kubernetes`_ repository
+called *stable* can be removed to avoid confusion::
> helm repo remove stable
Set the default StorageClass
----------------------------
-In some ONAP components it is important to have a default storageClass defined (e.g. cassandra),
-if you don't want to explicitly set it during the deployment via helm overrides.
+In some ONAP components it is important to have a default storageClass defined
+(e.g. cassandra), if you don't want to explicitly set it during the deployment
+via helm overrides.
-Therefor you should set the default storageClass (if not done during the K8S cluster setup) via the command::
+Therefore you should set the default storageClass (if not done during the K8S
+cluster setup) via the command::
> kubectl patch storageclass <storageclass> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
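
You can verify afterwards which class is marked as default (it is shown with a
`(default)` suffix in the output)::

  > kubectl get storageclass
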
Install the Strimzi Kafka Operator
----------------------------------
-Strimzi Apache Kafka provides a way to run an Apache Kafka cluster on Kubernetes
-in various deployment configurations by using kubernetes operators.
-Operators are a method of packaging, deploying, and managing Kubernetes applications.
+Strimzi Apache Kafka provides a way to run an Apache Kafka cluster on
+Kubernetes in various deployment configurations by using kubernetes operators.
+Operators are a method of packaging, deploying, and managing Kubernetes
+applications.
Strimzi Operators extend the Kubernetes functionality, automating common
and complex tasks related to a Kafka deployment. By implementing
> helm repo add strimzi https://strimzi.io/charts/
-To install the strimzi kafka operator, execute the following, replacing the <recommended-strimzi-version> with the version defined
-in the :ref:`versions_table` table::
+To install the strimzi kafka operator, execute the following, replacing the
+<recommended-strimzi-version> with the version defined in the
+:ref:`versions_table` table::
> helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator --namespace strimzi-system --version <recommended-strimzi-version> --set watchAnyNamespace=true --create-namespace
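
To verify that the operator is running before continuing (a quick sanity
check)::

  > kubectl get pods --namespace strimzi-system
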
steps, please refer to `Cert-Manager kubectl plugin documentation`_.
-To install cert-manager, execute the following, replacing the <recommended-cm-version> with the version defined
-in the :ref:`versions_table` table::
+To install cert-manager, execute the following, replacing the
+<recommended-cm-version> with the version defined in the
+:ref:`versions_table` table::
> kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v<recommended-cm-version>/cert-manager.yaml
`ONAP Next Generation Security & Logging Architecture`_
ONAP currently supports Istio as the default ServiceMesh platform.
-Therefor the following instructions describe the setup of Istio and required tools.
-Used `Istio setup guide`_
+Therefore the following instructions describe the setup of Istio and the
+required tools, based on the `Istio setup guide`_.
.. _oom_base_optional_addons_istio_installation:
> helm upgrade -i istio-base istio/base -n istio-system --version <recommended-istio-version>
-- Create an override for istiod (e.g. istiod.yaml) to add the oauth2-proxy as external
- authentication provider and apply some specific config settings
- Be aware, that from Istio version 1.21.0 the format of the values.yaml changes.
- Additionally a new feature (Native Sidecars) can be enabled, if it is enabled in
- Kubernetes (version > 1.28)
+- Create an override for istiod (e.g. istiod.yaml) to add the oauth2-proxy as
+ external authentication provider and apply some specific config settings.
+ Be aware that from Istio version 1.21.0 the format of the values.yaml
+ changes. Additionally, a new feature (Native Sidecars) can be enabled, if it
+ is enabled in Kubernetes (version > 1.28).
.. collapse:: istiod.yaml (version => 1.21)
.. include:: ../../resources/yaml/istiod.yaml
:code: yaml
-- Install the Istio Base Istio Discovery chart which deploys the istiod service, replacing the
- <recommended-istio-version> with the version defined in the :ref:`versions_table` table::
+- Install the Istio Base Istio Discovery chart which deploys the istiod
+ service, replacing the <recommended-istio-version> with the version
+ defined in the :ref:`versions_table` table::
> helm upgrade -i istiod istio/istiod -n istio-system --version <recommended-istio-version>
--wait -f ./istiod.yaml
- Istio Gateway `Istio-Gateway`_ (alternative, but in the future deprecated)
Depending on the solution, the ONAP helm values.yaml has to be configured.
-See the :ref:`OOM customized deployment<oom_customize_overrides>` section for more details.
+See the :ref:`OOM customized deployment<oom_customize_overrides>` section for
+more details.
Gateway-API (recommended)
^^^^^^^^^^^^^^^^^^^^^^^^^
> kubectl label namespace istio-ingress istio-injection=enabled
-- To expose additional ports besides HTTP/S (e.g. for external Kafka access, SDNC-callhome)
- create an override file (e.g. istio-ingress.yaml)
+- To expose additional ports besides HTTP/S (e.g. for external Kafka access,
+ SDNC-callhome) create an override file (e.g. istio-ingress.yaml)
.. collapse:: istio-ingress.yaml
^^^^^^^^^^^^^^^^^^
- To configure the Keycloak instance
- create an override file (e.g. keycloak-server-values.yaml)
+ create an override file (e.g. keycloak-server-values.yaml) and use
+ the "image.tag" of the keycloak version (here 26.0.6)
.. collapse:: keycloak-server-values.yaml
.. rubric:: Minimum Hardware Configuration
-Some recommended hardware requirements are provided below. Note that this is for a
-full ONAP deployment (all components).
+Some recommended hardware requirements are provided below. Note that this is
+for a full ONAP deployment (all components).
.. table:: OOM Hardware Requirements
224GB 160GB 112 0.0.0.0/0 (all open)
===== ===== ====== ====================
-Customizing ONAP to deploy only components that are needed will drastically reduce these requirements.
-See the :ref:`OOM customized deployment<oom_customize_overrides>` section for more details.
+Customizing ONAP to deploy only components that are needed will drastically
+reduce these requirements.
+See the :ref:`OOM customized deployment<oom_customize_overrides>` section for
+more details.
.. note::
| Kubernetes supports a maximum of 110 pods per node - this can be overcome by modifying your kubelet config.
============== =========== ======= ======== ======== ============= ========
Montreal 1.27.5 3.12.3 1.27.x 20.10.x 1.13.2 0.36.1
New Delhi 1.28.6 3.13.1 1.28.x 20.10.x 1.14.4 0.41.0
- Oslo 1.28.6 3.13.1 1.28.x 20.10.x 1.14.4 0.43.0
+ Oslo 1.28.6 3.13.1 1.30.x 23.0.x 1.16.2 0.44.0
============== =========== ======= ======== ======== ============= ========
.. table:: OOM Software Requirements (production)
============== ====== ============ ==============
Montreal 1.19.3 v1.0.0 19.0.3-legacy
New Delhi 1.21.0 v1.0.0 22.0.4
- Oslo 1.23.0 v1.0.0 22.0.4
+ Oslo 1.24.1 v1.2.1 26.0.6
============== ====== ============ ==============
.. table:: OOM Software Requirements (optional)
- ============== ================= ========== =================
- Release Prometheus Stack K8ssandra MariaDB-Operator
- ============== ================= ========== =================
- Montreal 45.x 1.10.2 0.23.1
- New Delhi 45.x 1.16.0 0.28.1
- Oslo 45.x 1.19.0 0.30.0
- ============== ================= ========== =================
+ ========== ========== ========= ========== =========== ==========
+ Release    Prometheus K8ssandra MariaDB-Op Postgres-Op MongoDB-Op
+ ========== ========== ========= ========== =========== ==========
+ Montreal   45.x       1.10.2    0.23.1     -           -
+ New Delhi  45.x       1.16.0    0.28.1     -           -
+ Oslo       45.x       1.20.2    0.36.0     5.7.2       1.18.0
+ ========== ========== ========= ========== =========== ==========
.. _K8ssandra setup guide: https://docs.k8ssandra.io/install/
.. _Mariadb-Operator setup guide: https://github.com/mariadb-operator/mariadb-operator
.. _Postgres-Operator setup guide: https://github.com/CrunchyData/postgres-operator
+.. _MongoDB-Operator setup guide: https://docs.percona.com/percona-operator-for-mongodb/helm.html
.. _oom_base_optional_addons:
> helm repo update
-- To install prometheus, execute the following, replacing the <recommended-pm-version> with the version defined in the :ref:`versions_table` table::
+- To install prometheus, execute the following, replacing the
+ <recommended-pm-version> with the version defined in the
+ :ref:`versions_table` table::
> helm install prometheus prometheus-community/kube-prometheus-stack --namespace=prometheus --create-namespace --version=<recommended-pm-version>
> kubectl -n istio-system apply -f kiali-ingress.yaml
-
Jaeger Installation
-------------------
> kubectl label namespace k8ssandra-operator istio-injection=enabled
-- Install the k8ssandra-operator replacing the <recommended-version> with the version defined in the :ref:`versions_table` table::
+- Install the k8ssandra-operator replacing the <recommended-version> with the
+ version defined in the :ref:`versions_table` table::
> helm repo add k8ssandra https://helm.k8ssandra.io/stable
> kubectl label namespace mariadb-operator istio-injection=enabled
-- Install the mariadb-operator replacing the <recommended-version> with the version defined in the :ref:`versions_table` table::::
+- Install the mariadb-operator replacing the <recommended-version> with the
+ version defined in the :ref:`versions_table` table::
- > helm repo add mariadb-operator https://mariadb-operator.github.io/mariadb-operator
+ > helm repo add mariadb-operator https://helm.mariadb.com/mariadb-operator
> helm repo update mariadb-operator
+ > helm install mariadb-operator-crds mariadb-operator/mariadb-operator-crds --namespace mariadb-operator --version=<recommended-version>
+
> helm install mariadb-operator --namespace mariadb-operator
mariadb-operator/mariadb-operator --set ha.enabled=true
--set metrics.enabled=true --set webhook.certificate.certManager=true
For the setup of the Postgres-Operator, see the `Postgres-Operator setup guide`_
+MongoDB-Operator Installation
+------------------------------
+
+MongoDB-Operator is used to ease the installation and lifecycle management of
+MongoDB instances, including monitoring and backup.
+
+For the setup of the MongoDB-Operator, see the `MongoDB-Operator setup guide`_
+
+- Create the mongodb-operator namespace::
+
+ > kubectl create namespace mongodb-operator
+
+ > kubectl label namespace mongodb-operator istio-injection=enabled
+
+- Install the mongodb-operator replacing the <recommended-version> with the
+ version defined in the :ref:`versions_table` table::
+
+ > helm repo add percona https://percona.github.io/percona-helm-charts
+
+ > helm repo update percona
+
+ > helm install mongodb-operator --namespace mongodb-operator
+ percona/psmdb-operator --version=<recommended-version>
+
Kserve Installation
-------------------
service. Once deployed, the inference services can be queried for the
prediction.
+**Kserve participant component in Policy ACM requires this installation.**
+**Kserve participant deploys/undeploys inference services in Kserve.**
Dependent component version compatibility details and installation instructions
can be found at `Kserve setup guide`_
Kserve installation requires the following components:
-- Istio. Its installation instructions can be found at :ref:`oom_base_optional_addons_istio_installation`
+- Istio. Its installation instructions can be found at
+ :ref:`oom_base_optional_addons_istio_installation`
-- Cert-Manager. Its installation instructions can be found at :ref:`oom_base_setup_cert_manager`
+- Cert-Manager. Its installation instructions can be found at
+ :ref:`oom_base_setup_cert_manager`
Installation instructions are as follows:
- Monitor_ - real-time health monitoring feeding to a Consul UI and Kubernetes
- Heal_- failed ONAP containers are recreated automatically
- Scale_ - cluster ONAP services to enable seamless scaling
-- Upgrade_ - change-out containers or configuration with little or no service impact
+- Upgrade_ - change-out containers or configuration with little or no service
+ impact
- Delete_ - cleanup individual containers or entire deployments
.. figure:: ../../resources/images/oom_logo/oomLogoV2-Deploy.png
within and between components. Using this model Helm is able to deploy all of
ONAP with a few simple commands.
-Please refer to the :ref:`oom_deploy_guide` for deployment pre-requisites and options
+Please refer to the :ref:`oom_deploy_guide` for deployment pre-requisites and
+options
.. note::
Refer to the :ref:`oom_customize_overrides` section on how to update overrides.yaml and values.yaml
service impact
- **Delete** - cleanup individual containers or entire deployments
-OOM supports a wide variety of Kubernetes private clouds - built with ClusterAPI,
-Kubespray - and public cloud infrastructures such as: Microsoft
+OOM supports a wide variety of Kubernetes private clouds - built with
+ClusterAPI, Kubespray - and public cloud infrastructures such as: Microsoft
Azure, Amazon AWS, Google GCD, VMware VIO, and OpenStack.
The OOM documentation is broken into four different areas each targeted at a
different user:
- :ref:`oom_dev_guide` - a guide for developers of OOM
-- :ref:`oom_infra_guide` - a guide for those setting up the environments that OOM will use
-- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing cloud
+- :ref:`oom_infra_guide` - a guide for those setting up the environments that
+ OOM will use
+- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing
+ cloud
- :ref:`oom_user_guide` - a guide for operators of an OOM instance
-- :ref:`oom_access_info_guide` - a guide for operators who require access to OOM applications
+- :ref:`oom_access_info_guide` - a guide for operators who require access to
+ OOM applications
The :ref:`release_notes` for OOM describe the incremental features per release.
* Kubernetes support for version up to 1.23.8
* Helm support for version up to Helm: 3.8.2
-* Kubespray version used for automated deployment 2.19 (used for automated deployment)
+* Kubespray version 2.19 used for automated deployment
* Initial Setup for "ONAP on ServiceMesh" deployment
* using Istio 1.14.1 as SM platform
Documentation Deliverables
~~~~~~~~~~~~~~~~~~~~~~~~~~
-- :ref:`Project Description <oom_project_description>` - a guide for developers of OOM
+- :ref:`Project Description <oom_project_description>` - a guide for developers
+ of OOM
- :ref:`oom_dev_guide` - a guide for developers of OOM
-- :ref:`oom_infra_guide` - a guide for those setting up the environments that OOM will use
-- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing cloud
+- :ref:`oom_infra_guide` - a guide for those setting up the environments that
+ OOM will use
+- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing
+ cloud
- :ref:`oom_user_guide` - a guide for operators of an OOM instance
-- :ref:`oom_access_info_guide` - a guide for operators who require access to OOM applications
+- :ref:`oom_access_info_guide` - a guide for operators who require access to
+ OOM applications
Known Limitations, Issues and Workarounds
=========================================
* Introduction of "Production" ONAP setup, including:
* Istio Service Mesh based deployment
- * Ingress (Istio-Gateway) deployment and usage as standard external access method
- * Internal Security provided by ServiceMesh and Component2Component AuthorizationPolicies
- * External Security by introducing AuthN/Z using Keycloak and OAuth2Proxy for Ingress Access
+ * Ingress (Istio-Gateway) deployment and usage as standard external access
+ method
+ * Internal Security provided by ServiceMesh and Component2Component
+ AuthorizationPolicies
+ * External Security by introducing AuthN/Z using Keycloak and OAuth2Proxy for
+ Ingress Access
* Removal of unsupported components (AAF, Portal, Contrib,...)
* Update of Helmcharts to use common templates and practices
Documentation Deliverables
~~~~~~~~~~~~~~~~~~~~~~~~~~
-- :ref:`Project Description <oom_project_description>` - a guide for developers of OOM
+- :ref:`Project Description <oom_project_description>` - a guide for developers
+ of OOM
- :ref:`oom_dev_guide` - a guide for developers of OOM
-- :ref:`oom_infra_guide` - a guide for those setting up the environments that OOM will use
-- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing cloud
+- :ref:`oom_infra_guide` - a guide for those setting up the environments that
+ OOM will use
+- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing
+ cloud
- :ref:`oom_user_guide` - a guide for operators of an OOM instance
-- :ref:`oom_access_info_guide` - a guide for operators who require access to OOM applications
+- :ref:`oom_access_info_guide` - a guide for operators who require access to
+ OOM applications
Known Limitations, Issues and Workarounds
=========================================
Documentation Deliverables
~~~~~~~~~~~~~~~~~~~~~~~~~~
-- :ref:`Project Description <oom_project_description>` - a guide for developers of OOM
+- :ref:`Project Description <oom_project_description>` - a guide for developers
+ of OOM
- :ref:`oom_dev_guide` - a guide for developers of OOM
-- :ref:`oom_infra_guide` - a guide for those setting up the environments that OOM will use
-- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing cloud
+- :ref:`oom_infra_guide` - a guide for those setting up the environments that
+ OOM will use
+- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing
+ cloud
- :ref:`oom_user_guide` - a guide for operators of an OOM instance
-- :ref:`oom_access_info_guide` - a guide for operators who require access to OOM applications
+- :ref:`oom_access_info_guide` - a guide for operators who require access to
+ OOM applications
Known Limitations, Issues and Workarounds
=========================================
--- /dev/null
+.. This work is licensed under a Creative Commons Attribution 4.0
+ International License.
+.. http://creativecommons.org/licenses/by/4.0
+.. (c) ONAP Project and its contributors
+.. _release_notes_newdelhi:
+
+:orphan:
+
+*************************************
+ONAP Operations Manager Release Notes
+*************************************
+
+Previous Release Notes
+======================
+
+- :ref:`Montreal <release_notes_montreal>`
+- :ref:`London <release_notes_london>`
+- :ref:`Kohn <release_notes_kohn>`
+- :ref:`Jakarta <release_notes_jakarta>`
+- :ref:`Istanbul <release_notes_istanbul>`
+- :ref:`Honolulu <release_notes_honolulu>`
+- :ref:`Guilin <release_notes_guilin>`
+- :ref:`Frankfurt <release_notes_frankfurt>`
+- :ref:`El Alto <release_notes_elalto>`
+- :ref:`Dublin <release_notes_dublin>`
+- :ref:`Casablanca <release_notes_casablanca>`
+- :ref:`Beijing <release_notes_beijing>`
+- :ref:`Amsterdam <release_notes_amsterdam>`
+
+Abstract
+========
+
+This document provides the release notes for the New Delhi release.
+
+Summary
+=======
+
+
+
+Release Data
+============
+
++--------------------------------------+--------------------------------------+
+| **Project**                          | OOM                                  |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Docker images**                    | N/A                                  |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release designation**              | New Delhi                            |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+| **Release date**                     | 2024/06/13                           |
+|                                      |                                      |
++--------------------------------------+--------------------------------------+
+
+New features
+------------
+
+* authentication (14.0.0) - add configurable Keycloak Realm and enable Ingress
+ Interface Authentication and Authorization
+* Update the helm common templates (13.2.0) to:
+
+ * Support the latest Database Operators:
+
+ * MariaDB-Operator (0.28.1)
+ * K8ssandra-Operator (v0.16.0)
+ * Postgres-Operator (CrunchyData) (5.5.0)
+
+* cassandra (13.1.0) - support for new K8ssandra-Operator
+* mariadb-galera (13.1.0) - support for new MariaDB-Operator
+* mongodb (14.12.3) - update to latest bitnami chart version
+* postgres (13.1.0) - support for new Postgres-Operator
+* postgres-init (13.0.1) - support for new Postgres-Operator
+* readinessCheck (13.1.0) - added check for "Service" readiness
+* serviceAccount (13.0.1) - add default role creation
+
+**Bug fixes**
+
+A list of issues resolved in this release can be found here:
+https://lf-onap.atlassian.net/projects/OOM/versions/11502
+
+**Known Issues**
+
+
+Deliverables
+------------
+
+Software Deliverables
+~~~~~~~~~~~~~~~~~~~~~
+
+OOM provides `Helm charts <https://nexus3.onap.org/service/rest/repository/browse/onap-helm-release/>`_
+
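+A typical way to consume these charts is via the Helm CLI; the repository
+URL below follows the usual Nexus repository layout and is an assumption
+(the browse link above exposes the same content)::
+
+  # the repository URL is assumed; verify it against the browse link above
+  > helm repo add onap-release \
+      https://nexus3.onap.org/repository/onap-helm-release
+  > helm repo update
+  > helm search repo onap-release/onap
+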
+Documentation Deliverables
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+- :ref:`Project Description <oom_project_description>` - a guide for developers
+ of OOM
+- :ref:`oom_dev_guide` - a guide for developers of OOM
+- :ref:`oom_infra_guide` - a guide for those setting up the environments that
+ OOM will use
+- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing
+ cloud
+- :ref:`oom_user_guide` - a guide for operators of an OOM instance
+- :ref:`oom_access_info_guide` - a guide for operators who require access to
+ OOM applications
+
+Known Limitations, Issues and Workarounds
+=========================================
+
+Known Vulnerabilities
+---------------------
+
+
+Workarounds
+-----------
+
+Security Notes
+--------------
+
+**Fixed Security Issues**
+
+References
+==========
+
+For more information on the ONAP New Delhi release, please see:
+
+#. `ONAP Home Page`_
+#. `ONAP Documentation`_
+#. `ONAP Release Downloads`_
+#. `ONAP Wiki Page`_
+
+
+.. _`ONAP Home Page`: https://www.onap.org
+.. _`ONAP Wiki Page`: https://lf-onap.atlassian.net/wiki
+.. _`ONAP Documentation`: https://docs.onap.org
+.. _`ONAP Release Downloads`: https://git.onap.org
+.. _`Gateway-API`: https://istio.io/latest/docs/tasks/traffic-management/ingress/gateway-api/
Previous Release Notes
======================
+- :ref:`New Delhi <release_notes_newdelhi>`
- :ref:`Montreal <release_notes_montreal>`
- :ref:`London <release_notes_london>`
- :ref:`Kohn <release_notes_kohn>`
Abstract
========
-This document provides the release notes for the New Delhi release.
+This document provides the release notes for the Oslo release.
Summary
=======
| **Docker images** | N/A |
| | |
+--------------------------------------+--------------------------------------+
-| **Release designation** | New Delhi |
+| **Release designation** | Oslo |
| | |
+--------------------------------------+--------------------------------------+
-| **Release date** | 2024/06/13 |
+| **Release date** | 2025/01/09 |
| | |
+--------------------------------------+--------------------------------------+
New features
------------
-* authentication (14.0.0) - add configurable Keycloak Realm and enable Ingress
- Interface Authentication and Authorization
-* Update the helm common templates (13.2.0) to:
+* Support the latest Database Operators:
- * Support the latest Database Operators:
+ * MariaDB-Operator (0.36.0)
+ * K8ssandra-Operator (v0.20.2)
+ * Postgres-Operator (CrunchyData) (5.7.2)
+ * MongoDB-Operator (Percona) (1.18.0)
- * MariaDB-Operator (0.28.1)
- * K8ssandra-Operator (v0.16.0)
- * Postgres-Operator (CrunchyData) (5.5.0)
+* authentication (15.0.0)
-* cassandra (13.1.0) - support for new K8ssandra-Operator
-* mariadb-galera (13.1.0) - support for new MariaDB-Operator
-* mongodb (14.12.3) - update to latest bitnami chart version
-* postgres (13.1.0) - support for new Postgres-Operator
-* postgres-init (13.0.1) - support for new Postgres-Operator
-* readinessCheck (13.1.0) - added check for "Service" readiness
-* serviceAccount (13.0.1) - add default role creation
+ * support for REALM Client AuthorizationSettings
+ * update oauth2-proxy and keycloak-config-cli versions
+ * add support for latest keycloak version 26.x
+
+* Update the helm common templates (13.2.10) to:
+
+ * add SecurityContext settings for Production readiness (example below)
+
+* cassandra (13.1.1)
+
+ * support for new cassandra version (4.1.6)
+ * add SecurityContext settings for Production readiness
+
+* mariadb-galera (13.2.3)
+
+ * add SecurityContext settings for Production readiness
+
+* mariadb-init (13.0.2)
+
+ * add SecurityContext settings for Production readiness
+
+* mongodb (14.12.4)
+
+ * add SecurityContext settings for Production readiness
+
+* mongodb-init (13.0.2)
+
+ * new chart to support external mongodb initialization
+
+* postgres (13.1.0)
+
+ * add SecurityContext settings for Production readiness
+
+* postgres-init (13.0.3)
+
+ * add SecurityContext settings for Production readiness
+
+* readinessCheck (13.1.1)
+
+ * add SecurityContext settings for Production readiness
+
+* serviceAccount (13.0.2)
+
+ * adjust default role mapping
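+
+After an upgrade, the effect of the new SecurityContext settings can be
+checked on any deployed workload; a hedged sketch is shown below, where
+<deployment> is a placeholder and not a fixed OOM resource name::
+
+  # <deployment> is a placeholder for any ONAP component deployment
+  > kubectl -n onap get deployment <deployment> \
+      -o jsonpath='{.spec.template.spec.securityContext}{"\n"}'
+  > kubectl -n onap get deployment <deployment> \
+      -o jsonpath='{.spec.template.spec.containers[0].securityContext}{"\n"}'
+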
**Bug fixes**
A list of issues resolved in this release can be found here:
-https://lf-onap.atlassian.net/projects/OOM/versions/11502
+https://lf-onap.atlassian.net/projects/OOM/versions/10783
**Known Issues**
Documentation Deliverables
~~~~~~~~~~~~~~~~~~~~~~~~~~
-- :ref:`Project Description <oom_project_description>` - a guide for developers of OOM
+- :ref:`Project Description <oom_project_description>` - a guide for developers
+ of OOM
- :ref:`oom_dev_guide` - a guide for developers of OOM
-- :ref:`oom_infra_guide` - a guide for those setting up the environments that OOM will use
-- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing cloud
+- :ref:`oom_infra_guide` - a guide for those setting up the environments that
+ OOM will use
+- :ref:`oom_deploy_guide` - a guide for those deploying OOM on an existing
+ cloud
- :ref:`oom_user_guide` - a guide for operators of an OOM instance
-- :ref:`oom_access_info_guide` - a guide for operators who require access to OOM applications
+- :ref:`oom_access_info_guide` - a guide for operators who require access to
+ OOM applications
Known Limitations, Issues and Workarounds
=========================================
NAME CHART VERSION APP VERSION DESCRIPTION
-local/onap 12.0.0 London Open Network Automation Platform (ONAP)
-local/a1policymanagement 12.0.0 ONAP A1 Policy Management
-local/aai 12.0.0 ONAP Active and Available Inventory
-local/cassandra 12.0.0 ONAP cassandra
-local/cds 12.0.0 ONAP Controller Design Studio (CDS)
-local/cli 12.0.0 ONAP Command Line Interface
-local/common 12.0.0 Common templates for inclusion in other charts
-local/cps 12.0.0 ONAP Configuration Persistene Service (CPS)
-local/dcaegen2 12.0.0 ONAP DCAE Gen2
-local/dmaap 12.0.0 ONAP DMaaP components
-local/mariadb-galera 12.0.0 Chart for MariaDB Galera cluster
-local/msb 12.0.0 ONAP MicroServices Bus
-local/multicloud 12.0.0 ONAP multicloud broker
-local/nbi 12.0.0 ONAP Northbound Interface
-local/nfs-provisioner 12.0.0 NFS provisioner
-local/oof 12.0.0 ONAP Optimization Framework
-local/policy 12.0.0 ONAP Policy Administration Point
-local/postgres 12.0.0 ONAP Postgres Server
-local/robot 12.0.0 A helm Chart for kubernetes-ONAP Robot
-local/sdc 12.0.0 Service Design and Creation Umbrella Helm charts
-local/sdnc 12.0.0 SDN Controller
-local/sdnc-prom 12.0.0 ONAP SDNC Policy Driven Ownership Management
-local/sniro-emulator 12.0.0 ONAP Mock Sniro Emulator
-local/so 12.0.0 ONAP Service Orchestrator
-local/strimzi 12.0.0 ONAP Strimzi Apache Kafka
-local/uui 12.0.0 ONAP uui
-local/vfc 12.0.0 ONAP Virtual Function Controller (VF-C)
-local/vnfsdk 12.0.0 ONAP VNF SDK
+local/onap 15.0.0 Oslo Open Network Automation Platform (ONAP)
+local/a1policymanagement 13.0.0 ONAP A1 Policy Management
+local/aai 15.0.1 ONAP Active and Available Inventory
+local/authentication 15.0.0 ONAP Realm creation, Oauth2Proxy installation and configuration
+local/cassandra 13.1.1 ONAP cassandra
+local/cds 13.0.2 ONAP Controller Design Studio (CDS)
+local/common 13.2.10 Common templates for inclusion in other charts
+local/cps 13.0.1 ONAP Configuration Persistene Service (CPS)
+local/dcaegen2 15.0.1 ONAP DCAE Gen2
+local/mariadb-galera 13.2.3 Chart for MariaDB Galera cluster
+local/multicloud 15.0.2 ONAP multicloud broker
+local/platform 13.0.1 ONAP platform components
+local/policy 15.0.1 ONAP Policy Administration Point
+local/portal-ng 13.0.1 ONAP Next Generation Portal
+local/postgres 13.1.0 ONAP Postgres Server
+local/repository-wrapper 13.0.0 Wrapper chart to allow docker secret to be shared all instances
+local/robot 13.0.0 A helm Chart for kubernetes-ONAP Robot
+local/roles-wrapper 13.0.0 Wrapper chart to allow default roles to be shared among onap instances
+local/sdc 13.0.1 Service Design and Creation Umbrella Helm charts
+local/sdnc 15.1.0 SDN Controller
+local/so 13.0.1 ONAP Service Orchestrator
+local/strimzi 13.0.2 ONAP Strimzi Apache Kafka
+local/uui 13.1.0 ONAP uui
-apiVersion: gateway.networking.k8s.io/v1beta1
+apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: common-gateway
Matches:
Path:
Type: PathPrefix
- Value: /auth
+ Value: /
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
- backendRefs:
- group: ""
kind: Service
- name: keycloak-keycloakx-http
+ name: keycloak-http
port: 80
weight: 1
matches:
- path:
type: PathPrefix
- value: /auth
+ value: /
---
+fullnameOverride: keycloak
+
+image:
+ tag: "26.0.6"
+
command:
- "/opt/keycloak/bin/kc.sh"
- "--verbose"
- "start"
+ - "--proxy-headers=forwarded"
- "--http-enabled=true"
- "--http-port=8080"
- "--hostname-strict=false"
- - "--hostname-strict-https=false"
- "--spi-events-listener-jboss-logging-success-level=info"
- "--spi-events-listener-jboss-logging-error-level=warn"
skipsdist=true
[doc8]
-ignore-path-errors=docs/helm-search.txt;D001
+ignore-path-errors=docs/sections/resources/helm/helm-search.txt;D001
[testenv:doc8]
basepython = python3.8