.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2018 Amdocs, Bell Canada

.. _HELM Best Practices Guide: https://docs.helm.sh/chart_best_practices/#requirements
.. _kubectl Cheat Sheet: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
.. _Kubernetes documentation for emptyDir: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
.. _Docker DevOps: https://wiki.onap.org/display/DW/Docker+DevOps#DockerDevOps-DockerBuild
.. _http://cd.onap.info:30223/mso/logging/debug: http://cd.onap.info:30223/mso/logging/debug
.. _Onboarding and Distributing a Vendor Software Product: https://wiki.onap.org/pages/viewpage.action?pageId=1018474
.. _README.md: https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/README.md

.. figure:: oomLogoV2-medium.png

.. _onap-on-kubernetes-with-rancher:
ONAP on HA Kubernetes Cluster
#############################
This guide provides instructions on how to set up a Highly-Available Kubernetes
Cluster. For this, we are hosting our cluster on OpenStack VMs and using the
Rancher Kubernetes Engine (RKE) to deploy and manage our Kubernetes Cluster.
The result at the end of this tutorial will be:

#. Creation of a Key Pair to use with OpenStack and RKE

#. Creation of OpenStack VMs to host the Kubernetes Control Plane

#. Creation of OpenStack VMs to host the Kubernetes Workers

#. Installation and configuration of RKE to set up an HA Kubernetes cluster

#. Installation and configuration of kubectl

#. Installation and configuration of Helm

#. Creation of an NFS Server to be used by ONAP as shared persistence
There are many ways one can execute the above steps, including automation
through the use of HEAT to set up the OpenStack VMs. To better illustrate the
steps involved, we have captured the manual creation of such an environment
using the ONAP Wind River Open Lab.
A Key Pair is required to access the created OpenStack VMs and will be used by
RKE to configure the VMs for Kubernetes.

Use an existing key pair, import one, or create a new one to assign.
.. image:: images/keys/key_pair_1.png
If you're creating a new Key Pair, be sure to keep a local copy of the Private
Key through the use of "Copy Private Key to Clipboard".

For the purpose of this guide, we will assume a new local key called
"onap-key" has been downloaded and copied into **~/.ssh/**, from which it can
be referenced, e.g.::

  > chmod 600 ~/.ssh/onap-key
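Alternatively, you can generate the key pair locally and import only the public half into OpenStack. The sketch below assumes OpenSSH is installed locally; the name "onap-key" simply matches the key name used throughout this guide:

```shell
# Sketch: generate a local key pair named "onap-key" (the name assumed by
# this guide) and lock down the private key so ssh and RKE will accept it.
mkdir -p ~/.ssh
ssh-keygen -q -t rsa -b 4096 -N "" -f ~/.ssh/onap-key
chmod 600 ~/.ssh/onap-key
# The public half (~/.ssh/onap-key.pub) is what you paste into
# Horizon's "Import Key Pair" dialog.
```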
An internal network is required in order to deploy our VMs that will host the
Kubernetes cluster.
.. image:: images/network/network_1.png

.. image:: images/network/network_2.png

.. image:: images/network/network_3.png
It is better to have one network per deployment, and the name of this network
should be unique.
Now we need to create a router to attach this network to the outside:

.. image:: images/network/network_4.png
A specific security group is also required:

.. image:: images/sg/sg_1.png

Then click on `manage rules` of the newly created security group.
Finally, click on `Add Rule` and create the following one:

.. image:: images/sg/sg_2.png
The security configured here is clearly not good; a more suitable security
group will be proposed in a future version of this guide.
Create Kubernetes Control Plane VMs
===================================

The following instructions describe how to create 3 OpenStack VMs to host the
Highly-Available Kubernetes Control Plane.
ONAP workloads will not be scheduled on these Control Plane nodes.

Launch new VM instances
-----------------------

.. image:: images/cp_vms/control_plane_1.png
Select Ubuntu 18.04 as base image
---------------------------------
Select "No" for "Create New Volume"

.. image:: images/cp_vms/control_plane_2.png
The recommended flavor is at least 4 vCPU and 8 GB of RAM.
.. image:: images/cp_vms/control_plane_3.png

Use the created network:

.. image:: images/cp_vms/control_plane_4.png

Use the created security group:

.. image:: images/cp_vms/control_plane_5.png

Assign the key pair that was created/selected previously (e.g. onap_key).

.. image:: images/cp_vms/control_plane_6.png
Apply customization script for Control Plane VMs
------------------------------------------------

Click :download:`openstack-k8s-controlnode.sh <openstack-k8s-controlnode.sh>`
to download the script.

.. literalinclude:: openstack-k8s-controlnode.sh

This customization script will:

.. image:: images/cp_vms/control_plane_7.png

.. image:: images/cp_vms/control_plane_8.png
Create Kubernetes Worker VMs
============================
The following instructions describe how to create OpenStack VMs to host the
Highly-Available Kubernetes Workers. ONAP workloads will only be scheduled on
these nodes.

Launch new VM instances
-----------------------
The number and size of Worker VMs is dependent on the size of the ONAP
deployment. By default, all ONAP applications are deployed. It's possible to
customize the deployment and enable a subset of the ONAP applications. For the
purpose of this guide, however, we will deploy 12 Kubernetes Workers that have
been sized to handle the entire ONAP application workload.
.. image:: images/wk_vms/worker_1.png

Select Ubuntu 18.04 as base image
---------------------------------
Select "No" on "Create New Volume"

.. image:: images/wk_vms/worker_2.png
The size of the Kubernetes hosts depends on the size of the ONAP deployment.

If a small subset of ONAP applications is being deployed
(i.e. for testing purposes), then 16 GB or 32 GB may be sufficient.
.. image:: images/wk_vms/worker_3.png

.. image:: images/wk_vms/worker_4.png

.. image:: images/wk_vms/worker_5.png

Assign the key pair that was created/selected previously (e.g. onap_key).

.. image:: images/wk_vms/worker_6.png
Apply customization script for Kubernetes VM(s)
-----------------------------------------------

Click :download:`openstack-k8s-workernode.sh <openstack-k8s-workernode.sh>` to
download the script.

.. literalinclude:: openstack-k8s-workernode.sh
This customization script will:

.. image:: images/wk_vms/worker_7.png
Assign Floating IP addresses
----------------------------
Assign Floating IPs to all Control Plane and Worker VMs.
These addresses provide external access to the VMs and will be used by RKE
to configure Kubernetes on the VMs.

Repeat the following for each VM previously created:

.. image:: images/floating_ips/floating_1.png

The resulting floating IP assignments in this example:

.. image:: images/floating_ips/floating_2.png
Configure Rancher Kubernetes Engine (RKE)
=========================================
Download and install RKE on a VM, desktop or laptop.
Binaries for Linux and Mac can be found here:
https://github.com/rancher/rke/releases/tag/v1.0.6

There are several ways to install RKE. The rest of this documentation assumes
that you have the rke command available.
If you don't know how to install RKE, you may follow the steps below:

* download the rke_linux-amd64 binary from the releases page above
* chmod +x ./rke_linux-amd64
* sudo mv ./rke_linux-amd64 /usr/local/bin/rke
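Spelled out as commands, the install might look like the sketch below. The download URL is an assumption based on GitHub's usual release-asset layout for the v1.0.6 tag linked above, so verify it against the releases page before use:

```shell
# Hypothetical download URL, following GitHub's standard
# releases/download/<tag>/<asset> pattern for the tag above.
RKE_VERSION="v1.0.6"
RKE_URL="https://github.com/rancher/rke/releases/download/${RKE_VERSION}/rke_linux-amd64"
echo "${RKE_URL}"
# Then fetch and install it:
#   wget "${RKE_URL}"
#   chmod +x ./rke_linux-amd64
#   sudo mv ./rke_linux-amd64 /usr/local/bin/rke
```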
RKE requires a *cluster.yml* as input. An example file is shown below that
describes a Kubernetes cluster that will be mapped onto the OpenStack VMs
created earlier in this guide.

Click :download:`cluster.yml <cluster.yml>` to download the file.

.. literalinclude:: cluster.yml
Before this configuration file can be used, the external **address**
and the **internal_address** must be mapped for each control and worker node
in this file.
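As an illustration, a single node entry in cluster.yml might look like the sketch below; the IPs, hostname and key path are placeholders drawn from this example environment, not values to copy verbatim:

```yaml
# Hypothetical excerpt of one nodes[] entry in cluster.yml.
nodes:
- address: 10.12.6.85          # floating (external) IP of the VM
  internal_address: 10.0.0.8   # private IP on the internal network
  port: "22"
  role:
  - controlplane
  - etcd
  hostname_override: "onap-control-1"
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
```

Worker nodes carry the `worker` role instead of `controlplane` and `etcd`.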
From within the same directory as the cluster.yml file, simply execute::

  > rke up
The output will look something like::

  INFO[0000] Initiating Kubernetes cluster
  INFO[0000] [certificates] Generating admin certificates and kubeconfig
  INFO[0000] Successfully Deployed state file at [./cluster.rkestate]
  INFO[0000] Building Kubernetes cluster
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.82]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.249]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.74]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.85]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.238]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.89]
  INFO[0000] [dialer] Setup tunnel for host [10.12.5.11]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.90]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.244]
  INFO[0000] [dialer] Setup tunnel for host [10.12.5.165]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.126]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.111]
  INFO[0000] [dialer] Setup tunnel for host [10.12.5.160]
  INFO[0000] [dialer] Setup tunnel for host [10.12.5.191]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.195]
  INFO[0002] [network] Deploying port listener containers
  INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.85]
  INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.89]
  INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.90]
  INFO[0011] [network] Successfully pulled image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.89]
  ...
  INFO[0309] [addons] Setting up Metrics Server
  INFO[0309] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
  INFO[0309] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
  INFO[0309] [addons] Executing deploy job rke-metrics-addon
  INFO[0315] [addons] Metrics Server deployed successfully
  INFO[0315] [ingress] Setting up nginx ingress controller
  INFO[0315] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
  INFO[0316] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
  INFO[0316] [addons] Executing deploy job rke-ingress-controller
  INFO[0322] [ingress] ingress controller nginx deployed successfully
  INFO[0322] [addons] Setting up user addons
  INFO[0322] [addons] no user addons defined
  INFO[0322] Finished building Kubernetes cluster successfully
Download and install kubectl. Binaries for Linux and Mac can be found here:

https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/linux/amd64/kubectl
https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/darwin/amd64/kubectl

You only need to install kubectl where you'll launch Kubernetes commands. This
can be any machine of the Kubernetes cluster or a machine that has IP access
to the APIs.
Usually, we use the first controller as it also has access to internal
Kubernetes services, which can be convenient.
::

  > cp kube_config_cluster.yml ~/.kube/config.onap

  > export KUBECONFIG=~/.kube/config.onap

  > kubectl config use-context onap

  > kubectl get nodes -o=wide
::

  NAME             STATUS   ROLES               AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION      CONTAINER-RUNTIME
  onap-control-1   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.8      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-control-2   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.11     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-control-3   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.12     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-1       Ready    worker              3h53m   v1.15.2   10.0.0.14     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-10      Ready    worker              3h53m   v1.15.2   10.0.0.16     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-11      Ready    worker              3h53m   v1.15.2   10.0.0.18     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-12      Ready    worker              3h53m   v1.15.2   10.0.0.7      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-2       Ready    worker              3h53m   v1.15.2   10.0.0.26     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-3       Ready    worker              3h53m   v1.15.2   10.0.0.5      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-4       Ready    worker              3h53m   v1.15.2   10.0.0.6      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-5       Ready    worker              3h53m   v1.15.2   10.0.0.9      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-6       Ready    worker              3h53m   v1.15.2   10.0.0.17     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-7       Ready    worker              3h53m   v1.15.2   10.0.0.20     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-8       Ready    worker              3h53m   v1.15.2   10.0.0.10     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-9       Ready    worker              3h53m   v1.15.2   10.0.0.4      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
Example Helm client install on Linux::

  > wget https://get.helm.sh/helm-v2.16.6-linux-amd64.tar.gz

  > tar -zxvf helm-v2.16.6-linux-amd64.tar.gz

  > sudo mv linux-amd64/helm /usr/local/bin/helm
Initialize Kubernetes Cluster for use by Helm
---------------------------------------------

::

  > kubectl -n kube-system create serviceaccount tiller

  > kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

  > helm init --service-account tiller

  > kubectl -n kube-system rollout status deploy/tiller-deploy
Setting up an NFS share for Multinode Kubernetes Clusters
=========================================================
Deploying applications to a Kubernetes cluster requires Kubernetes nodes to
share a common, distributed filesystem. In this tutorial, we will set up an
NFS Master, and configure all Worker nodes of the Kubernetes cluster to play
the role of NFS slaves.

It is recommended that a separate VM, outside of the Kubernetes
cluster, be used. This is to ensure that the NFS Master does not compete for
resources with the Kubernetes Control Plane or Worker Nodes.
Launch new NFS Server VM instance
---------------------------------
.. image:: images/nfs_server/nfs_server_1.png

Select Ubuntu 18.04 as base image
---------------------------------
Select "No" on "Create New Volume"

.. image:: images/nfs_server/nfs_server_2.png

.. image:: images/nfs_server/nfs_server_3.png

.. image:: images/nfs_server/nfs_server_4.png

.. image:: images/nfs_server/nfs_server_5.png

Assign the key pair that was created/selected previously (e.g. onap_key).

.. image:: images/nfs_server/nfs_server_6.png
Apply customization script for NFS Server VM
--------------------------------------------

Click :download:`openstack-nfs-server.sh <openstack-nfs-server.sh>` to
download the script.

.. literalinclude:: openstack-nfs-server.sh

This customization script will:

.. image:: images/nfs_server/nfs_server_7.png
Assign Floating IP addresses
----------------------------

.. image:: images/nfs_server/nfs_server_8.png

The resulting floating IP assignments in this example:

.. image:: images/nfs_server/nfs_server_9.png
To properly set up an NFS share on Master and Slave nodes, the user can run
the scripts below.

Click :download:`master_nfs_node.sh <master_nfs_node.sh>` to download the
script.

.. literalinclude:: master_nfs_node.sh

Click :download:`slave_nfs_node.sh <slave_nfs_node.sh>` to download the script.

.. literalinclude:: slave_nfs_node.sh
The master_nfs_node.sh script runs on the NFS Master node and needs the list
of NFS Slave nodes as input, e.g.::

  > sudo ./master_nfs_node.sh node1_ip node2_ip ... nodeN_ip

The slave_nfs_node.sh script runs on each NFS Slave node and needs the IP of
the NFS Master node as input, e.g.::

  > sudo ./slave_nfs_node.sh master_node_ip
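Conceptually, the two scripts wire up a standard Linux NFS export and mount. The entries below sketch the kind of configuration they produce; the /dockerdata-nfs path and the mount options are assumptions, so check the downloaded scripts for the exact values:

```
# NFS Master: one /etc/exports entry per slave IP, e.g.
/dockerdata-nfs node1_ip(rw,sync,no_root_squash,no_subtree_check)

# NFS Slave: an /etc/fstab entry mounting the master's export, e.g.
master_node_ip:/dockerdata-nfs /dockerdata-nfs nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
```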
ONAP Deployment via OOM
=======================
Now that Kubernetes and Helm are installed and configured, you can prepare to
deploy ONAP. Follow the instructions in the README.md_ or look at the official
documentation to get started:

- :ref:`quick-start-label` - deploy ONAP on an existing cloud
- :ref:`user-guide-label` - a guide for operators of an ONAP instance