.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2018 Amdocs, Bell Canada

.. _HELM Best Practices Guide: https://docs.helm.sh/chart_best_practices/#requirements
.. _kubectl Cheat Sheet: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
.. _Kubernetes documentation for emptyDir: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
.. _Docker DevOps: https://wiki.onap.org/display/DW/Docker+DevOps#DockerDevOps-DockerBuild
.. _http://cd.onap.info:30223/mso/logging/debug: http://cd.onap.info:30223/mso/logging/debug
.. _Onboarding and Distributing a Vendor Software Product: https://wiki.onap.org/pages/viewpage.action?pageId=1018474
.. _README.md: https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/README.md
.. figure:: oomLogoV2-medium.png

.. _onap-on-kubernetes-with-rancher:

ONAP on HA Kubernetes Cluster
#############################

This guide provides instructions on how to set up a Highly-Available Kubernetes
Cluster. For this, we are hosting our cluster on OpenStack VMs and using the
Rancher Kubernetes Engine (RKE) to deploy and manage our Kubernetes Cluster.
The result at the end of this tutorial will be:

#. Creation of a Key Pair to use with OpenStack and RKE

#. Creation of OpenStack VMs to host the Kubernetes Control Plane

#. Creation of OpenStack VMs to host the Kubernetes Workers

#. Installation and configuration of RKE to set up an HA Kubernetes cluster

#. Installation and configuration of kubectl

#. Installation and configuration of Helm

#. Creation of an NFS Server to be used by ONAP as shared persistence

There are many ways one can execute the above steps, including automation
through the use of HEAT to set up the OpenStack VMs. To better illustrate the
steps involved, we have captured the manual creation of such an environment
using the ONAP Wind River Open Lab.
Create a Key Pair
=================
A Key Pair is required to access the created OpenStack VMs and will be used by
RKE to configure the VMs for Kubernetes.

Use an existing key pair, import one, or create a new one to assign.

.. image:: images/keys/key_pair_1.png

If you are creating a new Key Pair, be sure to create a local copy of the
Private Key through the use of "Copy Private Key to Clipboard".

For the purpose of this guide, we will assume a new local key called "onap-key"
has been downloaded and is copied into **~/.ssh/**, from which it can be
referenced.
Example::

  > chmod 600 ~/.ssh/onap-key
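The effect of the permission change can be verified with ``stat``. The snippet below demonstrates it on a throwaway temporary file; in a real setup the target would be ``~/.ssh/onap-key``:

```shell
# Demonstrate the required private-key permissions on a throwaway file;
# substitute ~/.ssh/onap-key in a real setup.
key=$(mktemp)
chmod 600 "$key"
stat -c '%a' "$key"    # prints 600: read/write for the owner only
rm -f "$key"
```

SSH refuses keys that are readable by other users, so this step is required before RKE can use the key.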
Create a Network
================

An internal network is required in order to deploy our VMs that will host the
Kubernetes cluster.

.. image:: images/network/network_1.png

.. image:: images/network/network_2.png

.. image:: images/network/network_3.png

It is better to have one network per deployment, and the name of this network
should be unique.

Now we need to create a router to attach this network to the outside world:

.. image:: images/network/network_4.png
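For repeatable environments, the same network and router can also be created with the OpenStack CLI. The function below is an illustrative sketch only (the network, subnet and router names, the CIDR, and the external network name are all assumptions, not values mandated by this guide) and is defined but not executed here:

```shell
# Illustrative sketch: create the deployment network and router with the
# OpenStack CLI. All names and the CIDR below are assumptions.
create_onap_network() {
  openstack network create onap-network
  openstack subnet create onap-subnet \
    --network onap-network --subnet-range 10.0.0.0/24
  openstack router create onap-router
  openstack router set onap-router --external-gateway public
  openstack router add subnet onap-router onap-subnet
}
```

The GUI steps shown in the screenshots above achieve the same result.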
Create a Security Group
=======================

A specific security group is also required:

.. image:: images/sg/sg_1.png

Then click on `manage rules` of the newly created security group, and finally
click on `Add Rule` and create the following one:

.. image:: images/sg/sg_2.png

.. note::
  This security configuration is overly permissive; a more restrictive
  security group will be proposed in a future version of this guide.
Create Kubernetes Control Plane VMs
===================================

The following instructions describe how to create 3 OpenStack VMs to host the
Highly-Available Kubernetes Control Plane.
ONAP workloads will not be scheduled on these Control Plane nodes.

Launch new VM instances
-----------------------

.. image:: images/cp_vms/control_plane_1.png

Select Ubuntu 18.04 as base image
---------------------------------
Select "No" for "Create New Volume"

.. image:: images/cp_vms/control_plane_2.png
Select Flavor
-------------
The recommended flavor is at least 4 vCPU and 8GB RAM.

.. image:: images/cp_vms/control_plane_3.png
Networking
----------

Use the created network:

.. image:: images/cp_vms/control_plane_4.png
Security Groups
---------------

Use the created security group:

.. image:: images/cp_vms/control_plane_5.png
Key Pair
--------
Assign the key pair that was created/selected previously (e.g. onap_key).

.. image:: images/cp_vms/control_plane_6.png
Apply customization script for Control Plane VMs
------------------------------------------------

Click :download:`openstack-k8s-controlnode.sh <openstack-k8s-controlnode.sh>`
to download the script.

.. literalinclude:: openstack-k8s-controlnode.sh
This customization script will:

* update ubuntu
* install docker
.. image:: images/cp_vms/control_plane_7.png

.. image:: images/cp_vms/control_plane_8.png
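The VM creation shown above can also be scripted with the OpenStack CLI. The function below is an illustrative sketch only (the image, flavor, network, security-group and key names are assumptions that must match your environment) and is defined but not executed here:

```shell
# Illustrative sketch: create one control-plane VM with the OpenStack CLI.
# Image, flavor, network, security-group and key names are assumptions.
create_control_node() {
  name="$1"   # e.g. onap-control-1
  openstack server create "$name" \
    --image "ubuntu-18.04" \
    --flavor "m1.large" \
    --network onap-network \
    --security-group onap-sg \
    --key-name onap-key \
    --user-data openstack-k8s-controlnode.sh
}
```

The ``--user-data`` flag applies the same customization script that is pasted into the GUI form.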
Create Kubernetes Worker VMs
============================
The following instructions describe how to create OpenStack VMs to host the
Highly-Available Kubernetes Workers. ONAP workloads will only be scheduled on
these nodes.

Launch new VM instances
-----------------------

The number and size of Worker VMs is dependent on the size of the ONAP
deployment. By default, all ONAP applications are deployed. It is possible to
customize the deployment and enable a subset of the ONAP applications. For the
purpose of this guide, however, we will deploy 12 Kubernetes Workers that have
been sized to handle the entire ONAP application workload.

.. image:: images/wk_vms/worker_1.png
Select Ubuntu 18.04 as base image
---------------------------------
Select "No" on "Create New Volume"

.. image:: images/wk_vms/worker_2.png
Select Flavor
-------------
The size of the Kubernetes hosts depends on the size of the ONAP deployment
being installed.

If a small subset of ONAP applications are being deployed (e.g. for testing
purposes), then 16GB or 32GB may be sufficient.

.. image:: images/wk_vms/worker_3.png
.. image:: images/wk_vms/worker_4.png

.. image:: images/wk_vms/worker_5.png
Key Pair
--------
Assign the key pair that was created/selected previously (e.g. onap_key).

.. image:: images/wk_vms/worker_6.png
Apply customization script for Kubernetes VM(s)
-----------------------------------------------

Click :download:`openstack-k8s-workernode.sh <openstack-k8s-workernode.sh>` to
download the script.

.. literalinclude:: openstack-k8s-workernode.sh
This customization script will:

* update ubuntu
* install docker
* install nfs common
.. image:: images/wk_vms/worker_7.png
Assign Floating IP addresses
----------------------------
Assign Floating IPs to all Control Plane and Worker VMs.
These addresses provide external access to the VMs and will be used by RKE to
configure Kubernetes on the VMs.

Repeat the following for each VM previously created:

.. image:: images/floating_ips/floating_1.png

The resulting floating IP assignments in this example:

.. image:: images/floating_ips/floating_2.png
Configure Rancher Kubernetes Engine (RKE)
=========================================

Install RKE
-----------
Download and install RKE on a VM, desktop or laptop.
Binaries can be found here for Linux and Mac:
https://github.com/rancher/rke/releases/tag/v0.2.1
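On Linux the download and install can be scripted. The function below is an illustrative sketch (the asset name follows the release page above; the install path is an assumption) and is defined but not executed here:

```shell
# Illustrative sketch: download RKE v0.2.1 and put it on the PATH.
# The install path /usr/local/bin is an assumption.
install_rke() {
  wget https://github.com/rancher/rke/releases/download/v0.2.1/rke_linux-amd64
  chmod +x rke_linux-amd64
  sudo mv rke_linux-amd64 /usr/local/bin/rke
}
```

After installation, ``rke --version`` should report v0.2.1.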
RKE requires a *cluster.yml* file as input. An example file is shown below
that describes a Kubernetes cluster mapped onto the OpenStack VMs created
earlier in this guide.

Example: **cluster.yml**

.. image:: images/rke/rke_1.png

Click :download:`cluster.yml <cluster.yml>` to download the file.

.. literalinclude:: cluster.yml
Before this configuration file can be used, the external **address** and the
**internal_address** must be mapped for each control and worker node in this
file.
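Each node entry in cluster.yml pairs the VM's floating IP with its internal network IP, along the lines of the fragment below (the addresses are illustrative examples from this guide's environment, not values to copy):

```yaml
# Illustrative fragment of cluster.yml; addresses are examples only.
nodes:
- address: 10.12.6.85            # floating IP, reachable by RKE
  internal_address: 10.0.0.8     # OpenStack internal network IP
  port: "22"
  role:
  - controlplane
  - etcd
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
- address: 10.12.6.90
  internal_address: 10.0.0.14
  port: "22"
  role:
  - worker
  user: ubuntu
  ssh_key_path: "~/.ssh/onap-key"
```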
Run RKE
-------

From within the same directory as the cluster.yml file, simply execute::

  > rke up
The output will look something like::

  INFO[0000] Initiating Kubernetes cluster
  INFO[0000] [certificates] Generating admin certificates and kubeconfig
  INFO[0000] Successfully Deployed state file at [./cluster.rkestate]
  INFO[0000] Building Kubernetes cluster
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.82]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.249]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.74]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.85]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.238]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.89]
  INFO[0000] [dialer] Setup tunnel for host [10.12.5.11]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.90]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.244]
  INFO[0000] [dialer] Setup tunnel for host [10.12.5.165]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.126]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.111]
  INFO[0000] [dialer] Setup tunnel for host [10.12.5.160]
  INFO[0000] [dialer] Setup tunnel for host [10.12.5.191]
  INFO[0000] [dialer] Setup tunnel for host [10.12.6.195]
  INFO[0002] [network] Deploying port listener containers
  INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.85]
  INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.89]
  INFO[0002] [network] Pulling image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.90]
  INFO[0011] [network] Successfully pulled image [nexus3.onap.org:10001/rancher/rke-tools:v0.1.27] on host [10.12.6.89]
  . . . .
  INFO[0309] [addons] Setting up Metrics Server
  INFO[0309] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
  INFO[0309] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
  INFO[0309] [addons] Executing deploy job rke-metrics-addon
  INFO[0315] [addons] Metrics Server deployed successfully
  INFO[0315] [ingress] Setting up nginx ingress controller
  INFO[0315] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
  INFO[0316] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
  INFO[0316] [addons] Executing deploy job rke-ingress-controller
  INFO[0322] [ingress] ingress controller nginx deployed successfully
  INFO[0322] [addons] Setting up user addons
  INFO[0322] [addons] no user addons defined
  INFO[0322] Finished building Kubernetes cluster successfully
Install Kubectl
===============

Download and install kubectl. Binaries can be found here for Linux and Mac:

https://storage.googleapis.com/kubernetes-release/release/v1.15.2/bin/linux/amd64/kubectl

https://storage.googleapis.com/kubernetes-release/release/v1.15.2/bin/darwin/amd64/kubectl

Validate the deployment::

  > cp kube_config_cluster.yml ~/.kube/config.onap

  > export KUBECONFIG=~/.kube/config.onap

  > kubectl config use-context onap

  > kubectl get nodes -o=wide
Example output::

  NAME             STATUS   ROLES               AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION      CONTAINER-RUNTIME
  onap-control-1   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.8      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-control-2   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.11     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-control-3   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.12     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-1       Ready    worker              3h53m   v1.15.2   10.0.0.14     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-10      Ready    worker              3h53m   v1.15.2   10.0.0.16     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-11      Ready    worker              3h53m   v1.15.2   10.0.0.18     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-12      Ready    worker              3h53m   v1.15.2   10.0.0.7      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-2       Ready    worker              3h53m   v1.15.2   10.0.0.26     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-3       Ready    worker              3h53m   v1.15.2   10.0.0.5      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-4       Ready    worker              3h53m   v1.15.2   10.0.0.6      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-5       Ready    worker              3h53m   v1.15.2   10.0.0.9      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-6       Ready    worker              3h53m   v1.15.2   10.0.0.17     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-7       Ready    worker              3h53m   v1.15.2   10.0.0.20     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-8       Ready    worker              3h53m   v1.15.2   10.0.0.10     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
  onap-k8s-9       Ready    worker              3h53m   v1.15.2   10.0.0.4      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
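A quick sanity check is to count how many nodes report ``Ready``. The filter below is demonstrated against a captured three-line sample of the output above; on a live cluster you would pipe ``kubectl get nodes --no-headers`` into the same ``awk`` instead:

```shell
# Count Ready nodes, demonstrated on a captured sample of 'kubectl get nodes'.
sample='onap-control-1 Ready controlplane,etcd 3h53m v1.15.2
onap-k8s-1 Ready worker 3h53m v1.15.2
onap-k8s-2 NotReady worker 3h53m v1.15.2'
ready=$(printf '%s\n' "$sample" | awk '$2 == "Ready"' | wc -l | tr -d ' ')
echo "$ready"    # prints 2: two of the three sample nodes are Ready
```

For the 15-node cluster built in this guide, the count should be 15 before proceeding.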
Install Helm
============

Example Helm client install on Linux::

  > wget http://storage.googleapis.com/kubernetes-helm/helm-v2.14.2-linux-amd64.tar.gz

  > tar -zxvf helm-v2.14.2-linux-amd64.tar.gz

  > sudo mv linux-amd64/helm /usr/local/bin/helm
Initialize Kubernetes Cluster for use by Helm
---------------------------------------------

::

  > kubectl -n kube-system create serviceaccount tiller

  > kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

  > helm init --service-account tiller

  > kubectl -n kube-system rollout status deploy/tiller-deploy
Setting up an NFS share for Multinode Kubernetes Clusters
=========================================================
Deploying applications to a Kubernetes cluster requires Kubernetes nodes to
share a common, distributed filesystem. In this tutorial, we will set up an
NFS Master, and configure all Worker nodes of the Kubernetes cluster to play
the role of NFS slaves.

It is recommended that a separate VM, outside of the Kubernetes cluster, be
used. This ensures that the NFS Master does not compete for resources with
the Kubernetes Control Plane or Worker Nodes.
Launch new NFS Server VM instance
---------------------------------
.. image:: images/nfs_server/nfs_server_1.png

Select Ubuntu 18.04 as base image
---------------------------------
Select "No" on "Create New Volume"

.. image:: images/nfs_server/nfs_server_2.png
.. image:: images/nfs_server/nfs_server_3.png

.. image:: images/nfs_server/nfs_server_4.png

.. image:: images/nfs_server/nfs_server_5.png
Key Pair
--------
Assign the key pair that was created/selected previously (e.g. onap_key).

.. image:: images/nfs_server/nfs_server_6.png
Apply customization script for NFS Server VM
--------------------------------------------

Click :download:`openstack-nfs-server.sh <openstack-nfs-server.sh>` to download
the script.

.. literalinclude:: openstack-nfs-server.sh
This customization script will:

* update ubuntu
* install nfs server
.. image:: images/nfs_server/nfs_server_7.png
Assign Floating IP addresses
----------------------------

Assign a Floating IP to the NFS Server VM:

.. image:: images/nfs_server/nfs_server_8.png

The resulting floating IP assignment in this example:

.. image:: images/nfs_server/nfs_server_9.png
To properly set up an NFS share on Master and Slave nodes, the user can run
the scripts below.

Click :download:`master_nfs_node.sh <master_nfs_node.sh>` to download the
script.

.. literalinclude:: master_nfs_node.sh

Click :download:`slave_nfs_node.sh <slave_nfs_node.sh>` to download the script.

.. literalinclude:: slave_nfs_node.sh
The master_nfs_node.sh script runs on the NFS Master node and needs the list
of NFS Slave nodes as input, e.g.::

  > sudo ./master_nfs_node.sh node1_ip node2_ip ... nodeN_ip

The slave_nfs_node.sh script runs on each NFS Slave node and needs the IP of
the NFS Master node as input, e.g.::

  > sudo ./slave_nfs_node.sh master_node_ip
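Conceptually, the master script turns that node list into an ``/etc/exports`` entry granting each slave access to the shared directory. That expansion can be sketched as follows (the share path, the slave IPs, and the export options here are illustrative assumptions; see master_nfs_node.sh for the authoritative behaviour):

```shell
# Sketch: build an /etc/exports line from a list of slave node IPs.
# The share path and export options are illustrative assumptions.
share=/nfs_share
entry="$share"
for ip in 10.0.0.14 10.0.0.26 10.0.0.5; do
  entry="$entry ${ip}(rw,sync,no_root_squash,no_subtree_check)"
done
echo "$entry"
```

The resulting line lists the share followed by one `ip(options)` clause per slave, which is the shape `exportfs` expects.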
ONAP Deployment via OOM
=======================
Now that Kubernetes and Helm are installed and configured, you can prepare to
deploy ONAP. Follow the instructions in the README.md_ or look at the official
documentation to get started:

- :ref:`quick-start-label` - deploy ONAP on an existing cloud
- :ref:`user-guide-label` - a guide for operators of an ONAP instance