.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
.. Copyright 2018 Amdocs, Bell Canada

.. _HELM Best Practices Guide: https://docs.helm.sh/chart_best_practices/#requirements
.. _kubectl Cheat Sheet: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
.. _Kubernetes documentation for emptyDir: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
.. _Docker DevOps: https://wiki.onap.org/display/DW/Docker+DevOps#DockerDevOps-DockerBuild
.. _http://cd.onap.info:30223/mso/logging/debug: http://cd.onap.info:30223/mso/logging/debug
.. _Onboarding and Distributing a Vendor Software Product: https://wiki.onap.org/pages/viewpage.action?pageId=1018474
.. _README.md: https://gerrit.onap.org/r/gitweb?p=oom.git;a=blob;f=kubernetes/README.md
.. figure:: oomLogoV2-medium.png

.. _onap-on-kubernetes-with-rancher:

ONAP on Kubernetes with Rancher
###############################
The following instructions will step you through the installation of Kubernetes
on an OpenStack environment with Rancher. The development lab used for this
installation is the ONAP Windriver lab.

This guide does not cover all of the steps required to set up your OpenStack
environment (e.g. OAM networks and security groups), but there is a wealth of
OpenStack information on the web.
The following instructions describe how to create an OpenStack VM running
Rancher. This node will not be used to host ONAP itself; it will be used
exclusively by Rancher.

Launch new VM instance to host the Rancher Server
-------------------------------------------------

.. image:: Rancher-Launch_new_VM_instance_to_host_the_Rancher_Server.jpeg
Select Ubuntu 16.04 as base image
---------------------------------
Select "No" on "Create New Volume"

.. image:: Rancher-Select_Ubuntu_16.04_as_base_image.jpeg
Known issues exist if the flavor is too small for Rancher. Please select a
flavor with at least 4 vCPU and 8GB RAM.

.. image:: Rancher-Select_Flavor.jpeg

.. image:: Rancher-Networking.jpeg

.. image:: Rancher-Security_Groups.jpeg
Use an existing key pair (e.g. onap_key), import an existing one or create a
new one.

.. image:: Rancher-Key_Pair.jpeg
Apply customization script for the Rancher VM
---------------------------------------------

Click :download:`openstack-rancher.sh <openstack-rancher.sh>` to download the script.

.. literalinclude:: openstack-rancher.sh
This customization script will:

* set up root access to the VM (comment out if you wish to disable this
  capability and restrict access to ssh only)

The Beijing release of OOM only supports Helm 2.8.2, not the 2.7.2 shown in
the screen capture below. The supported versions of all the software components
are listed in the :ref:`cloud-setup-guide-label`.

.. image:: Apply_customization_script_for_the_Rancher_VM.jpeg
.. image:: Rancher-Launch_Instance.jpeg

Assign Floating IP for external access
--------------------------------------

.. image:: Rancher-Allocate_Floating_IP.jpeg

.. image:: Rancher-Manage_Floating_IP_Associations.jpeg

.. image:: Rancher-Launch_Instance.jpeg
Kubernetes Installation
=======================

Launch new VM instance(s) to create a Kubernetes single host or cluster
-----------------------------------------------------------------------

#. do not append a '-1' suffix (e.g. sb4-k8s)
#. increase the count to the number of Kubernetes worker nodes you want (e.g. 3)

.. image:: K8s-Launch_new_VM_instance_to_create_a_Kubernetes_single_host_or_cluster.jpeg
Select Ubuntu 16.04 as base image
---------------------------------
Select "No" on "Create New Volume"

.. image:: K8s-Select_Ubuntu_16.04_as_base_image.jpeg
The size of a Kubernetes host depends on the size of the ONAP deployment that
will be installed.

As of the Beijing release a minimum of 3 x 32GB hosts will be needed to run a
full ONAP deployment (all components).

If a small subset of ONAP components is being deployed for testing purposes,
then a single 16GB or 32GB host should suffice.

.. image:: K8s-Select_Flavor.jpeg
.. image:: K8s-Networking.jpeg

.. image:: K8s-Security_Group.jpeg
Use an existing key pair (e.g. onap_key), import an existing one or create a
new one.

.. image:: K8s-Key_Pair.jpeg
Apply customization script for Kubernetes VM(s)
-----------------------------------------------

Click :download:`openstack-k8s-node.sh <openstack-k8s-node.sh>` to
download the script.

.. literalinclude:: openstack-k8s-node.sh
This customization script will:

* set up root access to the VM (comment out if you wish to disable this
  capability and restrict access to ssh only)

* install nfs-common (see the NFS share configuration steps below)

Ensure you are using the correct versions as described in the
:ref:`cloud-setup-guide-label`.

.. image:: K8s-Launch_Instance.jpeg
Assign Floating IP for external access
--------------------------------------

.. image:: K8s-Manage_Floating_IP_Associations.jpeg

.. image:: K8s-Launch_Instance.jpeg
Setting up an NFS share for Multinode Kubernetes Clusters
=========================================================
The figure below illustrates a possible topology of a multinode Kubernetes
cluster.

.. image:: k8s-topology.jpg

One node, the Master Node, runs Rancher and Helm clients and connects to all
the Kubernetes nodes in the cluster. Kubernetes nodes, in turn, run Rancher,
Kubernetes and Tiller (Helm) agents, which receive, execute, and respond to
commands issued by the Master Node (e.g. kubectl or helm operations). Note that
the Master Node can be either a remote machine that the user can log in to or a
local machine (e.g. laptop, desktop) that has access to the Kubernetes cluster.

Deploying applications to a Kubernetes cluster requires Kubernetes nodes to
share a common, distributed filesystem. One node in the cluster plays the role
of NFS Master (not to be confused with the Master Node that runs Rancher and
Helm clients, which is located outside the cluster), while all the other
cluster nodes play the role of NFS slaves. In the figure above, the left-most
cluster node plays the role of NFS Master (indicated by the crown symbol). To
properly set up an NFS share on the Master and Slave nodes, the user can run
the scripts below.
Click :download:`master_nfs_node.sh <master_nfs_node.sh>` to download the script.

.. literalinclude:: master_nfs_node.sh

Click :download:`slave_nfs_node.sh <slave_nfs_node.sh>` to download the script.

.. literalinclude:: slave_nfs_node.sh
The master_nfs_node.sh script runs on the NFS Master node and needs the list of
NFS Slave nodes as input, e.g.::

  > sudo ./master_nfs_node.sh node1_ip node2_ip ... nodeN_ip

The slave_nfs_node.sh script runs on each NFS Slave node and needs the IP of
the NFS Master node as input, e.g.::

  > sudo ./slave_nfs_node.sh master_node_ip
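What these scripts set up can be sketched as follows (a simplified outline
only; the shared directory path /dockerdata-nfs and the export options shown
here are assumptions for illustration, and the downloadable scripts above are
authoritative). On the NFS Master, the shared directory is exported to the
slave nodes via an /etc/exports entry such as::

  # export the shared directory read-write to the slave nodes
  /dockerdata-nfs *(rw,no_root_squash,no_subtree_check)

while on each NFS Slave, the master's export is mounted at the same path so
that every node in the cluster sees one common filesystem::

  # mount the master's export at the same local path
  > sudo mount -t nfs master_node_ip:/dockerdata-nfs /dockerdata-nfs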
Configuration (Rancher and Kubernetes)
======================================

Access Rancher server via web browser
-------------------------------------
(e.g. http://10.12.6.16:8080/env/1a5/apps/stacks)

.. image:: Access_Rancher_server_via_web_browser.jpeg
Add Kubernetes Environment to Rancher
-------------------------------------

1. Select "Manage Environments"

.. image:: Add_Kubernetes_Environment_to_Rancher.png

2. Select "Add Environment"

.. image:: Select_Add_Environment.png

3. Add a unique name for your new Rancher environment

4. Select the Kubernetes template

5. Click create

.. image:: Click_create.jpeg

6. Select the newly named environment (e.g. SB4) from the dropdown list (top left).

Rancher is now waiting for a Kubernetes Host to be added.
.. image:: K8s-Assign_Floating_IP_for_external_access.jpeg

1. If this is the first (or only) host being added, click on the "Add a host" link

.. image:: K8s-Assign_Floating_IP_for_external_access.jpeg

and click on "Save" (accept defaults).

.. image:: and_click_on_Save_accept_defaults.jpeg

otherwise select INFRASTRUCTURE → Hosts and click on "Add Host"

.. image:: otherwise_select_INFRASTRUCTURE_Hosts_and_click_on_Add_Host.jpg
2. Enter the management IP for the k8s VM (e.g. 10.0.0.4) that was just created.

3. Click on the "Copy to Clipboard" button

4. Click on the "Close" button

.. image:: Click_on_Close_button.jpeg

Without the 10.0.0.4 IP, the CATTLE_AGENT_IP will be derived on the host, but
it may not be a routable IP.
Configure Kubernetes Host
-------------------------

1. Login to the new Kubernetes Host::

    > ssh -i ~/oom-key.pem ubuntu@10.12.5.172
    The authenticity of host '10.12.5.172 (10.12.5.172)' can't be established.
    ECDSA key fingerprint is SHA256:tqxayN58nCJKOJcWrEZzImkc0qKQHDDfUTHqk4WMcEI.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '10.12.5.172' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-64-generic x86_64)

    * Documentation: https://help.ubuntu.com
    * Management: https://landscape.canonical.com
    * Support: https://ubuntu.com/advantage

    Get cloud support with Ubuntu Advantage Cloud Guest:
      http://www.ubuntu.com/business/services/cloud

    180 packages can be updated.
    100 updates are security updates.

    The programs included with the Ubuntu system are free software;
    the exact distribution terms for each program are described in the
    individual files in /usr/share/doc/*/copyright.

    Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
    applicable law.

    To run a command as administrator (user "root"), use "sudo <command>".
    See "man sudo_root" for details.
2. Paste Clipboard content and hit enter to install Rancher Agent::

    ubuntu@sb4-k8s-1:~$ sudo docker run -e CATTLE_AGENT_IP="10.0.0.4" --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.9 http://10.12.6.16:8080/v1/scripts/5D757C68BD0A2125602A:1514678400000:yKW9xHGJDLvq6drz2eDzR2mjato
    Unable to find image 'rancher/agent:v1.2.9' locally
    v1.2.9: Pulling from rancher/agent
    b3e1c725a85f: Pull complete
    6071086409fc: Pull complete
    d0ac3b234321: Pull complete
    87f567b5cf58: Pull complete
    a63e24b217c4: Pull complete
    d0a3f58caef0: Pull complete
    16914729cfd3: Pull complete
    dc5c21984c5b: Pull complete
    d7e8f9784b20: Pull complete
    Digest: sha256:c21255ac4d94ffbc7b523f870f20ea5189b68fa3d642800adb4774aab4748e66
    Status: Downloaded newer image for rancher/agent:v1.2.9

    INFO: Running Agent Registration Process, CATTLE_URL=http://10.12.6.16:8080/v1
    INFO: Attempting to connect to: http://10.12.6.16:8080/v1
    INFO: http://10.12.6.16:8080/v1 is accessible
    INFO: Inspecting host capabilities
    INFO: Boot2Docker: false
    INFO: Host writable: true
    INFO: Token: xxxxxxxx
    INFO: Running registration
    INFO: Printing Environment
    INFO: ENV: CATTLE_ACCESS_KEY=98B35AC484FBF820E0AD
    INFO: ENV: CATTLE_AGENT_IP=10.0.0.4
    INFO: ENV: CATTLE_HOME=/var/lib/cattle
    INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken
    INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx
    INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx
    INFO: ENV: CATTLE_URL=http://10.12.6.16:8080/v1
    INFO: ENV: DETECTED_CATTLE_AGENT_IP=10.12.5.172
    INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.9
    INFO: Launched Rancher Agent: c27ee0f3dc4c783b0db647ea1f73c35b3843a4b8d60b96375b1a05aa77d83136
3. Return to the Rancher environment (e.g. SB4) and wait for services to
   complete (10-15 mins)

.. image:: Return_to_Rancher_environment_eg_SB4_and_wait_for_services_to_complete_10-15_mins.jpeg
Configure kubectl and helm
==========================
In this example we are configuring kubectl and helm that have been installed
(as a convenience) onto the Rancher and Kubernetes hosts. Typically you would
install them both on your PC and remotely connect to the cluster. The following
procedure would remain the same.
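If you do install the clients locally, the steps can be sketched as follows
(an illustrative outline only; the version numbers shown are assumptions, so
check the supported versions listed in the :ref:`cloud-setup-guide-label`
before downloading)::

  # download a kubectl client binary (version shown is an example)
  > curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.8.10/bin/linux/amd64/kubectl
  > chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl

  # download and unpack a matching helm client (2.8.2 for Beijing)
  > wget https://storage.googleapis.com/kubernetes-helm/helm-v2.8.2-linux-amd64.tar.gz
  > tar -zxvf helm-v2.8.2-linux-amd64.tar.gz
  > sudo mv linux-amd64/helm /usr/local/bin/helm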
1. Click on CLI and then click on "Generate Config"

.. image:: Click_on_CLI_and_then_click_on_Generate_Config.jpeg

2. Click on "Copy to Clipboard". Wait until you see a "token"; do not copy
   user+password, as the server is not ready at that point

.. image:: Click_on_Copy_to_Clipboard-wait_until_you_see_a_token-do_not_copy_user+password-the_server_is_not_ready_at_that_point.jpeg
3. Create a .kube directory in the user directory (if one does not exist)::

    ubuntu@sb4-k8s-1:~$ mkdir .kube
    ubuntu@sb4-k8s-1:~$ vi .kube/config

4. Paste the contents of the Clipboard into a file called "config" and save the file::

    insecure-skip-tls-verify: true
    server: "https://10.12.6.16:8080/r/projects/1a7/kubernetes:6443"
    current-context: "SB4"
    token: "QmFzaWMgTlRBd01qZzBOemc)TkRrMk1UWkNOMFpDTlVFNlExcHdSa1JhVZreE5XSm1TRGhWU2t0Vk1sQjVhalZaY0dWaFVtZGFVMHQzWW1WWVJtVmpSQT09"
5. Validate that kubectl is able to connect to the kubernetes cluster::

    ubuntu@sb4-k8s-1:~$ kubectl config get-contexts
    CURRENT   NAME   CLUSTER   AUTHINFO   NAMESPACE

   and show running pods::

    ubuntu@sb4-k8s-1:~$ kubectl get pods --all-namespaces -o=wide
    NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE   IP              NODE
    kube-system   heapster-76b8cd7b5-q7p42               1/1     Running   0          13m   10.42.213.49    sb4-k8s-1
    kube-system   kube-dns-5d7b487c9-c6f67               3/3     Running   0          13m   10.42.181.110   sb4-k8s-1
    kube-system   kubernetes-dashboard-f9577fffd-kswjg   1/1     Running   0          13m   10.42.105.113   sb4-k8s-1
    kube-system   monitoring-grafana-997796fcf-vg9h9     1/1     Running   0          13m   10.42.141.58    sb4-k8s-1
    kube-system   monitoring-influxdb-56cbd96b-hk66b     1/1     Running   0          13m   10.42.246.90    sb4-k8s-1
    kube-system   tiller-deploy-cc96d4f6b-v29k9          1/1     Running   0          13m   10.42.147.248   sb4-k8s-1
6. Validate that helm is running at the right version. If not, an error like
   this will be displayed::

    ubuntu@sb4-k8s-1:~$ helm list
    Error: incompatible versions client[v2.9.1] server[v2.6.1]
7. Upgrade the server-side component of helm (tiller) via ``helm init --upgrade``::

    ubuntu@sb4-k8s-1:~$ helm init --upgrade
    Creating /home/ubuntu/.helm
    Creating /home/ubuntu/.helm/repository
    Creating /home/ubuntu/.helm/repository/cache
    Creating /home/ubuntu/.helm/repository/local
    Creating /home/ubuntu/.helm/plugins
    Creating /home/ubuntu/.helm/starters
    Creating /home/ubuntu/.helm/cache/archive
    Creating /home/ubuntu/.helm/repository/repositories.yaml
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Adding local repo with URL: http://127.0.0.1:8879/charts
    $HELM_HOME has been configured at /home/ubuntu/.helm.

    Tiller (the Helm server-side component) has been upgraded to the current version.
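To confirm that the client and server versions now match, you can run
``helm version`` once tiller has finished restarting (illustrative output
only; the exact versions depend on your installation)::

    ubuntu@sb4-k8s-1:~$ helm version
    Client: &version.Version{SemVer:"v2.8.2", ...}
    Server: &version.Version{SemVer:"v2.8.2", ...}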
ONAP Deployment via OOM
=======================
Now that Kubernetes and Helm are installed and configured you can prepare to
deploy ONAP. Follow the instructions in the README.md_ or look at the official
documentation to get started:

- :ref:`quick-start-label` - deploy ONAP on an existing cloud
- :ref:`user-guide-label` - a guide for operators of an ONAP instance