X-Git-Url: https://gerrit.onap.org/r/gitweb?a=blobdiff_plain;f=docs%2Foom_setup_kubernetes_rancher.rst;h=428fa59a4e2e52efdea2e353ab06764a115c44a2;hb=refs%2Fchanges%2F80%2F110480%2F1;hp=3ccde8d41812fc3007fdb879c0b0ee589d1c28ce;hpb=47ad13899b9b91cc7266c93e1635ab97550a784e;p=oom.git

diff --git a/docs/oom_setup_kubernetes_rancher.rst b/docs/oom_setup_kubernetes_rancher.rst
index 3ccde8d418..428fa59a4e 100644
--- a/docs/oom_setup_kubernetes_rancher.rst
+++ b/docs/oom_setup_kubernetes_rancher.rst
@@ -30,19 +30,19 @@ to deploy and manage our Kubernetes Cluster.
 
 The result at the end of this tutorial will be:
 
-*1.* Creation of a Key Pair to use with Open Stack and RKE
+#. Creation of a Key Pair to use with OpenStack and RKE
 
-*2.* Creation of OpenStack VMs to host Kubernetes Control Plane
+#. Creation of OpenStack VMs to host the Kubernetes Control Plane
 
-*3.* Creation of OpenStack VMs to host Kubernetes Workers
+#. Creation of OpenStack VMs to host the Kubernetes Workers
 
-*4.* Installation and configuration of RKE to setup an HA Kubernetes
+#. Installation and configuration of RKE to set up an HA Kubernetes cluster
 
-*5.* Installation and configuration of kubectl
+#. Installation and configuration of kubectl
 
-*5.* Installation and configuration of helm
+#. Installation and configuration of helm
 
-*7.* Creation of an NFS Server to be used by ONAP as shared persistance
+#. Creation of an NFS Server to be used by ONAP as shared persistence
 
 There are many ways one can execute the above steps, including automation
 through the use of HEAT to set up the OpenStack VMs. To better illustrate the
 steps involved, we have captured the manual creation of such an environment
 using the ONAP Wind River Open Lab.
@@ -62,12 +62,49 @@ Use an existing key pair, import one or create a new one to assign.
 
 For the purpose of this guide, we will assume a new local key called "onap-key"
 has been downloaded and is copied into **~/.ssh/**, from which it can be
 referenced.
 
-Example:
+Example::
+
   > mv onap-key ~/.ssh
 
   > chmod 600 ~/.ssh/onap-key
 
+Create Network
+==============
+
+An internal network is required in order to deploy the VMs that will host
+Kubernetes.
+
+.. image:: images/network/network_1.png
+
+.. image:: images/network/network_2.png
+
+.. image:: images/network/network_3.png
+
+.. Note::
+  It's better to have one network per deployment, and the name of this
+  network should obviously be unique.
+
+Now we need to create a router to attach this network to the external network:
+
+.. image:: images/network/network_4.png
+
+Create Security Group
+=====================
+
+A specific security group is also required:
+
+.. image:: images/sg/sg_1.png
+
+Then click on `manage rules` of the newly created security group,
+and finally click on `Add Rule` to create the following rule:
+
+.. image:: images/sg/sg_2.png
+
+.. Note::
+  This rule is clearly too permissive; a more restrictive security group
+  will be proposed in a future version.
+
 Create Kubernetes Control Plane VMs
 ===================================
 
@@ -95,11 +132,15 @@ The recommended flavor is at least 4 vCPU and 8GB ram.
 
 Networking
 ----------
 
+Use the created network:
+
 .. image:: images/cp_vms/control_plane_4.png
 
 Security Groups
 ---------------
 
+Use the created security group:
+
 .. image:: images/cp_vms/control_plane_5.png
 
 Key Pair
 --------
@@ -111,7 +152,7 @@ Assign the key pair that was created/selected previously (e.g. onap_key).
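+
+The key pair assignment can be cross-checked from the command line. This is a
+minimal sketch, assuming the OpenStack CLI client is installed and your cloud
+credentials (e.g. an openrc file) have been sourced; neither step is covered
+by this guide::
+
+  # assumption: openstack CLI installed and credentials sourced
+  > openstack keypair list
+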
 Apply customization script for Control Plane VMs
 ------------------------------------------------
 
-Click :download:`openstack-k8s-controlnode.sh `
+Click :download:`openstack-k8s-controlnode.sh `
 to download the script.
 
 .. literalinclude:: openstack-k8s-controlnode.sh
    :language: bash
 
@@ -139,10 +180,10 @@ Highly-Available Kubernetes Workers. ONAP workloads will only be scheduled on th
 Launch new VM instances
 -----------------------
 
-The number and size of Worker VMs is depenedent on the size of the ONAP deployment.
-By default, all ONAP applications are deployed. It's possible to customize the deployment
+The number and size of Worker VMs are dependent on the size of the ONAP deployment.
+By default, all ONAP applications are deployed. It's possible to customize the deployment
 and enable a subset of the ONAP applications. For the purpose of this guide, however,
-we will deploy 12 Kubernetes Workers that have been sized to handle the entire ONAP
+we will deploy 12 Kubernetes Workers that have been sized to handle the entire ONAP
 application workload.
 
 .. image:: images/wk_vms/worker_1.png
@@ -226,16 +267,12 @@ Configure Rancher Kubernetes Engine (RKE)
 
 Install RKE
 -----------
 Download and install RKE on a VM, desktop or laptop.
-Binaries can be found here for Linux and Mac: https://github.com/rancher/rke/releases/tag/v0.2.1
+Binaries can be found here for Linux and Mac: https://github.com/rancher/rke/releases/tag/v1.0.6
 
 RKE requires a *cluster.yml* as input. An example file is shown below that
 describes a Kubernetes cluster that will be mapped onto the OpenStack VMs
 created earlier in this guide.
 
-Example: **cluster.yml**
-
-.. image:: images/rke/rke_1.png
-
 Click :download:`cluster.yml `
 to download the configuration file.
 
@@ -250,13 +287,11 @@ in this file.
 Run RKE
 -------
 
-From within the same directory as the cluster.yml file, simply execute:
+From within the same directory as the cluster.yml file, simply execute::
 
   > rke up
 
-The output will look something like:
-
-.. code-block::
+The output will look something like::
 
     INFO[0000] Initiating Kubernetes cluster
     INFO[0000] [certificates] Generating admin certificates and kubeconfig
@@ -302,11 +337,20 @@ Install Kubectl
 
 Download and install kubectl. Binaries can be found here for Linux and Mac:
 
-https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/linux/amd64/kubectl
-https://storage.googleapis.com/kubernetes-release/release/v1.13.5/bin/darwin/amd64/kubectl
+https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/linux/amd64/kubectl
+https://storage.googleapis.com/kubernetes-release/release/v1.15.11/bin/darwin/amd64/kubectl
+
+You only need to install kubectl on the machine from which you will run
+Kubernetes commands. This can be any machine of the Kubernetes cluster, or a
+machine that has IP access to the APIs.
+Usually, we use the first control plane node, as it also has access to
+internal Kubernetes services, which can be convenient.
 
 Validate deployment
 -------------------
+
+::
+
   > cp kube_config_cluster.yml ~/.kube/config.onap
 
   > export KUBECONFIG=~/.kube/config.onap
 
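+As a first connectivity check, the cluster endpoints can be queried right
+away. This is a minimal sketch; it assumes the two commands above succeeded::
+
+  # assumption: KUBECONFIG points at the kube_config_cluster.yml copied above
+  > kubectl cluster-info
+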
@@ -315,38 +359,42 @@ Validate deployment
 
   > kubectl get nodes -o=wide
 
-.. code-block::
+::
 
     NAME             STATUS   ROLES               AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION      CONTAINER-RUNTIME
-    onap-control-1   Ready    controlplane,etcd   3h53m   v1.13.5   10.0.0.8      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-control-2   Ready    controlplane,etcd   3h53m   v1.13.5   10.0.0.11     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-control-3   Ready    controlplane,etcd   3h53m   v1.13.5   10.0.0.12     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-1       Ready    worker              3h53m   v1.13.5   10.0.0.14     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-10      Ready    worker              3h53m   v1.13.5   10.0.0.16     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-11      Ready    worker              3h53m   v1.13.5   10.0.0.18     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-12      Ready    worker              3h53m   v1.13.5   10.0.0.7      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-2       Ready    worker              3h53m   v1.13.5   10.0.0.26     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-3       Ready    worker              3h53m   v1.13.5   10.0.0.5      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-4       Ready    worker              3h53m   v1.13.5   10.0.0.6      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-5       Ready    worker              3h53m   v1.13.5   10.0.0.9      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-6       Ready    worker              3h53m   v1.13.5   10.0.0.17     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-7       Ready    worker              3h53m   v1.13.5   10.0.0.20     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-8       Ready    worker              3h53m   v1.13.5   10.0.0.10     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
-    onap-k8s-9       Ready    worker              3h53m   v1.13.5   10.0.0.4      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-control-1   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.8      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-control-2   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.11     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-control-3   Ready    controlplane,etcd   3h53m   v1.15.2   10.0.0.12     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-1       Ready    worker              3h53m   v1.15.2   10.0.0.14     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-10      Ready    worker              3h53m   v1.15.2   10.0.0.16     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-11      Ready    worker              3h53m   v1.15.2   10.0.0.18     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-12      Ready    worker              3h53m   v1.15.2   10.0.0.7      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-2       Ready    worker              3h53m   v1.15.2   10.0.0.26     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-3       Ready    worker              3h53m   v1.15.2   10.0.0.5      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-4       Ready    worker              3h53m   v1.15.2   10.0.0.6      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-5       Ready    worker              3h53m   v1.15.2   10.0.0.9      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-6       Ready    worker              3h53m   v1.15.2   10.0.0.17     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-7       Ready    worker              3h53m   v1.15.2   10.0.0.20     <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-8       Ready    worker              3h53m   v1.15.2   10.0.0.10    <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
+    onap-k8s-9       Ready    worker              3h53m   v1.15.2   10.0.0.4      <none>        Ubuntu 18.04 LTS   4.15.0-22-generic   docker://18.9.5
 
 Install Helm
 ============
 
-Example Helm client install on Linux:
-
-> wget http://storage.googleapis.com/kubernetes-helm/helm-v2.12.3-linux-amd64.tar.gz
+Example Helm client install on Linux::
+
+  > wget https://get.helm.sh/helm-v2.16.6-linux-amd64.tar.gz
 
-> tar -zxvf helm-v2.12.3-linux-amd64.tar.gz
+  > tar -zxvf helm-v2.16.6-linux-amd64.tar.gz
 
   > sudo mv linux-amd64/helm /usr/local/bin/helm
 
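+The client install can be sanity-checked at this point. This is a minimal
+sketch; tiller (the server side) is not deployed yet, so only the client is
+queried::
+
+  # tiller is not installed yet, hence --client only
+  > helm version --client
+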
 Initialize Kubernetes Cluster for use by Helm
 ---------------------------------------------
+
+::
+
   > kubectl -n kube-system create serviceaccount tiller
 
   > kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
 
@@ -406,7 +454,7 @@ Apply customization script for NFS Server VM
 
 Click :download:`openstack-nfs-server.sh `
 to download the script.
 
-.. literalinclude:: openstack-k8s-workernode.sh
+.. literalinclude:: openstack-nfs-server.sh
    :language: bash
 
 This customization script will:
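+
+Once the NFS Server VM is configured, its export should be reachable from the
+Worker VMs. As a hedged sketch that is not part of the script above (it
+assumes ``nfs-common`` is installed on the worker and that the real server IP
+is substituted), the share can be checked from any worker with::
+
+  # assumption: nfs-common installed on the worker; replace <nfs-server-ip>
+  > showmount -e <nfs-server-ip>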