**VFC Installation over OOM**
=============================

This guide helps developers and testers install VF-C over OOM.
**2 Component & function**
==========================

VF-C currently has the following repos in https://gerrit.onap.org/r/admin/repos/q/filter:vfc:
.. list-table::
   :header-rows: 1

   * - Repo name
     - Description
   * - vfc/nfvo/lcm
     - NS life cycle management
   * - vfc/nfvo/driver/vnfm/svnfm
     - Specific VNFM drivers
   * - vfc/nfvo/driver/vnfm/gvnfm
     - Generic VNFM drivers
   * - vfc/nfvo/db
     - Stand-alone database microservice, provides the database service for each VF-C component
   * - vfc/nfvo/driver/ems
     - EMS driver
   * - vfc/nfvo/driver/sfc
     - SFC driver
   * - vfc/nfvo/resmanagement
     - NS Resource Management
   * - vfc/nfvo/multivimproxy
     - Multi-vim proxy, provides the multivim indirect mode proxy
   * - vfc/gvnfm/vnflcm
     - Generic VNFM VNF LCM
   * - vfc/gvnfm/vnfmgr
     - Generic VNFM VNF Mgr
   * - vfc/gvnfm/vnfres
     - Generic VNFM VNF Resource Management
VF-C Docker Images
~~~~~~~~~~~~~~~~~~

::

    nexus3.onap.org:10001/onap/vfc/nslcm
    nexus3.onap.org:10001/onap/vfc/db
    nexus3.onap.org:10001/onap/vfc/gvnfmdriver
    nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/huawei
    nexus3.onap.org:10001/onap/vfc/ztevnfmdriver
    nexus3.onap.org:10001/onap/vfc/vnflcm
    nexus3.onap.org:10001/onap/vfc/vnfmgr
    nexus3.onap.org:10001/onap/vfc/vnfres

Deprecated from the Guilin release:

::

    nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/nokia
    nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/nokiav2
    nexus3.onap.org:10001/onap/vfc/emsdriver
    nexus3.onap.org:10001/onap/vfc/jujudriver
    nexus3.onap.org:10001/onap/vfc/multivimproxy
    nexus3.onap.org:10001/onap/vfc/resmanagement
    nexus3.onap.org:10001/onap/vfc/wfengine-activiti
    nexus3.onap.org:10001/onap/vfc/wfengine-mgrservice
    nexus3.onap.org:10001/onap/vfc/ztesdncdriver
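If you want to pull one of these images manually for a quick check, you can do so directly from the nexus3 repository, for example (using the nslcm image tag referenced later in this guide)::

    docker pull nexus3.onap.org:10001/onap/vfc/nslcm:1.4.1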
**3 VF-C Deployment**
=====================

For docker initialization, there are currently two deployment options used in ONAP: HEAT template based deployment and OOM based deployment.

Since the Casablanca release, OOM is the recommended way, so this guide mainly gives the steps for an OOM based deployment.

For OOM deployment you can refer to the OOM section in the ONAP documentation:

* https://docs.onap.org/projects/onap-oom/en/latest/oom_user_guide.html#oom-user-guide
* https://docs.onap.org/projects/onap-oom/en/latest/oom_quickstart_guide.html#oom-quickstart-guide
1. First, ensure that VF-C is enabled in oom/kubernetes/onap/values.yaml, i.e. that its ``enabled`` field is set to ``true``, for a successful deployment.
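For example, the VF-C entry in oom/kubernetes/onap/values.yaml should look like this (a minimal excerpt, surrounding components omitted)::

    vfc:
      enabled: true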
2. Upgrade images in OOM charts

Ensure the component version is right: check the respective component image version in the VF-C charts.
If you need to update the version, modify the corresponding values.yaml.
oom/kubernetes/vfc/charts/vfc-nslcm/values.yaml::

    #################################################################
    # Application configuration defaults.
    #################################################################
    repository: nexus3.onap.org:10001
    image: onap/vfc/nslcm:1.4.1
3. Rebuild all repos in helm

Every time you change the charts, you need to rebuild all repos to ensure the changes take effect.

Step 1: Build the vfc repo::

    cd oom/kubernetes
    make vfc

Step 2: Build the ONAP repo::

    cd oom/kubernetes
    make onap    # you can also execute make all

Step 3: Delete the release already deployed::

    helm delete dev-vfc --purge

Step 4: Deploy the new pods::

    helm install local/vfc --namespace onap --name dev-vfc
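If you want to confirm the deployment, you can check the release status (Helm 2 syntax, matching the commands above)::

    helm status dev-vfc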
VF-C will now be upgraded with the new image version.

You will see that all the pods are running::

    dev-vfc-generic-vnfm-driver-6fcf454665-6pmfv       2/2     Running     0     11d
    dev-vfc-huawei-vnfm-driver-6f6c465c76-ktpch        2/2     Running     0     11d
    dev-vfc-mariadb-0                                  2/2     Running     0     11d
    dev-vfc-mariadb-1                                  2/2     Running     2     11d
    dev-vfc-mariadb-2                                  2/2     Running     0     11d
    dev-vfc-nslcm-6dd99f94f4-vxdkc                     2/2     Running     0     11d
    dev-vfc-redis-5d7d494fdf-crv8c                     1/1     Running     0     11d
    dev-vfc-vnflcm-5497c66465-f5mh7                    2/2     Running     0     11d
    dev-vfc-vnfmgr-5459b488d9-6vg75                    2/2     Running     0     11d
    dev-vfc-vnfres-5577d674cf-g9fz7                    2/2     Running     0     11d
    dev-vfc-zte-vnfm-driver-6685b74f95-r5phc           2/2     Running     2     11d
**4 VF-C health check**
========================

When the VF-C pods are up, you can visit the following APIs inside the K8S cluster to check the service status.
These swagger APIs also show the APIs that VF-C provides.
+--------------------------+---------------------------------------------------------------------------+
| **Component Name**       | Health check API                                                          |
+==========================+===========================================================================+
| vfc/nfvo/lcm             | http://ClusterIP:8403/api/nslcm/v1/swagger.yaml                           |
+--------------------------+---------------------------------------------------------------------------+
| vfc/gvnfm/vnflcm         | http://ClusterIP:8801/api/vnflcm/v1/swagger.yaml                          |
+--------------------------+---------------------------------------------------------------------------+
| vfc/gvnfm/vnfmgr         | http://ClusterIP:8803/api/vnfmgr/v1/swagger.yaml                          |
+--------------------------+---------------------------------------------------------------------------+
| vfc/gvnfm/vnfres         | http://ClusterIP:8802/api/vnfres/v1/swagger.yaml                          |
+--------------------------+---------------------------------------------------------------------------+
Only a few components are listed here as examples.

Taking vnflcm as an example, you can visit its API as follows::
    ubuntu@oom-mr01-rancher:~$ kubectl -n onap get svc|grep vnflcm
    vfc-vnflcm      ClusterIP      10.43.71.4      <none>      8801/TCP      87d
    ubuntu@oom-mr01-rancher:~$ curl http://10.43.71.4:8801/api/vnflcm/v1/swagger.json
    {"swagger": "2.0", "info": {"title": "vnflcm API", "description": "\n\nThe `swagger-ui` view can be found [here](/api/vnflcm/v1/swagger).\n
    The `ReDoc` view can be found [here](/api/vnflcm/v1/redoc).\nThe swagger YAML document can be found [here](/api/vnflcm/v1/swagger.yaml).\n
    The swagger JSON document can be found [here](/api/vnflcm/v1/swagger.json)."........
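If you want to script this check across several components, here is a minimal sketch; the service names for nslcm, vnfmgr and vnfres are assumed to follow the vfc-vnflcm naming pattern shown above::

    #!/bin/bash
    # Sketch: fetch each component's swagger document from inside the K8S cluster
    # and print the HTTP status code per service.
    for entry in vfc-nslcm:8403:nslcm vfc-vnflcm:8801:vnflcm \
                 vfc-vnfmgr:8803:vnfmgr vfc-vnfres:8802:vnfres; do
      svc=${entry%%:*}; rest=${entry#*:}
      port=${rest%%:*}; api=${rest#*:}
      # Resolve the ClusterIP of the service, then probe its swagger endpoint.
      ip=$(kubectl -n onap get svc "$svc" -o jsonpath='{.spec.clusterIP}')
      code=$(curl -s -o /dev/null -w '%{http_code}' "http://$ip:$port/api/$api/v1/swagger.yaml")
      echo "$svc -> HTTP $code"
    done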
Because VF-C exposes its services via ClusterIP, you can only visit these APIs inside the K8S cluster.

If you want to visit VF-C APIs from outside the K8S cluster, you can visit them via MSB, because all VF-C APIs are registered to MSB.

You can execute the following steps::
    ubuntu@oom-mr01-rancher:~$ kubectl -n onap get pod -o wide|grep msb-iag
    dev-msb-msb-iag-6fbb5b4dbd-pxs8z       2/2     Running     4     28d     10.42.72.222     mr01-node1     <none>
    ubuntu@oom-mr01-rancher:~$ cat /etc/hosts |grep mr01-node1
    172.60.2.39 mr01-node1
    ubuntu@oom-mr01-rancher:~$ kubectl -n onap get svc|grep msb-iag
    msb-iag     NodePort     10.43.213.250     <none>     80:30280/TCP,443:30283/TCP     87d
    ubuntu@oom-mr01-rancher:~$ curl http://172.60.2.39:30280/api/vnflcm/v1/swagger.json
    {"swagger": "2.0", "info": {"title": "vnflcm API", "description": "\n\nThe `swagger-ui` view can be found [here](/api/vnflcm/v1/swagger).\n
    The `ReDoc` view can be found [here](/api/vnflcm/v1/redoc).\nThe swagger YAML document can be found [here](/api/vnflcm/v1/swagger.yaml).\n
    The swagger JSON document can be found [here](/api/vnflcm/v1/swagger.json)."........
You can also visit http://172.60.2.39:30280/api/vnflcm/v1/swagger.json in a browser.
**5 Debug and Testing in running Pod**
======================================
When you are testing and would like to replace a file in a running pod, such as a binary or a script, and check the new result, you can edit the deployment directly.

Take the vfc-nslcm pod as an example::

    kubectl -n onap edit deployment dev-vfc-nslcm
The relevant part of the container spec looks roughly like this excerpt (fields marked as reconstructed are assumptions and may differ in your deployment)::

    spec:
      containers:
      - args:
        - MYSQL_AUTH=${MYSQL_ROOT_USER}:${MYSQL_ROOT_PASSWORD} ./docker-entrypoint.sh
        env:
        - name: MSB_ADDR                  # name reconstructed
          value: https://msb-iag:443
        - name: MYSQL_ADDR                # name reconstructed
          value: vfc-mariadb:3306
        - name: MYSQL_ROOT_USER
          valueFrom:
            secretKeyRef:
              key: login                  # key reconstructed
              name: dev-vfc-db-root-pass
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password               # key reconstructed
              name: dev-vfc-db-root-pass
        - name: REG_TO_MSB_WHEN_START
        image: 192.168.235.22:10001/onap/vfc/nslcm:1.4.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          initialDelaySeconds: 120
        ports:
        - containerPort: 8403
Then you can replace the values (for example, the image version) and save; the deployment will recreate the pod with the new settings.
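Alternatively, if you only need to change the image, a one-line sketch (assuming the container inside the deployment is named vfc-nslcm, as in the exec examples below)::

    kubectl -n onap set image deployment/dev-vfc-nslcm vfc-nslcm=nexus3.onap.org:10001/onap/vfc/nslcm:1.4.1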
**6 Kubectl basic commands**
======================================

Basic operations on the Kubernetes cluster (taking the onap namespace on a Linux client as an example):
* Check the cluster nodes::

    kubectl get node

* Check the cluster namespaces::

    kubectl get namespace
* View the pod information and the node on which each pod is located, under the specified namespace (for example, the onap namespace)::

    kubectl get pod -o wide -n onap
* Connect to a container in the pod

  Check the container names first; the command returns the names of the two containers in the pod. The -c option then specifies which container to enter::

    kubectl -n onap get pod dev-vfc-nslcm-68cb7c9878-v4kt2 -o jsonpath={.spec.containers[*].name}

    kubectl -n onap exec -it dev-vfc-nslcm-68cb7c9878-v4kt2 -c vfc-nslcm /bin/bash
* Copy files (taking the catalog as an example). Note that copied data is lost after the pod is restarted or migrated, and for a multi-replica pod the copy operation only applies to the current pod.

  Copy from the local machine to a container in the pod::

    kubectl -n onap cp copy_test.sh dev-vfc-nslcm-68cb7c9878-v4kt2: -c vfc-nslcm

  Copy the pod's content to the local machine::

    kubectl -n onap cp dev-vfc-nslcm-68cb7c9878-v4kt2:copy_test.sh -c vfc-nslcm /tmp/copy_test.sh
* Run a remote command (taking viewing the container's current path as an example)::

    kubectl -n onap exec -it dev-vfc-nslcm-68cb7c9878-v4kt2 -c vfc-nslcm pwd
* View pod basic information and logs (no -c parameter is needed for a single-container pod)::

    kubectl -n onap describe pod dev-vfc-nslcm-68cb7c9878-v4kt2

    kubectl -n onap logs dev-vfc-nslcm-68cb7c9878-v4kt2 -c vfc-nslcm
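  To follow the log output continuously, add the -f flag::

    kubectl -n onap logs -f dev-vfc-nslcm-68cb7c9878-v4kt2 -c vfc-nslcm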
* Check the service listening port and manually expose a port; this is commonly used for testing, such as nginx under the test namespace.

  1> Create the test namespace::

    kubectl create namespace test

  2> Create the pod with 3 replicas::

    kubectl run nginx --image=nginx --replicas=3 -n test

  3> Expose ports for the nginx pod (service port 88, container port target-port 80)::

    kubectl expose deployment nginx --port=88 --target-port=80 --type=LoadBalancer -n test

  or, using NodePort::

    kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort -n test

  4> Check the svc and the exposed ports; inside the cluster this pod is accessed via port 88, from outside the cluster via floating IP plus port 30531::

    kubectl get svc -n test

    NAME      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    nginx     LoadBalancer   10.43.45.186   10.0.0.3      88:30531/TCP   3m

    NAME      TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    nginx     NodePort       10.43.45.186   <none>        88:30531/TCP   3m

  Nodes within the cluster can access this pod via the CLUSTER-IP plus port 88.
  Outside the cluster, it is accessible via either the EXTERNAL-IP or the floating IP of the node where the pod runs, plus port 30531.
  The floating IP corresponding to a node name can be viewed in /etc/hosts on the rancher machine or in the environment documentation.
* Modify the container image and pod strategy (deployment, statefulset); completing the modification will trigger a rolling update.

  1> Determine whether the pod belongs to a stateful application (statefulset) or a stateless application (deployment)::

    kubectl -n onap describe pod dev-vfc-nslcm-68cb7c9878-v4kt2 |grep Controlled

  2> Stateless application deployment::

    kubectl -n onap get deploy |grep nslcm

    kubectl -n onap edit deploy dev-vfc-nslcm

  3> Stateful application statefulset::

    kubectl -n onap get statefulset |grep cassandra

    kubectl -n onap edit statefulset dev-aai-cassandra
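  After the edit is saved, you can watch the rolling update progress, for example::

    kubectl -n onap rollout status deployment/dev-vfc-nslcm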
* Restart a pod (after the pod is deleted, the deployment will recreate an identical pod and randomly assign it to a node)::

    kubectl -n onap delete pod dev-vfc-nslcm-68cb7c9878-v4kt2
* View the virtual machine where the portal-app resides, in order to add host name resolution.

  The floating IP corresponding to EXTERNAL-IP 10.0.0.13 is 172.30.3.36::

    kubectl -n onap get svc |grep portal-app

    portal-app     LoadBalancer     10.43.181.163     10.0.0.13     8989:30215/TCP,8403:30213/TCP,8010:30214/TCP,8443:30225/TCP
* Pod expansion and shrinkage

  Expand to 3 replicas::

    kubectl scale deployment nginx --replicas=3 -n test

  Shrink to 1 replica::

    kubectl scale deployment nginx --replicas=1 -n test
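  You can verify the new replica count afterwards::

    kubectl get deployment nginx -n test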