**VFC Installation over OOM**
=============================

This guide helps developers and testers install VF-C over OOM.
2 **Component & function**
==========================
VF-C currently has the following repos in https://gerrit.onap.org/r/admin/repos/q/filter:vfc:
.. list-table::
   :widths: 40 60

   * - vfc/nfvo/lcm
     - NS life cycle management
   * - vfc/nfvo/driver/vnfm/svnfm
     - Specific VNFM drivers
   * - vfc/nfvo/driver/vnfm/gvnfm
     - Generic VNFM drivers
   * - vfc/nfvo/db
     - Stand-alone database microservice that provides the database service for each VF-C component
   * - vfc/nfvo/driver/ems
     - EMS driver
   * - vfc/nfvo/driver/sfc
     - SFC driver
   * - vfc/nfvo/resmanagement
     - NS Resource Management
   * - vfc/nfvo/multivimproxy
     - Multi-VIM proxy, provides the indirect mode proxy to Multi-VIM
   * - vfc/gvnfm/vnflcm
     - Generic VNFM VNF LCM
   * - vfc/gvnfm/vnfmgr
     - Generic VNFM VNF Mgr
   * - vfc/gvnfm/vnfres
     - Generic VNFM VNF Resource Management
VF-C Docker Images
~~~~~~~~~~~~~~~~~~~~~~~~
nexus3.onap.org:10001/onap/vfc/nslcm
nexus3.onap.org:10001/onap/vfc/db
nexus3.onap.org:10001/onap/vfc/gvnfmdriver
nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/huawei
nexus3.onap.org:10001/onap/vfc/ztevnfmdriver
nexus3.onap.org:10001/onap/vfc/vnflcm
nexus3.onap.org:10001/onap/vfc/vnfmgr
nexus3.onap.org:10001/onap/vfc/vnfres
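If you want to inspect one of these images locally before deploying, a plain docker pull against the ONAP Nexus proxy works; the example below uses the nslcm image with the 1.4.1 tag shown later in this guide (adjust the tag to your release):

```shell
# Pull the VF-C NS LCM image from the ONAP Nexus docker proxy.
# Tag 1.4.1 matches the chart example in this guide; adjust as needed.
docker pull nexus3.onap.org:10001/onap/vfc/nslcm:1.4.1
```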
Deprecated since the Guilin release:
nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/nokia
nexus3.onap.org:10001/onap/vfc/nfvo/svnfm/nokiav2
nexus3.onap.org:10001/onap/vfc/emsdriver
nexus3.onap.org:10001/onap/vfc/jujudriver
nexus3.onap.org:10001/onap/vfc/multivimproxy
nexus3.onap.org:10001/onap/vfc/resmanagement
nexus3.onap.org:10001/onap/vfc/wfengine-activiti
nexus3.onap.org:10001/onap/vfc/wfengine-mgrservice
nexus3.onap.org:10001/onap/vfc/ztesdncdriver
**3 VF-C Deployment**
=====================

For initialization of the dockers, there are two deployment options currently used in ONAP:

* heat template based deployment

* OOM based deployment

Since the Casablanca release, OOM is the recommended way, so this guide mainly covers the steps for an OOM-based deployment.

For OOM deployment you can refer to the links below:

* https://docs.onap.org/projects/onap-oom/en/latest/oom_user_guide.html#oom-user-guide

* https://docs.onap.org/projects/onap-oom/en/latest/oom_quickstart_guide.html#oom-quickstart-guide
1. First, ensure VF-C is enabled (``enabled: true``) in oom/kubernetes/onap/values.yaml for a successful deployment.
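A minimal sketch of what that flag looks like in oom/kubernetes/onap/values.yaml; the surrounding keys vary by release, so treat this as illustrative and check your own file:

```yaml
# oom/kubernetes/onap/values.yaml (excerpt, illustrative)
vfc:
  enabled: true
```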
2. Upgrade images in OOM charts

Ensure the component version is correct by checking the respective component image version in the VF-C charts.
If you need to update the version, modify values.yaml:
oom/kubernetes/vfc/charts/vfc-nslcm/values.yaml

#################################################################
# Application configuration defaults.
#################################################################
repository: nexus3.onap.org:10001
image: onap/vfc/nslcm:1.4.1
3. Rebuild all repos in helm

Every time you change the charts, you need to rebuild all repos so the changes take effect.
Step 1: Build the vfc repo

make vfc

Step 2: Build the ONAP repo

make onap (you can also execute make all)

Step 3: Delete the release already deployed

helm delete dev-vfc --purge

Step 4: Deploy the new pods

helm install local/vfc --namespace onap --name dev-vfc
Now VF-C will be upgraded with the new image version.

You should see all the pods running:
NAME                                           READY   STATUS    RESTARTS   AGE
dev-vfc-generic-vnfm-driver-6fcf454665-6pmfv   2/2     Running   0          11d
dev-vfc-huawei-vnfm-driver-6f6c465c76-ktpch    2/2     Running   0          11d
dev-vfc-mariadb-0                              2/2     Running   0          11d
dev-vfc-mariadb-1                              2/2     Running   2          11d
dev-vfc-mariadb-2                              2/2     Running   0          11d
dev-vfc-nslcm-6dd99f94f4-vxdkc                 2/2     Running   0          11d
dev-vfc-redis-5d7d494fdf-crv8c                 1/1     Running   0          11d
dev-vfc-vnflcm-5497c66465-f5mh7                2/2     Running   0          11d
dev-vfc-vnfmgr-5459b488d9-6vg75                2/2     Running   0          11d
dev-vfc-vnfres-5577d674cf-g9fz7                2/2     Running   0          11d
dev-vfc-zte-vnfm-driver-6685b74f95-r5phc       2/2     Running   2          11d
**4 VF-C health check**
========================
Once the VF-C pods are up, you can check the service status by visiting the following APIs from within the K8S cluster.
These swagger APIs also document the interfaces that VF-C provides.
+--------------------+--------------------------------------------------+
| **Component Name** | **Health check API**                             |
+====================+==================================================+
| vfc/nfvo/lcm       | http://ClusterIP:8403/api/nslcm/v1/swagger.yaml  |
+--------------------+--------------------------------------------------+
| vfc/gvnfm/vnflcm   | http://ClusterIP:8801/api/vnflcm/v1/swagger.yaml |
+--------------------+--------------------------------------------------+
| vfc/gvnfm/vnfmgr   | http://ClusterIP:8803/api/vnfmgr/v1/swagger.yaml |
+--------------------+--------------------------------------------------+
| vfc/gvnfm/vnfres   | http://ClusterIP:8802/api/vnfres/v1/swagger.yaml |
+--------------------+--------------------------------------------------+
Only a few components are listed here as examples.

Taking vnflcm as an example, you can visit its API as follows:
ubuntu@oom-mr01-rancher:~$ kubectl -n onap get svc|grep vnflcm
vfc-vnflcm   ClusterIP   10.43.71.4   <none>   8801/TCP   87d
ubuntu@oom-mr01-rancher:~$ curl http://10.43.71.4:8801/api/vnflcm/v1/swagger.json
{"swagger": "2.0", "info": {"title": "vnflcm API", "description": "\n\nThe `swagger-ui` view can be found [here](/api/vnflcm/v1/swagger).\n
The `ReDoc` view can be found [here](/api/vnflcm/v1/redoc).\nThe swagger YAML document can be found [here](/api/vnflcm/v1/swagger.yaml).\n
The swagger JSON document can be found [here](/api/vnflcm/v1/swagger.json)."........
Because VF-C exposes its services via ClusterIP, you can only visit these APIs from within the K8S cluster.

If you want to access the VF-C APIs from outside the K8S cluster, you can go through MSB, because all VF-C APIs are registered with MSB.

You can execute the following steps:
ubuntu@oom-mr01-rancher:~$ kubectl -n onap get pod -o wide|grep msb-iag
dev-msb-msb-iag-6fbb5b4dbd-pxs8z   2/2   Running   4   28d   10.42.72.222   mr01-node1   <none>
ubuntu@oom-mr01-rancher:~$ cat /etc/hosts |grep mr01-node1
172.60.2.39 mr01-node1
ubuntu@oom-mr01-rancher:~$ kubectl -n onap get svc|grep msb-iag
msb-iag   NodePort   10.43.213.250   <none>   80:30280/TCP,443:30283/TCP   87d
ubuntu@oom-mr01-rancher:~$ curl http://172.60.2.39:30280/api/vnflcm/v1/swagger.json
{"swagger": "2.0", "info": {"title": "vnflcm API", "description": "\n\nThe `swagger-ui` view can be found [here](/api/vnflcm/v1/swagger).\n
The `ReDoc` view can be found [here](/api/vnflcm/v1/redoc).\nThe swagger YAML document can be found [here](/api/vnflcm/v1/swagger.yaml).\n
The swagger JSON document can be found [here](/api/vnflcm/v1/swagger.json)."........
You can also visit http://172.60.2.39:30280/api/vnflcm/v1/swagger.json in a browser.
**5 Debug and Testing in running Pod**
======================================
During testing, you may want to replace a file in a running pod, such as a binary or a script, and check the new result.

Take the vfc-nslcm pod as an example:
kubectl -n onap edit deployment dev-vfc-nslcm

The relevant excerpt of the deployment spec looks like this:

- MYSQL_AUTH=${MYSQL_ROOT_USER}:${MYSQL_ROOT_PASSWORD} ./docker-entrypoint.sh
value: https://msb-iag:443
value: vfc-mariadb:3306
- name: MYSQL_ROOT_USER
- name: MYSQL_ROOT_PASSWORD
name: dev-vfc-db-root-pass
- name: REG_TO_MSB_WHEN_START
image: 192.168.235.22:10001/onap/vfc/nslcm:1.4.1
imagePullPolicy: IfNotPresent
initialDelaySeconds: 120
- containerPort: 8403
Then you can modify these values; saving the edit triggers a rolling update of the pod.
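As an alternative to editing the deployment interactively, the image tag can be swapped in one step with kubectl set image; the deployment and container names below are the ones used in the example above:

```shell
# Point the vfc-nslcm container of the dev-vfc-nslcm deployment at a new
# image; this also triggers a rolling update.
kubectl -n onap set image deployment/dev-vfc-nslcm \
    vfc-nslcm=nexus3.onap.org:10001/onap/vfc/nslcm:1.4.1
```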
**6 Kubectl basic commands**
======================================

Basic operations on a Kubernetes cluster (taking the onap namespace on a Linux client as an example):
* Check the cluster nodes

* Check the cluster namespaces

* View pod information, including the node each pod runs on, under the specified namespace (for example, the onap namespace)

kubectl get pod -o wide -n onap
* Connect to a container in a pod

Check the container names; the command returns two container names, and -c specifies which container to enter.

kubectl -n onap get pod dev-vfc-nslcm-68cb7c9878-v4kt2 -o jsonpath={.spec.containers[*].name}

kubectl -n onap exec -it dev-vfc-nslcm-68cb7c9878-v4kt2 -c vfc-nslcm -- /bin/bash
* Copy files (note that copied data is lost after the pod is restarted or migrated, and for multi-replica pods the copy only applies to the current pod)

Copy from the local machine to a container in the pod:

kubectl -n onap cp copy_test.sh dev-vfc-nslcm-68cb7c9878-v4kt2: -c vfc-nslcm

Copy a file from the pod to the local machine:

kubectl -n onap cp dev-vfc-nslcm-68cb7c9878-v4kt2:copy_test.sh -c vfc-nslcm /tmp/copy_test.sh
* Run a remote command (taking viewing the container's current path as an example)

kubectl -n onap exec -it dev-vfc-nslcm-68cb7c9878-v4kt2 -c vfc-nslcm -- pwd
* View basic pod information and logs (the -c parameter is not needed for single-container pods)

kubectl -n onap describe pod dev-vfc-nslcm-68cb7c9878-v4kt2

kubectl -n onap logs dev-vfc-nslcm-68cb7c9878-v4kt2 -c vfc-nslcm
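To keep watching the log output while a test runs, the same command takes the standard follow and tail flags (pod and container names are the examples used above; substitute your own):

```shell
# Stream new log lines as they arrive, starting from the last 100.
kubectl -n onap logs -f --tail=100 dev-vfc-nslcm-68cb7c9878-v4kt2 -c vfc-nslcm
```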
* Check the service listening port and manually expose a port; this is commonly used for testing, taking nginx under the test namespace as an example

1> Create the test namespace

kubectl create namespace test

2> Create a pod with 3 replicas

kubectl run nginx --image=nginx --replicas=3 -n test

3> Expose ports for the nginx pods (--port is the service port, --target-port the container port)

kubectl expose deployment nginx --port=88 --target-port=80 --type=LoadBalancer -n test

or

kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort -n test
4> Check the svc (the cluster accesses these pods internally via cluster IP + port 88; external access to the cluster uses the floating IP + port 30531)

kubectl get svc -n test

NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   LoadBalancer   10.43.45.186   10.0.0.3      88:30531/TCP   3m

NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort       10.43.45.186   <none>        88:30531/TCP   3m
Within the cluster, the service can be accessed via the cluster IP + port 88.
Outside the cluster, it is accessible via either the EXTERNAL-IP or the floating IP of the node the pod runs on + port 30531.
The floating IP corresponding to a node name can be found in /etc/hosts on the Rancher machine or in the documentation.
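If you only need temporary access from your workstation, kubectl port-forward avoids changing the service type; the service name and port below are from the nginx example above:

```shell
# Forward local port 8888 to port 88 of the nginx service in the test
# namespace; the service is then reachable at http://localhost:8888.
kubectl -n test port-forward svc/nginx 8888:88
```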
* Modify the container image and pod strategy (deployment, statefulset); completing the modification triggers a rolling update

1> Determine whether the pod belongs to a stateful application (statefulset) or a stateless application (deployment)

kubectl -n onap describe pod dev-vfc-nslcm-68cb7c9878-v4kt2 |grep Controlled

2> Stateless application (deployment)

kubectl -n onap get deploy |grep nslcm

kubectl -n onap edit deploy dev-vfc-nslcm

3> Stateful application (statefulset)

kubectl -n onap get statefulset |grep cassandra

kubectl -n onap edit statefulset dev-aai-cassandra
* Restart a pod (after the pod is deleted, the deployment recreates an identical pod and schedules it to any available node)

kubectl -n onap delete pod dev-vfc-nslcm-68cb7c9878-v4kt2
* View the virtual machine where portal-app resides, in order to add host resolution

The floating IP corresponding to 10.0.0.13 is 172.30.3.36

kubectl -n onap get svc |grep portal-app

portal-app   LoadBalancer   10.43.181.163   10.0.0.13   8989:30215/TCP,8403:30213/TCP,8010:30214/TCP,8443:30225/TCP
* Scale pods out and in

Scale out:

kubectl scale deployment nginx --replicas=3 -n test

Scale in:

kubectl scale deployment nginx --replicas=1 -n test
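Instead of scaling manually, Kubernetes can also scale a deployment automatically based on CPU load; the thresholds below are illustrative, and this requires the metrics server to be available in the cluster:

```shell
# Keep between 1 and 5 nginx replicas, targeting 80% average CPU usage.
kubectl -n test autoscale deployment nginx --min=1 --max=5 --cpu-percent=80
```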