1 *****************************************
2 vFWCL on Casablanca ONAP offline platform
3 *****************************************
This document collects the notes we have from running the vFirewall demo on the offline Casablanca platform
installed by the ONAP offline installer tool.
We were able to finish the vFirewall demo use case manually (only) by
combining multiple ONAP wiki sources, videos and Jira tickets, as well as by following the
onap-discuss mailing list, as some issues had already been hit by other teams.
Some of the most relevant materials are available at the following links:
16 * `https://www.youtube.com/watch?v=2Wo5iHWnoKM <https://www.youtube.com/watch?v=2Wo5iHWnoKM>`_
17 * `https://wiki.onap.org/display/DW/ONAP+Beijing%3A+Understanding+the+vFWCL+use-case+mechanism <https://wiki.onap.org/display/DW/ONAP+Beijing%3A+Understanding+the+vFWCL+use-case+mechanism>`_
18 * `https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging <https://wiki.onap.org/display/DW/vFWCL+instantiation%2C+testing%2C+and+debuging>`_
19 * `https://docs.onap.org/en/casablanca/submodules/integration.git/docs/docs_vfw.html <https://docs.onap.org/en/casablanca/submodules/integration.git/docs/docs_vfw.html>`_
22 .. contents:: Table of Contents
27 Step 1. Preconditions - before ONAP deployment
28 ==============================================
30 Understanding of the underlying OpenStack deployment is required from anyone applying these instructions.
In addition, the installation-specific location of the helm charts on the infra node must be known.
In this document it is referred to as <helm_charts_dir>.
The snippets below describe the areas we need to configure for a successful vFWCL demo.
Pay attention to them and configure them accordingly (ideally before deployment).
39 **1) <helm_charts_dir>/onap/values.yaml**::
44 openStackType: OpenStackProvider
45 openStackName: OpenStack
openStackKeyStoneUrl: "http://10.20.30.40:5000"
openStackServiceTenantName: "services"
openStackDomain: "default"
openStackUserName: "onap-tieto"
50 openStackEncryptedPassword: "f7920677e15e2678b0f33736189e8965"
55 #openStackEncryptedPasswordHere should match the encrypted string used in SO and APPC and overridden per environment
56 openStackEncryptedPasswordHere: "f7920677e15e2678b0f33736189e8965"
62 # necessary to disable liveness probe when setting breakpoints
63 # in debugger so K8s doesn't restart unresponsive container
65 # so server configuration
67 # message router configuration
69 # openstack configuration
70 openStackUserName: "onap-tieto"
71 openStackRegion: "RegionOne"
72 openStackKeyStoneUrl: "http://10.20.30.40:5000"
73 openStackServiceTenantName: "services"
74 openStackEncryptedPasswordHere: "f7920677e15e2678b0f33736189e8965"
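
The openStackEncryptedPassword / openStackEncryptedPasswordHere values above are not the plaintext password. A minimal sketch of how such a value is typically generated, assuming <openstack_password> is the plaintext tenant password and <so_encryption_key> is the AES key configured for SO in your charts (both are placeholders, check your deployment)::

  # AES-128-ECB encrypt the OpenStack password and hex-encode the result
  echo -n '<openstack_password>' | openssl aes-128-ecb -e -K <so_encryption_key> -nosalt | xxd -c 256 -p

Different components may use different keys, so verify the generated string against the component that will consume it (SO, APPC, robot).
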
77 **2) <helm_charts_dir>/robot/values.yaml**::
79 #################################################################
80 # Application configuration defaults.
81 #################################################################
84 openStackEncryptedPasswordHere: "c124921a3a0efbe579782cde8227681e"
86 demoArtifactsVersion: "1.3.0"
87 demoArtifactsRepoUrl: "https://nexus.onap.org/content/repositories/releases"
88 openStackFlavourMedium: "m1.medium"
89 openStackKeyStoneUrl: "http://10.20.30.40:5000"
openStackPublicNetId: "9403ceea-0738-4908-a826-316c8541e4bb" # this is not necessarily a public network; the robot script assumes that floating IPs can be created in it and that the network's name is "public"
91 openStackPassword: "some_good_password"
92 openStackRegion: "RegionOne"
93 openStackTenantId: "b1ce7742d956463999923ceaed71786e"
94 openStackUserName: "onap-tieto"
95 openStackProjectName: "onap-tieto"
96 openStackDomainId: "default"
97 openStackKeystoneAPIVersion: "v2.0"
ubuntu14Image: "vfwcl_trusty" # this is a modified trusty image we use, with the honeycomb SW pre-installed
99 ubuntu16Image: "ubuntu-16.04-server-cloudimg-amd64" # not relevant, vfwcl is on trusty
100 scriptVersion: "1.3.0"
openStackPrivateNetId: "3c7aa2bd-ba14-40ce-8070-6a0d6a617175" # must match an already existing network in OpenStack
openStackSecurityGroup: "onap_sg" # must match an already existing security group in OpenStack
openStackPrivateSubnetId: "2bcb9938-9c94-4049-b580-550a44dc63b" # must match an already existing subnet in OpenStack
openStackPrivateNetCidr: "10.0.0.0/16" # the whole 10.0.x.x range is probably needed, as several 10.0.x.y IPs are used by the vFW VMs
105 openStackOamNetworkCidrPrefix: "10.0" # this is hardcoded in robot preload scripts
106 vidServerProtocol: "http"
107 vidServerPort: "8080"
108 vnfPubKey: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDPwF2bYm2QuqZpjuAcZDJTcFdUkKv4Hbd/3qqbxf6g5ZgfQarCi+mYnKe9G9Px3CgFLPdgkBBnMSYaAzMjdIYOEdPKFTMQ9lIF0+i5KsrXvszWraGKwHjAflECfpTAWkPq2UJUvwkV/g7NS5lJN3fKa9LaqlXdtdQyeSBZAUJ6QeCE5vFUplk3X6QFbMXOHbZh2ziqu8mMtP+cWjHNBB47zHQ3RmNl81Rjv+QemD5zpdbK/h6AahDncOY3cfN88/HPWrENiSSxLC020sgZNYgERqfw+1YhHrclhf3jrSwCpZikjl7rqKroua2LBI/yeWEta3amTVvUnR2Y7gM8kHyh Generated-by-Nova"
dcaeCollectorIp: "10.8.8.22" # must be one of the k8s host IPs (the VES collector will be listening on node port 30235 on it)
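
The dcaeCollectorIp above must be one of the k8s host IPs; a quick way to list them (assuming kubectl is configured on the infra node, as in the later examples)::

  # shows every k8s node together with its internal/external IP
  kubectl get nodes -o wide
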
112 **3) <helm_charts_dir>/so/charts/so-openstack-adapter/values.yaml**::
115 openStackUserName: "onap-tieto"
116 openStackRegion: "RegionOne"
117 openStackKeyStoneUrl: "http://10.20.30.40:5000/v2.0"
118 openStackServiceTenantName: "services"
119 openStackEncryptedPasswordHere: "f7920677e15e2678b0f33736189e8965"
120 openStackTenantId: "b1ce7742d956463999923ceaed71786e"
**4) <helm_charts_dir>/policy/resources/config/pe/push-policies.sh**::
124 echo "Upload BRMS Param Template"
128 # adding --no-check-certificate
129 wget -O cl-amsterdam-template.drl https://git.onap.org/policy/drools-applications/plain/controlloop/templates/archetype-cl-amsterdam/src/main/resources/archetype-resources/src/main/resources/__closedLoopControlName__.drl?h=casablanca --no-check-certificate
135 Step 2. Preconditions - after ONAP deployment
136 =============================================
Run the health checks after a successful deployment; all of them must pass.
141 Relevant robot scripts are under <helm_charts_dir>/oom/kubernetes/robot
145 [root@tomas-infra robot]# ./ete-k8s.sh onap health
147 51 critical tests, 51 passed, 0 failed
148 51 tests total, 51 passed, 0 failed
152 Hints while troubleshooting issues:
154 *(1) increasing verbosity of robot regarding keystone credentials*
Note: 401 issues with robot init can often be troubleshot just with this tweak.
The patch consists of replacing
VARIABLES="--removekeywords name:keystone_interface.*"
176 kubectl cp -n onap runTags.sh onap-robot-robot-5576c8f6cc-znvdp:/var/opt/OpenECOMP_ETE/runTags.sh
177 root@onap-robot-robot-5576c8f6cc-znvdp:/var/opt/OpenECOMP_ETE# chmod +x runTags.sh
*(2) Hint for editing the configmap / faster testing of changes in robot/values.yaml*
186 [root@tomas-infra robot]# kubectl edit configmap onap-robot-robot-eteshare-configmap -n onap
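
To confirm that the edited values are what robot actually uses, the mounted configuration can be inspected inside the robot pod; a minimal sketch, assuming the pod name from the examples above and that the eteshare configmap ends up under /share/config (vm_properties.py is one of the files passed to runTags.sh)::

  kubectl -n onap exec onap-robot-robot-5576c8f6cc-znvdp -- grep -i keystone /share/config/vm_properties.py
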
*(3) A very useful page describing commands for manual checking of the health checks*
190 `https://wiki.onap.org/display/DW/Robot+Healthcheck+Tests+on+ONAP+Components#RobotHealthcheckTestsonONAPComponents-ApplicationController(APPC)Healthcheck <https://wiki.onap.org/display/DW/Robot+Healthcheck+Tests+on+ONAP+Components#RobotHealthcheckTestsonONAPComponents-ApplicationController(APPC)Healthcheck>`_
In this step we initialize the robot apache server so that robot logs can be accessed via a browser.
200 # demo-k8s.sh is also located under <helm_charts_dir>/oom/kubernetes/robot
204 root@tomas-infra robot]# ./demo-k8s.sh onap init_robot
206 Number of parameters: 2
208 WEB Site Password for user 'test': ++ kubectl --namespace onap get pods
211 + POD=onap-robot-robot-5576c8f6cc-znvdp
212 + ETEHOME=/var/opt/OpenECOMP_ETE
213 ++ kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-znvdp --
214 bash -c 'ls -1q /share/logs/ | wc -l'
215 + export GLOBAL_BUILD_NUMBER=18
216 + GLOBAL_BUILD_NUMBER=18
218 + OUTPUT_FOLDER=0018_demo_init_robot
+ VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'
+ kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-znvdp -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -v WEB_PASSWORD:test -d /share/logs/0018_demo_init_robot -i UpdateWebPage --display 108
222 Starting Xvfb on display :108 with res 1280x1024x24
223 Executing robot tests at log level TRACE
224 ==============================================================================
226 ==============================================================================
227 Testsuites.Update Onap Page :: Initializes ONAP Test Web Page and Password
228 ==============================================================================
229 Update ONAP Page | PASS |
230 ------------------------------------------------------------------------------
231 Testsuites.Update Onap Page :: Initializes ONAP Test Web Page and ... | PASS |
232 1 critical test, 1 passed, 0 failed
233 1 test total, 1 passed, 0 failed
234 ==============================================================================
236 1 critical test, 1 passed, 0 failed
237 1 test total, 1 passed, 0 failed
238 ==============================================================================
239 Output: /share/logs/0018_demo_init_robot/output.xml
240 Log: /share/logs/0018_demo_init_robot/log.html
241 Report: /share/logs/0018_demo_init_robot/report.html
After enabling this, robot is listening on node port 30209 within the k8s cluster.
[root@hypervisor-tieto ~]# ssh root@1.2.3.4 -i ~/michal1_new_key -L 1235:127.0.0.1:1235
[root@tomas-infra ~]# ssh tomas-node0 -L 1235:127.0.0.1:30209
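
With the tunnels above in place, the robot log index can be fetched from the hypervisor; a minimal sketch, assuming the logs are served under /logs/ (an assumption, adjust to your setup) and using the 'test' user with the password supplied to init_robot (in the example above it was 'test')::

  curl -u test:test http://127.0.0.1:1235/logs/
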
256 Hints while troubleshooting issues:
The most common problems (e.g. a 401 response) come from wrong OpenStack credentials in robot/values.yaml.
262 Step 4. robot init - demo services distribution
263 ==================================================
Run the following robot script to execute both init_customer and distribute:
269 # demo-k8s.sh <namespace> init
271 [root@tomas-infra robot]# ./demo-k8s.sh onap init
272 Number of parameters: 2
274 ++ kubectl --namespace onap get pods
277 + POD=onap-robot-robot-5576c8f6cc-lqpd7
278 + ETEHOME=/var/opt/OpenECOMP_ETE
279 ++ kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-lqpd7 --
280 bash -c 'ls -1q /share/logs/ | wc -l'
281 + export GLOBAL_BUILD_NUMBER=3
282 + GLOBAL_BUILD_NUMBER=3
284 + OUTPUT_FOLDER=0003_demo_init
286 + VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'
287 kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-lqpd7 -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -d /share/logs/0003_demo_init -i InitDemo --display 93
288 Starting Xvfb on display :93 with res 1280x1024x24
289 Executing robot tests at log level TRACE
290 ==============================================================================
292 ==============================================================================
293 Testsuites.Demo :: Executes the VNF Orchestration Test cases including setu ...
294 ==============================================================================
295 Initialize Customer And Models
297 Downloaded:service-Demovfwcl-csar.csar
299 Distibuting vCPEInfra
300 Downloaded:service-Demovcpeinfra-csar.csar
302 Downloaded:service-Demovcpevbng-csar.csar
303 Distibuting vCPEvBRGEMU
304 Downloaded:service-Demovcpevbrgemu-csar.csar
305 Distibuting vCPEvGMUX
306 Downloaded:service-Demovcpevgmux-csar.csar
307 Distibuting vCPEvGW (this is not vCPEResCust service)
308 Downloaded:service-Demovcpevgw-csar.csar
310 ------------------------------------------------------------------------------
311 Testsuites.Demo :: Executes the VNF Orchestration Test cases inclu... | PASS |
312 1 critical test, 1 passed, 0 failed
313 1 test total, 1 passed, 0 failed
314 ==============================================================================
316 1 critical test, 1 passed, 0 failed
317 1 test total, 1 passed, 0 failed
318 ==============================================================================
319 Output: /share/logs/0003_demo_init/output.xml
320 Log: /share/logs/0003_demo_init/log.html
321 Report: /share/logs/0003_demo_init/report.html
325 Step 5. Deploy vFW service
326 ==========================
Now we need to verify that the vFWCL service is distributed; we will need to start using the GUI for that.
We can either create an ssh tunnel to the respective ports, create a SOCKS proxy, or use the VNC server on the infra node.
332 .. note:: VNC server is installed as a part of offline platform to help with accessing ONAP GUIs.
The Portal GUI should be reachable at the following link:
336 `http://portal.api.simpledemo.onap.org:30215/ONAPPORTAL/login.htm <http://portal.api.simpledemo.onap.org:30215/ONAPPORTAL/login.htm>`_
.. note:: VNC way: /etc/hosts on the infra node needs to be updated, which requires a restart of the dnsmasq container. The IP of each service entry should match the k8s node where that service is running.
346 root@tomas-infra files]# cat /etc/hosts
348 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
349 ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
350 10.8.8.22 tomas-node0
352 10.8.8.19 tomas-node2
354 10.8.8.19 portal.api.simpledemo.onap.org
355 10.8.8.9 vid.api.simpledemo.onap.org
356 10.8.8.19 sdc.api.fe.simpledemo.onap.org
357 10.8.8.22 portal-sdk.simpledemo.onap.org
358 10.8.8.19 policy.api.simpledemo.onap.org
359 10.8.8.9 aai.api.sparky.simpledemo.onap.org
360 10.8.8.9 cli.api.simpledemo.onap.org
361 10.8.8.9 msb.api.discovery.simpledemo.onap.org
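
After updating /etc/hosts, the dnsmasq container can be restarted so it picks up the new records; a minimal sketch, using the dns-server container name visible in the docker ps output below::

  docker restart dns-server
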
The VNC server must be running:
368 [root@tomas-infra files]# docker ps
370 CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
371 71821578bc32 rancher/server:v1.6.22 "/usr/bin/entrysh..." 8 days ago Up 8 days 3306/tcp, 0.0.0.0:8080->8080/tcp rancher-server
372 6241e0f86f71 sonatype/nexus3:3.15.2 "sh -c ${SONATYPE\_..." 8 days ago Up 8 days 8081/tcp nexus
373 b36f666e4ba0 own_nginx:2.0.0 "/bin/sh -c 'spawn..." 8 days ago Up 7 hours 0.0.0.0:80->80/tcp,0.0.0.0:443->443/tcp, 0.0.0.0:10001->443/tcp nginx-server
374 6e161228f43e consol/centos-icewm-vnc:1.2.0 "/dockerstartup/vn..." 8 days ago Up 8 days 0.0.0.0:5901->5901/tcp, 0.0.0.0:6901->6901/tcp vnc-server
375 00a496f85dcd andyshinn/dnsmasq:2.76 "dnsmasq -k -H /si..." 8 days ago Up 8 days dns-server
Browsers need a security exception: *mixed content* must be enabled.
380 # HowTo enable mixed content
382 `https://kb.iu.edu/d/bdny <https://kb.iu.edu/d/bdny>`_
**Action: deploy the vFWCL service (it should be in DISTRIBUTION_COMPLETE_OK state)**
This step is done via the Virtual Infrastructure Designer (VID), which *can not be opened* via Portal directly in Casablanca (see
`https://jira.onap.org/browse/PORTAL-555 <https://jira.onap.org/browse/PORTAL-555>`_).
As described in the ticket, it can nevertheless be opened by going directly to the address that Portal shows as failing.
**Action: get the Service Instance ID (this id will be used in Step 9 for heatbridge)**
402 e.g. d99c5026-719a-432d-b5bd-a25dc2cf6a4b
405 Step 6. Instantiate services
406 ============================
To instantiate the service, we need to discover which id matches which vFirewall VNF.
We can obtain it from the CSAR file by the following steps (as described in the recording https://www.youtube.com/watch?v=2Wo5iHWnoKM):
415 * Login to SDC as designer (cs0008 / demo123456!)
416 * Open demoVFWCL service and its TOSCA artifacts tab
417 * Click on Download icon next to TOSCA_CSAR
The CSAR file can be opened e.g. using midnight commander; browse it and go to the ./Definitions folder.
Various yml files will be there; from the example below one can see that
427 vPKG VNF has id D5724ce5Ae8a4175B48d
431 vFW VNF has id E48fe0e2Dd744f2891dc
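
If no GUI archive browser is at hand, the same files can be pulled out on the command line; a minimal sketch, assuming the CSAR was downloaded as service-Demovfwcl-csar.csar (a CSAR is a plain zip archive)::

  # list the definition files contained in the CSAR
  unzip -l service-Demovfwcl-csar.csar 'Definitions/*'

  # extract them for inspection
  unzip -o service-Demovfwcl-csar.csar 'Definitions/*' -d /tmp/demovfwcl
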
433 **Check the following expected problems before creating node instances:**
Service instantiation will not work out of the box; the following tickets
require the workarounds described below to make the instantiation succeed.
438 **Issue 1: TEST-133**
442 This problem was reported in
444 `https://jira.onap.org/browse/TEST-133 <https://jira.onap.org/browse/TEST-133>`_
448 `https://jira.onap.org/browse/INT-705 <https://jira.onap.org/browse/INT-705>`_
450 but won’t be available in Casablanca.
454 2019-04-30T08:11:05.320Z|effc1fb9-84d0-462b-9c27-20eb6b04dc8a|camundaTaskExecutor-1|AssignVnfBB||||ERROR|300|Error
455 from SDNC: No availability zones found in AAI for cloud region RegionOne
456 |Error from SDNC: No availability zones found in AAI for cloud region RegionOne
460 !! More logs can be found in BPMN !!
464 [root@tomas-infra ~]# kubectl exec -it onap-so-so-bpmn-infra-6d57c84c7f-rdd59 -n onap sh
**It is a deficiency of the robot test case in Casablanca; we need to add the missing availability zone to the cloud region manually.**
475 # to be executed on infra node
476 # to insert availability zone
478 [root@tomas-infra ~]# curl -k -i -X PUT --user aai@aai.onap.org:demo123456! -H 'Accept: application/json' -H 'Content-Type: application/json' -H 'X-FromAppId: MSO' -H 'X-TransactionId: 89273498' -d '{"availability-zone-name": "AZ1", "hypervisor-type": "hypervisor"}' https://tomas-node0:30233/aai/v14/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/availability-zones/availability-zone/AZ1 --insecure
481 Date: Thu, 02 May 2019 19:35:26 GMT
483 Content-Type: application/json
484 X-AAI-TXID: 1-aai-resources-190502-19:35:26:951-38024
486 Strict-Transport-Security: max-age=16000000; includeSubDomains; preload;
491 # to verify that availability zone has been added successfully
493 [root@tomas-infra ~]# curl -k -i -X GET --user aai@aai.onap.org:demo123456! -H 'Accept: application/json' -H 'Content-Type: application/json' -H 'X-FromAppId: MSO' -H 'X-TransactionId: 89273498' -d '{"availability-zone-name": "AZ1", "hypervisor-type": "hypervisor"}' https://tomas-node0:30233/aai/v14/cloud-infrastructure/cloud-regions/cloud-region/CloudOwner/RegionOne/availability-zones/availability-zone/AZ1 --insecure
496 Date: Thu, 02 May 2019 19:35:32 GMT
498 Content-Type: application/json
499 X-AAI-TXID: 1-aai-resources-190502-19:35:32:538-84201
501 Strict-Transport-Security: max-age=16000000; includeSubDomains; preload;
Service creation (vFWSNK-1) fails with the following error (as seen in the BPMN log):
510 2019-04-30T14:14:08.707Z|22326334-bf04-45e8-adc7-2652349b23e4|camundaTaskExecutor-1|AssignVnfBB||BPMN|AssignVnfBB|ERROR|900|Exception in org.onap.so.bpmn.infrastructure.sdnc.tasks.SDNCAssignTasks.assignVnf |BPMN_GENERAL_EXCEPTION_ARG
511 2019-04-30T14:17:07.665Z|bcb7687f-4e1d-4da8-861c-7d6239faedd6|camundaTaskExecutor-2|AssignVnfBB||BPMN|AssignVnfBB|ERROR|300|Error from SDNC: vnf-information.onap-model-information.model-customization-uuid is a required input|RA_RESPONSE_FROM_SDNC
2019-04-30T14:17:07.666Z|bcb7687f-4e1d-4da8-861c-7d6239faedd6|camundaTaskExecutor-2|AssignVnfBB||||ERROR|300|Error from SDNC: vnf-information.onap-model-information.model-customization-uuid
is a required input|Error from SDNC: vnf-information.onap-model-information.model-customization-uuid is a required input
515 org.onap.so.client.exception.BadResponseException: Error from SDNC: vnf-information.onap-model-information.model-customization-uuid is a required input
518 Issue is discussed in `https://jira.onap.org/browse/SO-1150 <https://jira.onap.org/browse/SO-1150>`_
The suggested workaround of switching to the old VNF_API in VID works.
**Action: With the issues above fixed, we can fill in the form for both node instances as visible in the screenshots below.**
535 Step 7. SDNC topology preload
536 =============================
We will use the demo-k8s.sh script again for this step. Some parts are hardcoded in the script for the preload keyword, but they work well for our purposes.
# demo-k8s.sh <namespace> preload <vnf_name> <module_name>
543 # Preload data for VNF for the <module_name>
545 [root@tomas-infra robot]# ./demo-k8s.sh onap preload vFWSNK-1 vFWSNK-Module-1
547 Number of parameters: 4
549 ++ kubectl --namespace onap get pods
552 + POD=onap-robot-robot-5576c8f6cc-lqpd7
553 + ETEHOME=/var/opt/OpenECOMP_ETE
554 ++ kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-lqpd7 --
555 bash -c 'ls -1q /share/logs/ | wc -l'
556 + export GLOBAL_BUILD_NUMBER=6
557 + GLOBAL_BUILD_NUMBER=6
559 + OUTPUT_FOLDER=0006_demo_preload
561 + VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'
562 + kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-lqpd7 -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -v VNF_NAME:vFWSNK-1 -v MODULE_NAME:vFWSNK-Module-1 -d /share/logs/0006_demo_preload -i PreloadDemo --display 96
563 Starting Xvfb on display :96 with res 1280x1024x24
564 Executing robot tests at log level TRACE
565 ==============================================================================
567 ==============================================================================
568 Testsuites.Demo :: Executes the VNF Orchestration Test cases including
570 ==============================================================================
572 ------------------------------------------------------------------------------
573 Testsuites.Demo :: Executes the VNF Orchestration Test cases inclu... | PASS |
574 1 critical test, 1 passed, 0 failed
575 1 test total, 1 passed, 0 failed
576 ==============================================================================
578 1 critical test, 1 passed, 0 failed
579 1 test total, 1 passed, 0 failed
580 ==============================================================================
581 Output: /share/logs/0006_demo_preload/output.xml
582 Log: /share/logs/0006_demo_preload/log.html
583 Report: /share/logs/0006_demo_preload/report.html
Similarly, let's preload the SDNC profile of the vPKG VNF:
[root@tomas-infra robot]# ./demo-k8s.sh onap preload vPKG-1 vPKG-Module-1
591 Number of parameters: 4
593 ++ kubectl --namespace onap get pods
596 + POD=onap-robot-robot-5576c8f6cc-lqpd7
+ ETEHOME=/var/opt/OpenECOMP_ETE
598 ++ kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-lqpd7 --
599 bash -c 'ls -1q /share/logs/ | wc -l'
600 + export GLOBAL_BUILD_NUMBER=7
601 + GLOBAL_BUILD_NUMBER=7
603 + OUTPUT_FOLDER=0007_demo_preload
605 + VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'
606 + kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-lqpd7 -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -v VNF_NAME:vPKG-1 -v MODULE_NAME:vPKG-Module-1 -d /share/logs/0007_demo_preload -i PreloadDemo --display 97
607 Starting Xvfb on display :97 with res 1280x1024x24
608 Executing robot tests at log level TRACE
609 ==============================================================================
611 ==============================================================================
612 Testsuites.Demo :: Executes the VNF Orchestration Test cases including
614 ==============================================================================
616 ------------------------------------------------------------------------------
617 Testsuites.Demo :: Executes the VNF Orchestration Test cases inclu... | PASS |
618 1 critical test, 1 passed, 0 failed
619 1 test total, 1 passed, 0 failed
620 ==============================================================================
622 1 critical test, 1 passed, 0 failed
623 1 test total, 1 passed, 0 failed
624 ==============================================================================
625 Output: /share/logs/0007_demo_preload/output.xml
626 Log: /share/logs/0007_demo_preload/log.html
627 Report: /share/logs/0007_demo_preload/report.html
631 Step 8. Creating VF Modules
632 ===========================
In this step the vFW VMs are spawned in OpenStack (VIM).
The Instance Names used in the SDNC preload step must be used here as well.
.. note:: vFWSNK-Module-1 must be created first and vPKG-Module-1 second.
645 End result of this step is 3 VMs created within 2 heat stacks in Openstack
**Issue: VF spawning failed because MSO was unable to talk to Keystone (it was using the default keystone 1.2.3.4)**
652 **(this will not appear if OOM charts are configured properly, see step 1)**
Hint: OpenStack credentials can be changed directly in so-mariadb (only needed when this problem occurs).
Note that in this context the Keystone API version is also expected in the URL:
IDENTITY_URL="http://10.20.30.40:5000/v2.0"
664 (ideally this should come from <helm_charts_dir>/so/charts/so-openstack-adapter/values.yaml)
669 root@mariadb:/# mysql -u root -p # password is : password
Welcome to the MariaDB monitor. Commands end with ; or \g.
672 Your MariaDB connection id is 51192
673 Server version: 10.1.11-MariaDB-1~jessie-log mariadb.org binary distribution
674 Copyright (c) 2000, 2015, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
677 MariaDB [(none)]> use catalogdb;
680 MariaDB [catalogdb]> update identity_services set IDENTITY_URL="http://10.20.30.40:5000/v2.0" where ID="DEFAULT_KEYSTONE";
682 Query OK, 1 row affected (0.01 sec)
683 Rows matched: 1 Changed: 1 Warnings: 0
MariaDB [catalogdb]> select * from identity_services;
685 +------------------+----------------------------------------------+----------------------+----------------------------------+--------------+-------------+-----------------+----------------------+------------------------------+-----------------+---------------------+---------------------+
686 | ID | IDENTITY_URL | MSO_ID | MSO_PASS | ADMIN_TENANT | MEMBER_ROLE | TENANT_METADATA | IDENTITY_SERVER_TYPE | IDENTITY_AUTHENTICATION_TYPE | LAST_UPDATED_BY | CREATION_TIMESTAMP | UPDATE_TIMESTAMP |
687 +------------------+----------------------------------------------+----------------------+----------------------------------+--------------+-------------+-----------------+----------------------+------------------------------+-----------------+---------------------+---------------------+
688 | DEFAULT_KEYSTONE | http://10.20.30.40:5000 | vnf_user | c124921a3a0efbe579782cde8227681e | service | admin | 1 | KEYSTONE | USERNAME_PASSWORD | FLYWAY | 2019-04-29 09:13:56 | 2019-04-29 09:13:56 |
689 | RAX_KEYSTONE | https://identity.api.rackspacecloud.com/v2.0 | RACKSPACE_ACCOUNT_ID | RACKSPACE_ACCOUNT_APIKEY | service | admin | 1 | KEYSTONE | RACKSPACE_APIKEY | FLYWAY | 2019-04-29 9:13:56 | 2019-04-29 09:13:56 |
690 +------------------+----------------------------------------------+----------------------+----------------------------------+--------------+-------------+-----------------+----------------------+------------------------------+-----------------+---------------------+---------------------+
692 rows in set (0.00 sec)
**Hint:** deleting a VF module from AAI directly (e.g. when it is stuck in "pending-delete" state)
701 [root@tomas-infra robot]# curl -i -X DELETE --user aai@aai.onap.org:demo123456! -H 'Accept: application/json' -H 'Content-Type: application/json' -H 'X-FromAppId: MSO' -H 'X-TransactionId: 89273498' https://tomas-node0:30233/aai/v14/network/generic-vnfs/generic-vnf/<vnfid>/vf-modules/vf-module/<vf-module-id>?resource-version=<version from VF info> --insecure
708 [root@tomas-infra robot]# curl -i -X DELETE --user aai@aai.onap.org:demo123456! -H 'Accept: application/json' -H 'Content-Type: application/json' -H 'X-FromAppId: MSO' -H 'X-TransactionId: 89273498' https://tomas-node0:30233/aai/v14/network/generic-vnfs/generic-vnf/959e279f-66be-463f-b9b0-078df5531c17/vf-modules/vf-module/ba07bae9-bc39-474f-aee6-69441f05f08f?resource-version=1556798583277 --insecure
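
The resource-version needed for the DELETE can first be looked up with a GET on the same vf-module resource (same credentials and headers as the other AAI calls in this guide); a minimal sketch::

  curl -k -i -X GET --user aai@aai.onap.org:demo123456! -H 'Accept: application/json' -H 'X-FromAppId: MSO' -H 'X-TransactionId: 89273498' https://tomas-node0:30233/aai/v14/network/generic-vnfs/generic-vnf/<vnfid>/vf-modules/vf-module/<vf-module-id> --insecure
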
**After this part we should have all 3 VMs running; this concludes the vFWCL instantiation. The next parts are needed for the closed loop to actually happen.**
713 Step 9. Run heatbridge
714 ======================
To distribute info to AAI about the VMs that were spawned in the previous step:
721 ./demo-k8s.sh <namespace> heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>
722 Run heatbridge against the stack for the given service instance and service where:
723 - <stack_name> is the Instance Name used in SDNC Preload Step
724 - <service_instance_id> is the id we checked from GUI
725 - <service> is either vFWSNK or vPKG (hardcoded in robot scripts)
- <oam-ip-address> is vfw_private_ip_2 (see the Hint below)
733 [root@tomas-infra robot]# ./demo-k8s.sh onap heatbridge vFWSNK-Module-1 d99c5026-719a-432d-b5bd-a25dc2cf6a4b vFWSNK 10.0.128.121
**Hint**: According to `https://wiki.onap.org/display/DW/IP+Addresses+in+AAI <https://wiki.onap.org/display/DW/IP+Addresses+in+AAI>`_,
the ipv4-oam-address is obtained from the vfw_private_ip_2 variable of the vFWSNK VNF (vFWSNK/base_vfw.yaml),
which we can see from "openstack stack show".
e.g. vfw_private_ip_2: 10.0.128.121
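
A minimal sketch of reading that value with the OpenStack CLI, assuming the tenant credentials are sourced and using the stack name from the SDNC preload step::

  # the preloaded parameters (including vfw_private_ip_2) are shown in the stack details
  openstack stack show vFWSNK-Module-1 -c parameters -f json
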
744 [root@tomas-infra robot]# ./demo-k8s.sh onap heatbridge vFWSNK-Module-1 d99c5026-719a-432d-b5bd-a25dc2cf6a4b vFWSNK 10.0.128.121
745 Number of parameters: 6
747 ++ kubectl --namespace onap get pods
750 + POD=onap-robot-robot-5576c8f6cc-rjbkf
751 + ETEHOME=/var/opt/OpenECOMP_ETE
752 ++ kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-rjbkf --
753 bash -c 'ls -1q /share/logs/ | wc -l'
754 + export GLOBAL_BUILD_NUMBER=15
755 + GLOBAL_BUILD_NUMBER=15
757 + OUTPUT_FOLDER=0015_demo_heatbridge
759 + VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'
760 + kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-rjbkf -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -v HB_STACK:vFWSNK-Module-1 -v HB_SERVICE_INSTANCE_ID:d99c5026-719a-432d-b5bd-a25dc2cf6a4b -v HB_SERVICE:vFWSNK -v HB_IPV4_OAM_ADDRESS:10.0.128.121 -d /share/logs/0015_demo_heatbridge -i heatbridge --display 105
761 Starting Xvfb on display :105 with res 1280x1024x24
762 Executing robot tests at log level TRACE
763 ==============================================================================
765 ==============================================================================
766 Testsuites.Demo :: Executes the VNF Orchestration Test cases including
768 ==============================================================================
769 Run Heatbridge :: Try to run heatbridge
770 Set VNF ProvStatus: 4831ed46-a19f-4f7c-89b3-e078c3509138 to ACTIVE | PASS |
771 ------------------------------------------------------------------------------
772 Testsuites.Demo :: Executes the VNF Orchestration Test cases inclu... | PASS |
773 1 critical test, 1 passed, 0 failed
774 1 test total, 1 passed, 0 failed
775 ==============================================================================
777 1 critical test, 1 passed, 0 failed
778 1 test total, 1 passed, 0 failed
779 ==============================================================================
780 Output: /share/logs/0015_demo_heatbridge/output.xml
781 Log: /share/logs/0015_demo_heatbridge/log.html
782 Report: /share/logs/0015_demo_heatbridge/report.html
We are not sure whether AAI needs info from vPKG-Module-1 or not; information on the wiki pages is contradictory. Nevertheless, the following can be done to be sure:
790 [root@tomas-infra robot]# ./demo-k8s.sh onap heatbridge vPKG-Module-1 d99c5026-719a-432d-b5bd-a25dc2cf6a4b vPKG 10.0.128.121
791 Number of parameters: 6
793 ++ kubectl --namespace onap get pods
796 + POD=onap-robot-robot-5576c8f6cc-rjbkf
797 + ETEHOME=/var/opt/OpenECOMP_ETE
798 ++ kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-rjbkf --
799 bash -c 'ls -1q /share/logs/ | wc -l'
800 + export GLOBAL_BUILD_NUMBER=16
801 + GLOBAL_BUILD_NUMBER=16
803 + OUTPUT_FOLDER=0016_demo_heatbridge
805 + VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'
806 + kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-rjbkf -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -v HB_STACK:vPKG-Module-1 -v HB_SERVICE_INSTANCE_ID:d99c5026-719a-432d-b5bd-a25dc2cf6a4b -v HB_SERVICE:vPKG -v HB_IPV4_OAM_ADDRESS:10.0.128.121 -d /share/logs/0016_demo_heatbridge -i heatbridge --display 106
807 Starting Xvfb on display :106 with res 1280x1024x24
808 Executing robot tests at log level TRACE
809 ==============================================================================
811 ==============================================================================
812 Testsuites.Demo :: Executes the VNF Orchestration Test cases including
814 ==============================================================================
815 Run Heatbridge :: Try to run heatbridge
816 Set VNF ProvStatus: f611b9d5-8715-4963-baba-e9c06e1a80cb to ACTIVE | PASS |
817 ------------------------------------------------------------------------------
818 Testsuites.Demo :: Executes the VNF Orchestration Test cases inclu... | PASS |
819 1 critical test, 1 passed, 0 failed
820 1 test total, 1 passed, 0 failed
821 ==============================================================================
823 1 critical test, 1 passed, 0 failed
824 1 test total, 1 passed, 0 failed
825 ==============================================================================
826 Output: /share/logs/0016_demo_heatbridge/output.xml
827 Log: /share/logs/0016_demo_heatbridge/log.html
828 Report: /share/logs/0016_demo_heatbridge/report.html
831 Step 10. Verify the existence of drools controller and facts
832 ============================================================
835 Check that controller is enabled
839 [root@tomas-infra policy]# kubectl exec -it onap-policy-drools-0 -n onap bash
840 policy@onap-policy-drools-0:/tmp/policy-install$ policy status
842 [drools-pdp-controllers]
843 L []: Policy Management (pid 3839) is running
844 21 cron jobs installed.
848 controlloop-casablanca 1.3.7 disabled
849 test-transaction 1.3.7 disabled
851 healthcheck 1.3.7 enabled
852 session-persistence 1.3.7 disabled
853 pooling-dmaap 1.3.7 disabled
854 active-standby-management 1.3.7 disabled
855 controlloop-utils 1.3.7 disabled
856 state-management 1.3.7 disabled
857 controlloop-trans 1.3.7 enabled
858 controlloop-amsterdam 1.3.7 enabled
859 distributed-locking 1.3.7 enabled
Alternatively, using REST:
[root@tomas-infra policy]# curl -k --silent --user 'demo@people.osaaf.org:demo123456!' -X GET https://tomas-node0:30221/policy/pdp/engine/controllers/amsterdam/drools
871 Amsterdam drools controller should also have facts available, which can be checked using REST:
[root@tomas-infra policy]# curl -k --silent --user 'demo@people.osaaf.org:demo123456!' -X GET https://tomas-node0:30221/policy/pdp/engine/controllers/amsterdam/drools/facts
**If the amsterdam controller is not enabled or the facts can't be found, follow the instructions from** `PolicyInstallation <https://wiki.onap.org/display/DW/ONAP+Policy+Framework%3A+Installation+of+Amsterdam+Controller+and+vCPE+Policy>`_
**Follow the part for installing the amsterdam controller only. Note that the controlloop-utils feature must remain disabled, as it is used just for internal simulation as described in** `ClSimulation <https://docs.onap.org/en/latest/submodules/policy/engine.git/docs/platform/clsimulation.html>`_
**and will cause the real E2E use case to fail if activated.**
Demo policies are not pushed automatically due to the following problem (this should not occur if the OOM charts are patched before deployment as suggested in Step 1):
894 policy@onap-policy-pap-7666c9b6bb-rkpz2:/tmp/policy-install$ /tmp/policy-install/config/push-policies.sh
895 Upload BRMS Param Template
896 --2019-05-09 13:13:44--
https://git.onap.org/policy/drools-applications/plain/controlloop/templates/archetype-cl-amsterdam/src/main/resources/archetype-resources/src/main/resources/__closedLoopControlName__.drl?h=casablanca
898 Resolving git.onap.org (git.onap.org)... 10.8.8.8
Connecting to git.onap.org (git.onap.org)|10.8.8.8|:443... connected.
900 ERROR: cannot verify git.onap.org's certificate, issued by '/C=PL/ST=Poland/L=Krakow/O=Samsung':
901 Unable to locally verify the issuer's authority.
902 To connect to git.onap.org insecurely, use `--no-check-certificate'.
The push-policies.sh script is mounted (read-only) into the pap container:
910 "/etc/localtime:/etc/localtime:ro",
911 "/var/lib/kubelet/pods/90b0584e-70cd-11e9-a694-022ec4c4d330/volume-subpaths/pe-pap/pap/1:/tmp/policy-install/config/push-policies.sh:ro",
Therefore it can be patched even after deployment by executing the following steps directly on the kube node where the pap container is running:
918 [root@tomas-infra pe]# kubectl get pods -n onap -o=wide | grep pap
919 onap-policy-pap-7666c9b6bb-rkpz2 2/2
920 Running 0 1d 10.42.110.165
921 tomas-node0.novalocal <none>
924 [root@tomas-infra pe]# ssh tomas-node0
925 Last login: Thu May 9 13:32:13 2019 from gerrit.onap.org
926 [root@tomas-node0 ~]# vi /var/lib/kubelet/pods/90b0584e-70cd-11e9-a694-022ec4c4d330/volume-subpaths/pe-pap/pap/1
and add the missing --no-check-certificate to it:
933 wget -O cl-amsterdam-template.drl https://git.onap.org/policy/drools-applications/plain/controlloop/templates/archetype-cl-amsterdam/src/main/resources/archetype-resources/src/main/resources/__closedLoopControlName__.drl?h=casablanca --no-check-certificate
936 Afterwards we can re-run the push-policies.sh using:
940 [root@tomas-infra pe]# kubectl exec -it onap-policy-pap-7666c9b6bb-rkpz2 -n onap bash
941 Defaulting container name to pap.
942 Use 'kubectl describe pod/onap-policy-pap-7666c9b6bb-rkpz2 -n onap' to see all of the containers in this pod.
943 policy@onap-policy-pap-7666c9b6bb-rkpz2:/tmp/policy-install$ export PRELOAD_POLICIES=true
944 policy@onap-policy-pap-7666c9b6bb-rkpz2:/tmp/policy-install$ /tmp/policy-install/config/push-policies.sh
To verify that the policies were pushed correctly, open **Policy Editor** as the Demo user (note that this also does not work from Portal directly, see `https://jira.onap.org/browse/PORTAL-554 <https://jira.onap.org/browse/PORTAL-554>`_)
move to the **"com"** folder; if you see the policies as in the screenshot (especially BRMSParamvFirewall and MicroServicevFirewall), the policies were pushed correctly.
953 Step 12. Update policy
954 ======================
957 12.1 Obtain vPKG Invariant UUID (from TOSCA_CSAR file)
958 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
960 Download Tosca CSAR file:
963 - From ONAP-Portal’s SDC under Designer user
964 - move to demovFWCL Service
965 - open TOSCA Artifacts
966 - click download TOSCA_CSAR
- open the CSAR e.g. in Total Commander (or use the command-line sketch below) and find the part relevant for the vPKG VNF
974 \\Definitions\\service-Demovfwcl-template.yml
976 and get invariantUUID from it
982 ad3826a0-36ff-4d7e-b393 0:
983 type: org.openecomp.resource.vf.Ad3826a036ff4d7eB393
985 invariantUUID: f4fa2471-167c-4353-81d5-7270eee74fe1
986 UUID: 597650d2-c048-46a0-8324-bfe535d7f749
988 2f35722e-2934-451a-98e3-f0cf123fc48c
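
The same value can also be pulled from the CSAR on the command line; a minimal sketch, assuming the CSAR was downloaded as service-Demovfwcl-csar.csar::

  unzip -o service-Demovfwcl-csar.csar Definitions/service-Demovfwcl-template.yml -d /tmp/demovfwcl

  # show each invariantUUID together with its surrounding node definition lines
  grep -B3 -A2 invariantUUID /tmp/demovfwcl/Definitions/service-Demovfwcl-template.yml
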
993 12.2 Update vFW policy script
994 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This step is automated within OOM by the following script:
1000 oom/kubernetes/policy/charts/drools/resources/scripts/update-vfw-op-policy.sh
This script requires the following parameters:
1006 [root@tomas-infra scripts]# ./update-vfw-op-policy.sh
1007 Usage: update-vfw-op-policy.sh <k8s-host> <policy-pdp-node-port> <policy-drools-node-port> <resource-id>
1010 ./update-vfw-op-policy.sh 10.8.8.19 30237 30221 e32a234a-9701-44d0-b2c8-4d6c38b045c6
However, it did not work for us, so we ran the steps manually:
1014 *(a) Delete original Firewall policy*
1018 [root@tomas-infra scripts]# kubectl exec -it onap-policy-pdp-0 -n onap bash
policy@onap-policy-pdp-0:/tmp/policy-install$ curl -v -k -X DELETE --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{ "pdpGroup": "default", "policyComponent" : "PDP", "policyName": "com.BRMSParamvFirewall", "policyType": "BRMS_Param" }' https://localhost:8081/pdp/api/deletePolicy
1021 *(b) Update Firewall policy*
The resource_id can be obtained from step 12.1:
1027 RESOURCE_ID=f4fa2471-167c-4353-81d5-7270eee74fe1
1029 curl -v -k -X PUT --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{ "policyConfigType": "BRMS_PARAM", "policyName": "com.BRMSParamvFirewall", "policyDescription": "BRMS Param vFirewall policy", "policyScope": "com", "attributes": { "MATCHING": { "controller": "amsterdam" }, "RULE": { "templateName": "ClosedLoopControlName", "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a", "controlLoopYaml": "controlLoop%3A%0D%0A++version%3A+2.0.0%0D%0A++controlLoopName%3A+ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a%0D%0A++trigger_policy%3A+unique-policy-id-1-modifyConfig%0D%0A++timeout%3A+1200%0D%0A++abatement%3A+false%0D%0A+%0D%0Apolicies%3A%0D%0A++-+id%3A+unique-policy-id-1-modifyConfig%0D%0A++++name%3A+modify+packet+gen+config%0D%0A++++description%3A%0D%0A++++actor%3A+APPC%0D%0A++++recipe%3A+ModifyConfig%0D%0A++++target%3A%0D%0A++++++%23+TBD+-+Cannot+be+known+until+instantiation+is+done%0D%0A++++++resourceID%3A+'${RESOURCE_ID}'%0D%0A++++++type%3A+VNF%0D%0A++++retry%3A+0%0D%0A++++timeout%3A+300%0D%0A++++success%3A+final_success%0D%0A++++failure%3A+final_failure%0D%0A++++failure_timeout%3A+final_failure_timeout%0D%0A++++failure_retries%3A+final_failure_retries%0D%0A++++failure_exception%3A+final_failure_exception%0D%0A++++failure_guard%3A+final_failure_guard" } } }' `https://localhost:8081/pdp/api/updatePolicy <https://localhost:8081/pdp/api/updatePolicy>`_
*(c) Push the policy (this will trigger a maven build on brmsgw)*
curl -v -k --silent -X PUT --header 'Content-Type: application/json' --header 'Accept: text/plain' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{ "pdpGroup": "default", "policyName": "com.BRMSParamvFirewall", "policyType": "BRMS_Param" }' https://localhost:8081/pdp/api/pushPolicy
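
The maven build triggered by the push runs in the brmsgw pod and can be followed there before restarting drools; a minimal sketch, with the exact pod name being deployment specific::

  kubectl get pods -n onap | grep brmsgw
  kubectl logs -n onap <onap-policy-brmsgw-pod> -f
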
1044 [root@tomas-infra scripts]# kubectl exec -it onap-policy-drools-0 -n onap bash
1045 policy@onap-policy-drools-0:/tmp/policy-install$ source /opt/app/policy/etc/profile.d/env.sh && policy stop && sleep 1 && policy start
1046 [drools-pdp-controllers]
1047 L []: Stopping Policy Management... Policy Management (pid=3839) is stopping... Policy Management has stopped.
1048 [drools-pdp-controllers]
1049 L []: Policy Management (pid 12172) is running
1052 At this step policy should be updated and drools rules for Firewall should be available/loaded into amsterdam controller,
1053 which can be verified using:
curl -k --silent --user 'demo@people.osaaf.org:demo123456!' -X GET https://tomas-node0:30221/policy/pdp/engine/controllers/amsterdam/drools/facts/closedloop-amntrolloop.Params
We can use the following robot script for creating the APPC mount:
1066 demo-k8s.sh <namespace> appc <module_name>
1067 - provide APPC with vFW module mount point for closedloop
In our case the public network used for the firewall VMs is not called *public*:
openStackPublicNetId: "9403ceea-0738-4908-a826-316c8541e4bb"
1080 neutron net-list | grep 9403ceea-0738-4908-a826-316c8541e4bb
1081 neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
1082 | 9403ceea-0738-4908-a826-316c8541e4bb | rc3-offline-network | b1ce7742d956463999923ceaed71786e | 1782c82c-cd92-4fb6-a292-5e396afe63ec 10.8.8.0/24 |
Because of that, the robot script will fail:
1088 [root@tomas-infra robot]# ./demo-k8s.sh onap appc vPKG-Module-1
1090 Number of parameters: 3
1092 ++ kubectl --namespace onap get pods
1095 + POD=onap-robot-robot-5576c8f6cc-xfh9q
1096 + ETEHOME=/var/opt/OpenECOMP_ETE
1097 ++ kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-xfh9q -- bash -c 'ls -1q /share/logs/ | wc -l'
1099 export GLOBAL_BUILD_NUMBER=9
1101 + GLOBAL_BUILD_NUMBER=9
1104 + OUTPUT_FOLDER=0009_demo_appc
1108 + VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'
1109 + kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-xfh9q -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -v MODULE_NAME:vPKG-Module-1 -d /share/logs/0009_demo_appc -i APPCMountPointDemo --display 99
1111 Starting Xvfb on display :99 with res 1280x1024x24
1112 Executing robot tests at log level TRACE
1113 ==============================================================================
1115 ==============================================================================
1116 Testsuites.Demo :: Executes the VNF Orchestration Test cases including
1118 ==============================================================================
1119 Create APPC Mount Point |
1122 {u'OS-EXT-STS:task_state': None, u'addresses': {u'demofwlsnk_unprotecteddemo': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:04:42:b0', u'version': 4, u'addr': u'192.168.10.200', u'OS-EXT-IPS:type': u'fixed'}], u'rc3-offline-network': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:9c:51:f7', u'version': 4, u'addr': u'10.8.8.26', u'OS-EXT-IPS:type': u'fixed'}], u'onap_private_vFWCL': [{u'OS-EXT-IPS-MAC:mac_addr': u'fa:16:3e:1d:27:e4', u'version': 4, u'addr': u'10.0.128.123', u'OS-EXT-IPS:type': u'fixed'}]}, u'links': [{u'href': u'http://10.20.30.40:8774/v2.1/b1ce7742d956463999923ceaed71786e/servers/d9d926ec-d14f-48f1-b473-35d43095b2e7', u'rel': u'self'}, {u'href': u'http://10.20.30.40:8774/b1ce7742d956463999923ceaed71786e/servers/d9d926ec-d14f-48f1-b473-35d43095b2e7', u'rel': u'bookmark'}], u'image': {u'id': u'fa1d0c4e-63f2-4458-b571-6c8fd17b917b', u'links': [{u'href': u'http://10.20.30.40:8774/b1ce7742d956463999923ceaed71786e/images/fa1d0c4e-63f2-4458-b571-6c8fd17b917b', u'rel': u'bookmark'}]}, u'OS-EXT-STS:vm\_state': u'active', u'OS-EXT-SRV-ATTR:instance\_name': u'demofwl01pgndemo-d9d926ec', u'OS-SRV-USG:launched\_at': u'2019-05-09T11:42:22.000000', u'flavor': {u'id': u'3', u'links': [{u'href': u'http://10.20.30.40:8774/b1ce7742d956463999923ceaed71786e/flavors/3', u'rel': u'bookmark'}]}, u'id': u'd9d926ec-d14f-48f1-b473-35d43095b2e7', u'security\_groups': [{u'name': u'onap\_sg'}, {u'name': u'default'}, {u'name': u'onap\_sg'}], u'user\_id': u'4fcfcc93d94d4534b71593fa3039801c', u'OS-DCF:diskConfig': u'MANUAL', u'accessIPv4': u'', u'accessIPv6': u'', u'progress': 0, u'OS-EXT-STS:power\_state': 1, u'OS-EXT-AZ:availability\_zone': u'nova', u'metadata': {u'vf\_module\_id': u'5c63abb2-83fa-4bf1-8264-44a36f12252a', u'vnf\_id': u'9a0d7dbb-f4a3-4f1f-a781-3b119722848f'}, u'status': u'ACTIVE', u'updated': u'2019-05-09T11:42:22Z', u'hostId': u'92f5e26cd9accaa41b8e491925f59537cc421acffe2da5ab298d0d4a', u'OS-EXT-SRV-ATTR:host': u'ONAP-3', u'OS-SRV-USG:terminated\_at': None, u'key\_name': u'vfw\_keydemo\_t2z5', u'OS-EXT-SRV-ATTR:hypervisor\_hostname': u'ONAP-3', u'name': u'demofwl01pgndemo', u'created': u'2019-05-09T11:42:07Z', u'tenant\_id': u'b1ce7742d956463999923ceaed71786e', u'os-extended-volumes:volumes\_attached': [], u'config\_drive': u''}/ public Not Found
1123 ------------------------------------------------------------------------------
1124 Testsuites.Demo :: Executes the VNF Orchestration Test cases inclu... | FAIL |
1125 1 critical test, 0 passed, 1 failed
1126 1 test total, 0 passed, 1 failed
1127 ==============================================================================
1129 1 critical test, 0 passed, 1 failed
1130 1 test total, 0 passed, 1 failed
1131 ==============================================================================
1132 Output: /share/logs/0009_demo_appc/output.xml
1133 Log: /share/logs/0009_demo_appc/log.html
1134 Report: /share/logs/0009_demo_appc/report.html
This can be patched in the robot scripts:
1140 root@tomas-infra robot]# kubectl get pods -n onap | grep robot
1141 onap-robot-robot-5576c8f6cc-xfh9q 1/1
1144 [root@tomas-infra robot]# kubectl cp -n onap onap-robot-robot-5576c8f6cc-xfh9q:/var/opt/OpenECOMP_ETE/robot/resources/demo_preload.robot ~/
1147 [root@tomas-infra robot]# vi ~/demo_preload.robot
1149 # change network_name from network_name=public to match your network name
1152 [Arguments] ${vf_module_name}
1153 Run Openstack Auth Request auth
1154 ${status} ${stack_info}= Run Keyword and Ignore Error Wait or Stack to Be Deployed auth ${vf_module_name} timeout=120s
1155 Run Keyword if '${status}' == 'FAIL' FAIL ${vf_module_name} Stack is not found
1156 ${stack_id}= Get From Dictionary ${stack_info} id
1157 ${server_list}= Get Openstack Servers auth
1158 ${vpg_name_0}= Get From Dictionary ${stack_info} vpg_name_0
1159 ${vnf_id}= Get From Dictionary ${stack_info} vnf_id
${vpg_public_ip}= Get Server Ip ${server_list} ${stack_info} vpg_name_0 network_name=rc3-offline-network
1162 ${vpg_oam_ip}= Get From Dictionary ${stack_info} vpg_private_ip_1
1163 #${appc}= Create Mount Point In APPC ${vpg_name_0} ${vpg_oam_ip}
1164 #${appc}= Create Mount Point In APPC ${vnf_id} ${vpg_oam_ip}
1165 ${appc}= Create Mount Point In APPC ${vnf_id} ${vpg_public_ip}
1169 [root@tomas-infra robot]# kubectl cp -n onap ~/demo_preload.robot onap-robot-robot-5576c8f6cc-xfh9q:/var/opt/OpenECOMP_ETE/robot/resources/demo_preload.robot
1175 [root@tomas-infra robot]# ./demo-k8s.sh onap appc vPKG-Module-1
1176 Number of parameters: 3
1178 ++ kubectl --namespace onap get pods
1181 + POD=onap-robot-robot-5576c8f6cc-rjbkf
1182 + ETEHOME=/var/opt/OpenECOMP_ETE
1183 ++ kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-rjbkf --
1184 bash -c 'ls -1q /share/logs/ | wc -l'
1185 + export GLOBAL_BUILD_NUMBER=19
1186 + GLOBAL_BUILD_NUMBER=19
+ OUTPUT_FOLDER=0019_demo_appc
1190 + VARIABLEFILES='-V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py'
+ kubectl --namespace onap exec onap-robot-robot-5576c8f6cc-rjbkf -- /var/opt/OpenECOMP_ETE/runTags.sh -V /share/config/vm_properties.py -V /share/config/integration_robot_properties.py -V /share/config/integration_preload_parameters.py -v MODULE_NAME:vPKG-Module-1 -d /share/logs/0019_demo_appc -i
1192 APPCMountPointDemo --display 109
1193 Starting Xvfb on display :109 with res 1280x1024x24
1194 Executing robot tests at log level TRACE
1195 ==============================================================================
1197 ==============================================================================
1198 Testsuites.Demo :: Executes the VNF Orchestration Test cases including
1200 ==============================================================================
1201 Create APPC Mount Point | PASS |
1202 ------------------------------------------------------------------------------
1203 Testsuites.Demo :: Executes the VNF Orchestration Test cases inclu... | PASS |
1204 1 critical test, 1 passed, 0 failed
1205 1 test total, 1 passed, 0 failed
1206 ==============================================================================
1208 1 critical test, 1 passed, 0 failed
1209 1 test total, 1 passed, 0 failed
1210 ==============================================================================
1211 Output: /share/logs/0019_demo_appc/output.xml
1212 Log: /share/logs/0019_demo_appc/log.html
1213 Report: /share/logs/0019_demo_appc/report.html
1217 Step 14. Resolving potential problems
1218 =====================================
**After this step the closed loop should be visible in Darkstat on the vSink VM**
**If something goes wrong, the following hints can be used for troubleshooting:**
1230 *(1) Check that VES events are coming from vFW VM to VesCollector*
curl -X GET -H 'Accept: application/json' -H 'Content-Type: application/cambria' http://tomas-node0:30227/events/unauthenticated.VES_MEASUREMENT_OUTPUT/group1/C1
If the output is empty, one can verify the DCAE collector IP from the vFW VM.
1240 login to vfwsnk-1 and check
1244 root@vfwsnk-1:~# cat /opt/config/dcae_collector_ip.txt
1248 root@vfwsnk-1:~# cat /opt/config/dcae_collector_port.txt
The VES collector is by default listening on container port 8080, exposed as node port 30235:
1255 [root@tomas-infra robot]# kubectl get services -n onap | grep xdcae-ves-collector
1256 xdcae-ves-collector NodePort 10.43.195.188 <none> 8080:30235/TCP 2d
That combination of IP and node port should be reachable from the vFW VM:
1262 ubuntu@vfwsnk-1:~$ telnet 10.8.8.22 30235
1264 Connected to 10.8.8.22.
1265 Escape character is '^]'.
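
If the port is reachable but no events show up on the topic, the VES collector logs can be checked as well; a minimal sketch, with the exact pod name being deployment specific::

  kubectl get pods -n onap | grep ves-collector
  kubectl logs -n onap <dcae-ves-collector-pod> | tail -50
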
*(2) Check if OnSet events are created by TCA analytics*
1273 # OnSet events from TCA to policy
curl -i -X GET -H 'Accept: application/json' http://tomas-node0:30227/events/unauthenticated.DCAE_CL_OUTPUT/group1/c1
If the list is empty, go to the analytics pod and check the TCA logs:
1281 [root@tomas-infra robot]# kubectl exec -it dep-dcae-tca-analytics-68dcfcd767-wv97h -n onap bash
1282 root@dcae-tca-analytics:~# cat /opt/cdap/sdk/logs/cdap-debug.log
In our lab the following problem was visible:
1288 2019-05-09 15:13:54,858 - DEBUG [FlowletProcessDriver-TCAVESThresholdViolationCalculatorFlowlet-0-executor:o.o.d.a.a.c.s.p.AbstractMessageProcessor@157] - Processor: TCACEFPolicyEventNameFilter, Successful Completion Message: Policy Event Name and CEF Message Event Name match successful.Message EventName: vFirewallBroadcastPackets, Policy Event Names: vFirewallBroadcastPackets,vLoadBalancer,Measurement\_vGMUX, Incoming Message: {"event":{"commonEventHeader":{"startEpochMicrosec":1557414790568806,"eventId":"TrafficStats\_1.2.3.4","nfcNamingCode":"vVNF","reportingEntityId":"No UUID available","internalHeaderFields":{"collectorTimeStamp":"Thu, 05 09 2019 03:13:17 UTC"},"eventType":"HTTP request rate","priority":"Normal","version":3,"reportingEntityName":"fwll","sequence":0,"domain":"measurementsForVfScaling","lastEpochMicrosec":1557414800780189,"eventName":"vFirewallBroadcastPackets","sourceName":"vFWSNK-1","nfNamingCode":"vVNF"},"measurementsForVfScalingFields":{"cpuUsageArray":[{"percentUsage":0,"cpuIdentifier":"cpu1","cpuIdle":100,"cpuUsageSystem":0,"cpuUsageUser":0}],"measurementInterval":10,"requestRate":6444,"vNicUsageArray":[{"transmittedOctetsDelta":0,"receivedTotalPacketsDelta":102,"vNicIdentifier":"eth0","valuesAreSuspect":"true","transmittedTotalPacketsDelta":0,"receivedOctetsDelta":4386}],"measurementsForVfScalingVersion":2}}}
The alarm is not triggered because there is a mismatch in the tca_policy settings.
One can update the policy and replace vNicPerformanceArray with vNicUsageArray in the CDAP GUI.
A similar problem is documented in https://lists.onap.org/g/onap-discuss/topic/27329400
Please note that one needs to restart (stop/start) TCAVESCollectorFlow to propagate the new tca_policy:
1301 - Click on TCAVESAlertsTable
1302 - Click on stopping TCAVESCollectorFlow
1303 - Click on starting TCAVESCollectorFlow once again.
.. note:: We had to repeat this procedure a couple of times (change tca_policy, restart flow) to get it working with the new tca_policy; sometimes it resets back to the default policy after 15 seconds for unknown reasons.
The effective policy should be printed in cdap-debug.log after the flow restart:
1311 root@dcae-tca-analytics:/opt/cdap/sdk/logs# tail -f cdap-debug.log | grep Effective
Also, the DCAE_CL_OUTPUT topic should start having some records, which can be verified using:
1317 [root@tomas-infra ~]# curl -i -X GET -H 'Accept: application/json' http://tomas-node0:30227/events/unauthenticated.DCAE_CL_OUTPUT/group1/c1
1320 **Further hints/notes (less frequently needed)**
In some deployments we had to change the hostname of the vFW VM, because it was putting its hostname as a UUID into VES events;
such a hostname did not match the AAI database. This was fixed by changing the hostname to the VNF name we used for the vFW VM.
1328 [root@tomas-infra robot]# ssh ubuntu@10.8.8.17
ubuntu@vfwsnk-1:~$ sudo su -
1330 root@vfwsnk-1:~# vi /etc/hostname
1333 root@vfwsnk-1:~# cat /etc/hostname
1335 root@vfwsnk-1:~# reboot
1336 Broadcast message from ubuntu@vfwsnk-1
1337 (/dev/pts/0) at 15:17 ...
1339 The system is going down for reboot NOW!
1340 root@vfwsnk-1:~# Connection to 10.8.8.17 closed by remote host.
1342 Check amsterdam controller for existence
curl -k --silent --user 'demo@people.osaaf.org:demo123456!' -X GET https://tomas-node0:30221/policy/pdp/engine/controllers/amsterdam/
1349 Check drools fact for amsterdam controller
curl -k --silent --user 'demo@people.osaaf.org:demo123456!' -X GET https://tomas-node0:30221/policy/pdp/engine/controllers/amsterdam/drools/
1355 Check for available topics in DMAAP
1359 curl -i -X GET -H 'Accept: application/json' http://tomas-node0:30227/topics
1362 "ECOMP-PORTAL-OUTBOX-APP1",
1363 "__consumer_offsets",
1365 "ECOMP-PORTAL-OUTBOX-POL1",
1366 "SDC-DISTR-STATUS-TOPIC-AUTO",
1367 "unauthenticated.SEC_HEARTBEAT_OUTPUT",
1369 "PDPD-CONFIGURATION",
1371 "SDC-DISTR-NOTIF-TOPIC-AUTO",
1373 "unauthenticated.DCAE_CL_OUTPUT",
1374 "POA-RULE-VALIDATION",
1376 "unauthenticated.VES_MEASUREMENT_OUTPUT",
1377 "ECOMP-PORTAL-OUTBOX-VID1",
1378 "ECOMP-PORTAL-INBOX",
1379 "ECOMP-PORTAL-OUTBOX-SDC1",
1380 "org.onap.dmaap.mr.PNF_READY",
1381 "org.onap.dmaap.mr.PNF_REGISTRATION",
1384 "ECOMP-PORTAL-OUTBOX-DBC1"
1388 Check for available VNFs in AAI
1392 curl -i -X GET --user aai@aai.onap.org:demo123456! -H 'Accept:application/json' -H 'Content-Type: application/json' -H 'X-FromAppId:MSO' -H 'X-TransactionId:73498' https://tomas-node0:30233/aai/v14/network/generic-vnfs/ --insecure
1395 Check for demo service instances in AAI
1399 curl -i -X GET --user aai@aai.onap.org:demo123456! -H 'Accept:application/json' -H 'Content-Type: application/json' -H 'X-FromAppId:MSO' -H 'X-TransactionId: 89273498' https://tomas-node0:30233/aai/v14/business/customers/customer/Demonstration/service-subscriptions/service-subscription/vFWCL/service-instances/service-instance --insecure
1401 Check APPC for currently active PG streams
1405 # VNF ID of vPKG-1 VF is used in node index
1406 curl -i -X GET --user appc@appc.onap.org:demo123456! http://tomas-node0:30230/restconf/config/network-topology:network-topology/topology/topology-netconf/node/f611b9d5-8715-4963-baba-e9c06e1a80cb/yang-ext:mount/sample-plugin:sample-plugin/pg-streams
1410 .. |image0| image:: images/image002.jpg
1413 .. |image1| image:: images/image004.jpg
1416 .. |image2| image:: images/image006.jpg
1419 .. |image3| image:: images/image008.jpg
1422 .. |image4| image:: images/image010.jpg
1425 .. |image5| image:: images/image012.jpg
1428 .. |image6| image:: images/image014.jpg
1431 .. |image7| image:: images/image016.jpg
1434 .. |image8| image:: images/image018.jpg
1437 .. |image9| image:: images/image020.jpg
1440 .. |image10| image:: images/image022.jpg
1443 .. |image11| image:: images/image024.jpg
1446 .. |image12| image:: images/image026.jpg
1449 .. |image13| image:: images/image028.jpg
1452 .. |image14| image:: images/image030.jpg
1455 .. |image15| image:: images/image032.jpg
1458 .. |image16| image:: images/image034.jpg
1461 .. |image17| image:: images/image036.jpg
1464 .. |image18| image:: images/image038.gif
1467 .. |image19| image:: images/image040.gif
1470 .. |image20| image:: images/image042.jpg
1473 .. |image21| image:: images/image044.jpg