.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

.. _pdpd-apps-label:

PDP-D Applications
##################

.. contents::
    :depth: 2

Overview
========

PDP-D applications use the PDP-D Engine middleware to provide domain-specific services.
See :ref:`pdpd-engine-label` for a description of the PDP-D infrastructure.

At this time, *Control Loops* are the only supported type of application.

*Control Loop* applications must support at least one of the following *Policy Types*:

- **onap.policies.controlloop.Operational** (Operational Policies for Legacy Control Loops)
- **onap.policies.controlloop.operational.common.Drools** (Tosca Compliant Operational Policies)

Software
========

Source Code repositories
~~~~~~~~~~~~~~~~~~~~~~~~

The PDP-D Applications software resides in the `policy/drools-applications <https://git.onap.org/policy/drools-applications>`__ repository.  The actor libraries introduced in the *frankfurt* release reside in
the `policy/models repository <https://git.onap.org/policy/models>`__.

At this time, the *control loop* application is the only application supported in ONAP.
All the application projects reside under the
`controlloop directory <https://git.onap.org/policy/drools-applications/tree/controlloop>`__.

Docker Image
~~~~~~~~~~~~

See the *drools-applications*
`released versions <https://wiki.onap.org/display/DW/Policy+Framework+Project%3A+Component+Versions>`__
for the latest images:

.. code-block:: bash

    docker pull onap/policy-pdpd-cl:1.6.4

At the time of this writing, *1.6.4* is the latest version.

The *onap/policy-pdpd-cl* image extends the *onap/policy-drools* image with
the *usecases* controller that realizes the *control loop* application.

Usecases Controller
===================

The `usecases <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases>`__
controller is the *control loop* application in ONAP.

The controller has three parts:

* The `drl rules <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/usecases.drl>`__.
* The `kmodule.xml <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/META-INF/kmodule.xml>`__.
* The `dependencies <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/pom.xml>`__.

The *kmodule.xml* specifies only one session, and declares in the *kbase* section the two operational policy types that
it supports.

The *usecases* controller relies on the new *Actor* framework to interact with remote
components as part of a control loop transaction.  The reader is referred to the
*Policy Platform Actor Development Guidelines* in the documentation for further information.

Operational Policy Types
========================

The *usecases* controller supports the following two Operational policy types:

- *onap.policies.controlloop.Operational*
- *onap.policies.controlloop.operational.common.Drools*

The *onap.policies.controlloop.Operational* type is the legacy operational type, used before
the *frankfurt* release.  The *onap.policies.controlloop.operational.common.Drools* type
is the Tosca compliant policy type introduced in *frankfurt*.

The legacy operational policy type is defined in
`onap.policies.controlloop.Operational.yaml <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policytypes/onap.policies.controlloop.Operational.yaml>`__.

The Tosca Compliant Operational Policy Type is defined in
`onap.policies.controlloop.operational.common.Drools <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policytypes/onap.policies.controlloop.operational.common.Drools.yaml>`__.

An example of a Legacy Operational Policy can be found
`here <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policies/vDNS.policy.operational.legacy.input.json>`__.

An example of a Tosca Compliant Operational Policy can be found
`here <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policies/vDNS.policy.operational.input.tosca.json>`__.

Features
========

Since the PDP-D Control Loop Application image extends the PDP-D Engine image (*onap/policy-drools*),
it inherits all of its features and functionality.

The enabled features in the *onap/policy-pdpd-cl* image are:

- **distributed locking**: distributed resource locking.
- **healthcheck**: healthcheck.
- **lifecycle**: enables the lifecycle APIs.
- **controlloop-trans**: control loop transaction tracking.
- **controlloop-management**: generic controller capabilities.
- **controlloop-usecases**: new *controller* introduced in the *guilin* release to realize the ONAP use cases.

The following features are installed but disabled:

- **controlloop-frankfurt**: *controller* used in the *frankfurt* release.
- **controlloop-tdjam**: experimental java-only *controller* to be deprecated post *guilin*.
- **controlloop-utils**: *actor* simulators.

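The status of each feature can be checked from inside the container with the *features* tool; a quick sketch, assuming the container name *PDPD* used later in this document:

.. code-block:: bash

    # list the installed features and whether each is enabled or disabled
    docker exec -it PDPD bash -c "/opt/app/policy/bin/features status"
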
Control Loops Transaction (controlloop-trans)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature tracks Control Loop Transactions and Operations.  These are recorded in
the *$POLICY_LOGS/audit.log* and *$POLICY_LOGS/metrics.log*, and are accessible
through the telemetry APIs.

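For example, the most recent transaction records can be inspected directly in the container; a minimal sketch, relying on the *POLICY_LOGS* variable set in the container environment:

.. code-block:: bash

    # show the latest audit (transaction) and metrics (operation) records
    docker exec -it PDPD bash -c 'tail $POLICY_LOGS/audit.log $POLICY_LOGS/metrics.log'
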
Control Loops Management (controlloop-management)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature installs common control loop application resources and provides
telemetry API extensions.  *Actor* configurations are packaged in this
feature.

Usecases Controller (controlloop-usecases)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the *guilin* release implementation of the ONAP use cases.
It relies on the new *Actor* model framework to carry out a policy's
execution.

Frankfurt Controller (controlloop-frankfurt)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the *frankfurt* controller, which will be deprecated after the
*guilin* release.

TDJAM Controller (controlloop-tdjam)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is an experimental, java-only controller that will be deprecated after the
*guilin* release.

Utilities (controlloop-utils)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature enables *actor simulators* for testing purposes.

Offline Mode
============

The default ONAP installation in *onap/policy-pdpd-cl:1.6.4* is *OFFLINE*.
In this configuration, the *rules* artifact and the *dependencies* all reside in the local
maven repository.  This requires that the maven dependencies be preloaded in the local
repository.

An offline configuration requires two configuration items:

- the *OFFLINE* environment variable set to true (see `values.yaml <https://git.onap.org/oom/tree/kubernetes/policy/values.yaml>`__).
- an override of the default *settings.xml* (see this
  `settings.xml <https://git.onap.org/oom/tree/kubernetes/policy/charts/drools/resources/configmaps/settings.xml>`__ example).

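Outside of an OOM installation, the same two items can be arranged for the standalone container described in the next sections; a hypothetical sketch (it assumes that a *settings.xml* staged in the mounted *config* directory takes precedence over the container default):

.. code-block:: bash

    # REPOSITORY_OFFLINE=true is already part of the environment file below;
    # the maven settings override is staged in the config directory to be mounted
    cp my-settings.xml config/settings.xml
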
Running the PDP-D Control Loop Application in a single container
================================================================

Environment File
~~~~~~~~~~~~~~~~

First create an environment file (in this example *env.conf*) to configure the PDP-D.

.. code-block:: bash

    # SYSTEM software configuration

    POLICY_HOME=/opt/app/policy
    POLICY_LOGS=/var/log/onap/policy/pdpd
    KEYSTORE_PASSWD=Pol1cy_0nap
    TRUSTSTORE_PASSWD=Pol1cy_0nap

    # Telemetry credentials

    TELEMETRY_PORT=9696
    TELEMETRY_HOST=0.0.0.0
    TELEMETRY_USER=demo@people.osaaf.org
    TELEMETRY_PASSWORD=demo123456!

    # nexus repository

    SNAPSHOT_REPOSITORY_ID=
    SNAPSHOT_REPOSITORY_URL=
    RELEASE_REPOSITORY_ID=
    RELEASE_REPOSITORY_URL=
    REPOSITORY_USERNAME=
    REPOSITORY_PASSWORD=
    REPOSITORY_OFFLINE=true

    MVN_SNAPSHOT_REPO_URL=
    MVN_RELEASE_REPO_URL=

    # Relational (SQL) DB access

    SQL_HOST=
    SQL_USER=
    SQL_PASSWORD=

    # AAF

    AAF=false
    AAF_NAMESPACE=org.onap.policy
    AAF_HOST=aaf.api.simpledemo.onap.org

    # PDP-D DMaaP configuration channel

    PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
    PDPD_CONFIGURATION_API_KEY=
    PDPD_CONFIGURATION_API_SECRET=
    PDPD_CONFIGURATION_CONSUMER_GROUP=
    PDPD_CONFIGURATION_CONSUMER_INSTANCE=
    PDPD_CONFIGURATION_PARTITION_KEY=

    # PAP-PDP configuration channel

    POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
    POLICY_PDP_PAP_GROUP=defaultGroup

    # Symmetric Key for encoded sensitive data

    SYMM_KEY=

    # Healthcheck Feature

    HEALTHCHECK_USER=demo@people.osaaf.org
    HEALTHCHECK_PASSWORD=demo123456!

    # Pooling Feature

    POOLING_TOPIC=POOLING

    # PAP

    PAP_HOST=
    PAP_USERNAME=
    PAP_PASSWORD=

    # PAP legacy

    PAP_LEGACY_USERNAME=
    PAP_LEGACY_PASSWORD=

    # PDP-X

    PDP_HOST=localhost
    PDP_PORT=6669
    PDP_CONTEXT_URI=pdp/api/getDecision
    PDP_USERNAME=policy
    PDP_PASSWORD=password
    GUARD_DISABLED=true

    # DCAE DMaaP

    DCAE_TOPIC=unauthenticated.DCAE_CL_OUTPUT
    DCAE_SERVERS=localhost
    DCAE_CONSUMER_GROUP=dcae.policy.shared

    # Open DMaaP

    DMAAP_SERVERS=localhost

    # AAI

    AAI_HOST=localhost
    AAI_PORT=6666
    AAI_CONTEXT_URI=
    AAI_USERNAME=policy
    AAI_PASSWORD=policy

    # SO

    SO_HOST=localhost
    SO_PORT=6667
    SO_CONTEXT_URI=
    SO_URL=https://localhost:6667/
    SO_USERNAME=policy
    SO_PASSWORD=policy

    # VFC

    VFC_HOST=localhost
    VFC_PORT=6668
    VFC_CONTEXT_URI=api/nslcm/v1/
    VFC_USERNAME=policy
    VFC_PASSWORD=policy

    # SDNC

    SDNC_HOST=localhost
    SDNC_PORT=6670
    SDNC_CONTEXT_URI=restconf/operations/

Configuration
~~~~~~~~~~~~~

noop.pre.sh
"""""""""""

To avoid log noise related to the *dmaap* configuration, a startup script (*noop.pre.sh*) is added
to the host directory to be mounted; it converts the *dmaap* endpoints to *noop*.

.. code-block:: bash

    #!/bin/bash -x

    sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties

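The script rewrites properties that start with the *dmaap* prefix to use the *noop* prefix instead; a quick way to confirm the change once the container is up (a sketch):

.. code-block:: bash

    # the topic properties should now use the noop prefix
    docker exec -it PDPD bash -c 'grep "^noop" $POLICY_HOME/config/*.properties'
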
features.pre.sh
"""""""""""""""

We enable the *controlloop-utils* feature and disable the *distributed-locking* feature to avoid using a database.

.. code-block:: bash

    #!/bin/bash -x

    bash -c "/opt/app/policy/bin/features disable distributed-locking"
    bash -c "/opt/app/policy/bin/features enable controlloop-utils"

active.post.sh
""""""""""""""

The *active.post.sh* script makes the PDP-D active.

.. code-block:: bash

    #!/bin/bash -x

    bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"

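Once the container is up, the resulting state can be confirmed through the same lifecycle API; a sketch, assuming the telemetry credentials from the environment file:

.. code-block:: bash

    # expected to report ACTIVE after active.post.sh has run
    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" GET https://localhost:9696/policy/pdp/engine/lifecycle/state
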
Actor Properties
""""""""""""""""

In the *guilin* release, some *actor* configurations need to be overridden to support *http* for compatibility
with the *controlloop-utils* feature.  Note that the *${envd:...}* notation resolves a value from the container
environment; a default may be supplied after a second colon, as in the *VFC* configuration below.

AAI-http-client.properties
""""""""""""""""""""""""""

.. code-block:: properties

    http.client.services=AAI

    http.client.services.AAI.managed=true
    http.client.services.AAI.https=false
    http.client.services.AAI.host=${envd:AAI_HOST}
    http.client.services.AAI.port=${envd:AAI_PORT}
    http.client.services.AAI.userName=${envd:AAI_USERNAME}
    http.client.services.AAI.password=${envd:AAI_PASSWORD}
    http.client.services.AAI.contextUriPath=${envd:AAI_CONTEXT_URI}

SDNC-http-client.properties
"""""""""""""""""""""""""""

.. code-block:: properties

    http.client.services=SDNC

    http.client.services.SDNC.managed=true
    http.client.services.SDNC.https=false
    http.client.services.SDNC.host=${envd:SDNC_HOST}
    http.client.services.SDNC.port=${envd:SDNC_PORT}
    http.client.services.SDNC.userName=${envd:SDNC_USERNAME}
    http.client.services.SDNC.password=${envd:SDNC_PASSWORD}
    http.client.services.SDNC.contextUriPath=${envd:SDNC_CONTEXT_URI}

VFC-http-client.properties
""""""""""""""""""""""""""

.. code-block:: properties

    http.client.services=VFC

    http.client.services.VFC.managed=true
    http.client.services.VFC.https=false
    http.client.services.VFC.host=${envd:VFC_HOST}
    http.client.services.VFC.port=${envd:VFC_PORT}
    http.client.services.VFC.userName=${envd:VFC_USERNAME}
    http.client.services.VFC.password=${envd:VFC_PASSWORD}
    http.client.services.VFC.contextUriPath=${envd:VFC_CONTEXT_URI:api/nslcm/v1/}

settings.xml
""""""""""""

The *standalone-settings.xml* file is the default maven settings override in the container.

.. code-block:: xml

    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

        <offline>true</offline>

        <profiles>
            <profile>
                <id>policy-local</id>
                <repositories>
                    <repository>
                        <id>file-repository</id>
                        <url>file:${user.home}/.m2/file-repository</url>
                        <releases>
                            <enabled>true</enabled>
                            <updatePolicy>always</updatePolicy>
                        </releases>
                        <snapshots>
                            <enabled>true</enabled>
                            <updatePolicy>always</updatePolicy>
                        </snapshots>
                    </repository>
                </repositories>
            </profile>
        </profiles>

        <activeProfiles>
            <activeProfile>policy-local</activeProfile>
        </activeProfiles>

    </settings>

Bring up the PDP-D Control Loop Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

    docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.6.4

To run the container in detached mode, add the *-d* flag.

Note that we are opening the *9696* telemetry API port to the outside world, mounting the *config* host directory,
and setting the environment variables via the *env.conf* environment file.

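For reference, the host directories passed to *docker run* above would hold the files assembled in the previous sections; an illustrative layout (the exact placement of each file within *config* is an assumption here):

.. code-block:: bash

    $ find config env -type f
    config/noop.pre.sh
    config/features.pre.sh
    config/active.post.sh
    config/AAI-http-client.properties
    config/SDNC-http-client.properties
    config/VFC-http-client.properties
    config/settings.xml
    env/env.conf
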
To open a shell into the PDP-D container:

.. code-block:: bash

    docker exec -it PDPD bash

Tools such as *telemetry*, *policy*, and *db-migrator* can be used to inspect the system state, either from within the container or from the host via *docker exec*:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    docker exec -it PDPD bash -c "/opt/app/policy/bin/policy status"
    docker exec -it PDPD bash -c "/opt/app/policy/bin/db-migrator -s ALL -o report"

Controlled instantiation of the PDP-D Control Loop Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes a developer may want to start and stop the PDP-D manually:

.. code-block:: bash

    # start a shell in the container

    docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.6.4 bash

    # use this command to start policy, applying host customizations from /tmp/policy-install/config

    pdpd-cl-entrypoint.sh vmboot

    # or use this command to start policy without host customizations

    policy start

    # at any time, use the following command to stop the PDP-D

    policy stop

    # and this command to start the PDP-D back up again

    policy start

Scale-out use case testing
==========================

The first step is to create the *operational.scaleout* policy.

policy.vdns.json
~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "type": "onap.policies.controlloop.operational.common.Drools",
      "type_version": "1.0.0",
      "name": "operational.scaleout",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "operational.scaleout"
      },
      "properties": {
        "id": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
        "timeout": 60,
        "abatement": false,
        "trigger": "unique-policy-id-1-scale-up",
        "operations": [
          {
            "id": "unique-policy-id-1-scale-up",
            "description": "Create a new VF Module",
            "operation": {
              "actor": "SO",
              "operation": "VF Module Create",
              "target": {
                "targetType": "VFMODULE",
                "entityIds": {
                  "modelInvariantId": "e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e",
                  "modelVersionId": "94b18b1d-cc91-4f43-911a-e6348665f292",
                  "modelName": "VfwclVfwsnkBbefb8ce2bde..base_vfw..module-0",
                  "modelVersion": 1,
                  "modelCustomizationId": "47958575-138f-452a-8c8d-d89b595f8164"
                }
              },
              "payload": {
                "requestParameters": "{\"usePreload\":true,\"userParams\":[]}",
                "configurationParameters": "[{\"ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[9]\",\"oam-ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[16]\",\"enabled\":\"$.vf-module-topology.vf-module-parameters.param[23]\"}]"
              }
            },
            "timeout": 20,
            "retries": 0,
            "success": "final_success",
            "failure": "final_failure",
            "failure_timeout": "final_failure_timeout",
            "failure_retries": "final_failure_retries",
            "failure_exception": "final_failure_exception",
            "failure_guard": "final_failure_guard"
          }
        ]
      }
    }

To provision the *scale-out* policy, issue the following command:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vdns.json

Verify that the policy shows up with the telemetry tools:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    > get /policy/pdp/engine/lifecycle/policies
    > get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops

dcae.vdns.onset.json
~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "closedLoopControlName": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
      "closedLoopAlarmStart": 1463679805324,
      "closedLoopEventClient": "microservice.stringmatcher",
      "closedLoopEventStatus": "ONSET",
      "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
      "target_type": "VNF",
      "target": "vserver.vserver-name",
      "AAI": {
        "vserver.is-closed-loop-disabled": "false",
        "vserver.prov-status": "ACTIVE",
        "vserver.vserver-name": "OzVServer"
      },
      "from": "DCAE",
      "version": "1.0.2"
    }

To initiate a control loop transaction, simulate a DCAE *ONSET* event to Policy:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vdns.onset.json Content-Type:'text/plain'

This will trigger the scale-out control loop transaction, which will interact with the *SO*
simulator to complete the transaction.

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel.
An entry in the *$POLICY_LOGS/audit.log* should indicate successful completion as well.

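Both logs can be checked quickly from the host; a sketch, assuming the log locations configured in the environment file:

.. code-block:: bash

    # look for the final notification and the latest audit record
    docker exec -it PDPD bash -c 'grep "FINAL.*SUCCESS" $POLICY_LOGS/network.log'
    docker exec -it PDPD bash -c 'tail $POLICY_LOGS/audit.log'
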
vCPE use case testing
=====================

The first step is to create the *operational.restart* policy.

policy.vcpe.json
~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "type": "onap.policies.controlloop.operational.common.Drools",
      "type_version": "1.0.0",
      "name": "operational.restart",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "operational.restart"
      },
      "properties": {
        "id": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
        "timeout": 300,
        "abatement": false,
        "trigger": "unique-policy-id-1-restart",
        "operations": [
          {
            "id": "unique-policy-id-1-restart",
            "description": "Restart the VM",
            "operation": {
              "actor": "APPC",
              "operation": "Restart",
              "target": {
                "targetType": "VNF"
              }
            },
            "timeout": 240,
            "retries": 0,
            "success": "final_success",
            "failure": "final_failure",
            "failure_timeout": "final_failure_timeout",
            "failure_retries": "final_failure_retries",
            "failure_exception": "final_failure_exception",
            "failure_guard": "final_failure_guard"
          }
        ]
      }
    }

To provision the *operational.restart* policy, issue the following command:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vcpe.json

Verify that the policy shows up with the telemetry tools:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    > get /policy/pdp/engine/lifecycle/policies
    > get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops

dcae.vcpe.onset.json
~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
      "closedLoopAlarmStart": 1463679805324,
      "closedLoopEventClient": "DCAE_INSTANCE_ID.dcae-tca",
      "closedLoopEventStatus": "ONSET",
      "requestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
      "target_type": "VNF",
      "target": "generic-vnf.vnf-id",
      "AAI": {
        "vserver.is-closed-loop-disabled": "false",
        "vserver.prov-status": "ACTIVE",
        "generic-vnf.vnf-id": "vCPE_Infrastructure_vGMUX_demo_app"
      },
      "from": "DCAE",
      "version": "1.0.2"
    }

To initiate a control loop transaction, simulate a DCAE *ONSET* event to Policy:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vcpe.onset.json Content-Type:'text/plain'

This will spawn a vCPE control loop transaction in the PDP-D.  Policy will send a *restart* message over the
*APPC-LCM-READ* channel to APPC and wait for a response.

Verify that this message appears in *$POLICY_LOGS/network.log* by looking for *APPC-LCM-READ* messages.

Note the *sub-request-id* value from the restart message on the *APPC-LCM-READ* channel.

Replace *REPLACEME* in *appc.vcpe.success.json* with this *sub-request-id*.

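For example, a hypothetical way to locate the message and perform the substitution (the *sub-request-id* value below is illustrative):

.. code-block:: bash

    # find the restart request on the APPC-LCM-READ channel
    docker exec -it PDPD bash -c 'grep -A 20 APPC-LCM-READ $POLICY_LOGS/network.log | grep sub-request-id | tail -1'

    # substitute the observed value into the simulated response
    sed -i 's/REPLACEME/6056cfa2-8d5a-4a30-a78a-a98c8c71a3fe/' appc.vcpe.success.json
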
appc.vcpe.success.json
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "body": {
        "output": {
          "common-header": {
            "timestamp": "2017-08-25T21:06:23.037Z",
            "api-ver": "5.00",
            "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
            "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
            "sub-request-id": "REPLACEME",
            "flags": {}
          },
          "status": {
            "code": 400,
            "message": "Restart Successful"
          }
        }
      },
      "version": "2.0",
      "rpc-name": "restart",
      "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-1",
      "type": "response"
    }

Send a simulated APPC response back to the PDP-D over the *APPC-LCM-WRITE* channel:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-LCM-WRITE/events @appc.vcpe.success.json Content-Type:'text/plain'

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel,
and that an entry is added to the *$POLICY_LOGS/audit.log* indicating successful completion.

vFirewall use case testing
==========================

The first step is to create the *operational.modifyconfig* policy.

policy.vfw.json
~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "type": "onap.policies.controlloop.operational.common.Drools",
      "type_version": "1.0.0",
      "name": "operational.modifyconfig",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "operational.modifyconfig"
      },
      "properties": {
        "id": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
        "timeout": 300,
        "abatement": false,
        "trigger": "unique-policy-id-1-modifyConfig",
        "operations": [
          {
            "id": "unique-policy-id-1-modifyConfig",
            "description": "Modify the packet generator",
            "operation": {
              "actor": "APPC",
              "operation": "ModifyConfig",
              "target": {
                "targetType": "VNF",
                "entityIds": {
                  "resourceID": "bbb3cefd-01c8-413c-9bdd-2b92f9ca3d38"
                }
              },
              "payload": {
                "streams": "{\"active-streams\": 5 }"
              }
            },
            "timeout": 240,
            "retries": 0,
            "success": "final_success",
            "failure": "final_failure",
            "failure_timeout": "final_failure_timeout",
            "failure_retries": "final_failure_retries",
            "failure_exception": "final_failure_exception",
            "failure_guard": "final_failure_guard"
          }
        ]
      }
    }

To provision the *operational.modifyconfig* policy, issue the following command:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vfw.json

Verify that the policy shows up with the telemetry tools:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    > get /policy/pdp/engine/lifecycle/policies
    > get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops

dcae.vfw.onset.json
~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
      "closedLoopAlarmStart": 1463679805324,
      "closedLoopEventClient": "microservice.stringmatcher",
      "closedLoopEventStatus": "ONSET",
      "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
      "target_type": "VNF",
      "target": "generic-vnf.vnf-name",
      "AAI": {
        "vserver.is-closed-loop-disabled": "false",
        "vserver.prov-status": "ACTIVE",
        "generic-vnf.vnf-name": "fw0002vm002fw002",
        "vserver.vserver-name": "OzVServer"
      },
      "from": "DCAE",
      "version": "1.0.2"
    }

To initiate a control loop transaction, simulate a DCAE *ONSET* event to Policy:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vfw.onset.json Content-Type:'text/plain'

This will spawn a vFW control loop transaction in the PDP-D.  Policy will send a *ModifyConfig* message over the
*APPC-CL* channel to APPC and wait for a response.  This can be seen by searching *$POLICY_LOGS/network.log* for *APPC-CL*.

Note the *SubRequestId* field in the *ModifyConfig* message on the *APPC-CL* topic in the network log.

Send a simulated APPC response back to the PDP-D over the *APPC-CL* channel.
To do this, replace the *REPLACEME* text in *appc.vcpe.success.json* with this *SubRequestId*.

appc.vcpe.success.json
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "CommonHeader": {
        "TimeStamp": 1506051879001,
        "APIver": "1.01",
        "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
        "SubRequestID": "REPLACEME",
        "RequestTrack": [],
        "Flags": []
      },
      "Status": {
        "Code": 400,
        "Value": "SUCCESS"
      },
      "Payload": {
        "generic-vnf.vnf-id": "f17face5-69cb-4c88-9e0b-7426db7edddd"
      }
    }

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-CL/events @appc.vcpe.success.json Content-Type:'text/plain'

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel,
and that an entry is added to the *$POLICY_LOGS/audit.log* indicating successful completion.

Running PDP-D Control Loop Application with other components
============================================================

The reader can also look at the `integration/csit repository <https://git.onap.org/integration/csit>`__.
More specifically, these directories have examples of other PDP-D Control Loop configurations:

* `plans <https://git.onap.org/integration/csit/tree/plans/policy/drools-applications>`__: startup scripts.
* `scripts <https://git.onap.org/integration/csit/tree/scripts/policy/drools-apps/docker-compose-drools-apps.yml>`__: docker-compose and related files.
* `tests <https://git.onap.org/integration/csit/tree/tests/policy/drools-applications>`__: test plans.

Additional information
======================

For additional information, please see the
`Drools PDP Development and Testing (In Depth) <https://wiki.onap.org/display/DW/2020+Frankfurt+Tutorials>`__ page.