
.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

.. _pdpd-apps-label:

PDP-D Applications
##################

.. contents::
    :depth: 2

Overview
========

PDP-D applications use the PDP-D Engine middleware to provide domain-specific services.
See :ref:`pdpd-engine-label` for a description of the PDP-D infrastructure.

At this time, *Control Loops* are the only supported type of application.

*Control Loop* applications must support the following *Policy Type*:

- **onap.policies.controlloop.operational.common.Drools** (Tosca Compliant Operational Policies)

Software
========

Source Code repositories
~~~~~~~~~~~~~~~~~~~~~~~~

The PDP-D Applications software resides in the `policy/drools-applications <https://git.onap.org/policy/drools-applications>`__ repository.  The actor libraries introduced in the *frankfurt* release reside in
the `policy/models repository <https://git.onap.org/policy/models>`__.

At this time, the *control loop* application is the only application supported in ONAP.
All the application projects reside under the
`controlloop directory <https://git.onap.org/policy/drools-applications/tree/controlloop>`__.

Docker Image
~~~~~~~~~~~~

See the *drools-applications*
`released versions <https://wiki.onap.org/display/DW/Policy+Framework+Project%3A+Component+Versions>`__
for the latest images:

.. code-block:: bash

    docker pull onap/policy-pdpd-cl:1.8.2

At the time of this writing, *1.8.2* is the latest version.

The *onap/policy-pdpd-cl* image extends the *onap/policy-drools* image with
the *usecases* controller that realizes the *control loop* application.

Usecases Controller
====================

The `usecases <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases>`__
controller is the *control loop* application in ONAP.

There are three parts to this controller:

* The `drl rules <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/usecases.drl>`__.
* The `kmodule.xml <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/src/main/resources/META-INF/kmodule.xml>`__.
* The `dependencies <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-usecases/pom.xml>`__.

The *kmodule.xml* specifies only one session, and declares in its *kbase* section the operational policy types that
it supports.
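
For orientation, a Drools *kmodule.xml* declaring a single session has roughly the following shape. This is a minimal sketch; the *kbase* attributes and names here are illustrative, not necessarily those used by the controller:

.. code-block:: xml

    <kmodule xmlns="http://www.jboss.org/drools/kmodule">
        <!-- one kbase entry per supported operational policy type (name illustrative) -->
        <kbase name="onap.policies.controlloop.operational.common.Drools"
               default="false" equalsBehavior="equality">
            <!-- the single session used by the controller -->
            <ksession name="usecases"/>
        </kbase>
    </kmodule>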

The Usecases controller relies on the new Actor framework to interact with remote
components as part of a control loop transaction.  See the
*Policy Platform Actor Development Guidelines* in the documentation for further information.

Operational Policy Types
========================

The *usecases* controller supports the following policy type:

- *onap.policies.controlloop.operational.common.Drools*

*onap.policies.controlloop.operational.common.Drools*
is the Tosca-compliant policy type introduced in the *frankfurt* release.

The Tosca Compliant Operational Policy Type is defined at
`onap.policies.controlloop.operational.common.Drools <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policytypes/onap.policies.controlloop.operational.common.Drools.yaml>`__.

An example of a Tosca Compliant Operational Policy can be found
`here <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policies/vDNS.policy.operational.input.tosca.json>`__.

Policy Chaining
===============

The *usecases* controller supports chaining multiple operations inside a Tosca Operational Policy.
The next operation can be chained based on the result/output of an operation.
The possibilities available for chaining are:

- *success*: chain after the operation completes successfully
- *failure*: chain after the operation fails due to issues with the controller/actor
- *failure_timeout*: chain after the operation fails due to a timeout
- *failure_retries*: chain after the operation fails after all retries are exhausted
- *failure_exception*: chain after the operation fails due to an exception
- *failure_guard*: chain after the operation fails because the guard does not allow it

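Each of these outcomes is a field on an entry in the policy's *operations* list, naming either the *id* of the next operation or one of the *final_* terminal states. A minimal sketch of one entry, trimmed down to the chaining fields (the *id* and *operation* values are illustrative):

.. code-block:: json

    {
      "id": "unique-policy-id-1-scale-up",
      "operation": { "actor": "SO", "operation": "VF Module Create" },
      "success": "final_success",
      "failure": "final_failure",
      "failure_timeout": "final_failure_timeout",
      "failure_retries": "final_failure_retries",
      "failure_exception": "final_failure_exception",
      "failure_guard": "final_failure_guard"
    }
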
An example of policy chaining for a VNF can be found
`here <https://github.com/onap/policy-models/blob/master/models-examples/src/main/resources/policies/vFirewall.cds.policy.operational.chaining.yaml>`__.

An example of policy chaining for a PNF can be found
`here <https://github.com/onap/policy-models/blob/master/models-examples/src/main/resources/policies/pnf.cds.policy.operational.chaining.yaml>`__.

Features
========

Since the PDP-D Control Loop Application image is built from the PDP-D Engine image (*onap/policy-drools*),
it inherits all of its features and functionality.

The enabled features in the *onap/policy-pdpd-cl* image are:

- **distributed locking**: distributed resource locking.
- **healthcheck**: healthcheck.
- **lifecycle**: enables the lifecycle APIs.
- **controlloop-trans**: control loop transaction tracking.
- **controlloop-management**: generic controller capabilities.
- **controlloop-usecases**: new *controller* introduced in the *guilin* release to realize the ONAP use cases.

The following features are installed but disabled:

- **controlloop-tdjam**: experimental java-only *controller* to be deprecated post *guilin*.
- **controlloop-utils**: *actor* simulators.

Control Loops Transaction (controlloop-trans)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature tracks Control Loop Transactions and Operations.  These are recorded in
*$POLICY_LOGS/audit.log* and *$POLICY_LOGS/metrics.log*, and are accessible
through the telemetry APIs.

Control Loops Management (controlloop-management)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature installs common control loop application resources and provides
telemetry API extensions.  *Actor* configurations are packaged in this
feature.

Usecases Controller (controlloop-usecases)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the *guilin* release implementation of the ONAP use cases.
It relies on the new *Actor* model framework to carry out a policy's
execution.

TDJAM Controller (controlloop-tdjam)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is an experimental, java-only controller that will be deprecated after the
*guilin* release.

Utilities (controlloop-utils)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enables *actor simulators* for testing purposes.

Offline Mode
============

The default ONAP installation in *onap/policy-pdpd-cl:1.8.2* is *OFFLINE*.
In this configuration, the *rules* artifact and its *dependencies* all reside in the local
maven repository.  This requires that the maven dependencies be preloaded in the local
repository.

An offline configuration requires two configuration items:

- the *OFFLINE* environment variable set to true (see `values.yaml <https://git.onap.org/oom/tree/kubernetes/policy/values.yaml>`__).
- an override of the default *settings.xml* (see
  `settings.xml <https://git.onap.org/oom/tree/kubernetes/policy/components/policy-drools-pdp/resources/configmaps/settings.xml>`__).

Running the PDP-D Control Loop Application in a single container
================================================================

Environment File
~~~~~~~~~~~~~~~~

First create an environment file (in this example *env.conf*) to configure the PDP-D.

.. code-block:: bash

    # SYSTEM software configuration

    POLICY_HOME=/opt/app/policy
    POLICY_LOGS=/var/log/onap/policy/pdpd
    KEYSTORE_PASSWD=Pol1cy_0nap
    TRUSTSTORE_PASSWD=Pol1cy_0nap

    # Telemetry credentials

    TELEMETRY_PORT=9696
    TELEMETRY_HOST=0.0.0.0
    TELEMETRY_USER=demo@people.osaaf.org
    TELEMETRY_PASSWORD=demo123456!

    # nexus repository

    SNAPSHOT_REPOSITORY_ID=
    SNAPSHOT_REPOSITORY_URL=
    RELEASE_REPOSITORY_ID=
    RELEASE_REPOSITORY_URL=
    REPOSITORY_USERNAME=
    REPOSITORY_PASSWORD=
    REPOSITORY_OFFLINE=true

    MVN_SNAPSHOT_REPO_URL=
    MVN_RELEASE_REPO_URL=

    # Relational (SQL) DB access

    SQL_HOST=
    SQL_USER=
    SQL_PASSWORD=

    # AAF

    AAF=false
    AAF_NAMESPACE=org.onap.policy
    AAF_HOST=aaf.api.simpledemo.onap.org

    # PDP-D DMaaP configuration channel

    PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
    PDPD_CONFIGURATION_API_KEY=
    PDPD_CONFIGURATION_API_SECRET=
    PDPD_CONFIGURATION_CONSUMER_GROUP=
    PDPD_CONFIGURATION_CONSUMER_INSTANCE=
    PDPD_CONFIGURATION_PARTITION_KEY=

    # PAP-PDP configuration channel

    POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
    POLICY_PDP_PAP_GROUP=defaultGroup

    # Symmetric Key for encoded sensitive data

    SYMM_KEY=

    # Healthcheck Feature

    HEALTHCHECK_USER=demo@people.osaaf.org
    HEALTHCHECK_PASSWORD=demo123456!

    # Pooling Feature

    POOLING_TOPIC=POOLING

    # PAP

    PAP_HOST=
    PAP_USERNAME=
    PAP_PASSWORD=

    # PAP legacy

    PAP_LEGACY_USERNAME=
    PAP_LEGACY_PASSWORD=

    # PDP-X

    PDP_HOST=localhost
    PDP_PORT=6669
    PDP_CONTEXT_URI=pdp/api/getDecision
    PDP_USERNAME=policy
    PDP_PASSWORD=password
    GUARD_DISABLED=true

    # DCAE DMaaP

    DCAE_TOPIC=unauthenticated.DCAE_CL_OUTPUT
    DCAE_SERVERS=localhost
    DCAE_CONSUMER_GROUP=dcae.policy.shared

    # Open DMaaP

    DMAAP_SERVERS=localhost

    # AAI

    AAI_HOST=localhost
    AAI_PORT=6666
    AAI_CONTEXT_URI=
    AAI_USERNAME=policy
    AAI_PASSWORD=policy

    # SO

    SO_HOST=localhost
    SO_PORT=6667
    SO_CONTEXT_URI=
    SO_URL=https://localhost:6667/
    SO_USERNAME=policy
    SO_PASSWORD=policy

    # VFC

    VFC_HOST=localhost
    VFC_PORT=6668
    VFC_CONTEXT_URI=api/nslcm/v1/
    VFC_USERNAME=policy
    VFC_PASSWORD=policy

    # SDNC

    SDNC_HOST=localhost
    SDNC_PORT=6670
    SDNC_CONTEXT_URI=restconf/operations/

Configuration
~~~~~~~~~~~~~

noop.pre.sh
"""""""""""

To avoid log noise related to the DMaaP configuration, a startup script (*noop.pre.sh*) is placed
in the mounted host directory to convert the *dmaap* endpoints to *noop*.

.. code-block:: bash

    #!/bin/bash -x

    sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties

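The effect of that substitution can be reproduced locally on a scratch copy of a typical topic properties file. This is a self-contained sketch; the file name and property keys below are illustrative:

```shell
#!/bin/bash
# Reproduce the dmaap -> noop rewrite performed by noop.pre.sh in a scratch
# directory (file name and property keys are illustrative).
workdir=$(mktemp -d)
cat > "$workdir/usecases-controller.properties" <<'EOF'
dmaap.source.topics=DCAE_TOPIC
dmaap.sink.topics=POLICY-CL-MGT
noop.sink.topics.extra=UNCHANGED
EOF

# Same substitution as noop.pre.sh: only keys beginning with "dmaap" change
sed -i "s/^dmaap/noop/g" "$workdir"/*.properties

cat "$workdir/usecases-controller.properties"
```

Keys that already start with *noop* are left untouched, so the script can safely run more than once.
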
features.pre.sh
"""""""""""""""

We can enable the *controlloop-utils* feature and disable the *distributed-locking* feature to avoid using a database.

.. code-block:: bash

    #!/bin/bash -x

    bash -c "/opt/app/policy/bin/features disable distributed-locking"
    bash -c "/opt/app/policy/bin/features enable controlloop-utils"

active.post.sh
""""""""""""""

The *active.post.sh* script makes the PDP-D active.

.. code-block:: bash

    #!/bin/bash -x

    bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"

Actor Properties
""""""""""""""""

In the *guilin* release, some *actor* configurations need to be overridden to support *http* for compatibility
with the *controlloop-utils* feature.

AAI-http-client.properties
""""""""""""""""""""""""""

.. code-block:: properties

    http.client.services=AAI

    http.client.services.AAI.managed=true
    http.client.services.AAI.https=false
    http.client.services.AAI.host=${envd:AAI_HOST}
    http.client.services.AAI.port=${envd:AAI_PORT}
    http.client.services.AAI.userName=${envd:AAI_USERNAME}
    http.client.services.AAI.password=${envd:AAI_PASSWORD}
    http.client.services.AAI.contextUriPath=${envd:AAI_CONTEXT_URI}

SDNC-http-client.properties
"""""""""""""""""""""""""""

.. code-block:: properties

    http.client.services=SDNC

    http.client.services.SDNC.managed=true
    http.client.services.SDNC.https=false
    http.client.services.SDNC.host=${envd:SDNC_HOST}
    http.client.services.SDNC.port=${envd:SDNC_PORT}
    http.client.services.SDNC.userName=${envd:SDNC_USERNAME}
    http.client.services.SDNC.password=${envd:SDNC_PASSWORD}
    http.client.services.SDNC.contextUriPath=${envd:SDNC_CONTEXT_URI}

VFC-http-client.properties
""""""""""""""""""""""""""

.. code-block:: properties

    http.client.services=VFC

    http.client.services.VFC.managed=true
    http.client.services.VFC.https=false
    http.client.services.VFC.host=${envd:VFC_HOST}
    http.client.services.VFC.port=${envd:VFC_PORT}
    http.client.services.VFC.userName=${envd:VFC_USERNAME}
    http.client.services.VFC.password=${envd:VFC_PASSWORD}
    http.client.services.VFC.contextUriPath=${envd:VFC_CONTEXT_URI:api/nslcm/v1/}

settings.xml
""""""""""""

The *standalone-settings.xml* file is the default maven settings override in the container.

.. code-block:: xml

    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

        <offline>true</offline>

        <profiles>
            <profile>
                <id>policy-local</id>
                <repositories>
                    <repository>
                        <id>file-repository</id>
                        <url>file:${user.home}/.m2/file-repository</url>
                        <releases>
                            <enabled>true</enabled>
                            <updatePolicy>always</updatePolicy>
                        </releases>
                        <snapshots>
                            <enabled>true</enabled>
                            <updatePolicy>always</updatePolicy>
                        </snapshots>
                    </repository>
                </repositories>
            </profile>
        </profiles>

        <activeProfiles>
            <activeProfile>policy-local</activeProfile>
        </activeProfiles>

    </settings>

Bring up the PDP-D Control Loop Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

    docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.8.2

To run the container in detached mode, add the *-d* flag.

Note that we are opening the *9696* telemetry API port to the outside world, mounting the *config* host directory,
and setting environment variables.

To open a shell into the PDP-D:

.. code-block:: bash

    docker exec -it PDPD bash

Once in the container, run tools such as *telemetry*, *db-migrator*, or *policy* to look at the system state:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    docker exec -it PDPD bash -c "/opt/app/policy/bin/policy status"
    docker exec -it PDPD bash -c "/opt/app/policy/bin/db-migrator -s ALL -o report"

Controlled instantiation of the PDP-D Control Loop Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes a developer may want to start and stop the PDP-D manually:

.. code-block:: bash

   # start a bash

   docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.8.2 bash

   # use this command to start policy applying host customizations from /tmp/policy-install/config

   pdpd-cl-entrypoint.sh vmboot

   # or use this command to start policy without host customization

   policy start

   # at any time use the following command to stop the PDP-D

   policy stop

   # and this command to start the PDP-D back again

   policy start

Scale-out use case testing
==========================

The first step is to create the *operational.scaleout* policy.

policy.vdns.json
~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "type": "onap.policies.controlloop.operational.common.Drools",
      "type_version": "1.0.0",
      "name": "operational.scaleout",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "operational.scaleout"
      },
      "properties": {
        "id": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
        "timeout": 60,
        "abatement": false,
        "trigger": "unique-policy-id-1-scale-up",
        "operations": [
          {
            "id": "unique-policy-id-1-scale-up",
            "description": "Create a new VF Module",
            "operation": {
              "actor": "SO",
              "operation": "VF Module Create",
              "target": {
                "targetType": "VFMODULE",
                "entityIds": {
                  "modelInvariantId": "e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e",
                  "modelVersionId": "94b18b1d-cc91-4f43-911a-e6348665f292",
                  "modelName": "VfwclVfwsnkBbefb8ce2bde..base_vfw..module-0",
                  "modelVersion": 1,
                  "modelCustomizationId": "47958575-138f-452a-8c8d-d89b595f8164"
                }
              },
              "payload": {
                "requestParameters": "{\"usePreload\":true,\"userParams\":[]}",
                "configurationParameters": "[{\"ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[9]\",\"oam-ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[16]\",\"enabled\":\"$.vf-module-topology.vf-module-parameters.param[23]\"}]"
              }
            },
            "timeout": 20,
            "retries": 0,
            "success": "final_success",
            "failure": "final_failure",
            "failure_timeout": "final_failure_timeout",
            "failure_retries": "final_failure_retries",
            "failure_exception": "final_failure_exception",
            "failure_guard": "final_failure_guard"
          }
        ]
      }
    }

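Before provisioning, it can be worth checking that the policy's *trigger* names the *id* of one of its *operations*. A self-contained sketch of such a check, run against a trimmed, illustrative copy of the policy (the grep-based extraction assumes the pretty-printed layout shown above):

```shell
#!/bin/bash
# Verify that the "trigger" of an operational policy resolves to the id of
# one of its operations (trimmed, illustrative copy of the policy).
cat > /tmp/policy.vdns.trimmed.json <<'EOF'
{
  "properties": {
    "trigger": "unique-policy-id-1-scale-up",
    "operations": [
      { "id": "unique-policy-id-1-scale-up", "success": "final_success" }
    ]
  }
}
EOF

trigger=$(grep -o '"trigger": "[^"]*"' /tmp/policy.vdns.trimmed.json | cut -d'"' -f4)
if grep -q "\"id\": \"${trigger}\"" /tmp/policy.vdns.trimmed.json; then
    echo "trigger ${trigger} resolves to an operation"
fi
```
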
To provision the *scale-out policy*, issue the following command:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vdns.json

Verify that the policy shows with the telemetry tools:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    > get /policy/pdp/engine/lifecycle/policies
    > get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops


dcae.vdns.onset.json
~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "closedLoopControlName": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
      "closedLoopAlarmStart": 1463679805324,
      "closedLoopEventClient": "microservice.stringmatcher",
      "closedLoopEventStatus": "ONSET",
      "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
      "target_type": "VNF",
      "target": "vserver.vserver-name",
      "AAI": {
        "vserver.is-closed-loop-disabled": "false",
        "vserver.prov-status": "ACTIVE",
        "vserver.vserver-name": "OzVServer"
      },
      "from": "DCAE",
      "version": "1.0.2"
    }

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vdns.onset.json Content-Type:'text/plain'

This will trigger the scale-out control loop transaction, which will interact with the *SO*
simulator to complete the transaction.

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel.
An entry in *$POLICY_LOGS/audit.log* should indicate successful completion as well.

vCPE use case testing
=====================

The first step is to create the *operational.restart* policy.

policy.vcpe.json
~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "type": "onap.policies.controlloop.operational.common.Drools",
      "type_version": "1.0.0",
      "name": "operational.restart",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "operational.restart"
      },
      "properties": {
        "id": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
        "timeout": 300,
        "abatement": false,
        "trigger": "unique-policy-id-1-restart",
        "operations": [
          {
            "id": "unique-policy-id-1-restart",
            "description": "Restart the VM",
            "operation": {
              "actor": "APPC",
              "operation": "Restart",
              "target": {
                "targetType": "VNF"
              }
            },
            "timeout": 240,
            "retries": 0,
            "success": "final_success",
            "failure": "final_failure",
            "failure_timeout": "final_failure_timeout",
            "failure_retries": "final_failure_retries",
            "failure_exception": "final_failure_exception",
            "failure_guard": "final_failure_guard"
          }
        ]
      }
    }

To provision the *operational.restart* policy, issue the following command:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vcpe.json

Verify that the policy shows with the telemetry tools:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    > get /policy/pdp/engine/lifecycle/policies
    > get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops


dcae.vcpe.onset.json
~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
      "closedLoopAlarmStart": 1463679805324,
      "closedLoopEventClient": "DCAE_INSTANCE_ID.dcae-tca",
      "closedLoopEventStatus": "ONSET",
      "requestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
      "target_type": "VNF",
      "target": "generic-vnf.vnf-id",
      "AAI": {
        "vserver.is-closed-loop-disabled": "false",
        "vserver.prov-status": "ACTIVE",
        "generic-vnf.vnf-id": "vCPE_Infrastructure_vGMUX_demo_app"
      },
      "from": "DCAE",
      "version": "1.0.2"
    }

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vcpe.onset.json Content-Type:'text/plain'

This will spawn a vCPE control loop transaction in the PDP-D.  Policy will send a *restart* message over the
*APPC-LCM-READ* channel to APPC and wait for a response.

Verify that you see this message in the network.log by looking for *APPC-LCM-READ* messages.

Note the *sub-request-id* value from the restart message in the *APPC-LCM-READ* channel.

Replace *REPLACEME* in *appc.vcpe.success.json* with this *sub-request-id*.

appc.vcpe.success.json
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "body": {
        "output": {
          "common-header": {
            "timestamp": "2017-08-25T21:06:23.037Z",
            "api-ver": "5.00",
            "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
            "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
            "sub-request-id": "REPLACEME",
            "flags": {}
          },
          "status": {
            "code": 400,
            "message": "Restart Successful"
          }
        }
      },
      "version": "2.0",
      "rpc-name": "restart",
      "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-1",
      "type": "response"
    }

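The substitution can also be scripted instead of edited by hand. A self-contained sketch on a trimmed copy of the file (the *sub-request-id* value here is illustrative, not one taken from a real log):

```shell
#!/bin/bash
# Replace the REPLACEME placeholder with the sub-request-id observed in
# network.log (trimmed file and id value are illustrative).
cat > /tmp/appc.vcpe.success.json <<'EOF'
{ "body": { "output": { "common-header": { "sub-request-id": "REPLACEME" } } } }
EOF

SUB_REQ_ID="1a2b3c4d-5e6f-4a7b-8c9d-0e1f2a3b4c5d"
sed -i "s/REPLACEME/${SUB_REQ_ID}/" /tmp/appc.vcpe.success.json

grep -o '"sub-request-id": "[^"]*"' /tmp/appc.vcpe.success.json
```
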
Send a simulated APPC response back to the PDP-D over the *APPC-LCM-WRITE* channel.

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-LCM-WRITE/events @appc.vcpe.success.json Content-Type:'text/plain'

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel,
and an entry is added to *$POLICY_LOGS/audit.log* indicating successful completion.

vFirewall use case testing
===========================

The first step is to create the *operational.modifyconfig* policy.

policy.vfw.json
~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "type": "onap.policies.controlloop.operational.common.Drools",
      "type_version": "1.0.0",
      "name": "operational.modifyconfig",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "operational.modifyconfig"
      },
      "properties": {
        "id": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
        "timeout": 300,
        "abatement": false,
        "trigger": "unique-policy-id-1-modifyConfig",
        "operations": [
          {
            "id": "unique-policy-id-1-modifyConfig",
            "description": "Modify the packet generator",
            "operation": {
              "actor": "APPC",
              "operation": "ModifyConfig",
              "target": {
                "targetType": "VNF",
                "entityIds": {
                  "resourceID": "bbb3cefd-01c8-413c-9bdd-2b92f9ca3d38"
                }
              },
              "payload": {
                "streams": "{\"active-streams\": 5 }"
              }
            },
            "timeout": 240,
            "retries": 0,
            "success": "final_success",
            "failure": "final_failure",
            "failure_timeout": "final_failure_timeout",
            "failure_retries": "final_failure_retries",
            "failure_exception": "final_failure_exception",
            "failure_guard": "final_failure_guard"
          }
        ]
      }
    }

To provision the *operational.modifyconfig* policy, issue the following command:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vfw.json

Verify that the policy shows with the telemetry tools:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    > get /policy/pdp/engine/lifecycle/policies
    > get /policy/pdp/engine/controllers/usecases/drools/facts/usecases/controlloops


dcae.vfw.onset.json
~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
      "closedLoopAlarmStart": 1463679805324,
      "closedLoopEventClient": "microservice.stringmatcher",
      "closedLoopEventStatus": "ONSET",
      "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
      "target_type": "VNF",
      "target": "generic-vnf.vnf-name",
      "AAI": {
        "vserver.is-closed-loop-disabled": "false",
        "vserver.prov-status": "ACTIVE",
        "generic-vnf.vnf-name": "fw0002vm002fw002",
        "vserver.vserver-name": "OzVServer"
      },
      "from": "DCAE",
      "version": "1.0.2"
    }

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vfw.onset.json Content-Type:'text/plain'

This will spawn a vFW control loop transaction in the PDP-D.  Policy will send a *ModifyConfig* message over the
*APPC-CL* channel to APPC and wait for a response.  This can be seen by searching the network.log for *APPC-CL*.

Note the *SubRequestId* field in the *ModifyConfig* message in the *APPC-CL* topic in the network.log.

Send a simulated APPC response back to the PDP-D over the *APPC-CL* channel.
To do this, replace the *REPLACEME* text in *appc.vcpe.success.json* with this *SubRequestId*.

appc.vcpe.success.json
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "CommonHeader": {
        "TimeStamp": 1506051879001,
        "APIver": "1.01",
        "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
        "SubRequestID": "REPLACEME",
        "RequestTrack": [],
        "Flags": []
      },
      "Status": {
        "Code": 400,
        "Value": "SUCCESS"
      },
      "Payload": {
        "generic-vnf.vnf-id": "f17face5-69cb-4c88-9e0b-7426db7edddd"
      }
    }

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-CL/events @appc.vcpe.success.json Content-Type:'text/plain'

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel,
and an entry is added to *$POLICY_LOGS/audit.log* indicating successful completion.

Running PDP-D Control Loop Application with other components
============================================================

The reader can also look at the `policy/docker repository <https://github.com/onap/policy-docker/tree/master/csit>`__.
More specifically, these directories have examples of other PDP-D Control Loop configurations:

* `plans <https://github.com/onap/policy-docker/tree/master/csit/drools-applications/plans>`__: startup & teardown scripts.
* `scripts <https://github.com/onap/policy-docker/blob/master/csit/docker-compose-all.yml>`__: docker-compose file.
* `tests <https://github.com/onap/policy-docker/tree/master/csit/drools-applications/tests>`__: test plan.

Additional information
======================

For additional information, please see the
`Drools PDP Development and Testing (In Depth) <https://wiki.onap.org/display/DW/2020+Frankfurt+Tutorials>`__ page.