.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

.. _pdpd-apps-label:

PDP-D Applications
##################

.. contents::
    :depth: 2

Overview
========

PDP-D applications use the PDP-D Engine middleware to provide domain-specific services.
See :ref:`pdpd-engine-label` for a description of the PDP-D infrastructure.

At this time, *Control Loops* are the only supported application type.

*Control Loop* applications must support at least one of the following *Policy Types*:

- **onap.policies.controlloop.Operational** (Operational Policies for Legacy Control Loops)
- **onap.policies.controlloop.operational.common.Drools** (Tosca Compliant Operational Policies)

Software
========

Source Code repositories
~~~~~~~~~~~~~~~~~~~~~~~~

The PDP-D Applications software resides in the `policy/drools-applications <https://git.onap.org/policy/drools-applications>`__ repository.  The actor libraries introduced in the *frankfurt* release reside in
the `policy/models repository <https://git.onap.org/policy/models>`__.

At this time, the *control loop* application is the only application supported in ONAP.
All the application projects reside under the
`controlloop directory <https://git.onap.org/policy/drools-applications/tree/controlloop>`__.

Docker Image
~~~~~~~~~~~~

See the *drools-applications*
`released versions <https://wiki.onap.org/display/DW/Policy+Framework+Project%3A+Component+Versions>`__
page for the latest images:

.. code-block:: bash

    docker pull onap/policy-pdpd-cl:1.6.4

At the time of this writing, *1.6.4* is the latest version.

The *onap/policy-pdpd-cl* image extends the *onap/policy-drools* image with
the *frankfurt* controller that realizes the *control loop* application.

Frankfurt Controller
====================

The `frankfurt <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-frankfurt>`__
controller is the *control loop* application in ONAP.

The controller has three parts:

* The `drl rules <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-frankfurt/src/main/resources/frankfurt.drl>`__.
* The `kmodule.xml <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-frankfurt/src/main/resources/META-INF/kmodule.xml>`__.
* The `dependencies <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-frankfurt/pom.xml>`__.

The *kmodule.xml* specifies a single session, and declares in the *kbase* section the two operational policy types that
it supports.

The Frankfurt controller relies on the new Actor framework to interact with remote
components as part of a control loop transaction.  The reader is referred to the
*Policy Platform Actor Development Guidelines* in the documentation for further information.

Operational Policy Types
========================

The *frankfurt* controller supports two operational policy types:

- *onap.policies.controlloop.Operational*
- *onap.policies.controlloop.operational.common.Drools*

The *onap.policies.controlloop.Operational* type is the legacy operational type, used before
the *frankfurt* release.  The *onap.policies.controlloop.operational.common.Drools* type
is the Tosca-compliant policy type introduced in *frankfurt*.

The legacy operational policy type is defined in
`onap.policies.controlloop.Operational.yaml <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policytypes/onap.policies.controlloop.Operational.yaml>`__.

The Tosca-compliant operational policy type is defined in
`onap.policies.controlloop.operational.common.Drools <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policytypes/onap.policies.controlloop.operational.common.Drools.yaml>`__.

An example of a legacy operational policy can be found
`here <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policies/vDNS.policy.operational.legacy.input.json>`__.

An example of a Tosca-compliant operational policy can be found
`here <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policies/vDNS.policy.operational.input.tosca.json>`__.

Features
========

Since the PDP-D Control Loop Application image was created from the PDP-D Engine image (*onap/policy-drools*),
it inherits all of its features and functionality.

The features enabled in the *onap/policy-pdpd-cl* image are:

- **distributed locking**: distributed resource locking.
- **healthcheck**: healthcheck.
- **lifecycle**: enables the lifecycle APIs.
- **controlloop-trans**: control loop transaction tracking.
- **controlloop-management**: generic controller capabilities.
- **controlloop-frankfurt**: new *controller* introduced in the frankfurt release to realize the ONAP use cases.

The following features are installed but disabled:

- **controlloop-usecases**: *controller* used in pre-frankfurt releases.
- **controlloop-utils**: *actor* simulators.

Control Loops Transaction (controlloop-trans)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature tracks control loop transactions and operations.  These are recorded in
*$POLICY_LOGS/audit.log* and *$POLICY_LOGS/metrics.log*, and are accessible
through the telemetry APIs.

Control Loops Management (controlloop-management)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature installs common control loop application resources and provides
telemetry API extensions.  *Actor* configurations are packaged in this
feature.

Frankfurt Controller (controlloop-frankfurt)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the *frankfurt* release implementation of the ONAP use cases.
It relies on the new *Actor* model framework to carry out a policy's
execution.

Usecases Controller (controlloop-usecases)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the deprecated pre-frankfurt controller.

Utilities (controlloop-utils)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature enables *actor simulators* for testing purposes.

Offline Mode
============

The default ONAP installation in *onap/policy-pdpd-cl:1.6.4* is *OFFLINE*.
In this configuration, the *rules* artifact and its *dependencies* all reside in the local
maven repository.  This requires that the maven dependencies be preloaded in the local
repository.

An offline configuration requires two configuration items:

- the *OFFLINE* environment variable set to true (see `values.yaml <https://git.onap.org/oom/tree/kubernetes/policy/values.yaml>`__).
- an override of the default *settings.xml* (see
  `settings.xml <https://git.onap.org/oom/tree/kubernetes/policy/charts/drools/resources/configmaps/settings.xml>`__).

Running the PDP-D Control Loop Application in a single container
================================================================

Environment File
~~~~~~~~~~~~~~~~

First create an environment file (in this example *env.conf*) to configure the PDP-D.

.. code-block:: bash

    # SYSTEM software configuration

    POLICY_HOME=/opt/app/policy
    POLICY_LOGS=/var/log/onap/policy/pdpd
    KEYSTORE_PASSWD=Pol1cy_0nap
    TRUSTSTORE_PASSWD=Pol1cy_0nap

    # Telemetry credentials

    TELEMETRY_PORT=9696
    TELEMETRY_HOST=0.0.0.0
    TELEMETRY_USER=demo@people.osaaf.org
    TELEMETRY_PASSWORD=demo123456!

    # nexus repository

    SNAPSHOT_REPOSITORY_ID=
    SNAPSHOT_REPOSITORY_URL=
    RELEASE_REPOSITORY_ID=
    RELEASE_REPOSITORY_URL=
    REPOSITORY_USERNAME=
    REPOSITORY_PASSWORD=
    REPOSITORY_OFFLINE=true

    MVN_SNAPSHOT_REPO_URL=
    MVN_RELEASE_REPO_URL=

    # Relational (SQL) DB access

    SQL_HOST=
    SQL_USER=
    SQL_PASSWORD=

    # AAF

    AAF=false
    AAF_NAMESPACE=org.onap.policy
    AAF_HOST=aaf.api.simpledemo.onap.org

    # PDP-D DMaaP configuration channel

    PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
    PDPD_CONFIGURATION_API_KEY=
    PDPD_CONFIGURATION_API_SECRET=
    PDPD_CONFIGURATION_CONSUMER_GROUP=
    PDPD_CONFIGURATION_CONSUMER_INSTANCE=
    PDPD_CONFIGURATION_PARTITION_KEY=

    # PAP-PDP configuration channel

    POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
    POLICY_PDP_PAP_GROUP=defaultGroup

    # Symmetric Key for encoded sensitive data

    SYMM_KEY=

    # Healthcheck Feature

    HEALTHCHECK_USER=demo@people.osaaf.org
    HEALTHCHECK_PASSWORD=demo123456!

    # Pooling Feature

    POOLING_TOPIC=POOLING

    # PAP

    PAP_HOST=
    PAP_USERNAME=
    PAP_PASSWORD=

    # PAP legacy

    PAP_LEGACY_USERNAME=
    PAP_LEGACY_PASSWORD=

    # PDP-X

    PDP_HOST=localhost
    PDP_PORT=6669
    PDP_CONTEXT_URI=pdp/api/getDecision
    PDP_USERNAME=policy
    PDP_PASSWORD=password
    GUARD_DISABLED=true

    # DCAE DMaaP

    DCAE_TOPIC=unauthenticated.DCAE_CL_OUTPUT
    DCAE_SERVERS=localhost
    DCAE_CONSUMER_GROUP=dcae.policy.shared

    # Open DMaaP

    DMAAP_SERVERS=localhost

    # AAI

    AAI_HOST=localhost
    AAI_PORT=6666
    AAI_CONTEXT_URI=
    AAI_USERNAME=policy
    AAI_PASSWORD=policy

    # SO

    SO_HOST=localhost
    SO_PORT=6667
    SO_CONTEXT_URI=
    SO_URL=https://localhost:6667/
    SO_USERNAME=policy
    SO_PASSWORD=policy

    # VFC

    VFC_HOST=localhost
    VFC_PORT=6668
    VFC_CONTEXT_URI=api/nslcm/v1/
    VFC_USERNAME=policy
    VFC_PASSWORD=policy

    # SDNC

    SDNC_HOST=localhost
    SDNC_PORT=6670
    SDNC_CONTEXT_URI=restconf/operations/

Configuration
~~~~~~~~~~~~~

noop.pre.sh
"""""""""""

To avoid log noise related to the *dmaap* configuration, a startup script (*noop.pre.sh*) is added
to the host directory to be mounted; it converts the *dmaap* endpoints to *noop* ones.

.. code-block:: bash

    #!/bin/bash -x

    sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties

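As a quick illustration of what this substitution does, the snippet below runs the same *sed* expression over a sample property (the property name here is made up for the example, not taken from an actual PDP-D configuration file):

```shell
# Illustrative run of the substitution performed by noop.pre.sh:
# a topic property declared over dmaap is switched to the in-memory
# noop protocol, so no DMaaP connections are attempted at startup.
printf 'dmaap.source.topics=DCAE_TOPIC\n' > sample.properties
sed -i "s/^dmaap/noop/g" sample.properties
cat sample.properties   # -> noop.source.topics=DCAE_TOPIC
```
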
features.pre.sh
"""""""""""""""

We enable the *controlloop-utils* feature and disable the *distributed-locking* feature to avoid using the database.

.. code-block:: bash

    #!/bin/bash -x

    bash -c "/opt/app/policy/bin/features disable distributed-locking"
    bash -c "/opt/app/policy/bin/features enable controlloop-utils"

active.post.sh
""""""""""""""

The *active.post.sh* script makes the PDP-D active.

.. code-block:: bash

    #!/bin/bash -x

    bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"

Actor Properties
""""""""""""""""

In the *frankfurt* release, some *actor* configurations need to be overridden to support *http*, for compatibility
with the *controlloop-utils* feature.

AAI-http-client.properties
""""""""""""""""""""""""""

.. code-block:: bash

    http.client.services=AAI

    http.client.services.AAI.managed=true
    http.client.services.AAI.https=false
    http.client.services.AAI.host=${envd:AAI_HOST}
    http.client.services.AAI.port=${envd:AAI_PORT}
    http.client.services.AAI.userName=${envd:AAI_USERNAME}
    http.client.services.AAI.password=${envd:AAI_PASSWORD}
    http.client.services.AAI.contextUriPath=${envd:AAI_CONTEXT_URI}

SDNC-http-client.properties
"""""""""""""""""""""""""""

.. code-block:: bash

    http.client.services=SDNC

    http.client.services.SDNC.managed=true
    http.client.services.SDNC.https=false
    http.client.services.SDNC.host=${envd:SDNC_HOST}
    http.client.services.SDNC.port=${envd:SDNC_PORT}
    http.client.services.SDNC.userName=${envd:SDNC_USERNAME}
    http.client.services.SDNC.password=${envd:SDNC_PASSWORD}
    http.client.services.SDNC.contextUriPath=${envd:SDNC_CONTEXT_URI}

VFC-http-client.properties
""""""""""""""""""""""""""

.. code-block:: bash

    http.client.services=VFC

    http.client.services.VFC.managed=true
    http.client.services.VFC.https=false
    http.client.services.VFC.host=${envd:VFC_HOST}
    http.client.services.VFC.port=${envd:VFC_PORT}
    http.client.services.VFC.userName=${envd:VFC_USERNAME}
    http.client.services.VFC.password=${envd:VFC_PASSWORD}
    http.client.services.VFC.contextUriPath=${envd:VFC_CONTEXT_URI:api/nslcm/v1/}

settings.xml
""""""""""""

The *standalone-settings.xml* file is the default maven settings override in the container.

.. code-block:: xml

    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">

        <offline>true</offline>

        <profiles>
            <profile>
                <id>policy-local</id>
                <repositories>
                    <repository>
                        <id>file-repository</id>
                        <url>file:${user.home}/.m2/file-repository</url>
                        <releases>
                            <enabled>true</enabled>
                            <updatePolicy>always</updatePolicy>
                        </releases>
                        <snapshots>
                            <enabled>true</enabled>
                            <updatePolicy>always</updatePolicy>
                        </snapshots>
                    </repository>
                </repositories>
            </profile>
        </profiles>

        <activeProfiles>
            <activeProfile>policy-local</activeProfile>
        </activeProfiles>

    </settings>

Bring up the PDP-D Control Loop Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

    docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.6.4

To run the container in detached mode, add the *-d* flag.

Note that we are opening the *9696* telemetry API port to the outside world, mounting the *config* host directory,
and setting environment variables.

To open a shell into the PDP-D:

.. code-block:: bash

    docker exec -it PDPD bash

Tools such as *telemetry*, *db-migrator*, and *policy* can be used to inspect the system state,
either from within the container or through *docker exec*:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    docker exec -it PDPD bash -c "/opt/app/policy/bin/policy status"
    docker exec -it PDPD bash -c "/opt/app/policy/bin/db-migrator -s ALL -o report"

Controlled instantiation of the PDP-D Control Loop Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes a developer may want to start and stop the PDP-D manually:

.. code-block:: bash

   # start a bash shell

   docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.6.4 bash

   # use this command to start policy, applying host customizations from /tmp/policy-install/config

   pdpd-cl-entrypoint.sh vmboot

   # or use this command to start policy without host customizations

   policy start

   # at any time, use the following command to stop the PDP-D

   policy stop

   # and this command to start the PDP-D back up again

   policy start

Scale-out use case testing
==========================

The first step is to create the *operational.scaleout* policy.

policy.vdns.json
~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "type": "onap.policies.controlloop.operational.common.Drools",
      "type_version": "1.0.0",
      "name": "operational.scaleout",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "operational.scaleout"
      },
      "properties": {
        "id": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
        "timeout": 60,
        "abatement": false,
        "trigger": "unique-policy-id-1-scale-up",
        "operations": [
          {
            "id": "unique-policy-id-1-scale-up",
            "description": "Create a new VF Module",
            "operation": {
              "actor": "SO",
              "operation": "VF Module Create",
              "target": {
                "targetType": "VFMODULE",
                "entityIds": {
                  "modelInvariantId": "e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e",
                  "modelVersionId": "94b18b1d-cc91-4f43-911a-e6348665f292",
                  "modelName": "VfwclVfwsnkBbefb8ce2bde..base_vfw..module-0",
                  "modelVersion": 1,
                  "modelCustomizationId": "47958575-138f-452a-8c8d-d89b595f8164"
                }
              },
              "payload": {
                "requestParameters": "{\"usePreload\":true,\"userParams\":[]}",
                "configurationParameters": "[{\"ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[9]\",\"oam-ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[16]\",\"enabled\":\"$.vf-module-topology.vf-module-parameters.param[23]\"}]"
              }
            },
            "timeout": 20,
            "retries": 0,
            "success": "final_success",
            "failure": "final_failure",
            "failure_timeout": "final_failure_timeout",
            "failure_retries": "final_failure_retries",
            "failure_exception": "final_failure_exception",
            "failure_guard": "final_failure_guard"
          }
        ]
      }
    }

To provision the *scale-out* policy, issue the following command:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vdns.json

Verify that the policy shows with the telemetry tools:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    > get /policy/pdp/engine/lifecycle/policies
    > get /policy/pdp/engine/controllers/frankfurt/drools/facts/frankfurt/controlloops

dcae.vdns.onset.json
~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "closedLoopControlName": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
      "closedLoopAlarmStart": 1463679805324,
      "closedLoopEventClient": "microservice.stringmatcher",
      "closedLoopEventStatus": "ONSET",
      "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
      "target_type": "VNF",
      "target": "vserver.vserver-name",
      "AAI": {
        "vserver.is-closed-loop-disabled": "false",
        "vserver.prov-status": "ACTIVE",
        "vserver.vserver-name": "OzVServer"
      },
      "from": "DCAE",
      "version": "1.0.2"
    }

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vdns.onset.json Content-Type:'text/plain'

This will trigger the scale-out control loop transaction, which will interact with the *SO*
simulator to complete the transaction.

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel.
An entry in *$POLICY_LOGS/audit.log* should indicate successful completion as well.

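A quick way to confirm is to grep the network log for the completion marker. The snippet below guards against a missing log file so it can be run as-is; the *POLICY_LOGS* default matches the environment file used in this guide:

```shell
# Look for the completion marker in the network log; fall back to a
# message when the log is absent (e.g. when run outside the container).
POLICY_LOGS=${POLICY_LOGS:-/var/log/onap/policy/pdpd}
grep -s "FINAL: SUCCESS" "${POLICY_LOGS}/network.log" \
    || echo "no FINAL: SUCCESS entry found in ${POLICY_LOGS}/network.log"
```
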
vCPE use case testing
=====================

The first step is to create the *operational.restart* policy.

policy.vcpe.json
~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "type": "onap.policies.controlloop.operational.common.Drools",
      "type_version": "1.0.0",
      "name": "operational.restart",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "operational.restart"
      },
      "properties": {
        "id": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
        "timeout": 300,
        "abatement": false,
        "trigger": "unique-policy-id-1-restart",
        "operations": [
          {
            "id": "unique-policy-id-1-restart",
            "description": "Restart the VM",
            "operation": {
              "actor": "APPC",
              "operation": "Restart",
              "target": {
                "targetType": "VNF"
              }
            },
            "timeout": 240,
            "retries": 0,
            "success": "final_success",
            "failure": "final_failure",
            "failure_timeout": "final_failure_timeout",
            "failure_retries": "final_failure_retries",
            "failure_exception": "final_failure_exception",
            "failure_guard": "final_failure_guard"
          }
        ]
      }
    }

To provision the *operational.restart* policy, issue the following command:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vcpe.json

Verify that the policy shows with the telemetry tools:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    > get /policy/pdp/engine/lifecycle/policies
    > get /policy/pdp/engine/controllers/frankfurt/drools/facts/frankfurt/controlloops

dcae.vcpe.onset.json
~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
      "closedLoopAlarmStart": 1463679805324,
      "closedLoopEventClient": "DCAE_INSTANCE_ID.dcae-tca",
      "closedLoopEventStatus": "ONSET",
      "requestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
      "target_type": "VNF",
      "target": "generic-vnf.vnf-id",
      "AAI": {
        "vserver.is-closed-loop-disabled": "false",
        "vserver.prov-status": "ACTIVE",
        "generic-vnf.vnf-id": "vCPE_Infrastructure_vGMUX_demo_app"
      },
      "from": "DCAE",
      "version": "1.0.2"
    }

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vcpe.onset.json Content-Type:'text/plain'

This will spawn a vCPE control loop transaction in the PDP-D.  Policy will send a *restart* message over the
*APPC-LCM-READ* channel to APPC and wait for a response.

Verify that you see this message in the network.log by looking for *APPC-LCM-READ* messages.

Note the *sub-request-id* value from the restart message in the *APPC-LCM-READ* channel.

Replace *REPLACEME* in *appc.vcpe.success.json* with this sub-request-id.

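One way to do the substitution is with *sed*. For the sake of a runnable example, the snippet below creates a minimal stand-in for the affected field and fills in a hypothetical sub-request-id; in practice you would run the same *sed* expression against *appc.vcpe.success.json* with the value observed in your network.log:

```shell
# SUB_REQ_ID is a hypothetical value; copy the real one from the
# APPC-LCM-READ message in the network.log.
SUB_REQ_ID="hypothetical-sub-request-id-1"
# Minimal stand-in for the sub-request-id field of appc.vcpe.success.json.
printf '"sub-request-id": "REPLACEME"\n' > appc.field.sample
sed -i "s/REPLACEME/${SUB_REQ_ID}/" appc.field.sample
cat appc.field.sample   # -> "sub-request-id": "hypothetical-sub-request-id-1"
```
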
appc.vcpe.success.json
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "body": {
        "output": {
          "common-header": {
            "timestamp": "2017-08-25T21:06:23.037Z",
            "api-ver": "5.00",
            "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
            "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
            "sub-request-id": "REPLACEME",
            "flags": {}
          },
          "status": {
            "code": 400,
            "message": "Restart Successful"
          }
        }
      },
      "version": "2.0",
      "rpc-name": "restart",
      "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-1",
      "type": "response"
    }

Send the simulated APPC response back to the PDP-D over the *APPC-LCM-WRITE* channel:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-LCM-WRITE/events @appc.vcpe.success.json Content-Type:'text/plain'

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel,
and that an entry is added to *$POLICY_LOGS/audit.log* indicating successful completion.

vFirewall use case testing
===========================

The first step is to create the *operational.modifyconfig* policy.

policy.vfw.json
~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "type": "onap.policies.controlloop.operational.common.Drools",
      "type_version": "1.0.0",
      "name": "operational.modifyconfig",
      "version": "1.0.0",
      "metadata": {
        "policy-id": "operational.modifyconfig"
      },
      "properties": {
        "id": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
        "timeout": 300,
        "abatement": false,
        "trigger": "unique-policy-id-1-modifyConfig",
        "operations": [
          {
            "id": "unique-policy-id-1-modifyConfig",
            "description": "Modify the packet generator",
            "operation": {
              "actor": "APPC",
              "operation": "ModifyConfig",
              "target": {
                "targetType": "VNF",
                "entityIds": {
                  "resourceID": "bbb3cefd-01c8-413c-9bdd-2b92f9ca3d38"
                }
              },
              "payload": {
                "streams": "{\"active-streams\": 5 }"
              }
            },
            "timeout": 240,
            "retries": 0,
            "success": "final_success",
            "failure": "final_failure",
            "failure_timeout": "final_failure_timeout",
            "failure_retries": "final_failure_retries",
            "failure_exception": "final_failure_exception",
            "failure_guard": "final_failure_guard"
          }
        ]
      }
    }

To provision the *operational.modifyconfig* policy, issue the following command:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vfw.json

Verify that the policy shows with the telemetry tools:

.. code-block:: bash

    docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
    > get /policy/pdp/engine/lifecycle/policies
    > get /policy/pdp/engine/controllers/frankfurt/drools/facts/frankfurt/controlloops

dcae.vfw.onset.json
~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
      "closedLoopAlarmStart": 1463679805324,
      "closedLoopEventClient": "microservice.stringmatcher",
      "closedLoopEventStatus": "ONSET",
      "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
      "target_type": "VNF",
      "target": "generic-vnf.vnf-name",
      "AAI": {
        "vserver.is-closed-loop-disabled": "false",
        "vserver.prov-status": "ACTIVE",
        "generic-vnf.vnf-name": "fw0002vm002fw002",
        "vserver.vserver-name": "OzVServer"
      },
      "from": "DCAE",
      "version": "1.0.2"
    }

To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vfw.onset.json Content-Type:'text/plain'

This will spawn a vFW control loop transaction in the PDP-D.  Policy will send a *ModifyConfig* message over the
*APPC-CL* channel to APPC and wait for a response.  This can be seen by searching the network.log for *APPC-CL*.

Note the *SubRequestId* field in the *ModifyConfig* message in the *APPC-CL* topic in the network.log.

Send a simulated APPC response back to the PDP-D over the *APPC-CL* channel.
To do this, change the *REPLACEME* text in *appc.vcpe.success.json* to this *SubRequestId*.

appc.vcpe.success.json
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: json

    {
      "CommonHeader": {
        "TimeStamp": 1506051879001,
        "APIver": "1.01",
        "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
        "SubRequestID": "REPLACEME",
        "RequestTrack": [],
        "Flags": []
      },
      "Status": {
        "Code": 400,
        "Value": "SUCCESS"
      },
      "Payload": {
        "generic-vnf.vnf-id": "f17face5-69cb-4c88-9e0b-7426db7edddd"
      }
    }

.. code-block:: bash

    http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-CL/events @appc.vcpe.success.json Content-Type:'text/plain'

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel,
and that an entry is added to *$POLICY_LOGS/audit.log* indicating successful completion.

Running PDP-D Control Loop Application with other components
============================================================

The reader can also look at the `integration/csit repository <https://git.onap.org/integration/csit>`__.
More specifically, these directories contain examples of other PDP-D Control Loop configurations:

* `plans <https://git.onap.org/integration/csit/tree/plans/policy/drools-applications>`__: startup scripts.
* `scripts <https://git.onap.org/integration/csit/tree/scripts/policy/drools-apps/docker-compose-drools-apps.yml>`__: docker-compose and related files.
* `tests <https://git.onap.org/integration/csit/tree/tests/policy/drools-applications>`__: test plan.

Additional information
======================

For additional information, please see the
`Drools PDP Development and Testing (In Depth) <https://wiki.onap.org/display/DW/2020+Frankfurt+Tutorials>`__ page.