.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
PDP-D applications use the PDP-D Engine middleware to provide domain-specific services.
See :ref:`pdpd-engine-label` for a description of the PDP-D infrastructure.

At this time, *Control Loops* are the only type of application supported.

*Control Loop* applications must support at least one of the following *Policy Types*:

- **onap.policies.controlloop.Operational** (Operational Policies for Legacy Control Loops)
- **onap.policies.controlloop.operational.common.Drools** (Tosca Compliant Operational Policies)
Source Code repositories
~~~~~~~~~~~~~~~~~~~~~~~~

The PDP-D Applications software resides in the `policy/drools-applications <https://git.onap.org/policy/drools-applications>`__ repository. The actor libraries introduced in the *frankfurt* release reside in
the `policy/models repository <https://git.onap.org/policy/models>`__.

At this time, the *control loop* application is the only application supported in ONAP.
All the application projects reside under the
`controlloop directory <https://git.onap.org/policy/drools-applications/tree/controlloop>`__.
See the *drools-applications*
`released versions <https://wiki.onap.org/display/DW/Policy+Framework+Project%3A+Component+Versions>`__
for the latest images:

.. code-block:: bash

   docker pull onap/policy-pdpd-cl:1.6.4

At the time of this writing, *1.6.4* is the latest version.
The *onap/policy-pdpd-cl* image extends the *onap/policy-drools* image with
the *frankfurt* controller that realizes the *control loop* application.

Frankfurt Controller
~~~~~~~~~~~~~~~~~~~~
The `frankfurt <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-frankfurt>`__
controller is the *control loop* application in ONAP.
There are three parts in this controller:

* The `drl rules <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-frankfurt/src/main/resources/frankfurt.drl>`__.
* The `kmodule.xml <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-frankfurt/src/main/resources/META-INF/kmodule.xml>`__.
* The `dependencies <https://git.onap.org/policy/drools-applications/tree/controlloop/common/controller-frankfurt/pom.xml>`__.

The `kmodule.xml` specifies only one session, and declares in the *kbase* section the two operational policy types
that the controller supports.
The Frankfurt controller relies on the new Actor framework to interact with remote
components as part of a control loop transaction. The reader is referred to the
*Policy Platform Actor Development Guidelines* in the documentation for further information.
Operational Policy Types
========================

The *frankfurt* controller supports the following two operational policy types:

- *onap.policies.controlloop.Operational*
- *onap.policies.controlloop.operational.common.Drools*
The *onap.policies.controlloop.Operational* type is the legacy operational type, used before
the *frankfurt* release. The *onap.policies.controlloop.operational.common.Drools* type
is the Tosca-compliant policy type introduced in *frankfurt*.
The legacy operational policy type is defined in
`onap.policies.controlloop.Operational.yaml <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policytypes/onap.policies.controlloop.Operational.yaml>`__.

The Tosca-compliant operational policy type is defined in
`onap.policies.controlloop.operational.common.Drools.yaml <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policytypes/onap.policies.controlloop.operational.common.Drools.yaml>`__.
An example of a legacy operational policy can be found
`here <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policies/vDNS.policy.operational.legacy.input.json>`__.

An example of a Tosca-compliant operational policy can be found
`here <https://git.onap.org/policy/models/tree/models-examples/src/main/resources/policies/vDNS.policy.operational.input.tosca.json>`__.

Features
========
Since the PDP-D Control Loop Application image was created from the PDP-D Engine image (*onap/policy-drools*),
it inherits all of its features and functionality.
The enabled features in the *onap/policy-pdpd-cl* image are:

- **distributed-locking**: distributed resource locking.
- **healthcheck**: healthcheck.
- **lifecycle**: enables the lifecycle APIs.
- **controlloop-trans**: control loop transaction tracking.
- **controlloop-management**: generic controller capabilities.
- **controlloop-frankfurt**: new *controller* introduced in the *frankfurt* release to realize the ONAP use cases.
The following features are installed but disabled:

- **controlloop-usecases**: *controller* used in pre-*frankfurt* releases.
- **controlloop-utils**: *actor* simulators.
Control Loops Transaction (controlloop-trans)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature tracks control loop transactions and operations. These are recorded in
*$POLICY_LOGS/audit.log* and *$POLICY_LOGS/metrics.log*, and are accessible
through the telemetry APIs.
Control Loops Management (controlloop-management)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This feature installs common control loop application resources and provides
telemetry API extensions. *Actor* configurations are packaged in this
feature.
Frankfurt Controller (controlloop-frankfurt)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the *frankfurt* release implementation of the ONAP use cases.
It relies on the new *Actor* model framework to carry out a policy's
execution.
Usecases Controller (controlloop-usecases)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is the deprecated pre-*frankfurt* controller.
Utilities (controlloop-utils)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Enables *actor simulators* for testing purposes.

Offline Mode
============
The default ONAP installation of *onap/policy-pdpd-cl:1.6.4* is *OFFLINE*.
In this configuration, the *rules* artifact and the *dependencies* all reside in the local
maven repository. This requires that the maven dependencies are preloaded in the local
repository.

An offline configuration requires two configuration items:

- the *OFFLINE* environment variable set to true (see `values.yaml <https://git.onap.org/oom/tree/kubernetes/policy/values.yaml>`__).
- an override of the default *settings.xml* (see
  `settings.xml <https://git.onap.org/oom/tree/kubernetes/policy/charts/drools/resources/configmaps/settings.xml>`__).
Running the PDP-D Control Loop Application in a single container
================================================================

First create an environment file (in this example, *env.conf*) to configure the PDP-D:
.. code-block:: bash

   # SYSTEM software configuration

   POLICY_HOME=/opt/app/policy
   POLICY_LOGS=/var/log/onap/policy/pdpd
   KEYSTORE_PASSWD=Pol1cy_0nap
   TRUSTSTORE_PASSWD=Pol1cy_0nap

   # Telemetry credentials

   TELEMETRY_HOST=0.0.0.0
   TELEMETRY_USER=demo@people.osaaf.org
   TELEMETRY_PASSWORD=demo123456!

   SNAPSHOT_REPOSITORY_ID=
   SNAPSHOT_REPOSITORY_URL=
   RELEASE_REPOSITORY_ID=
   RELEASE_REPOSITORY_URL=

   REPOSITORY_OFFLINE=true

   MVN_SNAPSHOT_REPO_URL=
   MVN_RELEASE_REPO_URL=

   # Relational (SQL) DB access

   AAF_NAMESPACE=org.onap.policy
   AAF_HOST=aaf.api.simpledemo.onap.org

   # PDP-D DMaaP configuration channel

   PDPD_CONFIGURATION_TOPIC=PDPD-CONFIGURATION
   PDPD_CONFIGURATION_API_KEY=
   PDPD_CONFIGURATION_API_SECRET=
   PDPD_CONFIGURATION_CONSUMER_GROUP=
   PDPD_CONFIGURATION_CONSUMER_INSTANCE=
   PDPD_CONFIGURATION_PARTITION_KEY=

   # PAP-PDP configuration channel

   POLICY_PDP_PAP_TOPIC=POLICY-PDP-PAP
   POLICY_PDP_PAP_GROUP=defaultGroup

   # Symmetric Key for encoded sensitive data

   # Healthcheck Feature

   HEALTHCHECK_USER=demo@people.osaaf.org
   HEALTHCHECK_PASSWORD=demo123456!

   POOLING_TOPIC=POOLING

   PDP_CONTEXT_URI=pdp/api/getDecision
   PDP_PASSWORD=password

   DCAE_TOPIC=unauthenticated.DCAE_CL_OUTPUT
   DCAE_SERVERS=localhost
   DCAE_CONSUMER_GROUP=dcae.policy.shared

   DMAAP_SERVERS=localhost

   SO_URL=https://localhost:6667/

   VFC_CONTEXT_URI=api/nslcm/v1/

   SDNC_CONTEXT_URI=restconf/operations/
To avoid noise in the logs related to the *dmaap* configuration, add a startup script (*noop.pre.sh*)
to the host directory to be mounted; it converts the *dmaap* endpoints to *noop*:

.. code-block:: bash

   sed -i "s/^dmaap/noop/g" $POLICY_HOME/config/*.properties
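To illustrate the effect of that substitution, the snippet below applies the same *sed* expression to a sample properties line; the property name used here is a made-up stand-in, not an actual PDP-D property:

```shell
# Illustrative only: show the dmaap -> noop rewrite on a sample property line.
tmpfile=$(mktemp)
echo "dmaap.source.topics=DCAE_TOPIC" > "$tmpfile"
sed -i "s/^dmaap/noop/g" "$tmpfile"
cat "$tmpfile"   # -> noop.source.topics=DCAE_TOPIC
```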
We can enable the *controlloop-utils* feature and disable the *distributed-locking* feature to avoid using the database:

.. code-block:: bash

   bash -c "/opt/app/policy/bin/features disable distributed-locking"
   bash -c "/opt/app/policy/bin/features enable controlloop-utils"
The *active.post.sh* script makes the PDP-D active:

.. code-block:: bash

   bash -c "http --verify=no -a ${TELEMETRY_USER}:${TELEMETRY_PASSWORD} PUT https://localhost:9696/policy/pdp/engine/lifecycle/state/ACTIVE"
In the *frankfurt* release, some *actor* configurations need to be overridden to support *http*, for compatibility
with the *controlloop-utils* feature.
AAI-http-client.properties
""""""""""""""""""""""""""

.. code-block:: none

   http.client.services=AAI

   http.client.services.AAI.managed=true
   http.client.services.AAI.https=false
   http.client.services.AAI.host=${envd:AAI_HOST}
   http.client.services.AAI.port=${envd:AAI_PORT}
   http.client.services.AAI.userName=${envd:AAI_USERNAME}
   http.client.services.AAI.password=${envd:AAI_PASSWORD}
   http.client.services.AAI.contextUriPath=${envd:AAI_CONTEXT_URI}
SDNC-http-client.properties
"""""""""""""""""""""""""""

.. code-block:: none

   http.client.services=SDNC

   http.client.services.SDNC.managed=true
   http.client.services.SDNC.https=false
   http.client.services.SDNC.host=${envd:SDNC_HOST}
   http.client.services.SDNC.port=${envd:SDNC_PORT}
   http.client.services.SDNC.userName=${envd:SDNC_USERNAME}
   http.client.services.SDNC.password=${envd:SDNC_PASSWORD}
   http.client.services.SDNC.contextUriPath=${envd:SDNC_CONTEXT_URI}
VFC-http-client.properties
""""""""""""""""""""""""""

.. code-block:: none

   http.client.services=VFC

   http.client.services.VFC.managed=true
   http.client.services.VFC.https=false
   http.client.services.VFC.host=${envd:VFC_HOST}
   http.client.services.VFC.port=${envd:VFC_PORT}
   http.client.services.VFC.userName=${envd:VFC_USERNAME}
   http.client.services.VFC.password=${envd:VFC_PASSWORD}
   http.client.services.VFC.contextUriPath=${envd:VFC_CONTEXT_URI:api/nslcm/v1/}
The *standalone-settings.xml* file is the default maven settings override in the container:

.. code-block:: xml

   <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
     <offline>true</offline>
     <profiles>
       <profile>
         <id>policy-local</id>
         <repositories>
           <repository>
             <id>file-repository</id>
             <url>file:${user.home}/.m2/file-repository</url>
             <releases>
               <enabled>true</enabled>
               <updatePolicy>always</updatePolicy>
             </releases>
             <snapshots>
               <enabled>true</enabled>
               <updatePolicy>always</updatePolicy>
             </snapshots>
           </repository>
         </repositories>
       </profile>
     </profiles>
     <activeProfiles>
       <activeProfile>policy-local</activeProfile>
     </activeProfiles>
   </settings>
Bring up the PDP-D Control Loop Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

   docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.6.4

To run the container in detached mode, add the *-d* flag.

Note that we are opening the *9696* telemetry API port to the outside world, mounting the *config* host directory,
and setting environment variables.
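Since the engine takes a few moments to come up, a small polling helper can be used before issuing telemetry calls. This is only a sketch: the use of *curl* and the engine endpoint shown in the commented example are assumptions, patterned on the lifecycle URL used earlier:

```shell
# Retry a command until it succeeds or the attempts are exhausted.
wait_for() {
    tries=$1; shift
    while [ "$tries" -gt 0 ]; do
        "$@" >/dev/null 2>&1 && return 0
        tries=$((tries - 1))
        sleep 2
    done
    return 1
}

# Assumed usage: poll the telemetry API until the engine answers.
# wait_for 30 curl -sk -u "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine
```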
To open a shell into the PDP-D:

.. code-block:: bash

   docker exec -it PDPD bash

Tools such as *telemetry*, *db-migrator*, and *policy* can be used to look at the system state:

.. code-block:: bash

   docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
   docker exec -it PDPD bash -c "/opt/app/policy/bin/policy status"
   docker exec -it PDPD bash -c "/opt/app/policy/bin/db-migrator -s ALL -o report"
Controlled instantiation of the PDP-D Control Loop Application
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes a developer may want to start and stop the PDP-D manually:

.. code-block:: bash

   docker run --rm -p 9696:9696 -v ${PWD}/config:/tmp/policy-install/config --env-file ${PWD}/env/env.conf -it --name PDPD -h pdpd nexus3.onap.org:10001/onap/policy-pdpd-cl:1.6.4 bash

   # use this command to start policy applying host customizations from /tmp/policy-install/config

   pdpd-cl-entrypoint.sh vmboot

   # or use this command to start policy without host customization

   policy start

   # at any time use the following command to stop the PDP-D

   policy stop

   # and this command to start the PDP-D back again

   policy start
Scale-out use case testing
==========================

The first step is to create the *operational.scaleout* policy.
policy.vdns.json
~~~~~~~~~~~~~~~~

The *policy.vdns.json* policy (abridged):

.. code-block:: none

   "type": "onap.policies.controlloop.operational.common.Drools",
   "type_version": "1.0.0",
   "name": "operational.scaleout",
   "policy-id": "operational.scaleout"
   "id": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
   "trigger": "unique-policy-id-1-scale-up",
   "id": "unique-policy-id-1-scale-up",
   "description": "Create a new VF Module",
   "operation": "VF Module Create",
   "targetType": "VFMODULE",
   "modelInvariantId": "e6130d03-56f1-4b0a-9a1d-e1b2ebc30e0e",
   "modelVersionId": "94b18b1d-cc91-4f43-911a-e6348665f292",
   "modelName": "VfwclVfwsnkBbefb8ce2bde..base_vfw..module-0",
   "modelCustomizationId": "47958575-138f-452a-8c8d-d89b595f8164"
   "requestParameters": "{\"usePreload\":true,\"userParams\":[]}",
   "configurationParameters": "[{\"ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[9]\",\"oam-ip-addr\":\"$.vf-module-topology.vf-module-parameters.param[16]\",\"enabled\":\"$.vf-module-topology.vf-module-parameters.param[23]\"}]"
   "success": "final_success",
   "failure": "final_failure",
   "failure_timeout": "final_failure_timeout",
   "failure_retries": "final_failure_retries",
   "failure_exception": "final_failure_exception",
   "failure_guard": "final_failure_guard"
To provision the *scale-out* policy, issue the following command:

.. code-block:: bash

   http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vdns.json

Verify that the policy shows up with the telemetry tools:

.. code-block:: bash

   docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
   > get /policy/pdp/engine/lifecycle/policies
   > get /policy/pdp/engine/controllers/frankfurt/drools/facts/frankfurt/controlloops
dcae.vdns.onset.json
~~~~~~~~~~~~~~~~~~~~

The *dcae.vdns.onset.json* event (abridged):

.. code-block:: none

   "closedLoopControlName": "ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3",
   "closedLoopAlarmStart": 1463679805324,
   "closedLoopEventClient": "microservice.stringmatcher",
   "closedLoopEventStatus": "ONSET",
   "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
   "target_type": "VNF",
   "target": "vserver.vserver-name",
   "vserver.is-closed-loop-disabled": "false",
   "vserver.prov-status": "ACTIVE",
   "vserver.vserver-name": "OzVServer"
To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

.. code-block:: bash

   http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vdns.onset.json Content-Type:'text/plain'

This will trigger the scale-out control loop transaction, which will interact with the *SO*
simulator to complete the transaction.

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the POLICY-CL-MGT channel.
An entry in *$POLICY_LOGS/audit.log* should indicate successful completion as well.
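For a quick check, the sketch below greps the logs for the final notification; the default log directory is taken from the *POLICY_LOGS* value in the environment file above and should be treated as an assumption about where the logs are visible:

```shell
# Check the PDP-D logs for the final control loop notification.
# POLICY_LOGS is assumed to match the value configured in env.conf.
POLICY_LOGS=${POLICY_LOGS:-/var/log/onap/policy/pdpd}
if grep -q "FINAL: SUCCESS" "$POLICY_LOGS/network.log" 2>/dev/null; then
    echo "FINAL: SUCCESS notification found"
else
    echo "no FINAL: SUCCESS notification found (yet)"
fi
```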
vCPE use case testing
=====================

The first step is to create the *operational.restart* policy.
policy.vcpe.json
~~~~~~~~~~~~~~~~

The *policy.vcpe.json* policy (abridged):

.. code-block:: none

   "type": "onap.policies.controlloop.operational.common.Drools",
   "type_version": "1.0.0",
   "name": "operational.restart",
   "policy-id": "operational.restart"
   "id": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
   "trigger": "unique-policy-id-1-restart",
   "id": "unique-policy-id-1-restart",
   "description": "Restart the VM",
   "operation": "Restart",
   "success": "final_success",
   "failure": "final_failure",
   "failure_timeout": "final_failure_timeout",
   "failure_retries": "final_failure_retries",
   "failure_exception": "final_failure_exception",
   "failure_guard": "final_failure_guard"
To provision the *operational.restart* policy, issue the following command:

.. code-block:: bash

   http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vcpe.json

Verify that the policy shows up with the telemetry tools:

.. code-block:: bash

   docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
   > get /policy/pdp/engine/lifecycle/policies
   > get /policy/pdp/engine/controllers/frankfurt/drools/facts/frankfurt/controlloops
dcae.vcpe.onset.json
~~~~~~~~~~~~~~~~~~~~

The *dcae.vcpe.onset.json* event (abridged):

.. code-block:: none

   "closedLoopControlName": "ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e",
   "closedLoopAlarmStart": 1463679805324,
   "closedLoopEventClient": "DCAE_INSTANCE_ID.dcae-tca",
   "closedLoopEventStatus": "ONSET",
   "requestID": "664be3d2-6c12-4f4b-a3e7-c349acced200",
   "target_type": "VNF",
   "target": "generic-vnf.vnf-id",
   "vserver.is-closed-loop-disabled": "false",
   "vserver.prov-status": "ACTIVE",
   "generic-vnf.vnf-id": "vCPE_Infrastructure_vGMUX_demo_app"
To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

.. code-block:: bash

   http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vcpe.onset.json Content-Type:'text/plain'

This will spawn a vCPE control loop transaction in the PDP-D. Policy will send a *restart* message over the
*APPC-LCM-READ* channel to APPC and wait for a response. Verify that you see this message in the network.log by
looking for *APPC-LCM-READ* messages.

Note the *sub-request-id* value from the restart message in the *APPC-LCM-READ* channel.

Replace *REPLACEME* in *appc.vcpe.success.json* with this *sub-request-id*.
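A one-line *sed* can apply the substitution; in the sketch below, the id value and the file contents are illustrative stand-ins for the real *sub-request-id* and *appc.vcpe.success.json*:

```shell
# Illustrative substitution: patch REPLACEME with the observed sub-request-id.
SUB_REQUEST_ID="example-sub-request-id-1"
f=$(mktemp)
echo '"sub-request-id": "REPLACEME",' > "$f"
sed -i "s/REPLACEME/${SUB_REQUEST_ID}/" "$f"
cat "$f"   # -> "sub-request-id": "example-sub-request-id-1",
```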
appc.vcpe.success.json
~~~~~~~~~~~~~~~~~~~~~~

The *appc.vcpe.success.json* response (abridged):

.. code-block:: none

   "timestamp": "2017-08-25T21:06:23.037Z",
   "originator-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
   "request-id": "664be3d2-6c12-4f4b-a3e7-c349acced200",
   "sub-request-id": "REPLACEME",
   "message": "Restart Successful"
   "rpc-name": "restart",
   "correlation-id": "664be3d2-6c12-4f4b-a3e7-c349acced200-1",
Send the simulated APPC response back to the PDP-D over the *APPC-LCM-WRITE* channel:

.. code-block:: bash

   http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-LCM-WRITE/events @appc.vcpe.success.json Content-Type:'text/plain'

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the *POLICY-CL-MGT* channel,
and that an entry is added to *$POLICY_LOGS/audit.log* indicating successful completion.
vFirewall use case testing
==========================

The first step is to create the *operational.modifyconfig* policy.
policy.vfw.json
~~~~~~~~~~~~~~~

The *policy.vfw.json* policy (abridged):

.. code-block:: none

   "type": "onap.policies.controlloop.operational.common.Drools",
   "type_version": "1.0.0",
   "name": "operational.modifyconfig",
   "policy-id": "operational.modifyconfig"
   "id": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
   "trigger": "unique-policy-id-1-modifyConfig",
   "id": "unique-policy-id-1-modifyConfig",
   "description": "Modify the packet generator",
   "operation": "ModifyConfig",
   "resourceID": "bbb3cefd-01c8-413c-9bdd-2b92f9ca3d38"
   "streams": "{\"active-streams\": 5 }"
   "success": "final_success",
   "failure": "final_failure",
   "failure_timeout": "final_failure_timeout",
   "failure_retries": "final_failure_retries",
   "failure_exception": "final_failure_exception",
   "failure_guard": "final_failure_guard"
To provision the *operational.modifyconfig* policy, issue the following command:

.. code-block:: bash

   http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" https://localhost:9696/policy/pdp/engine/lifecycle/policies @usecases/policy.vfw.json

Verify that the policy shows up with the telemetry tools:

.. code-block:: bash

   docker exec -it PDPD bash -c "/opt/app/policy/bin/telemetry"
   > get /policy/pdp/engine/lifecycle/policies
   > get /policy/pdp/engine/controllers/frankfurt/drools/facts/frankfurt/controlloops
dcae.vfw.onset.json
~~~~~~~~~~~~~~~~~~~

The *dcae.vfw.onset.json* event (abridged):

.. code-block:: none

   "closedLoopControlName": "ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a",
   "closedLoopAlarmStart": 1463679805324,
   "closedLoopEventClient": "microservice.stringmatcher",
   "closedLoopEventStatus": "ONSET",
   "requestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
   "target_type": "VNF",
   "target": "generic-vnf.vnf-name",
   "vserver.is-closed-loop-disabled": "false",
   "vserver.prov-status": "ACTIVE",
   "generic-vnf.vnf-name": "fw0002vm002fw002",
   "vserver.vserver-name": "OzVServer"
To initiate a control loop transaction, simulate a DCAE ONSET to Policy:

.. code-block:: bash

   http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/DCAE_TOPIC/events @dcae.vfw.onset.json Content-Type:'text/plain'

This will spawn a vFW control loop transaction in the PDP-D. Policy will send a *ModifyConfig* message over the
*APPC-CL* channel to APPC and wait for a response. This can be seen by searching the network.log for *APPC-CL*.

Note the *SubRequestId* field in the *ModifyConfig* message in the *APPC-CL* topic in the network.log.

Send a simulated APPC response back to the PDP-D over the *APPC-CL* channel.
To do this, replace the *REPLACEME* text in *appc.vcpe.success.json* with this *SubRequestId*.
appc.vcpe.success.json
~~~~~~~~~~~~~~~~~~~~~~

The response (abridged):

.. code-block:: none

   "TimeStamp": 1506051879001,
   "RequestID": "c7c6a4aa-bb61-4a15-b831-ba1472dd4a65",
   "SubRequestID": "REPLACEME",
   "generic-vnf.vnf-id": "f17face5-69cb-4c88-9e0b-7426db7edddd"
.. code-block:: bash

   http --verify=no -a "${TELEMETRY_USER}:${TELEMETRY_PASSWORD}" PUT https://localhost:9696/policy/pdp/engine/topics/sources/noop/APPC-CL/events @appc.vcpe.success.json Content-Type:'text/plain'

Verify in *$POLICY_LOGS/network.log* that a *FINAL: SUCCESS* notification is sent over the POLICY-CL-MGT channel,
and that an entry is added to *$POLICY_LOGS/audit.log* indicating successful completion.
Running PDP-D Control Loop Application with other components
============================================================

The reader can also look at the `integration/csit repository <https://git.onap.org/integration/csit>`__.
More specifically, these directories have examples of other PDP-D Control Loop configurations:

* `plans <https://git.onap.org/integration/csit/tree/plans/policy/drools-applications>`__: startup scripts.
* `scripts <https://git.onap.org/integration/csit/tree/scripts/policy/drools-apps/docker-compose-drools-apps.yml>`__: docker-compose and related files.
* `tests <https://git.onap.org/integration/csit/tree/tests/policy/drools-applications>`__: test plans.
Additional information
======================

For additional information, please see the
`Drools PDP Development and Testing (In Depth) <https://wiki.onap.org/display/DW/2020+Frankfurt+Tutorials>`__ page.