.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0
The new Beijing release capabilities for OOM are described here.

Follow the OOM installation instructions at http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/index.html
Overview of the running system
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Upon initialization, you should see the following pods, one instance for each Policy component: PAP, PDP-X, BRMSGW, PDP-D, policydb, and nexus.

Note the "-0" suffix in the PDP-X and PDP-D pod names; these indices increase as the components are scaled out to improve runtime performance and reliability.
.. code-block:: bash
    :caption: verify pods

    kubectl get pods --all-namespaces -o=wide

    NAMESPACE   NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE
    onap        dev-brmsgw-5dbc4c8dc4-llk5s     1/1     Running   0          18m   10.42.120.43    k8sx
    onap        dev-drools-0                    1/1     Running   0          18m   10.42.60.27     k8sx
    onap        dev-nexus-7d96568f5f-qp5td      1/1     Running   0          18m   10.42.172.8     k8sx
    onap        dev-pap-8587696769-vwj6k        2/2     Running   0          18m   10.42.19.137    k8sx
    onap        dev-pdp-0                       2/2     Running   0          18m   10.42.144.218   k8sx
    onap        dev-policydb-587d55bdff-4f5dz   1/1     Running   0          18m   10.42.12.242    k8sx
You will also see a service for every component:

.. code-block:: bash
    :caption: verify services

    kubectl get services --all-namespaces

    NAMESPACE   NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE
    onap        brmsgw     NodePort    10.43.209.173   <none>        9989:30216/TCP                  24m
    onap        drools     NodePort    10.43.27.92     <none>        6969:30217/TCP,9696:30221/TCP   24m
    onap        nexus      NodePort    10.43.19.171    <none>        8081:30236/TCP                  24m
    onap        pap        NodePort    10.43.9.166     <none>        8443:30219/TCP,9091:30218/TCP   24m
    onap        pdp        ClusterIP   None            <none>        8081/TCP                        24m
    onap        policydb   ClusterIP   None            <none>        3306/TCP                        24m
Config and Decision policy requests are distributed across the PDP-Xs through the *pdp* service. PDP-X clients (such as DCAE) should configure their URLs to go through the *pdp* service; their requests will be spread across the available PDP-X replicas.
The PDP-Xs can also be accessed individually (dev-pdp-0, dev-pdp-1, and so on when scaled out), but it is preferable for external PDP-X clients to interface through the service.

PDP-Ds are likewise accessible as a group through the *drools* service IP, although DMaaP is the main means of communication with other ONAP components.
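For example, an in-cluster client can reach the PDP-X pool through the service name alone. The following is a minimal sketch reusing the default credentials and the getConfig API shown later in this guide; no individual pod IP is needed:

.. code-block:: bash
    :caption: Example: querying policies through the "pdp" service name

    # Sketch: run from any pod in the "onap" namespace; the "pdp" service name
    # resolves to the pool of PDP-X replicas, so requests are spread across them.
    # The ".*" wildcard matches every pushed config policy.
    curl --silent -X POST \
         --header 'Content-Type: application/json' \
         --header 'Accept: application/json' \
         --header 'ClientAuth: cHl0aG9uOnRlc3Q=' \
         --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' \
         --header 'Environment: TEST' \
         -d '{"policyName": ".*"}' \
         http://pdp:8081/pdp/api/getConfig | python -m json.tool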
Verify that the policy healthcheck passes when run through the robot framework:

.. code-block:: bash
    :caption: robot healthcheck

    ~/oom/kubernetes/robot/ete-k8s.sh onap health 2> /dev/null | grep PASS

    Basic Policy Health Check      | PASS |
A policy healthcheck with more detailed output can be performed directly by invoking the drools service NodePort from the OOM VM:

.. code-block:: bash
    :caption: PDP-D service (more detailed) healthcheck

    # Using default credentials for the healthcheck service.
    # To change the default username and passwords for this service,
    # please modify configuration pre-installation at:
    # oom/kubernetes/policy/charts/drools/resources/config/opt/policy/config/drools/keys/feature-healthcheck.conf

    curl --silent --user 'healthcheck:zb!XztG34' -X GET http://localhost:30217/healthcheck | python -m json.tool

    # (output abbreviated; each entry in the report carries the URL of the component checked)
    ...
    "url": "http://pap:9091/pap/test"
    ...
    "url": "http://pdp:8081/pdp/test"
    ...
PDP-X Active/Active Pool
^^^^^^^^^^^^^^^^^^^^^^^^

The policy engine UI (the console container in the pap pod) can be used to check that the PAP and the PDP-Xs are synchronized.
The console URL is accessible at ``http://<oom-vm>:30219/onap/login.htm``. Select the *PDP* menu entry in the left side panel under *Policy*.

.. image:: srmPdpxPdpMgmt.png

After initialization there will be no policies loaded into the policy subsystem. This can be verified by accessing the *Editor* tab in the UI.
PDP-D Active/Active Pool
^^^^^^^^^^^^^^^^^^^^^^^^

The PDP-D replicas will come up with the amsterdam controller installed in brainless mode (no maven coordinates), since the controller has not yet been associated with a set of drools rules to run (the control loop rules).

The following command can be issued against each of the PDP-D replica IPs:

.. code-block:: bash
    :caption: Querying the rules association of a PDP-D replica

    # Using default credentials for the drools telemetry service.
    # To change the default username and passwords for this service,
    # please modify configuration pre-installation at:
    # oom/kubernetes/policy/charts/drools/resources/config/opt/policy/config/drools/base.conf

    curl --silent --user '@1b3rt:31nst31n' -X GET http://<drools-replica-ip>:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool

    # (output abbreviated)
    {
        "artifactId": "NO-ARTIFACT-ID",
        ...
        "canonicalSessionNames": [],
        ...
        "groupId": "NO-GROUP-ID",
        ...
        "recentSinkEvents": [],
        "recentSourceEvents": [],
        ...
        "version": "NO-VERSION"
    }
Before Installing Policies
^^^^^^^^^^^^^^^^^^^^^^^^^^

In large multi-node full ONAP OOM installations, components have been observed to experience DNS resolution and connectivity problems across pods and services. Eventually the system becomes stable and ready to be used. Smaller single-node installations do not seem to exhibit these issues. Give the system enough time to make sure it has been initialized properly before pushing policies.
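For example, one way to wait is to poll the PDP-D healthcheck shown earlier until it reports healthy. The sketch below assumes the default credentials and NodePort used above; the retry interval is arbitrary, and a stricter check would parse the overall report rather than grep it:

.. code-block:: bash
    :caption: Example: waiting for the policy healthcheck to report healthy

    # Sketch: poll the detailed PDP-D healthcheck every 30 seconds until the
    # report contains a "healthy": true entry.
    until curl --silent --user 'healthcheck:zb!XztG34' http://localhost:30217/healthcheck \
            | python -m json.tool | grep -q '"healthy": true'; do
        echo "policy subsystem not ready yet; retrying in 30s"
        sleep 30
    done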
Make sure the policy subsystem is initialized by carrying out the following verifications:

1. Verify that the "PDP Management" screen shows the single pooled PDP-X as "UP_TO_DATE". If the PDP-X does not show the correct state, restart the faulty instance to force re-synchronization with the PAP:

   .. code-block:: bash
       :caption: Force re-synchronization of a PDP-X

       kubectl exec -it dev-pdp-0 --container pdp -n onap -- bash -c "source /opt/app/policy/etc/profile.d/env.sh; policy.sh stop; policy.sh start"

       # bounce the BRMSGW as well, since it synchronizes with the PDP-Xs via websockets:

       kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "source /opt/app/policy/etc/profile.d/env.sh; policy.sh stop; policy.sh start"

2. Verify that service name resolution works across the policy components:

   .. code-block:: bash
       :caption: Verify policy services connectivity

       # pick any policy pod to run these tests from:
       # kubectl get pods --all-namespaces -o=wide

       kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "ping policydb"
       kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "ping pdp"
       kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "ping drools"
       kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "ping nexus"
       kubectl exec -it dev-brmsgw-b877bc567-wbnbz -n onap -- bash -c "ping message-router"
Installing Policies
^^^^^^^^^^^^^^^^^^^

The OOM default installation comes with no policies pre-configured. A sample script used by integration teams to load policies supporting all four use cases is available at */tmp/policy-install/config/push-policies.sh* in the pap container within the pap pod. This script can be modified for your particular installation; for example, if you are only interested in the vCPE use case, remove the REST calls related to the other use cases. For the vFW use case, you may want to edit the encoded operational policy to point to the proper resourceID in your installation.
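Since the */tmp/policy-install/config* directory is read-only (see the note in the block below), any modified copy of the script must live elsewhere. A minimal sketch of that workflow, with the copy location (*/tmp/push-policies.sh*) purely illustrative:

.. code-block:: bash
    :caption: Example: executing an edited copy of push-policies.sh

    # Sketch: copy the read-only sample script to a writable location inside
    # the pap container, edit the copy there as needed, then execute it.
    kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap -- bash -c \
        "cp /tmp/policy-install/config/push-policies.sh /tmp/push-policies.sh"

    kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap -- bash -c \
        "export PRELOAD_POLICIES=true; /tmp/push-policies.sh"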
The above mentioned *push-policies.sh* script can be executed as follows:

.. code-block:: bash
    :caption: Installing the default policies

    # NOTE: If modifications are required to /tmp/policy-install/config/push-policies.sh, it should be copied
    # to a different location, for example /tmp, as the /tmp/policy-install/config directory is read-only.

    kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap -- bash -c "export PRELOAD_POLICIES=true; /tmp/policy-install/config/push-policies.sh"
    Create BRMSParam Operational Policies
    Create BRMSParamvFirewall Policy
    Transaction ID: ef08cc65-9950-4478-a4ab-0f3bc2519f60 --Policy with the name com.Config_BRMS_Param_BRMSParamvFirewall.1.xml was successfully created.Create BRMSParamvDNS Policy
    Transaction ID: 52e33efe-ba66-47de-b404-8d441107d8a9 --Policy with the name com.Config_BRMS_Param_BRMSParamvDNS.1.xml was successfully created.Create BRMSParamVOLTE Policy
    Transaction ID: f13072b7-6258-4c16-99da-f908d29363ec --Policy with the name com.Config_BRMS_Param_BRMSParamVOLTE.1.xml was successfully created.Create BRMSParamvCPE Policy
    Transaction ID: 616f970a-b45e-40f7-88cd-d63000d22cca --Policy with the name com.Config_BRMS_Param_BRMSParamvCPE.1.xml was successfully created.Create MicroService Config Policies
    Create MicroServicevFirewall Policy
    Transaction ID: 4c143a15-20af-408a-9285-bc7940261829 --Policy with the name com.Config_MS_MicroServicevFirewall.1.xml was successfully created.Create MicroServicevDNS Policy
    Transaction ID: 1e54ae73-509b-490e-bf62-1fea7989fd5f --Policy with the name com.Config_MS_MicroServicevDNS.1.xml was successfully created.Create MicroServicevCPE Policy
    Transaction ID: 32239868-bab2-4e12-9fd9-81a0ed4a6b1c --Policy with the name com.Config_MS_MicroServicevCPE.1.xml was successfully created.Creating Decision Guard policy
    Transaction ID: b43cb9d5-42c7-4654-aacf-d4898c4d13bb --Policy with the name com.Decision_AllPermitGuard.1.xml was successfully created.Push Decision policy
    Transaction ID: 3c1e4ae6-6991-415b-9f2d-c665a8c5a026 --Policy 'com.Decision_AllPermitGuard.1.xml' was successfully pushed to the PDP group 'default'.Pushing BRMSParam Operational policies
    Transaction ID: 58d26d03-b5b8-4fd3-b2df-1411a1c36420 --Policy 'com.Config_BRMS_Param_BRMSParamvFirewall.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.BRMSParamvDNS
    Transaction ID: 0854e54a-504b-4f06-bc2f-30f491cb9f5a --Policy 'com.Config_BRMS_Param_BRMSParamvDNS.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.BRMSParamVOLTE
    Transaction ID: d33c7dde-5c99-4dab-b4ff-9988473cd88d --Policy 'com.Config_BRMS_Param_BRMSParamVOLTE.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.BRMSParamvCPE
    Transaction ID: e8c8a73e-127c-4318-9e59-3cae9dcbe011 --Policy 'com.Config_BRMS_Param_BRMSParamvCPE.1.xml' was successfully pushed to the PDP group 'default'.Pushing MicroService Config policies
    Transaction ID: ec0429d7-e35f-4978-8a6c-40d2b5b3be61 --Policy 'com.Config_MS_MicroServicevFirewall.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.MicroServicevDNS
    Transaction ID: f7072f05-7b74-45b5-9bd3-99b7f8023e3e --Policy 'com.Config_MS_MicroServicevDNS.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.MicroServicevCPE
    Transaction ID: 6d47db63-7956-4f5f-ab34-aeb5a124a90d --Policy 'com.Config_MS_MicroServicevCPE.1.xml' was successfully pushed to the PDP group 'default'.
The pushed policies can eventually be viewed through the Policy UI:

.. image:: srmEditor.png

As part of the policy push process, the brmsgw component composes drools rules artifacts and publishes them to the nexus repository at ``http://<oom-vm>:30236/nexus/``.

.. image:: srmNexus.png
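The published artifacts can also be checked from the command line. A sketch against the stock Nexus 2 search REST API (the endpoint path is assumed from standard Nexus 2, not specific to this installation; the artifact coordinates are those shown in the PDP-D queries below):

.. code-block:: bash
    :caption: Example: checking the published rules artifact in nexus

    # Sketch: search nexus for the brmsgw-generated drools rules artifact and
    # print the version elements of the XML response.
    curl --silent "http://<oom-vm>:30236/nexus/service/local/lucene/search?g=org.onap.policy-engine.drools.amsterdam&a=policy-amsterdam-rules" \
        | grep -o '<version>[^<]*</version>'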
At the same time, each PDP-D replica will receive notifications for each new version of the policies to run for the amsterdam controller. The following command can be run against each replica to see how the amsterdam controller is associated with the latest rules version:

.. code-block:: bash
    :caption: Querying the rules association of a PDP-D replica

    # Using default credentials for the drools telemetry service.
    # To change the default username and passwords for this service,
    # please modify configuration pre-installation at:
    # oom/kubernetes/policy/charts/drools/resources/config/opt/policy/config/drools/base.conf

    curl --silent --user '@1b3rt:31nst31n' -X GET http://<replica-ip>:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool

    # (output abbreviated)
    {
        "artifactId": "policy-amsterdam-rules",
        ...
        "groupId": "org.onap.policy-engine.drools.amsterdam",
        ...
        "modelClassLoaderHash": 1223551265,
        "recentSinkEvents": [],
        "recentSourceEvents": [],
        "sessionCoordinates": [
            "org.onap.policy-engine.drools.amsterdam:policy-amsterdam-rules:0.4.0:closedloop-amsterdam"
        ],
        ...
            "closedloop-amsterdam"
        ...
    }
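To sweep all replicas in one pass, the same query can be wrapped in a small loop. A sketch, where the replica IPs are placeholders that would normally come from ``kubectl get pods --all-namespaces -o=wide``:

.. code-block:: bash
    :caption: Example: checking the rules version on every PDP-D replica

    # Sketch: print the rules version reported by each PDP-D replica
    # (default telemetry credentials; the IPs below are illustrative).
    for ip in 10.42.60.27 10.42.172.88; do
        curl --silent --user '@1b3rt:31nst31n' \
             "http://${ip}:9696/policy/pdp/engine/controllers/amsterdam/drools" \
            | python -m json.tool | grep '"version"'
    done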
Likewise, for verification purposes, each PDP-X replica can be queried to retrieve policy information, either through the *pdp* service or directly.

The following commands query a policy through the *pdp* service:

.. code-block:: bash
    :caption: Querying the "pdp" service for the vFirewall policy

    # Open a shell into the pap pod:

    ubuntu@k8sx:~$ kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap bash

    # In this example the vFirewall policy is queried.

    policy@dev-pap-8587696769-vwj6k:/tmp/policy-install$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vFirewall.*"}' http://pdp:8081/pdp/api/getConfig | python -m json.tool

    # (output abbreviated)
    [
        {
            "config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevFirewall\",\"description\":\"MicroService vFirewall Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"vFirewallBroadcastPackets\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.vNicUsageArray[*].receivedTotalPacketsDelta\",\"thresholdValue\":300,\"direction\":\"LESS_OR_EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ONSET\"},{\"closedLoopControlName\":\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.vNicUsageArray[*].receivedTotalPacketsDelta\",\"thresholdValue\":700,\"direction\":\"GREATER_OR_EQUAL\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
            "matchingConditions": {
                "ConfigName": "SampleConfigName",
                "Location": "SampleServiceLocation",
                ...
                "service": "tca_policy",
                ...
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_MS_MicroServicevFirewall.1.xml",
            "policyType": "MicroService",
            "policyVersion": "1",
            ...
            "responseAttributes": {},
            ...
        },
        {
            ...
            "matchingConditions": {
                "ConfigName": "BRMS_PARAM_RULE",
                ...
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_BRMS_Param_BRMSParamvFirewall.1.xml",
            "policyType": "BRMS_PARAM",
            "policyVersion": "1",
            ...
            "responseAttributes": {
                "controller": "amsterdam"
            },
            ...
        }
    ]
While the following commands can be used to query a specific PDP-X replica:

.. code-block:: bash
    :caption: Querying PDP-X 0 for the vCPE policy

    # Open a shell into the pap pod:

    ubuntu@k8sx:~$ kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap bash

    # In this example the vCPE policy is queried, directly against the PDP-X 0 pod IP.

    curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vCPE.*"}' http://10.42.144.218:8081/pdp/api/getConfig | python -m json.tool

    # (output abbreviated)
    [
        {
            ...
            "matchingConditions": {
                "ConfigName": "BRMS_PARAM_RULE",
                ...
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_BRMS_Param_BRMSParamvCPE.1.xml",
            "policyType": "BRMS_PARAM",
            "policyVersion": "1",
            ...
            "responseAttributes": {
                "controller": "amsterdam"
            },
            ...
        },
        {
            "config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevCPE\",\"description\":\"MicroService vCPE Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"Measurement_vGMUX\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ABATED\"},{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"GREATER\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
            "matchingConditions": {
                "ConfigName": "SampleConfigName",
                "Location": "SampleServiceLocation",
                ...
                "service": "tca_policy",
                ...
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_MS_MicroServicevCPE.1.xml",
            "policyType": "MicroService",
            "policyVersion": "1",
            ...
            "responseAttributes": {},
            ...
        }
    ]
PDP-X Resiliency
^^^^^^^^^^^^^^^^

A PDP-X container failure can be simulated by either:

a) performing a "policy.sh stop" operation within the PDP-X container, which shuts down the PDP-X service and will eventually be detected by the liveness checks, or
b) plainly deleting the corresponding pod.

In the following example, PDP-X 0 is forced to fail.

.. code-block:: bash
    :caption: Causing the PDP-X 0 service to fail

    # In these scenarios the liveness check will fail and recovery actions will take place.

    # Alternative 1: shut down the PDP-X 0 service, so that the liveness-monitored ports go down
    # (while the pod stays up) and corrective measures are applied:

    ubuntu@k8sx:~$ kubectl exec -it dev-pdp-0 --container pdp -n onap -- bash -c "source /opt/app/policy/etc/profile.d/env.sh; policy.sh stop;"

    # Alternative 2: brute-force delete of the PDP-X 0 pod:

    ubuntu@k8sx:~$ kubectl delete pod dev-pdp-0 -n onap
    pod "dev-pdp-0" deleted
Upon detecting that the service is down through the liveness check, kubernetes will restart the container. Note the **restart count** when querying the status of the pods:

.. code-block:: bash
    :caption: Checking the PDP-X 0 restart count

    ubuntu@k8sx:~$ kubectl get pods --all-namespaces -o=wide

    NAMESPACE   NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE
    onap        dev-brmsgw-5dbc4c8dc4-llk5s     1/1     Running   0          3d    10.42.120.43    k8sx
    onap        dev-drools-0                    1/1     Running   0          3d    10.42.60.27     k8sx
    onap        dev-nexus-7d96568f5f-qp5td      1/1     Running   0          3d    10.42.172.8     k8sx
    onap        dev-pap-8587696769-vwj6k        2/2     Running   0          3d    10.42.19.137    k8sx
    onap        dev-pdp-0                       2/2     Running   1          3d    10.42.144.218   k8sx   <-- note restart count
    onap        dev-policydb-587d55bdff-4f5dz   1/1     Running   0          3d    10.42.12.242    k8sx
During the restart process the PAP component will detect that PDP-X 0 is down, and this state will be reflected in the PDP-X screen:

.. image:: srmPdpxResiliencyPdpMgmt1.png

Once PDP-X 0 re-synchronizes itself with the PAP, this screen will be updated to reflect that PDP-X 0 is back alive:

.. image:: srmPdpxResiliencyPdpMgmt2.png
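The recovery can also be followed from the CLI by watching the pdp pods return to a fully ready (2/2) state; a quick sketch:

.. code-block:: bash
    :caption: Example: watching PDP-X 0 recover

    # Sketch: stream pod status updates until dev-pdp-0 reports READY 2/2 again
    # (interrupt with Ctrl-C when done).
    kubectl get pods -n onap -w | grep pdp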
At that point the PDP-X is usable, either directly or through the service, to query for policies.

.. code-block:: bash
    :caption: Query the recovered PDP-X for the vCPE policy

    # in this example we perform the vCPE query from the OOM VM
    # the default installation credentials are used for querying the vCPE policy

    ubuntu@k8sx:~$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vCPE.*"}' http://10.42.233.111:8081/pdp/api/getConfig | python -m json.tool

    # (output abbreviated)
    [
        {
            ...
            "matchingConditions": {
                "ConfigName": "BRMS_PARAM_RULE",
                ...
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_BRMS_Param_BRMSParamvCPE.1.xml",
            "policyType": "BRMS_PARAM",
            "policyVersion": "1",
            ...
            "responseAttributes": {
                "controller": "amsterdam"
            },
            ...
        },
        {
            "config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevCPE\",\"description\":\"MicroService vCPE Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"Measurement_vGMUX\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ABATED\"},{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"GREATER\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
            "matchingConditions": {
                "ConfigName": "SampleConfigName",
                "Location": "SampleServiceLocation",
                ...
                "service": "tca_policy",
                ...
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_MS_MicroServicevCPE.1.xml",
            "policyType": "MicroService",
            "policyVersion": "1",
            ...
            "responseAttributes": {},
            ...
        }
    ]
PDP-D Resiliency
^^^^^^^^^^^^^^^^

A PDP-D container failure can be simulated by either:

a) performing a "policy stop" operation within the PDP-D pod, which shuts down the PDP-D service and will eventually be detected by the liveness checks, or
b) plainly deleting the corresponding pod.

In the following example, PDP-D 0 is forced to fail.

.. code-block:: bash
    :caption: Causing PDP-D 0 to fail

    # In these scenarios the liveness check will fail and recovery actions will take place.

    # Alternative 1: shut down the PDP-D 0 policy process, so that the liveness-monitored ports
    # go down (while the pod stays up) and corrective measures are applied:

    ubuntu@k8sx:~/oom/kubernetes$ kubectl exec -it dev-drools-0 --container drools -n onap -- bash -c "source /opt/app/policy/etc/profile.d/env.sh; policy stop"
    [drools-pdp-controllers]
     L []: Stopping Policy Management... Policy Management (pid=3284) is stopping... Policy Management has stopped.
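Alternative 2, deleting the pod, mirrors the PDP-X case above; a sketch:

.. code-block:: bash
    :caption: Example: brute-force delete of the PDP-D 0 pod

    # Sketch: kubernetes will recreate the pod after deletion.
    ubuntu@k8sx:~$ kubectl delete pod dev-drools-0 -n onap
    pod "dev-drools-0" deleted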
Upon detecting that the service is down through the liveness check, kubernetes will restart the container. Note the restart count when querying the status of the pods:

.. code-block:: bash
    :caption: Checking the PDP-D 0 restart count

    ubuntu@k8sx:~$ kubectl get pods --all-namespaces -o=wide | grep drools

    onap   dev-drools-0   0/1   Running   0   1d   10.42.10.21   k8sx

    ubuntu@k8sx:~$ kubectl get pods --all-namespaces -o=wide | grep drools

    onap   dev-drools-0   1/1   Running   1   1d   10.42.10.21   k8sx   <-- note restart count
That the restarted PDP-D 0 comes back up with the appropriate policies loaded can be verified by checking its maven coordinates:

.. code-block:: bash
    :caption: Verifying the restarted PDP-D points to the pre-failure policies

    ubuntu@k8sx:~$ curl --silent --user '@1b3rt:31nst31n' -X GET http://10.42.10.21:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool

    # (output abbreviated)
    {
        "artifactId": "policy-amsterdam-rules",
        ...
        "groupId": "org.onap.policy-engine.drools.amsterdam",
        ...
        "modelClassLoaderHash": 189820624,
        "recentSinkEvents": [],
        "recentSourceEvents": [],
        "sessionCoordinates": [
            "org.onap.policy-engine.drools.amsterdam:policy-amsterdam-rules:0.4.0:closedloop-amsterdam"
        ],
        ...
            "closedloop-amsterdam"
        ...
    }
PDP-X Scaling
^^^^^^^^^^^^^

To scale out a new PDP-X, set the replica count appropriately.

In the tests below, we work with the OOM policy component in isolation and scale the PDP-X with one additional replica, PDP-X 1:
.. code-block:: bash
    :caption: Scaling a PDP-X

    ubuntu@k8sx:~$ helm upgrade -i dev local/onap --namespace onap --set global.pullPolicy=IfNotPresent --set policy.pdp.replicaCount=2
    Release "dev" has been upgraded. Happy Helming!
    LAST DEPLOYED: Mon Jun  4 15:19:05 2018

    # (output abbreviated)

    ==> v1/Service
    NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)                        AGE
    dbc-pg-primary             ClusterIP  10.43.29.226    <none>       5432/TCP                       2d
    dbc-pg-replica             ClusterIP  10.43.202.168   <none>       5432/TCP                       2d
    dbc-postgres               ClusterIP  10.43.181.134   <none>       5432/TCP                       2d
    dmaap-bc                   NodePort   10.43.254.230   <none>       8080:30241/TCP,8443:30242/TCP  2d
    message-router-kafka       ClusterIP  10.43.69.159    <none>       9092/TCP                       2d
    message-router-zookeeper   ClusterIP  None            <none>       2181/TCP                       2d
    message-router             NodePort   10.43.123.102   <none>       3904:30227/TCP,3905:30226/TCP  2d
    msb-consul                 NodePort   10.43.27.77     <none>       8500:30285/TCP                 2d
    msb-discovery              NodePort   10.43.178.20    <none>       10081:30281/TCP                2d
    msb-eag                    NodePort   10.43.77.235    <none>       80:30282/TCP,443:30284/TCP     2d
    msb-iag                    NodePort   10.43.221.196   <none>       80:30280/TCP,443:30283/TCP     2d
    brmsgw                     NodePort   10.43.21.222    <none>       9989:30216/TCP                 2d
    nexus                      NodePort   10.43.159.27    <none>       8081:30236/TCP                 2d
    drools                     NodePort   10.43.233.67    <none>       6969:30217/TCP,9696:30221/TCP  2d
    policydb                   ClusterIP  None            <none>       3306/TCP                       2d
    pdp                        ClusterIP  None            <none>       8081/TCP                       2d
    pap                        NodePort   10.43.110.50    <none>       8443:30219/TCP,9091:30218/TCP  2d
    robot                      NodePort   10.43.172.248   <none>       88:30209/TCP                   2d

    ==> v1beta1/Deployment
    NAME                          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    dev-dmaap-bus-controller      1        1        1           1          2d
    dev-message-router-kafka      1        1        1           1          2d
    dev-message-router-zookeeper  1        1        1           1          2d
    dev-message-router            1        1        1           1          2d
    dev-kube2msb                  1        1        1           1          2d
    dev-msb-consul                1        1        1           1          2d
    dev-msb-discovery             1        1        1           1          2d
    dev-msb-eag                   1        1        1           1          2d
    dev-msb-iag                   1        1        1           1          2d
    dev-brmsgw                    1        1        1           1          2d
    dev-policydb                  1        1        1           1          2d

    ==> v1beta1/StatefulSet
    NAME  DESIRED  CURRENT  AGE
    ...

    ==> v1/PersistentVolumeClaim
    NAME                          STATUS  VOLUME                        CAPACITY  ACCESS MODES  STORAGECLASS  AGE
    dev-message-router-kafka      Bound   dev-message-router-kafka      2Gi       RWX           2d
    dev-message-router-zookeeper  Bound   dev-message-router-zookeeper  2Gi       RWX           2d
    dev-nexus                     Bound   dev-nexus                     2Gi       RWX           2d
    dev-policydb                  Bound   dev-policydb                  2Gi       RWX           2d

    ==> v1/ConfigMap
    NAME                                         DATA  AGE
    dev-dmaap-bus-controller-config              1     2d
    dev-message-router-cadi-prop-configmap       1     2d
    dev-message-router-msgrtrapi-prop-configmap  1     2d
    dev-msb-discovery                            1     2d
    dev-brmsgw-pe-configmap                      2     2d
    dev-drools-configmap                         6     2d
    dev-drools-log-configmap                     1     2d
    dev-drools-settings-configmap                1     2d
    dev-policydb-configmap                       1     2d
    dev-pdp-log-configmap                        1     2d
    dev-pdp-pe-configmap                         3     2d
    dev-pe-scripts-configmap                     1     2d
    dev-filebeat-configmap                       1     2d
    dev-pe-configmap                             1     2d
    dev-pap-pe-configmap                         7     2d
    dev-pap-sdk-log-configmap                    1     2d
    dev-pap-log-configmap                        1     2d
    dev-robot-resources-configmap                3     2d
    dev-robot-lighttpd-authorization-configmap   1     2d
    dev-robot-eteshare-configmap                 4     2d

    ==> v1/PersistentVolume
    NAME                          CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                              STORAGECLASS     REASON  AGE
    dev-dbc-pg-data0              1Gi       RWO           Retain          Bound   onap/dev-dbc-pg-data-dev-dbc-pg-0  dev-dbc-pg-data          2d
    dev-dbc-pg-data1              1Gi       RWO           Retain          Bound   onap/dev-dbc-pg-data-dev-dbc-pg-1  dev-dbc-pg-data          2d
    dev-message-router-kafka      2Gi       RWX           Retain          Bound   onap/dev-message-router-kafka                               2d
    dev-message-router-zookeeper  2Gi       RWX           Retain          Bound   onap/dev-message-router-zookeeper                           2d
    dev-nexus                     2Gi       RWX           Retain          Bound   onap/dev-nexus                                              2d
    dev-policydb                  2Gi       RWX           Retain          Bound   onap/dev-policydb                                           2d

    ==> v1beta1/ClusterRoleBinding
    ...

    ==> v1/Pod(related)
    NAME                                          READY  STATUS     RESTARTS  AGE
    dev-dmaap-bus-controller-5bd859c7dc-blzdc     1/1    Running    0         2d
    dev-message-router-kafka-748cdf7b9c-srv7l     1/1    Running    0         2d
    dev-message-router-zookeeper-5b5969f6f-8rk9w  1/1    Running    0         2d
    dev-message-router-b5bdc599c-5h56k            1/1    Running    0         2d
    dev-kube2msb-579fc77c54-m84qx                 1/1    Running    0         2d
    dev-msb-consul-7bc4fcc8-94gsc                 1/1    Running    0         2d
    dev-msb-discovery-768547bcb-2hr7j             2/2    Running    0         2d
    dev-msb-eag-5d95686c67-9lkzs                  2/2    Running    0         2d
    dev-msb-iag-675b649848-pv2gh                  2/2    Running    0         2d
    dev-brmsgw-5675f5877b-wv68s                   1/1    Running    0         2d
    dev-nexus-7d96568f5f-m8c4l                    1/1    Running    0         2d
    dev-policydb-587d55bdff-9gdjv                 1/1    Running    0         2d
    dev-pap-678b44cd87-wxbww                      2/2    Running    0         2d
    dev-robot-589c76bb6b-hrrdn                    1/1    Running    0         2d
    dev-dbc-pg-0                                  1/1    Running    0         2d
    dev-dbc-pg-1                                  1/1    Running    0         2d
    dev-drools-0                                  1/1    Running    1         2d
    dev-pdp-0                                     2/2    Running    1         2d
    dev-pdp-1                                     0/2    Init:0/1   0         0s

    ==> v1/Secret
    NAME                       TYPE                     DATA  AGE
    dev-dbc-pg                 Opaque                   3     2d
    dev-message-router-secret  Opaque                   1     2d
    dev-drools-secret          Opaque                   2     2d
    dev-policydb-secret        Opaque                   2     2d
    onap-docker-registry-key   kubernetes.io/dockercfg  1     2d
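The same scaling can be expressed through a values override file instead of ``--set`` flags; a sketch, where the key path mirrors the flag above and the file name is illustrative:

.. code-block:: bash
    :caption: Example: scaling via a values override file

    # Sketch: write the override (policy.pdp.replicaCount, as used with --set
    # above) to a local file and pass it to helm with -f.
    cat > /tmp/policy-scale.yaml <<EOF
    policy:
      pdp:
        replicaCount: 2
    EOF

    helm upgrade -i dev local/onap --namespace onap --set global.pullPolicy=IfNotPresent -f /tmp/policy-scale.yaml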
Check in the Policy Engine UI how the PDP-Xs come up and request policies from the PAP.
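The pod IPs used for the direct per-replica queries below can be obtained from kubectl; a quick sketch:

.. code-block:: bash
    :caption: Example: obtaining the PDP-X replica IPs

    # Sketch: the IP column reported here supplies the addresses used in the
    # direct queries that follow.
    kubectl get pods -n onap -o=wide | grep pdp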
Eventually the new PDP-X will be connected and serving policies:

.. image:: srmPdpxScalingPdpMgmt1.png

The new PDP-X should now be ready to serve policies:

.. code-block:: bash
    :caption: Check that both PDP-X replicas can serve policies

    ubuntu@k8sx:~/oom/kubernetes$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vCPE.*"}' http://10.42.183.0:8081/pdp/api/getConfig | python -m json.tool

    # (output abbreviated)
    [
        {
            ...
            "matchingConditions": {
                "ConfigName": "BRMS_PARAM_RULE",
                ...
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_BRMS_Param_BRMSParamvCPE.1.xml",
            "policyType": "BRMS_PARAM",
            "policyVersion": "1",
            ...
            "responseAttributes": {
                "controller": "amsterdam"
            },
            ...
        },
        {
            "config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevCPE\",\"description\":\"MicroService vCPE Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"Measurement_vGMUX\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ABATED\"},{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"GREATER\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
            "matchingConditions": {
                "ConfigName": "SampleConfigName",
                "Location": "SampleServiceLocation",
                ...
                "service": "tca_policy",
                ...
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_MS_MicroServicevCPE.1.xml",
            "policyType": "MicroService",
            "policyVersion": "1",
            ...
            "responseAttributes": {},
            ...
        }
    ]

    ubuntu@k8sx:~/oom/kubernetes$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vCPE.*"}' http://10.42.137.241:8081/pdp/api/getConfig | python -m json.tool

    # (output abbreviated)
    [
        {
            "config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevCPE\",\"description\":\"MicroService vCPE Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"Measurement_vGMUX\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ABATED\"},{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"GREATER\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
            "matchingConditions": {
                "ConfigName": "SampleConfigName",
                "Location": "SampleServiceLocation",
                ...
                "service": "tca_policy",
                ...
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_MS_MicroServicevCPE.1.xml",
            "policyType": "MicroService",
            "policyVersion": "1",
            ...
            "responseAttributes": {},
            ...
        },
        {
            ...
            "matchingConditions": {
                "ConfigName": "BRMS_PARAM_RULE",
                ...
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_BRMS_Param_BRMSParamvCPE.1.xml",
            "policyType": "BRMS_PARAM",
            "policyVersion": "1",
            ...
            "responseAttributes": {
                "controller": "amsterdam"
            },
            ...
        }
    ]
PDP-D Scaling
^^^^^^^^^^^^^

To scale out a new PDP-D, set the replica count appropriately. In the scenario below, the PDP-D service is scaled to add one new pod (2 active PDP-Ds).

.. code-block:: bash
    :caption: Scaling a PDP-D

    # Note: the PDP-X pool is also kept at 2 instances (matching the previous section).

    ubuntu@k8sx:~$ helm upgrade -i dev local/onap --namespace onap --set global.pullPolicy=IfNotPresent --set policy.pdp.replicaCount=2 --set policy.drools.replicaCount=2
    Release "dev" has been upgraded. Happy Helming!
    LAST DEPLOYED: Mon Jun  4 15:52:46 2018
    # (output abbreviated)

    ==> v1/ConfigMap
    NAME                                         DATA  AGE
    dev-dmaap-bus-controller-config              1     2d
    dev-message-router-cadi-prop-configmap       1     2d
    dev-message-router-msgrtrapi-prop-configmap  1     2d
    dev-msb-discovery                            1     2d
    dev-brmsgw-pe-configmap                      2     2d
    dev-drools-configmap                         6     2d
    dev-drools-log-configmap                     1     2d
    dev-drools-settings-configmap                1     2d
    dev-policydb-configmap                       1     2d
    dev-pdp-pe-configmap                         3     2d
    dev-pdp-log-configmap                        1     2d
    dev-pe-scripts-configmap                     1     2d
    dev-filebeat-configmap                       1     2d
    dev-pe-configmap                             1     2d
    dev-pap-pe-configmap                         7     2d
    dev-pap-log-configmap                        1     2d
    dev-pap-sdk-log-configmap                    1     2d
    dev-robot-resources-configmap                3     2d
    dev-robot-lighttpd-authorization-configmap   1     2d
    dev-robot-eteshare-configmap                 4     2d

    ==> v1/PersistentVolume
    NAME                          CAPACITY  ACCESS MODES  RECLAIM POLICY  STATUS  CLAIM                              STORAGECLASS     REASON  AGE
    dev-dbc-pg-data0              1Gi       RWO           Retain          Bound   onap/dev-dbc-pg-data-dev-dbc-pg-0  dev-dbc-pg-data          2d
    dev-dbc-pg-data1              1Gi       RWO           Retain          Bound   onap/dev-dbc-pg-data-dev-dbc-pg-1  dev-dbc-pg-data          2d
    dev-message-router-kafka      2Gi       RWX           Retain          Bound   onap/dev-message-router-kafka                               2d
    dev-message-router-zookeeper  2Gi       RWX           Retain          Bound   onap/dev-message-router-zookeeper                           2d
    dev-nexus                     2Gi       RWX           Retain          Bound   onap/dev-nexus                                              2d
    dev-policydb                  2Gi       RWX           Retain          Bound   onap/dev-policydb                                           2d

    ==> v1/PersistentVolumeClaim
    NAME                          STATUS  VOLUME                        CAPACITY  ACCESS MODES  STORAGECLASS  AGE
    dev-message-router-kafka      Bound   dev-message-router-kafka      2Gi       RWX           2d
    dev-message-router-zookeeper  Bound   dev-message-router-zookeeper  2Gi       RWX           2d
    dev-nexus                     Bound   dev-nexus                     2Gi       RWX           2d
    dev-policydb                  Bound   dev-policydb                  2Gi       RWX           2d

    ==> v1beta1/ClusterRoleBinding
    ...

    ==> v1beta1/Deployment
    NAME                          DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
    dev-dmaap-bus-controller      1        1        1           1          2d
    dev-message-router-kafka      1        1        1           1          2d
    dev-message-router-zookeeper  1        1        1           1          2d
    dev-message-router            1        1        1           1          2d
    dev-kube2msb                  1        1        1           1          2d
    dev-msb-consul                1        1        1           1          2d
    dev-msb-discovery             1        1        1           1          2d
    dev-msb-eag                   1        1        1           1          2d
    dev-msb-iag                   1        1        1           1          2d
    dev-brmsgw                    1        1        1           1          2d
    dev-policydb                  1        1        1           1          2d

    ==> v1/Pod(related)
    NAME                                          READY  STATUS     RESTARTS  AGE
    dev-dmaap-bus-controller-5bd859c7dc-blzdc     1/1    Running    0         2d
    dev-message-router-kafka-748cdf7b9c-srv7l     1/1    Running    0         2d
    dev-message-router-zookeeper-5b5969f6f-8rk9w  1/1    Running    0         2d
    dev-message-router-b5bdc599c-5h56k            1/1    Running    0         2d
    dev-kube2msb-579fc77c54-m84qx                 1/1    Running    0         2d
    dev-msb-consul-7bc4fcc8-94gsc                 1/1    Running    0         2d
    dev-msb-discovery-768547bcb-2hr7j             2/2    Running    0         2d
    dev-msb-eag-5d95686c67-9lkzs                  2/2    Running    0         2d
    dev-msb-iag-675b649848-pv2gh                  2/2    Running    0         2d
    dev-brmsgw-5675f5877b-wv68s                   1/1    Running    0         2d
    dev-nexus-7d96568f5f-m8c4l                    1/1    Running    0         2d
    dev-policydb-587d55bdff-9gdjv                 1/1    Running    0         2d
    dev-pap-678b44cd87-wxbww                      2/2    Running    0         2d
    dev-robot-589c76bb6b-hrrdn                    1/1    Running    0         2d
    dev-dbc-pg-0                                  1/1    Running    0         2d
    dev-dbc-pg-1                                  1/1    Running    0         2d
    dev-drools-0                                  1/1    Running    1         2d
    dev-drools-1                                  0/1    Init:0/1   0         1s
    dev-pdp-0                                     2/2    Running    1         2d
    dev-pdp-1                                     2/2    Running    0         33m

    ==> v1/Secret
    NAME                       TYPE                     DATA  AGE
    dev-dbc-pg                 Opaque                   3     2d
    dev-message-router-secret  Opaque                   1     2d
    dev-drools-secret          Opaque                   2     2d
    dev-policydb-secret        Opaque                   2     2d
    onap-docker-registry-key   kubernetes.io/dockercfg  1     2d

    ==> v1beta1/StatefulSet
    NAME  DESIRED  CURRENT  AGE
    ...

    ==> v1/Service
    NAME                      TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)                        AGE
    dbc-postgres              ClusterIP  10.43.181.134   <none>       5432/TCP                       2d
    dbc-pg-replica            ClusterIP  10.43.202.168   <none>       5432/TCP                       2d
    dbc-pg-primary            ClusterIP  10.43.29.226    <none>       5432/TCP                       2d
    dmaap-bc                  NodePort   10.43.254.230   <none>       8080:30241/TCP,8443:30242/TCP  2d
    message-router-kafka      ClusterIP  10.43.69.159    <none>       9092/TCP                       2d
    message-router-zookeeper  ClusterIP  None            <none>       2181/TCP                       2d
    message-router            NodePort   10.43.123.102   <none>       3904:30227/TCP,3905:30226/TCP  2d
    msb-consul                NodePort   10.43.27.77     <none>       8500:30285/TCP                 2d
    msb-discovery             NodePort   10.43.178.20    <none>       10081:30281/TCP                2d
    msb-eag                   NodePort   10.43.77.235    <none>       80:30282/TCP,443:30284/TCP     2d
    msb-iag                   NodePort   10.43.221.196   <none>       80:30280/TCP,443:30283/TCP     2d
    brmsgw                    NodePort   10.43.21.222    <none>       9989:30216/TCP                 2d
    nexus                     NodePort   10.43.159.27    <none>       8081:30236/TCP                 2d
    drools                    NodePort   10.43.233.67    <none>       6969:30217/TCP,9696:30221/TCP  2d
    policydb                  ClusterIP  None            <none>       3306/TCP                       2d
    pdp                       ClusterIP  None            <none>       8081/TCP                       2d
    pap                       NodePort   10.43.110.50    <none>       8443:30219/TCP,9091:30218/TCP  2d
    robot                     NodePort   10.43.172.248   <none>       88:30209/TCP                   2d
Verify that the new PDP-D comes up with the latest policy coordinates:

.. code-block:: bash
    :caption: Verify the new PDP-D comes up with policies loaded

    ubuntu@k8sx:~$ curl --silent --user '@1b3rt:31nst31n' -X GET http://10.42.172.88:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool

    # (output abbreviated)
    {
        "artifactId": "policy-amsterdam-rules",
        ...
        "groupId": "org.onap.policy-engine.drools.amsterdam",
        ...
        "modelClassLoaderHash": 1657760388,
        "recentSinkEvents": [],
        "recentSourceEvents": [],
        "sessionCoordinates": [
            "org.onap.policy-engine.drools.amsterdam:policy-amsterdam-rules:0.5.0:closedloop-amsterdam"
        ],
        ...
            "closedloop-amsterdam"
        ...
    }
.. SSNote: Wiki page ref. https://wiki.onap.org/display/DW/Policy+on+OOM
.. SSNote: Old Wiki page ref. https://wiki.onap.org/display/DW/Scalability%2C+Resiliency+and+Manageability