.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

*****************************************
Scalability, Resiliency and Manageability
*****************************************

.. contents::
    :depth: 3

The scalability, resiliency, and manageability capabilities of the Beijing release are described here.   These capabilities apply to the OOM/Kubernetes installation.

Installation
^^^^^^^^^^^^
Follow the OOM installation instructions at http://onap.readthedocs.io/en/latest/submodules/oom.git/docs/index.html
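
For reference, a minimal sketch of deploying ONAP with the policy component enabled through the OOM helm charts (this assumes the local chart repository has been built per the OOM guide, and uses the same release name, *dev*, as the examples below):

.. code-block:: bash
   :caption: Example OOM deployment (sketch)

    # assumption: the OOM charts have been built and are served from the "local" helm repo
    helm upgrade -i dev local/onap --namespace onap --set policy.enabled=true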

Overview of the running system
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Upon initialization, you should see pools of 4 PDP-Ds and 2 PDP-Xs:

.. code-block:: bash
   :caption: verify pods

    kubectl get pods --all-namespaces -o=wide

    onap    dev-brmsgw-5dbc4c8dc4-llk5s        1/1       Running   0     18m     10.42.120.43    k8sx
    onap    dev-drools-0                       1/1       Running   0     18m     10.42.60.27     k8sx
    onap    dev-drools-1                       1/1       Running   0     16m     10.42.105.190   k8sx
    onap    dev-drools-2                       1/1       Running   0     15m     10.42.139.82    k8sx
    onap    dev-drools-3                       1/1       Running   0     15m     10.42.128.4     k8sx
    onap    dev-nexus-7d96568f5f-qp5td         1/1       Running   0     18m     10.42.172.8     k8sx
    onap    dev-pap-8587696769-vwj6k           2/2       Running   0     18m     10.42.19.137    k8sx
    onap    dev-pdp-0                          2/2       Running   0     18m     10.42.144.218   k8sx
    onap    dev-pdp-1                          2/2       Running   0     15m     10.42.233.111   k8sx
    onap    dev-policydb-587d55bdff-4f5dz      1/1       Running   0     18m     10.42.12.242    k8sx


and a service for every component:

.. code-block:: bash
   :caption: verify services

    kubectl get services --all-namespaces

    onap    brmsgw         NodePort    10.43.209.173   <none>     9989:30216/TCP                  24m
    onap    drools         NodePort    10.43.27.92     <none>     6969:30217/TCP,9696:30221/TCP   24m
    onap    nexus          NodePort    10.43.19.171    <none>     8081:30236/TCP                  24m
    onap    pap            NodePort    10.43.9.166     <none>     8443:30219/TCP,9091:30218/TCP   24m
    onap    pdp            ClusterIP   None            <none>     8081/TCP                        24m
    onap    policydb       ClusterIP   None            <none>     3306/TCP                        24m

Config and Decision policy requests will be distributed across the PDP-Xs through the *pdp* service.   PDP-X clients (such as DCAE) should configure their URLs to go through the *pdp* service; their requests will be distributed across the available PDP-X replicas.   The PDP-Xs can also be accessed individually (dev-pdp-0 and dev-pdp-1 above), but it is preferable that external clients use the service.

PDP-Ds are also accessible in a group fashion by using the service IP.   Nevertheless, as DMaaP is the main means of communication with other ONAP components, the service interface is not used heavily.
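
Note in the service listing above that the *pdp* service is headless (``ClusterIP None``); inside the cluster its name resolves to the individual replica addresses. A quick sketch to list the replica IPs backing the *pdp* and *drools* services:

.. code-block:: bash
   :caption: Listing the endpoints behind the policy services (sketch)

    # the ENDPOINTS column shows one entry per PDP-X replica
    kubectl get endpoints pdp -n onap

    # likewise for the PDP-D pool
    kubectl get endpoints drools -n onap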


Healthchecks
^^^^^^^^^^^^

Verify that the policy healthcheck passes via the Robot framework:

.. code-block:: bash
   :caption: ~/oom/kubernetes/robot/ete-k8s.sh onap health 2> /dev/null | grep PASS

    Basic Policy Health Check                                             | PASS |

A policy healthcheck (with more detailed output) can be performed directly against the drools service:

.. code-block:: none
   :caption: Healthcheck on the PDP-D service

    curl --silent --user '<username>:<password>' -X GET http://localhost:30217/healthcheck | python -m json.tool

    {
        "details": [
            {
                "code": 200,
                "healthy": true,
                "message": "alive",
                "name": "PDP-D",
                "url": "self"
            },
            {
                "code": 200,
                "healthy": true,
                "message": "",
                "name": "PAP",
                "url": "http://pap:9091/pap/test"
            },
            {
                "code": 200,
                "healthy": true,
                "message": "",
                "name": "PDP",
                "url": "http://pdp:8081/pdp/test"
            }
        ],
        "healthy": true
    }


PDP-X active/active pool
^^^^^^^^^^^^^^^^^^^^^^^^

The policy engine UI (console container in the pap pod) can be used to check that the 2 individual PDP-Xs are synchronized.
The console URL is accessible at ``http://<oom-vm>:30219/onap/login.htm``.   Select the PDP tab.

    .. image:: srmPdpxPdpMgmt.png

After initialization, there will be no policies loaded into the policy subsystem.   You can verify this by accessing the Editor tab in the UI.
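
The same check can be scripted against the PDP-X API (a sketch reusing the getConfig call and credentials shown later in this document; before any policies are pushed, the query should return no matching policies):

.. code-block:: bash
   :caption: Verifying no policies are loaded (sketch)

    # run from within the pap container, as in the getConfig examples below
    curl --silent -X POST --header 'Content-Type: application/json' \
         --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' \
         --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' \
         -d '{"policyName": ".*"}' http://pdp:8081/pdp/api/getConfig | python -m json.tool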


PDP-D Active/Active Pool
^^^^^^^^^^^^^^^^^^^^^^^^

The PDP-D replicas will come up with the amsterdam controller installed in brainless mode (no maven coordinates), since the controller has not yet been associated with a set of drools rules to run (the control loop rules).

The following command can be issued against each of the PDP-D replica IPs:

.. code-block:: bash
   :caption: Querying the rules association for a PDP-D replica

    curl --silent --user '<username>:<password>' -X GET http://<drools-replica-ip>:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool

    {
        "alive": false,
        "artifactId": "NO-ARTIFACT-ID",
        "brained": false,
        "canonicalSessionNames": [],
        "container": null,
        "groupId": "NO-GROUP-ID",
        "locked": false,
        "recentSinkEvents": [],
        "recentSourceEvents": [],
        "sessionNames": [],
        "version": "NO-VERSION"
    }

Installing Policies
^^^^^^^^^^^^^^^^^^^

The OOM default installation comes with no policies pre-configured.   A sample script used by the integration teams to load policies supporting all 4 use cases is available at /tmp/policy-install/config/push-policies.sh in the pap container within the pap pod.   This script can be modified for your particular installation; for example, if you are only interested in the vCPE use case, remove the REST API calls related to the other use cases.   For the vFW use case, you may want to edit the encoded operational policy to point to the proper resourceID in your installation.
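
For orientation, each step in that script is a REST call against the PDP-X API. A representative sketch of a single push step (the exact endpoint and body shape are assumptions inferred from the pushPolicy output shown below):

.. code-block:: bash
   :caption: Pushing a single policy (sketch)

    # hypothetical standalone equivalent of one push-policies.sh step
    curl --silent -X PUT --header 'Content-Type: application/json' \
         --header 'ClientAuth: cHl0aG9uOnRlc3Q=' \
         --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' \
         -d '{"pdpGroup": "default", "policyName": "com.BRMSParamvFirewall", "policyType": "BRMS_Param"}' \
         http://pdp:8081/pdp/api/pushPolicy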

The above-mentioned push-policies.sh script can be executed as follows:

.. code-block:: bash
   :caption: Installing the default policies

    kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap -- bash -c "export PRELOAD_POLICIES=true; /tmp/policy-install/config/push-policies.sh"


    ..
    Create BRMSParam Operational Policies
    ..
    Create BRMSParamvFirewall Policy
    ..
    Transaction ID: ef08cc65-9950-4478-a4ab-0f3bc2519f60 --Policy with the name com.Config_BRMS_Param_BRMSParamvFirewall.1.xml was successfully created.Create BRMSParamvDNS Policy
    ..
    Transaction ID: 52e33efe-ba66-47de-b404-8d441107d8a9 --Policy with the name com.Config_BRMS_Param_BRMSParamvDNS.1.xml was successfully created.Create BRMSParamVOLTE Policy
    ..
    Transaction ID: f13072b7-6258-4c16-99da-f908d29363ec --Policy with the name com.Config_BRMS_Param_BRMSParamVOLTE.1.xml was successfully created.Create BRMSParamvCPE Policy
    ..
    Transaction ID: 616f970a-b45e-40f7-88cd-d63000d22cca --Policy with the name com.Config_BRMS_Param_BRMSParamvCPE.1.xml was successfully created.Create MicroService Config Policies
    Create MicroServicevFirewall Policy
    ..
    Transaction ID: 4c143a15-20af-408a-9285-bc7940261829 --Policy with the name com.Config_MS_MicroServicevFirewall.1.xml was successfully created.Create MicroServicevDNS Policy
    ..
    Transaction ID: 1e54ae73-509b-490e-bf62-1fea7989fd5f --Policy with the name com.Config_MS_MicroServicevDNS.1.xml was successfully created.Create MicroServicevCPE Policy
    ..
    Transaction ID: 32239868-bab2-4e12-9fd9-81a0ed4a6b1c --Policy with the name com.Config_MS_MicroServicevCPE.1.xml was successfully created.Creating Decision Guard policy
    ..
    Transaction ID: b43cb9d5-42c7-4654-aacf-d4898c4d13bb --Policy with the name com.Decision_AllPermitGuard.1.xml was successfully created.Push Decision policy
    ..
    Transaction ID: 3c1e4ae6-6991-415b-9f2d-c665a8c5a026 --Policy 'com.Decision_AllPermitGuard.1.xml' was successfully pushed to the PDP group 'default'.Pushing BRMSParam Operational policies
    ..
    Transaction ID: 58d26d03-b5b8-4fd3-b2df-1411a1c36420 --Policy 'com.Config_BRMS_Param_BRMSParamvFirewall.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.BRMSParamvDNS
    ..
    Transaction ID: 0854e54a-504b-4f06-bc2f-30f491cb9f5a --Policy 'com.Config_BRMS_Param_BRMSParamvDNS.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.BRMSParamVOLTE
    ..
    Transaction ID: d33c7dde-5c99-4dab-b4ff-9988473cd88d --Policy 'com.Config_BRMS_Param_BRMSParamVOLTE.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.BRMSParamvCPE
    ..
    Transaction ID: e8c8a73e-127c-4318-9e59-3cae9dcbe011 --Policy 'com.Config_BRMS_Param_BRMSParamvCPE.1.xml' was successfully pushed to the PDP group 'default'.Pushing MicroService Config policies
    ..
    Transaction ID: ec0429d7-e35f-4978-8a6c-40d2b5b3be61 --Policy 'com.Config_MS_MicroServicevFirewall.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.MicroServicevDNS
    ..
    Transaction ID: f7072f05-7b74-45b5-9bd3-99b7f8023e3e --Policy 'com.Config_MS_MicroServicevDNS.1.xml' was successfully pushed to the PDP group 'default'.pushPolicy : PUT : com.MicroServicevCPE
    ..
    Transaction ID: 6d47db63-7956-4f5f-ab34-aeb5a124a90d --Policy 'com.Config_MS_MicroServicevCPE.1.xml' was successfully pushed to the PDP group 'default'.


The pushed policies can be viewed through the Policy UI:

    .. image:: srmEditor.png

As a consequence of pushing the policies, the brmsgw component will compose drools rules artifacts and publish them to the nexus repository at ``http://<oom-vm>:30236/nexus/``:

    .. image:: srmNexus.png

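
The published artifact can also be located from the command line through the nexus REST API (a sketch; the *releases* repository name is an assumption, while the maven coordinates match the rules association shown below):

.. code-block:: bash
   :caption: Resolving the published rules artifact in nexus (sketch)

    curl --silent "http://<oom-vm>:30236/nexus/service/local/artifact/maven/resolve?r=releases&g=org.onap.policy-engine.drools.amsterdam&a=policy-amsterdam-rules&v=LATEST"
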
At the same time, each PDP-D replica will receive a notification for each new version of the policies to run for the amsterdam controller.   The following command can be used on each replica to verify that the amsterdam controller is associated with the latest rules version:

.. code-block:: none
   :caption: Querying the rules association of a PDP-D replica

    curl --silent --user '<username>:<password>' -X GET http://<replica-ip>:9696/policy/pdp/engine/controllers/amsterdam/drools | python -m json.tool
    {
        "alive": true,
        "artifactId": "policy-amsterdam-rules",
        "brained": true,
        "groupId": "org.onap.policy-engine.drools.amsterdam",
        "locked": false,
        "modelClassLoaderHash": 1223551265,
        "recentSinkEvents": [],
        "recentSourceEvents": [],
        "sessionCoordinates": [
            "org.onap.policy-engine.drools.amsterdam:policy-amsterdam-rules:0.4.0:closedloop-amsterdam"
        ],
        "sessions": [
            "closedloop-amsterdam"
        ],
        "version": "0.4.0"
    }

Likewise, for verification purposes, each PDP-X replica can be queried directly to retrieve policy information.   The following commands can be used to query a policy through the pdp service:

.. code-block:: bash
   :caption: Querying the "pdp" service for the vFirewall policy

    ubuntu@k8sx:~$ kubectl exec -it dev-pap-8587696769-vwj6k -c pap -n onap bash
    policy@dev-pap-8587696769-vwj6k:/tmp/policy-install$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vFirewall.*"}' http://pdp:8081/pdp/api/getConfig | python -m json.tool
    [
        {
            "config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevFirewall\",\"description\":\"MicroService vFirewall Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"vFirewallBroadcastPackets\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.vNicUsageArray[*].receivedTotalPacketsDelta\",\"thresholdValue\":300,\"direction\":\"LESS_OR_EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ONSET\"},{\"closedLoopControlName\":\"ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.vNicUsageArray[*].receivedTotalPacketsDelta\",\"thresholdValue\":700,\"direction\":\"GREATER_OR_EQUAL\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
            "matchingConditions": {
                "ConfigName": "SampleConfigName",
                "Location": "SampleServiceLocation",
                "ONAPName": "DCAE",
                "service": "tca_policy",
                "uuid": "test"
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_MS_MicroServicevFirewall.1.xml",
            "policyType": "MicroService",
            "policyVersion": "1",
            "property": null,
            "responseAttributes": {},
            "type": "JSON"
        },
        {
            "config":  .....
            "matchingConditions": {
                "ConfigName": "BRMS_PARAM_RULE",
                "ONAPName": "DROOLS"
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_BRMS_Param_BRMSParamvFirewall.1.xml",
            "policyType": "BRMS_PARAM",
            "policyVersion": "1",
            "property": null,
            "responseAttributes": {
                "controller": "amsterdam"
            },
            "type": "OTHER"
        }
    ]

while the following commands could be used to query a specific PDP-X replica:


.. code-block:: bash
   :caption: Querying PDP-X 0 for the vCPE policy

    curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vCPE.*"}' http://10.42.144.218:8081/pdp/api/getConfig | python -m json.tool
    [
        {
            "config": ...,
            "matchingConditions": {
                "ConfigName": "BRMS_PARAM_RULE",
                "ONAPName": "DROOLS"
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_BRMS_Param_BRMSParamvCPE.1.xml",
            "policyType": "BRMS_PARAM",
            "policyVersion": "1",
            "property": null,
            "responseAttributes": {
                "controller": "amsterdam"
            },
            "type": "OTHER"
        },
        {
            "config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevCPE\",\"description\":\"MicroService vCPE Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"Measurement_vGMUX\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ABATED\"},{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"GREATER\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
            "matchingConditions": {
                "ConfigName": "SampleConfigName",
                "Location": "SampleServiceLocation",
                "ONAPName": "DCAE",
                "service": "tca_policy",
                "uuid": "test"
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_MS_MicroServicevCPE.1.xml",
            "policyType": "MicroService",
            "policyVersion": "1",
            "property": null,
            "responseAttributes": {},
            "type": "JSON"
        }
    ]

PDP-X Resiliency
^^^^^^^^^^^^^^^^

A PDP-X container failure can be simulated by performing a "policy.sh stop" operation within the PDP-X container; this will shut down the PDP-X service.   The kubernetes liveness check will detect that the ports are down, infer that there is a problem with the service, and in turn restart the container.   The following example causes PDP-X 1 to fail.

.. code-block:: bash
   :caption: Causing PDP-X 1 service to fail

    ubuntu@k8sx:~$ kubectl exec -it dev-pdp-1 --container pdp -n onap -- bash -c "source /opt/app/policy/etc/profile.d/env.sh; policy.sh stop;"
        pdplp: STOPPING ..
        pdp: STOPPING ..

Upon detection of the service being down through the liveness check, the container will be restarted.   Note the restart count when querying the status of the pods:

.. code-block:: bash
   :caption: Checking PDP-X 1 restart count

    ubuntu@k8sx:~$ kubectl get pods --all-namespaces -o=wide

    NAMESPACE  NAME                             READY     STATUS    RESTARTS   AGE     IP              NODE

    onap       dev-brmsgw-5dbc4c8dc4-llk5s      1/1       Running   0          3d      10.42.120.43    k8sx
    onap       dev-drools-0                     1/1       Running   0          3d      10.42.60.27     k8sx
    onap       dev-drools-1                     1/1       Running   0          3d      10.42.105.190   k8sx
    onap       dev-drools-2                     1/1       Running   0          3d      10.42.139.82    k8sx
    onap       dev-drools-3                     1/1       Running   0          3d      10.42.128.4     k8sx
    onap       dev-nexus-7d96568f5f-qp5td       1/1       Running   0          3d      10.42.172.8     k8sx
    onap       dev-pap-8587696769-vwj6k         2/2       Running   0          3d      10.42.19.137    k8sx
    onap       dev-pdp-0                        2/2       Running   0          3d      10.42.144.218   k8sx
    onap       dev-pdp-1                        2/2       Running   1          3d      10.42.233.111   k8sx    <--- **
    onap       dev-policydb-587d55bdff-4f5dz    1/1       Running   0          3d      10.42.12.242    k8sx

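
The restart is driven by the pod's liveness probe. A sketch to inspect the configured probe (this assumes PDP-X is deployed as the *dev-pdp* StatefulSet, consistent with the dev-pdp-0/dev-pdp-1 pod names above):

.. code-block:: bash
   :caption: Inspecting the PDP-X liveness probe (sketch)

    kubectl get statefulset dev-pdp -n onap \
        -o jsonpath='{.spec.template.spec.containers[0].livenessProbe}'
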
During the restart process, the PAP component will detect that PDP-X 1 is down, and its state will be reflected in the PDP screen:

    .. image:: srmPdpxResiliencyPdpMgmt1.png

This screen will be updated to reflect that PDP-X 1 is back alive after PDP-X 1 synchronizes itself with the PAP.

    .. image:: srmPdpxResiliencyPdpMgmt2.png

At that point, PDP-X 1 is usable again, either directly or through the service, to query for policies:


.. code-block:: bash
   :caption: Query PDP-X 1 for vCPE policy

    ubuntu@k8sx:~$ curl --silent -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' --header 'ClientAuth: cHl0aG9uOnRlc3Q=' --header 'Authorization: Basic dGVzdHBkcDphbHBoYTEyMw==' --header 'Environment: TEST' -d '{"policyName": ".*vCPE.*"}' http://10.42.233.111:8081/pdp/api/getConfig | python -m json.tool
    [
        {
            "config": "..",
            "matchingConditions": {
                "ConfigName": "BRMS_PARAM_RULE",
                "ONAPName": "DROOLS"
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_BRMS_Param_BRMSParamvCPE.1.xml",
            "policyType": "BRMS_PARAM",
            "policyVersion": "1",
            "property": null,
            "responseAttributes": {
                "controller": "amsterdam"
            },
            "type": "OTHER"
        },
        {
            "config": "{\"service\":\"tca_policy\",\"location\":\"SampleServiceLocation\",\"uuid\":\"test\",\"policyName\":\"MicroServicevCPE\",\"description\":\"MicroService vCPE Policy\",\"configName\":\"SampleConfigName\",\"templateVersion\":\"OpenSource.version.1\",\"version\":\"1.1.0\",\"priority\":\"1\",\"policyScope\":\"resource=SampleResource,service=SampleService,type=SampleType,closedLoopControlName=ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"riskType\":\"SampleRiskType\",\"riskLevel\":\"1\",\"guard\":\"False\",\"content\":{\"tca_policy\":{\"domain\":\"measurementsForVfScaling\",\"metricsPerEventName\":[{\"eventName\":\"Measurement_vGMUX\",\"controlLoopSchemaType\":\"VNF\",\"policyScope\":\"DCAE\",\"policyName\":\"DCAE.Config_tca-hi-lo\",\"policyVersion\":\"v0.0.1\",\"thresholds\":[{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"EQUAL\",\"severity\":\"MAJOR\",\"closedLoopEventStatus\":\"ABATED\"},{\"closedLoopControlName\":\"ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e\",\"version\":\"1.0.2\",\"fieldPath\":\"$.event.measurementsForVfScalingFields.additionalMeasurements[*].arrayOfFields[0].value\",\"thresholdValue\":0,\"direction\":\"GREATER\",\"severity\":\"CRITICAL\",\"closedLoopEventStatus\":\"ONSET\"}]}]}}}",
            "matchingConditions": {
                "ConfigName": "SampleConfigName",
                "Location": "SampleServiceLocation",
                "ONAPName": "DCAE",
                "service": "tca_policy",
                "uuid": "test"
            },
            "policyConfigMessage": "Config Retrieved! ",
            "policyConfigStatus": "CONFIG_RETRIEVED",
            "policyName": "com.Config_MS_MicroServicevCPE.1.xml",
            "policyType": "MicroService",
            "policyVersion": "1",
            "property": null,
            "responseAttributes": {},
            "type": "JSON"
        }
    ]

PDP-D Resiliency
^^^^^^^^^^^^^^^^

A PDP-D container failure can be simulated by performing a "policy stop" operation within the PDP-D container; this will shut down the PDP-D service.   The kubernetes liveness check will detect that the ports are down, infer that there is a problem with the service, and in turn restart the container.   The following example causes PDP-D 3 to fail.

.. code-block:: bash
   :caption: Causing PDP-D 3 to fail

    ubuntu@k8sx:~/oom/kubernetes$ kubectl exec -it dev-drools-3 --container drools -n onap -- bash -c "source /opt/app/policy/etc/profile.d/env.sh; policy stop"
    [drools-pdp-controllers]
    L []: Stopping Policy Management... Policy Management (pid=3284) is stopping... Policy Management has stopped.


Upon detection of the service being down through the liveness check, the container will be restarted.   Note the restart count when querying the status of the pods:

.. code-block:: bash
   :caption: Checking PDP-D 3 restart count

    ubuntu@k8sx:~/oom/kubernetes$ kubectl get pods --all-namespaces -o=wide

    NAMESPACE  NAME                             READY     STATUS    RESTARTS   AGE     IP              NODE

    onap       dev-brmsgw-5549d99466-7989k      1/1       Running   0          1h      10.42.252.245   k8sx
    onap       dev-drools-0                     1/1       Running   0          1h      10.42.30.52     k8sx
    onap       dev-drools-1                     1/1       Running   0          1h      10.42.9.245     k8sx
    onap       dev-drools-2                     1/1       Running   0          1h      10.42.95.0      k8sx
    onap       dev-drools-3                     1/1       Running   1          1h      10.42.224.52    k8sx
    onap       dev-nexus-6558979c95-xlxcc       1/1       Running   0          1h      10.42.142.36    k8sx
    onap       dev-pap-64b67f66b9-lc8vl         2/2       Running   0          1h      10.42.187.255   k8sx
    onap       dev-pdp-0                        2/2       Running   0          1h      10.42.164.57    k8sx
    onap       dev-pdp-1                        2/2       Running   0          1h      10.42.155.145   k8sx
    onap       dev-policydb-7d4b75869-qd8n5     1/1       Running   0          1h      10.42.148.37    k8sx

PDP-X Scaling
^^^^^^^^^^^^^

To scale out the PDP-X, set the replica count appropriately.   In the scenario below, we add a new replica, PDP-X 2, to form a pool of 3 PDP-Xs.

.. code-block:: bash
   :caption: Scaling a PDP-X

    helm upgrade -i dev local/onap --namespace onap --set policy.pdp.replicaCount=3

    Release "dev" has been upgraded. Happy Helming!
    LAST DEPLOYED: Mon May 14 01:37:03 2018
    NAMESPACE: onap
    STATUS: DEPLOYED
    ..

    kubectl get pods --all-namespaces -o=wide

    NAMESPACE  NAME                             READY     STATUS    RESTARTS   AGE     IP              NODE
    ..
    onap       dev-pdp-0                        2/2       Running   0          1h      10.42.164.57    k8sx
    onap       dev-pdp-1                        2/2       Running   0          1h      10.42.155.145   k8sx
    onap       dev-pdp-2                        2/2       Running   0          1m      10.42.47.58     k8sx
    ..
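
For a quick experiment, the StatefulSet can also be scaled imperatively (a sketch; it assumes a StatefulSet named *dev-pdp*, and note that helm owns the replica count, so the next ``helm upgrade`` will override this value):

.. code-block:: bash
   :caption: Imperative scaling (sketch)

    kubectl scale statefulset dev-pdp -n onap --replicas=3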

PDP-D Scaling
^^^^^^^^^^^^^

To scale out the PDP-D, set the replica count appropriately.   In the scenario below, we add a new replica, PDP-D 4, to form a pool of 5 PDP-Ds.

.. code-block:: bash
   :caption: Scaling a PDP-D

    helm upgrade -i dev local/onap --namespace onap --set policy.drools.replicaCount=5
    Release "dev" has been upgraded. Happy Helming!
    LAST DEPLOYED: Mon May 14 01:45:19 2018
    NAMESPACE: onap
    STATUS: DEPLOYED

    ubuntu@k8sx:~/oom/kubernetes$ kubectl get pods --all-namespaces -o=wide
    NAMESPACE  NAME                             READY     STATUS    RESTARTS   AGE     IP              NODE
    ..
    onap       dev-drools-0                     1/1       Running   0          1h      10.42.30.52     k8sx
    onap       dev-drools-1                     1/1       Running   0          1h      10.42.9.245     k8sx
    onap       dev-drools-2                     1/1       Running   0          1h      10.42.95.0      k8sx
    onap       dev-drools-3                     1/1       Running   1          1h      10.42.224.52    k8sx
    onap       dev-drools-4                     1/1       Running   0          1m      10.42.237.251   k8sx
    ..



End of Document

.. SSNote: Wiki page ref. https://wiki.onap.org/display/DW/Scalability%2C+Resiliency+and+Manageability