Policy Drools PDP component
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Both the Performance and the Stability tests were executed against a default ONAP installation in the policy-k8s tenant in the windriver lab, from an independent VM running the jmeter tool to inject the load.

General Setup
*************
The kubernetes installation allocated all policy components in the same worker node VM and some additional ones.
The worker VM hosting the policy components has the following spec:

- 16GB RAM
- 8 VCPU
- 160GB Ephemeral Disk

The standalone VM designated to run jmeter has the same configuration. The jmeter JVM
was instantiated with a max heap configuration of 12G.
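
The jmeter heap can be set through the ``HEAP`` environment variable honored by the
stock jmeter startup script; a minimal sketch mirroring the 12G configuration above
(the .jmx path is illustrative):

.. code-block:: bash

    # Override the default jmeter JVM heap before launching in non-GUI mode.
    HEAP="-Xms12g -Xmx12g" ./jmeter -n -t s3p.jmx
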

Other ONAP components used during the stability tests are:

- Policy XACML PDP to process guard queries for each transaction.
- DMaaP to carry PDP-D and jmeter initiated traffic to complete transactions.
- SO actor for the vDNS use case.
- APPC responses for the vCPE and vFW use cases.
- AAI to answer queries for the use cases under test.

In order to avoid interference with the APPC component while running the tests,
the APPC component was disabled.

The SO and AAI actors were simulated within the PDP-D JVM by enabling the
feature-controlloop-utils before running the tests.

PDP-D Setup
***********
The kubernetes charts were modified prior to the installation with
the changes below.

The feature-controlloop-utils was started by adding the following script:

.. code-block:: bash

    #!/bin/bash
    bash -c "features enable controlloop-utils"

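
Whether the feature actually ended up enabled can be double-checked from a shell in
the PDP-D container with the drools-pdp features tool; a minimal sketch, assuming
the tool is on the PATH:

.. code-block:: bash

    # Lists all features and their state; controlloop-utils should report as enabled.
    features status
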
Stability Test of Policy PDP-D
******************************
The 72 hour stability test happened in parallel with the stability run of the API component.

Worker Node performance
=======================
The VM named onap-k8s-09 was monitored for the duration of the 72 hour
stability run. The table below shows the usage ranges:

.. code-block:: bash

    NAME          CPU(cores)   CPU%
    onap-k8s-09   <=1214m      <=20%

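
The node level figures above come from the kubernetes metrics pipeline; assuming
metrics-server is deployed, a sampling loop along these lines can capture them over
the run (node name, interval, and output file are illustrative):

.. code-block:: bash

    # Sample worker node CPU/memory usage every 60 seconds.
    while true; do
        kubectl top node onap-k8s-09 --no-headers >> node-usage.log
        sleep 60
    done
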
PDP-D performance
=================
The test set focused on the following use cases:

- vCPE
- vDNS
- vFirewall

For 72 hours the following 5 scenarios ran in parallel:

- vCPE success scenario
- vCPE failure scenario (failure returned by simulated APPC recipient through DMaaP).
- vDNS success scenario.
- vDNS failure scenario.
- vFirewall success scenario.

Five threads ran in parallel, one for each scenario, with the transactions initiated
by each jmeter thread group. Each thread initiated a transaction, monitored it, and
as soon as it detected the end of the transaction, initiated the next one, back to
back with no pauses.

All transactions completed successfully as expected in each scenario, with no failures.

The command executed was:

.. code-block:: bash

    ./jmeter -n -t /home/ubuntu/drools-applications/testsuites/stability/src/main/resources/frankfurt/s3p.jmx -l /home/ubuntu/jmeter_result/jmeter.jtl -e -o /home/ubuntu/jmeter_result > /dev/null 2>&1

The results were computed by taking the elapsed time of each transaction from the
audit.log (this log reports all end to end transactions, marking the start, end,
and elapsed times). The count reflects the number of successful transactions as
expected in the use case, as well as the average, standard deviation, and max/min.
A histogram of the response times has been added as a visual indication of the
most common transaction times.

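
As a sketch of that computation, the basic statistics can be pulled out of audit.log
with a short awk script; the ``|`` separator and the position of the elapsed-time
field are assumptions that have to be matched against the actual audit.log layout:

.. code-block:: bash

    # Compute count, average, min and max of the elapsed-time field (ms).
    # Assumes '|'-separated records with the elapsed time in field 7.
    awk -F'|' '{ n++; s += $7
                 if (min == "" || $7 < min) min = $7
                 if ($7 > max) max = $7 }
         END   { printf "count=%d avg=%.1f min=%d max=%d\n", n, s/n, min, max }' audit.log
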
vCPE Success scenario
=====================
.. code-block:: bash

    Max: 4323 ms, Min: 143 ms, Average: 380 ms [samples taken for average: 260628]

.. image:: images/ControlLoop-vCPE-48f0c2c3-a172-4192-9ae3-052274181b6e.png

vCPE Failure scenario
=====================

.. code-block:: bash

    Max: 3723 ms, Min: 148 ms, Average: 671 ms [samples taken for average: 87888]

.. image:: images/ControlLoop-vCPE-Fail.png

vDNS Success scenario
=====================

.. code-block:: bash

    Max: 6437 ms, Min: 19 ms, Average: 165 ms [samples taken for average: 59259]

.. image:: images/ControlLoop-vDNS-6f37f56d-a87d-4b85-b6a9-cc953cf779b3.png

vDNS Failure scenario
=====================

.. code-block:: bash

    Max: 1176 ms, Min: 4 ms, Average: 5 ms [samples taken for average: 340810]

.. image:: images/ControlLoop-vDNS-Fail.png

vFirewall Success scenario
==========================
ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a:

.. code-block:: bash

    Max: 4016 ms, Min: 177 ms, Average: 644 ms [samples taken for average: 36460]

.. image:: images/ControlLoop-vFirewall-d0a1dfc6-94f5-4fd4-a5b5-4630b438850a.png