Merge "Update dependencies for security vulnerabilities"
author Ram Krishna Verma <ram_krishna.verma@bell.ca>
Fri, 22 Apr 2022 03:14:27 +0000 (03:14 +0000)
committer Gerrit Code Review <gerrit@onap.org>
Fri, 22 Apr 2022 03:14:27 +0000 (03:14 +0000)
24 files changed:
docs/clamp/acm/clamp-gui/policy-gui.rst
docs/development/devtools/distribution-s3p-results/distribution-jmeter-testcases.png
docs/development/devtools/distribution-s3p-results/performance-monitor.png
docs/development/devtools/distribution-s3p-results/performance-statistics.png
docs/development/devtools/distribution-s3p-results/performance-threads.png [changed mode: 0755->0644]
docs/development/devtools/distribution-s3p-results/performance-threshold.png
docs/development/devtools/distribution-s3p-results/stability-monitor.png
docs/development/devtools/distribution-s3p-results/stability-statistics.png
docs/development/devtools/distribution-s3p-results/stability-threads.png
docs/development/devtools/distribution-s3p-results/stability-threshold.png
docs/development/devtools/distribution-s3p.rst
docs/development/devtools/pap-s3p-results/pap-s3p-mem-at.png [deleted file]
docs/development/devtools/pap-s3p-results/pap-s3p-mem-bt.png [deleted file]
docs/development/devtools/pap-s3p-results/pap-s3p-performance-result-jmeter.png [deleted file]
docs/development/devtools/pap-s3p-results/pap-s3p-stability-result-jmeter.png [deleted file]
docs/development/devtools/pap-s3p-results/pap_metrics_after_72h.txt [new file with mode: 0644]
docs/development/devtools/pap-s3p-results/pap_metrics_before_72h.txt [new file with mode: 0644]
docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.jpg [new file with mode: 0644]
docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.jpg [new file with mode: 0644]
docs/development/devtools/pap-s3p-results/pap_top_after_72h.jpg [new file with mode: 0644]
docs/development/devtools/pap-s3p-results/pap_top_before_72h.jpg [new file with mode: 0644]
docs/development/devtools/pap-s3p.rst
docs/development/prometheus-metrics.rst
integration/pom.xml

index bba5278..63d603b 100644 (file)
@@ -98,8 +98,6 @@ see `Clamp ACM Smoke Tests <https://docs.onap.org/projects/onap-policy-parent/en
 2 - Checking out and building the UI
 ====================================
 
-.. _Building UI
-
 **Step 1:** Checkout the UI from the repo
 
 .. code-block:: bash
@@ -124,28 +122,25 @@ see `Clamp ACM Smoke Tests <https://docs.onap.org/projects/onap-policy-parent/en
 
     npm start --scripts-prepend-node-path
 
-** If you get the following error
+*If you get the following error:*
 
 .. image:: images/03-gui.png
 
-    gedit package.json
-
 .. code-block:: bash
 
-   change the following
-   "version": "${project.version}",
+   gedit package.json
 
-   to
+Then change ``"version": "${project.version}",`` to ``"version": "2.1.1",``
 
-   "version": "2.1.1",
+Save and close, then delete the ``node_modules`` directory
 
-    save and close
-
-    then delete the node_modules directory
+.. code-block:: bash
 
     rm -rf node_modules/
 
-    then run again
+Then run again
+
+.. code-block:: bash
 
     npm install
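
As an alternative to editing ``package.json`` by hand, the same substitution can be
scripted. A minimal sketch, assuming the placeholder string appears exactly as shown above:

.. code-block:: bash

    # Replace the unresolved Maven placeholder with a concrete version (sketch)
    sed -i 's/"version": "${project.version}",/"version": "2.1.1",/' package.json
    rm -rf node_modules/
    npm install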
 
@@ -165,7 +160,7 @@ This section describes how to commission and decommission the Tosca Service Temp
 
 **Prerequisite:**
 
-see clamp-policy-gui-label_
+See :ref:`Policy GUI Prerequisites <clamp-policy-gui-label>`
 
 **Step 1:** From the Main Menu Click on TOSCA Automation Composition Dropdown
 
@@ -314,4 +309,6 @@ see building-ui-label_
 
 .. image:: images/23-gui.png
 
-* NOTE: Can't change from Passive to Running in a local developer machine, can only change in the production environment
\ No newline at end of file
+* NOTE: The state cannot be changed from Passive to Running on a local developer machine; it can only be changed in a production environment
+
+End of document
index db28a7b..86a437a 100644 (file)
Binary files a/docs/development/devtools/distribution-s3p-results/distribution-jmeter-testcases.png and b/docs/development/devtools/distribution-s3p-results/distribution-jmeter-testcases.png differ
index e7a12ed..71fd7fc 100644 (file)
Binary files a/docs/development/devtools/distribution-s3p-results/performance-monitor.png and b/docs/development/devtools/distribution-s3p-results/performance-monitor.png differ
index e621853..3f8693c 100644 (file)
Binary files a/docs/development/devtools/distribution-s3p-results/performance-statistics.png and b/docs/development/devtools/distribution-s3p-results/performance-statistics.png differ
old mode 100755 (executable)
new mode 100644 (file)
index b59b7db..2488abd
Binary files a/docs/development/devtools/distribution-s3p-results/performance-threads.png and b/docs/development/devtools/distribution-s3p-results/performance-threads.png differ
index 85c2f5d..73b20ff 100644 (file)
Binary files a/docs/development/devtools/distribution-s3p-results/performance-threshold.png and b/docs/development/devtools/distribution-s3p-results/performance-threshold.png differ
index 2d2848d..bebaaeb 100644 (file)
Binary files a/docs/development/devtools/distribution-s3p-results/stability-monitor.png and b/docs/development/devtools/distribution-s3p-results/stability-monitor.png differ
index 04cd906..f8465eb 100644 (file)
Binary files a/docs/development/devtools/distribution-s3p-results/stability-statistics.png and b/docs/development/devtools/distribution-s3p-results/stability-statistics.png differ
index a2e9e9f..4cfd7a7 100644 (file)
Binary files a/docs/development/devtools/distribution-s3p-results/stability-threads.png and b/docs/development/devtools/distribution-s3p-results/stability-threads.png differ
index a9cc71e..f348761 100644 (file)
Binary files a/docs/development/devtools/distribution-s3p-results/stability-threshold.png and b/docs/development/devtools/distribution-s3p-results/stability-threshold.png differ
index 9ae9337..9a169ba 100644 (file)
@@ -10,22 +10,6 @@ Policy Distribution component
 72h Stability and 4h Performance Tests of Distribution
 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 
-VM Details
-----------
-
-The stability and performance tests are performed on VM's running in the OpenStack cloud
-environment in the ONAP integration lab.
-
-**Policy VM details**
-
-- OS: Ubuntu 18.04 LTS (GNU/Linux 4.15.0-151-generic x86_64)
-- CPU: 4 core
-- RAM: 15 GB
-- HardDisk: 39 GB
-- Docker version 20.10.7, build 20.10.7-0ubuntu1~18.04.2
-- Java: openjdk 11.0.11 2021-04-20
-
-
 Common Setup
 ------------
 
@@ -88,7 +72,7 @@ Install and verify docker-compose
 
 .. code-block:: bash
 
-    # Install compose
+    # Install compose (check if version is still available or update as necessary)
     sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
     sudo chmod +x /usr/local/bin/docker-compose
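
To verify the install succeeded, run a quick version check (a sketch; the exact
output depends on the release that was downloaded):

.. code-block:: bash

    # Confirm the binary is on the PATH and executable
    docker-compose --version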
 
@@ -118,9 +102,9 @@ Modify the versions.sh script to match all the versions being tested.
 
     vi ~/distribution/testsuites/stability/src/main/resources/setup/versions.sh
 
-Ensure the correct docker image versions are specified - e.g. for Istanbul-M4
+Ensure the correct docker image versions are specified, e.g. for Jakarta-M4:
 
-- export POLICY_DIST_VERSION=2.6.1-SNAPSHOT
+- export POLICY_DIST_VERSION=2.7-SNAPSHOT
 
 Run the start.sh script to start the components. After installation, the script will execute
 ``docker ps`` and show the running containers.
@@ -137,14 +121,13 @@ Run the start.sh script to start the components. After installation, script will
     Creating policy-api          ... done
     Creating policy-pap          ... done
 
-    CONTAINER ID   IMAGE                                                               COMMAND                  CREATED         STATUS                  PORTS                NAMES
-    f91be98ad1f4   nexus3.onap.org:10001/onap/policy-pap:2.5.1-SNAPSHOT                "/opt/app/policy/pap…"   1 second ago    Up Less than a second   6969/tcp             policy-pap
-    d92cdbe971d4   nexus3.onap.org:10001/onap/policy-api:2.5.1-SNAPSHOT                "/opt/app/policy/api…"   1 second ago    Up Less than a second   6969/tcp             policy-api
-    9a019f5d641e   nexus3.onap.org:10001/onap/policy-db-migrator:2.3.1-SNAPSHOT        "/opt/app/policy/bin…"   2 seconds ago   Up 1 second             6824/tcp             policy-db-migrator
-    108ba238edeb   nexus3.onap.org:10001/mariadb:10.5.8                                "docker-entrypoint.s…"   3 seconds ago   Up 1 second             3306/tcp             mariadb
-    bec9b223e79f   nexus3.onap.org:10001/onap/policy-models-simulator:2.5.1-SNAPSHOT   "simulators.sh"          3 seconds ago   Up 1 second             3905/tcp             simulator
-    74aa5abeeb08   nexus3.onap.org:10001/onap/policy-distribution:2.6.1-SNAPSHOT       "/opt/app/policy/bin…"   3 seconds ago   Up 1 second             6969/tcp, 9090/tcp   policy-distribution
-
+    fa4e9bd26e60   nexus3.onap.org:10001/onap/policy-pap:2.6-SNAPSHOT-latest                "/opt/app/policy/pap…"   1 second ago    Up Less than a second   6969/tcp             policy-pap
+    efb65dd95020   nexus3.onap.org:10001/onap/policy-api:2.6-SNAPSHOT-latest                "/opt/app/policy/api…"   1 second ago    Up Less than a second   6969/tcp             policy-api
+    cf602c2770ba   nexus3.onap.org:10001/onap/policy-db-migrator:2.4-SNAPSHOT-latest        "/opt/app/policy/bin…"   2 seconds ago   Up 1 second             6824/tcp             policy-db-migrator
+    99383d2fecf4   pdp/simulator                                                            "sh /opt/app/policy/…"   2 seconds ago   Up 1 second                                  pdp-simulator
+    3c0e205c5f47   nexus3.onap.org:10001/onap/policy-models-simulator:2.6-SNAPSHOT-latest   "simulators.sh"          3 seconds ago   Up 2 seconds            3904/tcp             simulator
+    3ad00d90d6a3   nexus3.onap.org:10001/onap/policy-distribution:2.7-SNAPSHOT-latest       "/opt/app/policy/bin…"   3 seconds ago   Up 2 seconds            6969/tcp, 9090/tcp   policy-distribution
+    bb0b915cdecc   nexus3.onap.org:10001/mariadb:10.5.8                                     "docker-entrypoint.s…"   3 seconds ago   Up 2 seconds            3306/tcp             mariadb
 
 .. note::
 The containers in this docker-compose setup are running with HTTP configuration. For HTTPS, ports
@@ -165,7 +148,7 @@ Download and install JMeter
     # Install JMeter
     mkdir -p jmeter
     cd jmeter
-    wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.4.1.zip
+    wget https://dlcdn.apache.org//jmeter/binaries/apache-jmeter-5.4.1.zip # check that this version is still available
     unzip -q apache-jmeter-5.4.1.zip
     rm apache-jmeter-5.4.1.zip
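
Once unpacked, JMeter can also run test plans headless from the CLI. A minimal
sketch (the .jmx path here is hypothetical; point it at the actual stability or
performance test plan):

.. code-block:: bash

    # -n: non-GUI mode, -t: test plan, -l: results log file
    ./apache-jmeter-5.4.1/bin/jmeter -n -t stability.jmx -l stability-results.jtl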
 
@@ -180,7 +163,7 @@ monitor CPU, Memory and GC for Distribution while the stability tests are runnin
 
     sudo apt install -y visualvm
 
-Run these commands to configure permissions
+Run these commands to configure permissions (if permission errors occur, use ``sudo su``)
 
 .. code-block:: bash
 
@@ -255,6 +238,7 @@ The 72h stability test will run the following steps sequentially in a single thr
 - **Add CSAR** - Adds CSAR to the directory that distribution is watching
 - **Get Healthcheck** - Ensures Healthcheck is returning 200 OK
 - **Get Statistics** - Ensures Statistics is returning 200 OK
+- **Get Metrics** - Ensures Metrics is returning 200 OK
 - **Assert PDP Group Query** - Checks that PDPGroupQuery contains the deployed policy
 - **Assert PoliciesDeployed** - Checks that the policy is deployed
 - **Undeploy/Delete Policy** - Undeploys and deletes the Policy for the next loop
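
The health, statistics and metrics steps above are plain REST calls; a hedged
sketch of equivalent manual spot checks (host, port and credentials are
assumptions based on the container listing above):

.. code-block:: bash

    # Manual spot checks against the distribution service (sketch; adjust auth/host)
    curl -s -u '<user>:<pass>' http://localhost:6969/healthcheck
    curl -s -u '<user>:<pass>' http://localhost:6969/statistics
    curl -s -u '<user>:<pass>' http://localhost:6969/metrics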
@@ -342,7 +326,7 @@ time and rest call throughput for all the requests when the number of requests a
 saturate the resource and find the bottleneck.
 
 It also tests that distribution can handle multiple policy CSARs and that these are deployed within
-30 seconds consistently.
+60 seconds consistently.
 
 
 Setup Details
@@ -358,7 +342,7 @@ Performance test plan is different from the stability test plan.
 
 - Instead of handling one policy CSAR at a time, multiple CSARs are deployed within the watched
   folder at the exact same time.
-- We expect all policies from these csar's to be deployed within 30 seconds.
+- We expect all policies from these csar's to be deployed within 60 seconds.
 - There are also multithreaded tests running against the healthcheck and statistics endpoints of
   the distribution service.
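
In effect, the performance plan drops several CSARs into the watched folder at
once. A manual equivalent, as a sketch (the CSAR source directory is
hypothetical; the watched folder path is the one used in the section below):

.. code-block:: bash

    # Deploy multiple CSARs simultaneously into the watched folder (sketch)
    cp ~/csars/*.csar /tmp/policydistribution/distributionmount/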
 
@@ -368,7 +352,7 @@ Running the Test Plan
 
 Check the /tmp folder permissions to allow the test plan to insert the CSAR into the
 /tmp/policydistribution/distributionmount folder.
-Clean up from previous run. If necessary, put containers down with script `down.sh` from setup
+Clean up from the previous run. If necessary, take the containers down with the ``down.sh`` script from the setup
 folder mentioned in :ref:`Setup components <setup-distribution-s3p-components>`
 
 .. code-block:: bash
@@ -401,3 +385,5 @@ Test Results
 
 .. image:: distribution-s3p-results/performance-monitor.png
 .. image:: distribution-s3p-results/performance-threads.png
+
+End of document
diff --git a/docs/development/devtools/pap-s3p-results/pap-s3p-mem-at.png b/docs/development/devtools/pap-s3p-results/pap-s3p-mem-at.png
deleted file mode 100644 (file)
index dd88022..0000000
Binary files a/docs/development/devtools/pap-s3p-results/pap-s3p-mem-at.png and /dev/null differ
diff --git a/docs/development/devtools/pap-s3p-results/pap-s3p-mem-bt.png b/docs/development/devtools/pap-s3p-results/pap-s3p-mem-bt.png
deleted file mode 100644 (file)
index 7c90983..0000000
Binary files a/docs/development/devtools/pap-s3p-results/pap-s3p-mem-bt.png and /dev/null differ
diff --git a/docs/development/devtools/pap-s3p-results/pap-s3p-performance-result-jmeter.png b/docs/development/devtools/pap-s3p-results/pap-s3p-performance-result-jmeter.png
deleted file mode 100644 (file)
index be8bd99..0000000
Binary files a/docs/development/devtools/pap-s3p-results/pap-s3p-performance-result-jmeter.png and /dev/null differ
diff --git a/docs/development/devtools/pap-s3p-results/pap-s3p-stability-result-jmeter.png b/docs/development/devtools/pap-s3p-results/pap-s3p-stability-result-jmeter.png
deleted file mode 100644 (file)
index 5ebc769..0000000
Binary files a/docs/development/devtools/pap-s3p-results/pap-s3p-stability-result-jmeter.png and /dev/null differ
diff --git a/docs/development/devtools/pap-s3p-results/pap_metrics_after_72h.txt b/docs/development/devtools/pap-s3p-results/pap_metrics_after_72h.txt
new file mode 100644 (file)
index 0000000..8864726
--- /dev/null
@@ -0,0 +1,306 @@
+# HELP logback_events_total Number of error level events that made it to the logs
+# TYPE logback_events_total counter
+logback_events_total{level="warn",} 23.0
+logback_events_total{level="debug",} 0.0
+logback_events_total{level="error",} 1.0
+logback_events_total{level="trace",} 0.0
+logback_events_total{level="info",} 1709270.0
+# HELP system_cpu_usage The "recent cpu usage" for the whole system
+# TYPE system_cpu_usage gauge
+system_cpu_usage 0.1270718232044199
+# HELP hikaricp_connections_acquire_seconds Connection acquire time
+# TYPE hikaricp_connections_acquire_seconds summary
+hikaricp_connections_acquire_seconds_count{pool="HikariPool-1",} 298222.0
+hikaricp_connections_acquire_seconds_sum{pool="HikariPool-1",} 321.533641537
+# HELP hikaricp_connections_acquire_seconds_max Connection acquire time
+# TYPE hikaricp_connections_acquire_seconds_max gauge
+hikaricp_connections_acquire_seconds_max{pool="HikariPool-1",} 0.006766789
+# HELP tomcat_sessions_created_sessions_total  
+# TYPE tomcat_sessions_created_sessions_total counter
+tomcat_sessions_created_sessions_total 158246.0
+# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution
+# TYPE jvm_classes_unloaded_classes_total counter
+jvm_classes_unloaded_classes_total 799.0
+# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the (young) heap memory pool after one GC to before the next
+# TYPE jvm_gc_memory_allocated_bytes_total counter
+jvm_gc_memory_allocated_bytes_total 3.956513686328E12
+# HELP tomcat_sessions_alive_max_seconds  
+# TYPE tomcat_sessions_alive_max_seconds gauge
+tomcat_sessions_alive_max_seconds 2488.0
+# HELP spring_data_repository_invocations_seconds_max  
+# TYPE spring_data_repository_invocations_seconds_max gauge
+spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 0.863253324
+spring_data_repository_invocations_seconds_max{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.144251855
+spring_data_repository_invocations_seconds_max{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.0
+# HELP spring_data_repository_invocations_seconds  
+# TYPE spring_data_repository_invocations_seconds summary
+spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 15740.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyParentKeyNameAndKeyParentKeyVersion",repository="PolicyStatusRepository",state="SUCCESS",} 3116.970495755
+spring_data_repository_invocations_seconds_count{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 113798.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="saveAll",repository="PolicyStatusRepository",state="SUCCESS",} 480.71823635
+spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 28085.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpGroupRepository",state="SUCCESS",} 9.645079055
+spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 6981.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroup",repository="PolicyAuditRepository",state="SUCCESS",} 616.931466813
+spring_data_repository_invocations_seconds_count{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 46250.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findById",repository="ToscaServiceTemplateRepository",state="SUCCESS",} 8406.051483096
+spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 42765.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroup",repository="PolicyStatusRepository",state="SUCCESS",} 10979.997264985
+spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 101780.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 20530.858991818
+spring_data_repository_invocations_seconds_count{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 1.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="deleteById",repository="PdpGroupRepository",state="SUCCESS",} 0.004567796
+spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 32620.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 11459.109680167
+spring_data_repository_invocations_seconds_count{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 28080.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="saveAll",repository="PolicyAuditRepository",state="SUCCESS",} 45.836464781
+spring_data_repository_invocations_seconds_count{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 13960.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByPdpGroupAndNameAndVersion",repository="PolicyAuditRepository",state="SUCCESS",} 1765.653676534
+spring_data_repository_invocations_seconds_count{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 21331.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findById",repository="ToscaNodeTemplateRepository",state="SUCCESS",} 1.286926983
+spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 13970.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 4175.556697162
+spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 2.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpSubGroupRepository",state="SUCCESS",} 0.864602048
+spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 36866.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 7686.38602325
+spring_data_repository_invocations_seconds_count{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 56899.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="deleteAll",repository="PolicyStatusRepository",state="SUCCESS",} 882.098525295
+# HELP jvm_threads_states_threads The current number of threads having NEW state
+# TYPE jvm_threads_states_threads gauge
+jvm_threads_states_threads{state="runnable",} 9.0
+jvm_threads_states_threads{state="blocked",} 0.0
+jvm_threads_states_threads{state="waiting",} 29.0
+jvm_threads_states_threads{state="timed-waiting",} 8.0
+jvm_threads_states_threads{state="new",} 0.0
+jvm_threads_states_threads{state="terminated",} 0.0
+# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process
+# TYPE process_cpu_usage gauge
+process_cpu_usage 0.006697923643670462
+# HELP tomcat_sessions_expired_sessions_total  
+# TYPE tomcat_sessions_expired_sessions_total counter
+tomcat_sessions_expired_sessions_total 158186.0
+# HELP jvm_buffer_total_capacity_bytes An estimate of the total capacity of the buffers in this pool
+# TYPE jvm_buffer_total_capacity_bytes gauge
+jvm_buffer_total_capacity_bytes{id="mapped",} 0.0
+jvm_buffer_total_capacity_bytes{id="direct",} 169210.0
+# HELP process_start_time_seconds Start time of the process since unix epoch.
+# TYPE process_start_time_seconds gauge
+process_start_time_seconds 1.649849957815E9
+# HELP hikaricp_connections_creation_seconds_max Connection creation time
+# TYPE hikaricp_connections_creation_seconds_max gauge
+hikaricp_connections_creation_seconds_max{pool="HikariPool-1",} 0.51
+# HELP hikaricp_connections_creation_seconds Connection creation time
+# TYPE hikaricp_connections_creation_seconds summary
+hikaricp_connections_creation_seconds_count{pool="HikariPool-1",} 3936.0
+hikaricp_connections_creation_seconds_sum{pool="HikariPool-1",} 942.369
+# HELP hikaricp_connections_max Max connections
+# TYPE hikaricp_connections_max gauge
+hikaricp_connections_max{pool="HikariPool-1",} 10.0
+# HELP jdbc_connections_min Minimum number of idle connections in the pool.
+# TYPE jdbc_connections_min gauge
+jdbc_connections_min{name="dataSource",} 10.0
+# HELP jvm_memory_committed_bytes The amount of memory in bytes that is committed for the Java virtual machine to use
+# TYPE jvm_memory_committed_bytes gauge
+jvm_memory_committed_bytes{area="heap",id="Tenured Gen",} 1.76160768E8
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 4.9020928E7
+jvm_memory_committed_bytes{area="heap",id="Eden Space",} 7.0582272E7
+jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 1.1890688E8
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 2555904.0
+jvm_memory_committed_bytes{area="heap",id="Survivor Space",} 8781824.0
+jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 1.5450112E7
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 3.1850496E7
+# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset
+# TYPE jvm_threads_peak_threads gauge
+jvm_threads_peak_threads 51.0
+# HELP hikaricp_connections_idle Idle connections
+# TYPE hikaricp_connections_idle gauge
+hikaricp_connections_idle{pool="HikariPool-1",} 10.0
+# HELP hikaricp_connections Total connections
+# TYPE hikaricp_connections gauge
+hikaricp_connections{pool="HikariPool-1",} 10.0
+# HELP http_server_requests_seconds  
+# TYPE http_server_requests_seconds summary
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 13960.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 4066.52698026
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 22470.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 3622.506076129
+http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 13961.0
+http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 27890.47103474
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 14404.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 7821.856496806
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 15738.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 5848.655389921
+http_server_requests_seconds_count{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 7059.0
+http_server_requests_seconds_sum{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 15554.208182423
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 6981.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 1756.291465092
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 6979.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 1934.785157616
+http_server_requests_seconds_count{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 4.0
+http_server_requests_seconds_sum{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 7.281567744
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 31395.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 13046.055299896
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 11237.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 6979.030310367
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 6979.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 3741.773622509
+http_server_requests_seconds_count{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 2.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 1.318371311
+http_server_requests_seconds_count{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 1.0
+http_server_requests_seconds_sum{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 1.026191347
+http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 7077.0
+http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 14603.589203056
+http_server_requests_seconds_count{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 2.0
+http_server_requests_seconds_sum{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 1.877099877
+# HELP http_server_requests_seconds_max  
+# TYPE http_server_requests_seconds_max gauge
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.147881793
+http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/deployments/batch",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0
+http_server_requests_seconds_max{exception="None",method="DELETE",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies/{name}",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/audit/{pdpGroupName}",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/policies/deployed",} 0.0
+http_server_requests_seconds_max{exception="None",method="PUT",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 0.227488581
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 0.272733892
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/components/healthcheck",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="CLIENT_ERROR",status="404",uri="/policy/pap/v1/policies/status/{pdpGroupName}/{policyName}/{policyVersion}",} 0.0
+http_server_requests_seconds_max{exception="None",method="DELETE",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/{name}",} 0.0
+http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="202",uri="/policy/pap/v1/pdps/policies",} 0.0
+http_server_requests_seconds_max{exception="None",method="POST",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps/groups/batch",} 0.0
+# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool
+# TYPE jvm_buffer_count_buffers gauge
+jvm_buffer_count_buffers{id="mapped",} 0.0
+jvm_buffer_count_buffers{id="direct",} 10.0
+# HELP hikaricp_connections_pending Pending threads
+# TYPE hikaricp_connections_pending gauge
+hikaricp_connections_pending{pool="HikariPool-1",} 0.0
+# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
+# TYPE system_load_average_1m gauge
+system_load_average_1m 0.6
+# HELP jvm_memory_used_bytes The amount of used memory
+# TYPE jvm_memory_used_bytes gauge
+jvm_memory_used_bytes{area="heap",id="Tenured Gen",} 6.7084064E7
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 4.110464E7
+jvm_memory_used_bytes{area="heap",id="Eden Space",} 3.329572E7
+jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 1.12499384E8
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1394432.0
+jvm_memory_used_bytes{area="heap",id="Survivor Space",} 463856.0
+jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 1.3096368E7
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 3.1773568E7
+# HELP tomcat_sessions_rejected_sessions_total  
+# TYPE tomcat_sessions_rejected_sessions_total counter
+tomcat_sessions_rejected_sessions_total 0.0
+# HELP jvm_gc_live_data_size_bytes Size of long-lived heap memory pool after reclamation
+# TYPE jvm_gc_live_data_size_bytes gauge
+jvm_gc_live_data_size_bytes 5.0955016E7
+# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC
+# TYPE jvm_gc_memory_promoted_bytes_total counter
+jvm_gc_memory_promoted_bytes_total 1.692072808E9
+# HELP tomcat_sessions_active_max_sessions  
+# TYPE tomcat_sessions_active_max_sessions gauge
+tomcat_sessions_active_max_sessions 1101.0
+# HELP jdbc_connections_active Current number of active connections that have been allocated from the data source.
+# TYPE jdbc_connections_active gauge
+jdbc_connections_active{name="dataSource",} 0.0
+# HELP jdbc_connections_max Maximum number of active connections that can be allocated at the same time.
+# TYPE jdbc_connections_max gauge
+jdbc_connections_max{name="dataSource",} 10.0
+# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management
+# TYPE jvm_memory_max_bytes gauge
+jvm_memory_max_bytes{area="heap",id="Tenured Gen",} 2.803236864E9
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.22912768E8
+jvm_memory_max_bytes{area="heap",id="Eden Space",} 1.12132096E9
+jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 5828608.0
+jvm_memory_max_bytes{area="heap",id="Survivor Space",} 1.40115968E8
+jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 1.22916864E8
+# HELP jvm_threads_daemon_threads The current number of live daemon threads
+# TYPE jvm_threads_daemon_threads gauge
+jvm_threads_daemon_threads 34.0
+# HELP process_files_open_files The open file descriptor count
+# TYPE process_files_open_files gauge
+process_files_open_files 36.0
+# HELP system_cpu_count The number of processors available to the Java virtual machine
+# TYPE system_cpu_count gauge
+system_cpu_count 1.0
+# HELP jvm_gc_pause_seconds Time spent in GC pause
+# TYPE jvm_gc_pause_seconds summary
+jvm_gc_pause_seconds_count{action="end of major GC",cause="Metadata GC Threshold",} 2.0
+jvm_gc_pause_seconds_sum{action="end of major GC",cause="Metadata GC Threshold",} 0.391
+jvm_gc_pause_seconds_count{action="end of major GC",cause="Allocation Failure",} 13.0
+jvm_gc_pause_seconds_sum{action="end of major GC",cause="Allocation Failure",} 5.98
+jvm_gc_pause_seconds_count{action="end of minor GC",cause="Allocation Failure",} 56047.0
+jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Allocation Failure",} 549.532
+jvm_gc_pause_seconds_count{action="end of minor GC",cause="GCLocker Initiated GC",} 9.0
+jvm_gc_pause_seconds_sum{action="end of minor GC",cause="GCLocker Initiated GC",} 0.081
+# HELP jvm_gc_pause_seconds_max Time spent in GC pause
+# TYPE jvm_gc_pause_seconds_max gauge
+jvm_gc_pause_seconds_max{action="end of major GC",cause="Metadata GC Threshold",} 0.0
+jvm_gc_pause_seconds_max{action="end of major GC",cause="Allocation Failure",} 0.0
+jvm_gc_pause_seconds_max{action="end of minor GC",cause="Allocation Failure",} 0.0
+jvm_gc_pause_seconds_max{action="end of minor GC",cause="GCLocker Initiated GC",} 0.0
+# HELP hikaricp_connections_min Min connections
+# TYPE hikaricp_connections_min gauge
+hikaricp_connections_min{pool="HikariPool-1",} 10.0
+# HELP process_files_max_files The maximum file descriptor count
+# TYPE process_files_max_files gauge
+process_files_max_files 1048576.0
+# HELP hikaricp_connections_active Active connections
+# TYPE hikaricp_connections_active gauge
+hikaricp_connections_active{pool="HikariPool-1",} 0.0
+# HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads
+# TYPE jvm_threads_live_threads gauge
+jvm_threads_live_threads 46.0
+# HELP process_uptime_seconds The uptime of the Java virtual machine
+# TYPE process_uptime_seconds gauge
+process_uptime_seconds 510671.853
+# HELP hikaricp_connections_usage_seconds Connection usage time
+# TYPE hikaricp_connections_usage_seconds summary
+hikaricp_connections_usage_seconds_count{pool="HikariPool-1",} 298222.0
+hikaricp_connections_usage_seconds_sum{pool="HikariPool-1",} 125489.766
+# HELP hikaricp_connections_usage_seconds_max Connection usage time
+# TYPE hikaricp_connections_usage_seconds_max gauge
+hikaricp_connections_usage_seconds_max{pool="HikariPool-1",} 0.878
+# HELP pap_policy_deployments_total  
+# TYPE pap_policy_deployments_total counter
+pap_policy_deployments_total{operation="deploy",status="FAILURE",} 0.0
+pap_policy_deployments_total{operation="undeploy",status="SUCCESS",} 13971.0
+pap_policy_deployments_total{operation="deploy",status="SUCCESS",} 14028.0
+pap_policy_deployments_total{operation="undeploy",status="FAILURE",} 0.0
+# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool
+# TYPE jvm_buffer_memory_used_bytes gauge
+jvm_buffer_memory_used_bytes{id="mapped",} 0.0
+jvm_buffer_memory_used_bytes{id="direct",} 169210.0
+# HELP hikaricp_connections_timeout_total Connection timeout total count
+# TYPE hikaricp_connections_timeout_total counter
+hikaricp_connections_timeout_total{pool="HikariPool-1",} 0.0
+# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine
+# TYPE jvm_classes_loaded_classes gauge
+jvm_classes_loaded_classes 18727.0
+# HELP jdbc_connections_idle Number of established but idle connections.
+# TYPE jdbc_connections_idle gauge
+jdbc_connections_idle{name="dataSource",} 10.0
+# HELP tomcat_sessions_active_current_sessions  
+# TYPE tomcat_sessions_active_current_sessions gauge
+tomcat_sessions_active_current_sessions 60.0
+# HELP jvm_gc_max_data_size_bytes Max size of long-lived heap memory pool
+# TYPE jvm_gc_max_data_size_bytes gauge
+jvm_gc_max_data_size_bytes 2.803236864E9
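
Snapshots like the file above can be captured straight from the PAP metrics
endpoint; a sketch, assuming the host, port and credentials shown here:

.. code-block:: bash

    # Capture a metrics snapshot and check the deployment counters (sketch)
    curl -s -u '<user>:<pass>' http://localhost:6969/metrics > pap_metrics_after_72h.txt
    grep '^pap_policy_deployments_total' pap_metrics_after_72h.txt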
diff --git a/docs/development/devtools/pap-s3p-results/pap_metrics_before_72h.txt b/docs/development/devtools/pap-s3p-results/pap_metrics_before_72h.txt
new file mode 100644 (file)
index 0000000..047ccf9
--- /dev/null
@@ -0,0 +1,225 @@
+# HELP spring_data_repository_invocations_seconds_max  
+# TYPE spring_data_repository_invocations_seconds_max gauge
+spring_data_repository_invocations_seconds_max{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 0.0
+spring_data_repository_invocations_seconds_max{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.008146982
+spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 0.777049798
+spring_data_repository_invocations_seconds_max{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 0.569583402
+# HELP spring_data_repository_invocations_seconds  
+# TYPE spring_data_repository_invocations_seconds summary
+spring_data_repository_invocations_seconds_count{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 1.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findByKeyName",repository="PdpGroupRepository",state="SUCCESS",} 1.257790017
+spring_data_repository_invocations_seconds_count{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 23.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="save",repository="PdpRepository",state="SUCCESS",} 0.671469491
+spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 30.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PdpGroupRepository",state="SUCCESS",} 8.481980058
+spring_data_repository_invocations_seconds_count{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 4.0
+spring_data_repository_invocations_seconds_sum{exception="None",method="findAll",repository="PolicyStatusRepository",state="SUCCESS",} 1.939575991
+# HELP hikaricp_connections_max Max connections
+# TYPE hikaricp_connections_max gauge
+hikaricp_connections_max{pool="HikariPool-1",} 10.0
+# HELP tomcat_sessions_created_sessions_total  
+# TYPE tomcat_sessions_created_sessions_total counter
+tomcat_sessions_created_sessions_total 16.0
+# HELP process_files_open_files The open file descriptor count
+# TYPE process_files_open_files gauge
+process_files_open_files 34.0
+# HELP hikaricp_connections_active Active connections
+# TYPE hikaricp_connections_active gauge
+hikaricp_connections_active{pool="HikariPool-1",} 0.0
+# HELP jvm_classes_unloaded_classes_total The total number of classes unloaded since the Java virtual machine has started execution
+# TYPE jvm_classes_unloaded_classes_total counter
+jvm_classes_unloaded_classes_total 2.0
+# HELP system_cpu_usage The "recent cpu usage" for the whole system
+# TYPE system_cpu_usage gauge
+system_cpu_usage 0.03765922097101717
+# HELP jvm_classes_loaded_classes The number of classes that are currently loaded in the Java virtual machine
+# TYPE jvm_classes_loaded_classes gauge
+jvm_classes_loaded_classes 18022.0
+# HELP process_uptime_seconds The uptime of the Java virtual machine
+# TYPE process_uptime_seconds gauge
+process_uptime_seconds 570.627
+# HELP jvm_memory_committed_bytes The amount of memory in bytes that is committed for the Java virtual machine to use
+# TYPE jvm_memory_committed_bytes gauge
+jvm_memory_committed_bytes{area="heap",id="Tenured Gen",} 1.76160768E8
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 2.6017792E7
+jvm_memory_committed_bytes{area="heap",id="Eden Space",} 7.0582272E7
+jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 1.04054784E8
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 2555904.0
+jvm_memory_committed_bytes{area="heap",id="Survivor Space",} 8781824.0
+jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 1.4286848E7
+jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 6881280.0
+# HELP jvm_gc_live_data_size_bytes Size of long-lived heap memory pool after reclamation
+# TYPE jvm_gc_live_data_size_bytes gauge
+jvm_gc_live_data_size_bytes 4.13206E7
+# HELP jdbc_connections_min Minimum number of idle connections in the pool.
+# TYPE jdbc_connections_min gauge
+jdbc_connections_min{name="dataSource",} 10.0
+# HELP process_start_time_seconds Start time of the process since unix epoch.
+# TYPE process_start_time_seconds gauge
+process_start_time_seconds 1.649787267607E9
+# HELP jdbc_connections_idle Number of established but idle connections.
+# TYPE jdbc_connections_idle gauge
+jdbc_connections_idle{name="dataSource",} 10.0
+# HELP jvm_gc_memory_promoted_bytes_total Count of positive increases in the size of the old generation memory pool before GC to after GC
+# TYPE jvm_gc_memory_promoted_bytes_total counter
+jvm_gc_memory_promoted_bytes_total 2.7154576E7
+# HELP hikaricp_connections_creation_seconds_max Connection creation time
+# TYPE hikaricp_connections_creation_seconds_max gauge
+hikaricp_connections_creation_seconds_max{pool="HikariPool-1",} 0.0
+# HELP hikaricp_connections_creation_seconds Connection creation time
+# TYPE hikaricp_connections_creation_seconds summary
+hikaricp_connections_creation_seconds_count{pool="HikariPool-1",} 0.0
+hikaricp_connections_creation_seconds_sum{pool="HikariPool-1",} 0.0
+# HELP tomcat_sessions_active_current_sessions  
+# TYPE tomcat_sessions_active_current_sessions gauge
+tomcat_sessions_active_current_sessions 16.0
+# HELP jvm_threads_daemon_threads The current number of live daemon threads
+# TYPE jvm_threads_daemon_threads gauge
+jvm_threads_daemon_threads 34.0
+# HELP jvm_memory_used_bytes The amount of used memory
+# TYPE jvm_memory_used_bytes gauge
+jvm_memory_used_bytes{area="heap",id="Tenured Gen",} 4.13206E7
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 2.6013824E7
+jvm_memory_used_bytes{area="heap",id="Eden Space",} 2853928.0
+jvm_memory_used_bytes{area="nonheap",id="Metaspace",} 9.9649768E7
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 1364736.0
+jvm_memory_used_bytes{area="heap",id="Survivor Space",} 1036120.0
+jvm_memory_used_bytes{area="nonheap",id="Compressed Class Space",} 1.2613992E7
+jvm_memory_used_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 6865408.0
+# HELP hikaricp_connections_timeout_total Connection timeout total count
+# TYPE hikaricp_connections_timeout_total counter
+hikaricp_connections_timeout_total{pool="HikariPool-1",} 0.0
+# HELP jvm_memory_max_bytes The maximum amount of memory in bytes that can be used for memory management
+# TYPE jvm_memory_max_bytes gauge
+jvm_memory_max_bytes{area="heap",id="Tenured Gen",} 2.803236864E9
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.22912768E8
+jvm_memory_max_bytes{area="heap",id="Eden Space",} 1.12132096E9
+jvm_memory_max_bytes{area="nonheap",id="Metaspace",} -1.0
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 5828608.0
+jvm_memory_max_bytes{area="heap",id="Survivor Space",} 1.40115968E8
+jvm_memory_max_bytes{area="nonheap",id="Compressed Class Space",} 1.073741824E9
+jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 1.22916864E8
+# HELP tomcat_sessions_active_max_sessions  
+# TYPE tomcat_sessions_active_max_sessions gauge
+tomcat_sessions_active_max_sessions 16.0
+# HELP tomcat_sessions_alive_max_seconds  
+# TYPE tomcat_sessions_alive_max_seconds gauge
+tomcat_sessions_alive_max_seconds 0.0
+# HELP jvm_threads_peak_threads The peak live thread count since the Java virtual machine started or peak was reset
+# TYPE jvm_threads_peak_threads gauge
+jvm_threads_peak_threads 43.0
+# HELP hikaricp_connections_acquire_seconds Connection acquire time
+# TYPE hikaricp_connections_acquire_seconds summary
+hikaricp_connections_acquire_seconds_count{pool="HikariPool-1",} 57.0
+hikaricp_connections_acquire_seconds_sum{pool="HikariPool-1",} 0.103535665
+# HELP hikaricp_connections_acquire_seconds_max Connection acquire time
+# TYPE hikaricp_connections_acquire_seconds_max gauge
+hikaricp_connections_acquire_seconds_max{pool="HikariPool-1",} 0.004207252
+# HELP hikaricp_connections_usage_seconds Connection usage time
+# TYPE hikaricp_connections_usage_seconds summary
+hikaricp_connections_usage_seconds_count{pool="HikariPool-1",} 57.0
+hikaricp_connections_usage_seconds_sum{pool="HikariPool-1",} 13.297
+# HELP hikaricp_connections_usage_seconds_max Connection usage time
+# TYPE hikaricp_connections_usage_seconds_max gauge
+hikaricp_connections_usage_seconds_max{pool="HikariPool-1",} 0.836
+# HELP http_server_requests_seconds  
+# TYPE http_server_requests_seconds summary
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 9.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 1.93944618
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 3.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 1.365007581
+http_server_requests_seconds_count{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 4.0
+http_server_requests_seconds_sum{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 2.636914428
+# HELP http_server_requests_seconds_max  
+# TYPE http_server_requests_seconds_max gauge
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/metrics",} 0.213989915
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/pdps",} 0.0
+http_server_requests_seconds_max{exception="None",method="GET",outcome="SUCCESS",status="200",uri="/policy/pap/v1/healthcheck",} 0.714076223
+# HELP process_cpu_usage The "recent cpu usage" for the Java Virtual Machine process
+# TYPE process_cpu_usage gauge
+process_cpu_usage 0.002436413304293255
+# HELP hikaricp_connections_idle Idle connections
+# TYPE hikaricp_connections_idle gauge
+hikaricp_connections_idle{pool="HikariPool-1",} 10.0
+# HELP tomcat_sessions_rejected_sessions_total  
+# TYPE tomcat_sessions_rejected_sessions_total counter
+tomcat_sessions_rejected_sessions_total 0.0
+# HELP jvm_gc_memory_allocated_bytes_total Incremented for an increase in the size of the (young) heap memory pool after one GC to before the next
+# TYPE jvm_gc_memory_allocated_bytes_total counter
+jvm_gc_memory_allocated_bytes_total 1.401269088E9
+# HELP tomcat_sessions_expired_sessions_total  
+# TYPE tomcat_sessions_expired_sessions_total counter
+tomcat_sessions_expired_sessions_total 0.0
+# HELP pap_policy_deployments_total  
+# TYPE pap_policy_deployments_total counter
+pap_policy_deployments_total{operation="deploy",status="FAILURE",} 0.0
+pap_policy_deployments_total{operation="undeploy",status="SUCCESS",} 0.0
+pap_policy_deployments_total{operation="deploy",status="SUCCESS",} 0.0
+pap_policy_deployments_total{operation="undeploy",status="FAILURE",} 0.0
+# HELP hikaricp_connections_pending Pending threads
+# TYPE hikaricp_connections_pending gauge
+hikaricp_connections_pending{pool="HikariPool-1",} 0.0
+# HELP process_files_max_files The maximum file descriptor count
+# TYPE process_files_max_files gauge
+process_files_max_files 1048576.0
+# HELP jvm_buffer_memory_used_bytes An estimate of the memory that the Java virtual machine is using for this buffer pool
+# TYPE jvm_buffer_memory_used_bytes gauge
+jvm_buffer_memory_used_bytes{id="mapped",} 0.0
+jvm_buffer_memory_used_bytes{id="direct",} 169210.0
+# HELP jvm_gc_pause_seconds Time spent in GC pause
+# TYPE jvm_gc_pause_seconds summary
+jvm_gc_pause_seconds_count{action="end of major GC",cause="Metadata GC Threshold",} 2.0
+jvm_gc_pause_seconds_sum{action="end of major GC",cause="Metadata GC Threshold",} 0.472
+jvm_gc_pause_seconds_count{action="end of minor GC",cause="Allocation Failure",} 19.0
+jvm_gc_pause_seconds_sum{action="end of minor GC",cause="Allocation Failure",} 0.507
+# HELP jvm_gc_pause_seconds_max Time spent in GC pause
+# TYPE jvm_gc_pause_seconds_max gauge
+jvm_gc_pause_seconds_max{action="end of major GC",cause="Metadata GC Threshold",} 0.0
+jvm_gc_pause_seconds_max{action="end of minor GC",cause="Allocation Failure",} 0.029
+# HELP jvm_threads_live_threads The current number of live threads including both daemon and non-daemon threads
+# TYPE jvm_threads_live_threads gauge
+jvm_threads_live_threads 43.0
+# HELP hikaricp_connections_min Min connections
+# TYPE hikaricp_connections_min gauge
+hikaricp_connections_min{pool="HikariPool-1",} 10.0
+# HELP jdbc_connections_max Maximum number of active connections that can be allocated at the same time.
+# TYPE jdbc_connections_max gauge
+jdbc_connections_max{name="dataSource",} 10.0
+# HELP jvm_buffer_total_capacity_bytes An estimate of the total capacity of the buffers in this pool
+# TYPE jvm_buffer_total_capacity_bytes gauge
+jvm_buffer_total_capacity_bytes{id="mapped",} 0.0
+jvm_buffer_total_capacity_bytes{id="direct",} 169210.0
+# HELP system_cpu_count The number of processors available to the Java virtual machine
+# TYPE system_cpu_count gauge
+system_cpu_count 1.0
+# HELP hikaricp_connections Total connections
+# TYPE hikaricp_connections gauge
+hikaricp_connections{pool="HikariPool-1",} 10.0
+# HELP jdbc_connections_active Current number of active connections that have been allocated from the data source.
+# TYPE jdbc_connections_active gauge
+jdbc_connections_active{name="dataSource",} 0.0
+# HELP system_load_average_1m The sum of the number of runnable entities queued to available processors and the number of runnable entities running on the available processors averaged over a period of time
+# TYPE system_load_average_1m gauge
+system_load_average_1m 0.36
+# HELP jvm_gc_max_data_size_bytes Max size of long-lived heap memory pool
+# TYPE jvm_gc_max_data_size_bytes gauge
+jvm_gc_max_data_size_bytes 2.803236864E9
+# HELP jvm_threads_states_threads The current number of threads having NEW state
+# TYPE jvm_threads_states_threads gauge
+jvm_threads_states_threads{state="runnable",} 9.0
+jvm_threads_states_threads{state="blocked",} 0.0
+jvm_threads_states_threads{state="waiting",} 26.0
+jvm_threads_states_threads{state="timed-waiting",} 8.0
+jvm_threads_states_threads{state="new",} 0.0
+jvm_threads_states_threads{state="terminated",} 0.0
+# HELP jvm_buffer_count_buffers An estimate of the number of buffers in the pool
+# TYPE jvm_buffer_count_buffers gauge
+jvm_buffer_count_buffers{id="mapped",} 0.0
+jvm_buffer_count_buffers{id="direct",} 10.0
+# HELP logback_events_total Number of error level events that made it to the logs
+# TYPE logback_events_total counter
+logback_events_total{level="warn",} 22.0
+logback_events_total{level="debug",} 0.0
+logback_events_total{level="error",} 0.0
+logback_events_total{level="trace",} 0.0
+logback_events_total{level="info",} 385.0
diff --git a/docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.jpg b/docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.jpg
new file mode 100644 (file)
index 0000000..eae3ac0
Binary files /dev/null and b/docs/development/devtools/pap-s3p-results/pap_performance_jmeter_results.jpg differ
diff --git a/docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.jpg b/docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.jpg
new file mode 100644 (file)
index 0000000..4640151
Binary files /dev/null and b/docs/development/devtools/pap-s3p-results/pap_stability_jmeter_results.jpg differ
diff --git a/docs/development/devtools/pap-s3p-results/pap_top_after_72h.jpg b/docs/development/devtools/pap-s3p-results/pap_top_after_72h.jpg
new file mode 100644 (file)
index 0000000..ecab404
Binary files /dev/null and b/docs/development/devtools/pap-s3p-results/pap_top_after_72h.jpg differ
diff --git a/docs/development/devtools/pap-s3p-results/pap_top_before_72h.jpg b/docs/development/devtools/pap-s3p-results/pap_top_before_72h.jpg
new file mode 100644 (file)
index 0000000..ce2208f
Binary files /dev/null and b/docs/development/devtools/pap-s3p-results/pap_top_before_72h.jpg differ
index df9d5c7..23a2d35 100644 (file)
@@ -18,7 +18,7 @@ Setup Details
 
 - Policy-PAP along with all policy components deployed as part of a full ONAP OOM deployment.
 - A second instance of APEX-PDP is spun up in the setup. Update the configuration file (OnapPfConfig.json) such that the PDP can register to the new group created by PAP in the tests.
-- Both tests were run via jMeter, which was installed on a separate VM.
+- Both tests were run via jMeter.
 
 Stability Test of PAP
 +++++++++++++++++++++
@@ -27,33 +27,58 @@ Test Plan
 ---------
 The 72 hours stability test ran the following steps sequentially in a single threaded loop.
 
-- **Create Policy defaultDomain** - creates an operational policy using policy/api component
-- **Create Policy sampleDomain** - creates an operational policy using policy/api component
+Setup Phase (steps running only once)
+"""""""""""""""""""""""""""""""""""""
+
+- **Create Policy for defaultGroup** - creates an operational policy using policy/api component
+- **Create NodeTemplate metadata for sampleGroup policy** - creates a node template containing metadata using policy/api component
+- **Create Policy for sampleGroup** - creates an operational policy that refers to the metadata created above using policy/api component
+- **Change defaultGroup state to ACTIVE** - changes the state of defaultGroup PdpGroup to ACTIVE
+- **Create/Update PDP Group** - creates a new PDPGroup named sampleGroup.
+      A second instance of the PDP that is already spun up gets registered to this new group.
+- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in the ACTIVE state.
+
+PAP Test Flow (steps running in a loop for 72 hours)
+""""""""""""""""""""""""""""""""""""""""""""""""""""
+
 - **Check Health** - checks the health status of pap
-- **Check Statistics** - checks the statistics of pap
-- **Change state to ACTIVE** - changes the state of defaultGroup PdpGroup to ACTIVE
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that PdpGroup is in the ACTIVE state.
-- **Deploy defaultDomain Policy** - deploys the policy defaultDomain in the existing PdpGroup
-- **Check status of defaultGroup** - checks the status of defaultGroup PdpGroup with the defaultDomain policy 1.0.0.
+- **PAP Metrics** - fetches Prometheus metrics before the deployment/undeployment cycle.
+      Counters such as the deploy/undeploy success/failure counters at API and engine level are saved for later comparison.
+- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that both PdpGroups are in the ACTIVE state.
+- **Deploy Policy for defaultGroup** - deploys the policy defaultDomain to defaultGroup
+- **Check status of defaultGroup policy** - checks the status of defaultGroup PdpGroup with the defaultDomain policy 1.0.0.
 - **Check PdpGroup Audit defaultGroup** - checks the audit information for the defaultGroup PdpGroup.
 - **Check PdpGroup Audit Policy (defaultGroup)** - checks the audit information for the defaultGroup PdpGroup with the defaultDomain policy 1.0.0.
-- **Create/Update PDP Group** - creates a new PDPGroup named sampleGroup.
 - **Check PdpGroup Query** - makes a PdpGroup query request and verifies that 2 PdpGroups are in the ACTIVE state and defaultGroup has a policy deployed on it.
-- **Deployment Update sampleDomain** - deploys the policy sampleDomain in sampleGroup PdpGroup using pap api
+- **Deployment Update for sampleGroup policy** - deploys the policy sampleDomain in sampleGroup PdpGroup using pap api
 - **Check status of sampleGroup** - checks the status of the sampleGroup PdpGroup.
 - **Check status of PdpGroups** - checks the status of both PdpGroups.
 - **Check PdpGroup Query** - makes a PdpGroup query request and verifies that the defaultGroup has a policy defaultDomain deployed on it and sampleGroup has policy sampleDomain deployed on it.
 - **Check Audit** - checks the audit information for all PdpGroups.
 - **Check Consolidated Health** - checks the consolidated health status of all policy components.
 - **Check Deployed Policies** - checks for all the deployed policies using pap api.
-- **Undeploy Policy sampleDomain** - undeploys the policy sampleDomain from sampleGroup PdpGroup using pap api
-- **Undeploy Default Policy** - undeploys the policy defaultDomain from PdpGroup
+- **Undeploy policy in sampleGroup** - undeploys the policy sampleDomain from sampleGroup PdpGroup using pap api
+- **Undeploy policy in defaultGroup** - undeploys the policy defaultDomain from PdpGroup
+- **Check status of policies** - checks the status of all policies and makes sure both policies are undeployed
+- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that the PdpGroup is in the PASSIVE state.
+- **PAP Metrics after deployments** - fetches Prometheus metrics after the deployment/undeployment cycle.
+      The new counter values, such as the deploy/undeploy success/failure counters at API and engine level, are saved, and the deploySuccess and undeploySuccess counters are checked to have increased by 2.
+
+.. Note::
+   To avoid putting a large Constant Timer value after every deployment/undeployment, the status API is polled until the deployment/undeployment
+   completes successfully, or until a timeout. This makes sure that the operation is completed successfully and that the PDPs get enough time to respond.
+   Otherwise, before the deployment is marked successful by PAP, an undeployment could be triggered as part of other tests,
+   and the operation's corresponding Prometheus counter at engine level would not get updated.
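+
+Outside of JMeter, the same polling can be reproduced as a small shell loop. A minimal sketch, assuming the in-cluster PAP address, the credentials exported in ``PAP_USER``/``PAP_PASS``, and that the policy status endpoint reports a SUCCESS marker once deployment completes (all assumptions):
+
+.. code-block:: bash
+
+    # Poll the policy status every 5 seconds, for up to 2 minutes
+    for _ in $(seq 1 24); do
+        # The matched SUCCESS marker in the response body is an assumption
+        curl -sk -u "${PAP_USER}:${PAP_PASS}" \
+            "https://policy-pap:6969/policy/pap/v1/policies/status" \
+            | grep -q SUCCESS && { echo "deployment completed"; break; }
+        sleep 5
+    done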
+
+Teardown Phase (steps running only once after PAP Test Flow is completed)
+"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
 - **Change state to PASSIVE(sampleGroup)** - changes the state of sampleGroup PdpGroup to PASSIVE
-- **Delete PdpGroup SampleGroup** - delete the sampleGroup PdpGroup using pap api
+- **Delete PdpGroup sampleGroup** - delete the sampleGroup PdpGroup using pap api
 - **Change State to PASSIVE(defaultGroup)** - changes the state of defaultGroup PdpGroup to PASSIVE
-- **Check PdpGroup Query** - makes a PdpGroup query request and verifies that PdpGroup is in the PASSIVE state.
-- **Delete Policy defaultDomain** - deletes the operational policy defaultDomain using policy/api component
-- **Delete Policy sampleDomain** - deletes the operational policy sampleDomain using policy/api component
+- **Delete policy created for defaultGroup** - deletes the operational policy defaultDomain using policy/api component
+- **Delete Policy created for sampleGroup** - deletes the operational policy sampleDomain using policy/api component
+- **Delete Nodetemplate metadata for sampleGroup policy** - deletes the node template containing metadata for the sampleGroup policy
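+
+Most of the steps above drive standard PAP REST endpoints. A minimal sketch of two of them with curl, assuming the in-cluster service address and the credentials exported in ``PAP_USER``/``PAP_PASS`` (both assumptions):
+
+.. code-block:: bash
+
+    # Check Health - query the PAP health check endpoint
+    curl -sk -u "${PAP_USER}:${PAP_PASS}" \
+        "https://policy-pap:6969/policy/pap/v1/healthcheck"
+
+    # Check PdpGroup Query - list all PdpGroups and their states
+    curl -sk -u "${PAP_USER}:${PAP_PASS}" \
+        "https://policy-pap:6969/policy/pap/v1/pdps"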
 
 The following steps can be used to configure the parameters of test plan.
 
@@ -74,61 +99,49 @@ The test was run in the background via "nohup", to prevent it from being interru
 
 .. code-block:: bash
 
-    nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t stability.jmx -l testresults.jtl
+    nohup ./apache-jmeter-5.4.1/bin/jmeter.sh -n -t stability.jmx -l stabilityTestResults.jtl
 
 Test Results
 ------------
 
 **Summary**
 
-Stability test plan was triggered for 72 hours.
-
-.. Note::
+The stability test plan was triggered for 72 hours.
 
-              .. container:: paragraph
-
-                  As part of the OOM deployment, another APEX-PDP pod is spun up with the pdpGroup name specified as 'sampleGroup'.
-                  After creating the new group called 'sampleGroup' as part of the test, a time delay of 2 minutes is added,
-                  so that the pdp is registered to the newly created group.
-                  This has resulted in a spike in the Average time taken per request. But this is required to make proper assertions,
-                  and also for the consolidated health check.
 
 **Test Statistics**
 
 =======================  =================  ==================  ==================================
 **Total # of requests**  **Success %**      **Error %**         **Average time taken per request**
 =======================  =================  ==================  ==================================
-34053                    99.14 %            0.86 %              1051 ms
+140980                    100 %             0.00 %              717 ms
 =======================  =================  ==================  ==================================
 
 .. Note::
 
-              .. container:: paragraph
-
-                  There were some failures during the 72 hour stability tests. These tests were caused by the apex-pdp pods restarting
-                  intermitently due to limited resources in our testing environment. The second apex instance was configured as a
-                  replica of the apex-pdp pod and therefore, when it restarted, registered to the "defaultGroup" as the configuration 
-                  was taken from the original apex-pdp pod. This meant a manual change whenever the pods restarted to make apex-pdp-"2"
-                  register with the "sampleGroup".
-                  When both pods were running as expected, no errors relating to the pap functionality were observed. These errors are
-                  strictly caused by the environment setup and not by pap.
+   There were no failures during the 72-hour test.
 
 **JMeter Screenshot**
 
-.. image:: pap-s3p-results/pap-s3p-stability-result-jmeter.png
+.. image:: pap-s3p-results/pap_stability_jmeter_results.jpg
 
 **Memory and CPU usage**
 
-The memory and CPU usage can be monitored by running "top" command on the PAP pod. A snapshot is taken before and after test execution to monitor the changes in resource utilization.
+The memory and CPU usage can be monitored by running the "top" command in the PAP pod.
+A snapshot is taken before and after test execution to monitor the changes in resource utilization.
+Prometheus metrics are also collected before and after the test execution.
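+
+The "top" snapshot can also be captured from outside the pod. A minimal sketch, assuming the onap namespace and an app=policy-pap pod label (both assumptions):
+
+.. code-block:: bash
+
+    # Capture one iteration of top in batch mode from the PAP pod
+    POD=$(kubectl -n onap get pods -l app=policy-pap -o jsonpath='{.items[0].metadata.name}')
+    kubectl -n onap exec "${POD}" -- top -b -n 1 > pap_top_snapshot.txt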
 
 Memory and CPU usage before test execution:
 
-.. image:: pap-s3p-results/pap-s3p-mem-bt.png
+.. image:: pap-s3p-results/pap_top_before_72h.jpg
+
+:download:`Prometheus metrics before 72h test <pap-s3p-results/pap_metrics_before_72h.txt>`
 
 Memory and CPU usage after test execution:
 
-.. image:: pap-s3p-results/pap-s3p-mem-at.png
+.. image:: pap-s3p-results/pap_top_after_72h.jpg
 
+:download:`Prometheus metrics after 72h test <pap-s3p-results/pap_metrics_after_72h.txt>`
 
 Performance Test of PAP
 ++++++++++++++++++++++++
@@ -149,10 +162,13 @@ Test Plan
 
 Performance test plan is the same as the stability test plan above except for the few differences listed below.
 
-- Increase the number of threads up to 5 (simulating 5 users' behaviours at the same time).
+- Increase the number of threads up to 10 (simulating 10 users' behaviours at the same time).
 - Reduce the test time to 2 hours.
-- Usage of counters to create different groups by the 'Create/Update PDP Group' test case.
-- Removed the delay to wait for the new PDP to be registered. Also removed the corresponding assertions where the Pdp instance registration to the newly created group is validated.
+- Usage of counters (simulating each user) to create different pdpGroups, update their state and later delete them.
+- Removed the tests that deploy policies to newly created groups, as this would need a larger setup with multiple PDPs registered to each group and would also slow down the performance test with the time needed for the registration process.
+- Usage of counters (simulating each user) to create different drools policies and deploy them to defaultGroup.
+      In the test, a thread count of 10 is used, resulting in 10 different drools policies getting deployed and undeployed continuously for 2 hours.
+      Other standard operations, like checking the deployment status of policies, the metrics and the health, remain.
 
 Run Test
 --------
@@ -161,14 +177,7 @@ Running/Triggering the performance test will be the same as the stability test.
 
 .. code-block:: bash
 
-    nohup ./jMeter/apache-jmeter-5.3/bin/jmeter.sh -n -t performance.jmx -l perftestresults.jtl
-
-Once the test execution is completed, execute the below script to get the statistics:
-
-.. code-block:: bash
-
-    $ cd /home/ubuntu/pap/testsuites/performance/src/main/resources/testplans
-    $ ./results.sh /home/ubuntu/pap_perf/resultTree.log
+    nohup ./apache-jmeter-5.4.1/bin/jmeter.sh -n -t performance.jmx -l performanceTestResults.jtl
 
 Test Results
 ------------
@@ -180,9 +189,9 @@ Test results are shown as below.
 =======================  =================  ==================  ==================================
 **Total # of requests**  **Success %**      **Error %**         **Average time taken per request**
 =======================  =================  ==================  ==================================
-24092                    100 %              0.00 %              2467 ms
+24276                    100 %              0.00 %              2556 ms
 =======================  =================  ==================  ==================================
 
 **JMeter Screenshot**
 
-.. image:: pap-s3p-results/pap-s3p-performance-result-jmeter.png
\ No newline at end of file
+.. image:: pap-s3p-results/pap_performance_jmeter_results.jpg
index 90bc922..341d6d5 100644 (file)
@@ -12,28 +12,149 @@ Prometheus Metrics support in Policy Framework Components
 
 This page explains the prometheus metrics exposed by different Policy Framework components.
 
-XACML-PDP
+
+1. Context
+==========
+
+Collecting application metrics is the first step towards gaining insights into Policy Framework services and infrastructure from the point of view of availability, performance, reliability and scalability.
+
+The goal of monitoring is to meet the following operational needs:
+
+1. Monitoring via dashboards: provide visual aids to display health and key metrics for use by Operations.
+2. Alerting: something is broken and the issue must be addressed immediately, or something might break soon and proactive measures are needed to avoid such a situation.
+3. Conducting retrospective analysis: rich information that is readily available to better troubleshoot issues.
+4. Analyzing trends: how fast is the usage growing? What is the incoming traffic like? This helps assess scaling needs to meet forecasted demands.
+
+The principles outlined in the `Four Golden Signals <https://sre.google/sre-book/monitoring-distributed-systems/#xref_monitoring_golden-signals>`__ developed by Google Site Reliability Engineers have been adopted to define the key metrics for Policy Framework.
+
+- Request Rate: # of requests per second as served by Policy services.
+- Event Processing rate: # of requests/events per second as processed by the PDPs.
+- Errors: # of those requests/events processed that are failing.
+- Latency/Duration: Amount of time those requests take and, for the PDPs, the relevant event processing times.
+- Saturation: Measures the degree of fullness or % utilization of a service emphasizing the resources that are most constrained: CPU, Memory, I/O, custom metrics by domain.
+
+
+2. Policy Framework Key metrics
+===============================
+
+System Metrics common across all Policy components
+--------------------------------------------------
+
+These standard metrics have been available and exposed via a Prometheus endpoint since the Istanbul release and can be categorized as below:
+
+CPU Usage
 *********
 
-The following Prometheus metric counters are present in the current release:
+The CPU usage percentage can be derived from *"system_cpu_usage"* for springboot applications and *"process_cpu_seconds_total"* for non springboot applications using `PromQL <https://prometheus.io/docs/prometheus/latest/querying/basics/>`__ .
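+
+As an illustration, the per-second CPU rate can be queried through the Prometheus HTTP API. A minimal sketch, assuming a Prometheus server reachable at prometheus:9090 (an assumption):
+
+.. code-block:: bash
+
+    # CPU usage (fraction of a core, averaged over 5 minutes) for non springboot components
+    curl -sG "http://prometheus:9090/api/v1/query" \
+        --data-urlencode "query=rate(process_cpu_seconds_total[5m])"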
+
+Process uptime
+**************
+
+The process uptime in seconds is available via *"process_uptime_seconds"* for springboot applications or *"process_start_time_seconds"* otherwise.
+
+Status of the applications is available via the standard *"up"* metric.
+
+JVM memory metrics
+******************
+
+These metrics begin with the prefix *"jvm_memory_"*.
+There is a lot of data here; however, one of the key metrics to monitor is the total heap memory usage, e.g. *sum(jvm_memory_used_bytes{area="heap"})*.
+
+`PromQL <https://prometheus.io/docs/prometheus/latest/querying/basics/>`__ can be leveraged to represent the total or rate of memory usage.
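+
+For example, the heap usage of a single service can be summed with the query below; the job label value is an assumption that depends on the scrape configuration:
+
+.. code-block:: bash
+
+    # Total heap memory currently used by all policy-pap instances
+    curl -sG "http://prometheus:9090/api/v1/query" \
+        --data-urlencode 'query=sum(jvm_memory_used_bytes{area="heap",job="policy-pap"})'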
+
+JVM thread metrics
+******************
+
+These metrics begin with the prefix *"jvm_threads_"*. Some of the key data to monitor are:
+
+- *"jvm_threads_live_threads"* (springboot apps), or *"jvm_threads_current"* (non springboot) shows the total number of live threads, including daemon and non-daemon threads
+- *"jvm_threads_peak_threads"* (springboot apps), or *"jvm_threads_peak"* (non springboot) shows the peak total number of threads since the JVM started
+- *"jvm_threads_states_threads"* (springboot apps), or *"jvm_threads_state"* (non springboot) shows number of threads by thread state
+
+JVM garbage collection metrics
+******************************
+
+There are many garbage collection metrics, with the prefix *"jvm_gc_"*, available to get deep insights into how the JVM is managing memory. They can be broadly categorized into:
+
+- Pause duration *"jvm_gc_pause_"* for springboot applications gives information about how long GC took. For non springboot applications, the collection duration metrics *"jvm_gc_collection_"* provide the same information.
+- Memory pool size increase can be assessed using *"jvm_gc_memory_allocated_bytes_total"* and *"jvm_gc_memory_promoted_bytes_total"* for springboot applications.
+
+Average garbage collection time and rate of garbage collection per second are key metrics to monitor.
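+
+Both can be expressed with PromQL over the pause summary metrics, as sketched below (the Prometheus address as above is an assumption):
+
+.. code-block:: bash
+
+    # Average GC pause duration in seconds over the last 5 minutes
+    curl -sG "http://prometheus:9090/api/v1/query" \
+        --data-urlencode "query=rate(jvm_gc_pause_seconds_sum[5m]) / rate(jvm_gc_pause_seconds_count[5m])"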
+
+
+Key metrics for Policy API
+--------------------------
+
++-------------------------------------+----------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| Metric name                         | Metric description                                                                                 | Metric labels                                                                                                                                                         |
++=====================================+====================================================================================================+=======================================================================================================================================================================+
+| process_uptime_seconds              | Uptime of policy-api application in seconds.                                                       |                                                                                                                                                                       |
++-------------------------------------+----------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| http_server_requests_seconds_count  | Number of API requests filtered by uri, REST method and response status among other labels         | "exception": any exception string; "method": REST method used; "outcome": response status string; "status": http response status code; "uri": REST endpoint invoked   |
++-------------------------------------+----------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| http_server_requests_seconds_sum    | Time taken for an API request filtered by uri, REST method and response status among other labels  | "exception": any exception string; "method": REST method used; "outcome": response status string; "status": http response status code; "uri": REST endpoint invoked   |
++-------------------------------------+----------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+
+Key metrics for Policy PAP
+--------------------------
+
++-------------------------------------+----------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| Metric name                         | Metric description                                                                                 | Metric labels                                                                                                                                                         |
++=====================================+====================================================================================================+=======================================================================================================================================================================+
+| process_uptime_seconds              | Uptime of policy-pap application in seconds.                                                       |                                                                                                                                                                       |
++-------------------------------------+----------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| http_server_requests_seconds_count  | Number of API requests filtered by uri, REST method and response status among other labels         | "exception": any exception string; "method": REST method used; "outcome": response status string; "status": http response status code; "uri": REST endpoint invoked   |
++-------------------------------------+----------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| http_server_requests_seconds_sum    | Time taken for an API request filtered by uri, REST method and response status among other labels  | "exception": any exception string; "method": REST method used; "outcome": response status string; "status": http response status code; "uri": REST endpoint invoked   |
++-------------------------------------+----------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| pap_policy_deployments              | Number of TOSCA policy deploy/undeploy operations                                                  | "operation": Possible values are deploy, undeploy; "status": Deploy/Undeploy status values - SUCCESS, FAILURE, TOTAL                                                  |
++-------------------------------------+----------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+
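+
+The counters in this table can be read directly from the PAP Prometheus endpoint, as in the sketch below; the metrics path and the credentials in ``PAP_USER``/``PAP_PASS`` are assumptions that depend on the deployment:
+
+.. code-block:: bash
+
+    # Fetch only the deployment counters from the PAP metrics endpoint
+    curl -sk -u "${PAP_USER}:${PAP_PASS}" \
+        "https://policy-pap:6969/metrics" | grep "^pap_policy_deployments"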
+
+Key metrics for APEX-PDP
+------------------------
+
++---------------------------------------------+-------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
+| Metric name                                 | Metric description                                                                  | Metric labels                                                                                                        |
++=============================================+=====================================================================================+======================================================================================================================+
+| process_start_time_seconds                  | Uptime of apex-pdp application in seconds                                           |                                                                                                                      |
++---------------------------------------------+-------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
+| pdpa_policy_deployments_total               | Number of TOSCA policy deploy/undeploy operations                                   | "operation": Possible values are deploy, undeploy; "status": Deploy/Undeploy status values - SUCCESS, FAILURE, TOTAL    |
++---------------------------------------------+-------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
+| pdpa_policy_executions_total                | Number of TOSCA policy executions                                                   | "status": Execution status values - SUCCESS, FAILURE, TOTAL                                                             |
++---------------------------------------------+-------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
+| pdpa_engine_state                           | State of APEX engine                                                                | "engine_instance_id": ID of the engine thread                                                                        |
++---------------------------------------------+-------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
+| pdpa_engine_last_start_timestamp_epoch      | Epoch timestamp at which the engine was last started, used to derive its uptime     | "engine_instance_id": ID of the engine thread                                                                        |
++---------------------------------------------+-------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
+| pdpa_engine_event_executions                | Number of APEX events executed per engine thread                                    | "engine_instance_id": ID of the engine thread                                                                        |
++---------------------------------------------+-------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
+| pdpa_engine_average_execution_time_seconds  | Average time taken to execute an APEX policy in seconds                             | "engine_instance_id": ID of the engine thread                                                                        |
++---------------------------------------------+-------------------------------------------------------------------------------------+----------------------------------------------------------------------------------------------------------------------+
+
+Key metrics for Drools PDP
+--------------------------
 
-- pdpx_policy_deployments_total counts the total number of deployment operations.
-- pdpx_policy_decisions_total counts the total number of decisions.
+Key metrics for XACML PDP
+-------------------------
 
-pdpx_policy_deployments_total
-+++++++++++++++++++++++++++++
++--------------------------------+---------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| Metric name                    | Metric description                                | Metric labels                                                                                                                                                                                                                |
++================================+===================================================+==============================================================================================================================================================================================================================+
+| process_start_time_seconds     | Uptime of xacml-pdp application in seconds.       |                                                                                                                                                                                                                              |
++--------------------------------+---------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| pdpx_policy_deployments_total  | Counts the total number of deployment operations  | "deploy": Counts the number of successful or failed deploys; "undeploy": Counts the number of successful or failed undeploys                                                                                                 |
++--------------------------------+---------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
+| pdpx_policy_decisions_total    | Counts the total number of decisions              | "permit": Counts the number of permit decisions; "deny": Counts the number of deny decisions; "indeterminant": Counts the number of indeterminant decisions; "not_applicable": Counts the number of not applicable decisions.|
++--------------------------------+---------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 
-This counter supports the following labels:
 
--  "deploy": Counts the number of successful or failed deploys.
--  "undeploy": Counts the number of successful or failed undeploys.
+Key metrics for Policy Distribution
+-----------------------------------
 
-pdpx_policy_decisions_total
-+++++++++++++++++++++++++++
+3. OOM changes to enable Prometheus monitoring for Policy Framework
+===================================================================
 
-This counter supports the following labels:
+Policy Framework uses the ServiceMonitor custom resource definition (CRD) to allow Prometheus to monitor the services it exposes. Label selection is used to determine which services are monitored.
+For label management and troubleshooting, refer to the documentation at `Prometheus operator <https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/troubleshooting.md#overview-of-servicemonitor-tagging-and-related-elements>`__.
 
--  "permit": Counts the number of permit decisions.
--  "deny": Counts the number of deny decisions.
--  "indeterminant": Counts the number of indeterminant decisions.
--  "not_applicable": Counts the number of not applicable decisions.
+`OOM charts <https://github.com/onap/oom/tree/master/kubernetes/policy/components>`__ for policy include a ServiceMonitor, and its properties can be overridden based on the deployment specifics.
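+
+To check that the label selection actually picks up the policy services, the installed ServiceMonitor objects can be inspected. A minimal sketch, assuming the onap namespace and a ServiceMonitor named policy-pap (both assumptions):
+
+.. code-block:: bash
+
+    # List the ServiceMonitors installed by the policy charts
+    kubectl -n onap get servicemonitors
+
+    # Inspect the label selector of one of them
+    kubectl -n onap get servicemonitor policy-pap -o yaml | grep -A 5 "selector"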
index 32d5a76..389f55e 100644 (file)
@@ -42,7 +42,7 @@
         <version.dmaap>1.1.12</version.dmaap>
         <version.powermock>2.0.9</version.powermock>
         <version.eclipselink>2.7.8</version.eclipselink>
-        <version.drools>7.66.0.Final</version.drools>
+        <version.drools>7.68.0.Final</version.drools>
         <version.jersey>2.34</version.jersey>
         <version.jackson>2.12.6</version.jackson>
         <version.jackson.databind>2.12.6.1</version.jackson.databind>