1 .. This work is licensed under a
2 Creative Commons Attribution 4.0 International License.
9 The Release stability has been evaluated by:
11 - The daily Jakarta CI/CD chain
The scope of these tests remains limited and does not provide a full set of
KPIs to determine the limits and the dimensioning of the ONAP solution.
21 As usual, a daily CI chain dedicated to the release is created after RC0.
23 The daily results can be found in `LF Orange lab daily results web site
24 <https://logs.onap.org/onap-integration/daily/onap_daily_pod4_master/>`_ and
25 `LF DT lab daily results web site <https://logs.onap.org/onap-integration/daily/onap-daily-dt-oom-master/>`_.
27 .. image:: files/s3p/istanbul-dashboard.png
31 Infrastructure Healthcheck Tests
32 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
These tests deal with the Kubernetes/Helm tests on the ONAP cluster.
The global expected success criterion is **75%**.
The onap-k8s and onap-k8s-teardown tests, providing a snapshot of the onap
namespace in Kubernetes, as well as the onap-helm tests, are expected to PASS.
The nodeport_check_certs test is expected to fail. Even though tremendous
progress has been made in this area, some certificates (unmaintained, upstream
or integration robot pods) are still not correct due to bad certificate issuers
(Root CA certificate not valid) or extra-long validity periods. Most of the
certificates have been installed using cert-manager and will be easy to renew.
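The two failure modes mentioned above (bad issuer, extra-long validity) can be
expressed as simple predicates. The sketch below is illustrative only: the
398-day threshold and the trusted-CA set are assumptions for the example, not
the actual values used by the nodeport_check_certs test.

```python
from datetime import datetime, timedelta

# 398 days is the maximum leaf-certificate lifetime accepted by modern
# browsers; used here only as an illustrative "extra-long validity" threshold.
MAX_VALIDITY_DAYS = 398

def validity_ok(not_before: datetime, not_after: datetime) -> bool:
    """Return True when the certificate's validity window is acceptable."""
    return timedelta(0) < (not_after - not_before) <= timedelta(days=MAX_VALIDITY_DAYS)

def issuer_ok(issuer_cn: str, trusted_cns: set) -> bool:
    """Flag certificates signed by an unexpected Root CA (hypothetical CN set)."""
    return issuer_cn in trusted_cns
```

A certificate installed by cert-manager with a one-year duration would pass
both predicates; the problematic legacy certificates fail one or the other.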
47 .. image:: files/s3p/istanbul_daily_infrastructure_healthcheck.png
These tests are the traditional robot healthcheck tests plus additional tests
dealing with a single component.
56 The expectation is **100% OK**.
58 .. image:: files/s3p/istanbul_daily_healthcheck.png
64 These tests are end to end and automated use case tests.
See :ref:`the Integration Test page <integration-tests>` for details.
67 The expectation is **100% OK**.
69 .. figure:: files/s3p/istanbul_daily_smoke.png
These tests deal with security.
See :ref:`the Integration Test page <integration-tests>` for details.
78 Waivers have been granted on different projects for the different tests.
79 The list of waivers can be found in
80 https://git.onap.org/integration/seccom/tree/waivers?h=istanbul.
The expectation is **100% OK**. The criterion is met.
84 .. figure:: files/s3p/istanbul_daily_security.png
Stability tests have been performed on the Istanbul release:
93 - Parallel instantiation test
95 The results can be found in the weekly backend logs
96 https://logs.onap.org/onap-integration/weekly/onap_weekly_pod4_istanbul.
In this test, we consider the basic_onboard automated test and run 5
simultaneous onboarding procedures in parallel for 24 hours.
The basic_onboard test consists of the following steps:
106 - [SDC] VendorOnboardStep: Onboard vendor in SDC.
107 - [SDC] YamlTemplateVspOnboardStep: Onboard vsp described in YAML file in SDC.
108 - [SDC] YamlTemplateVfOnboardStep: Onboard vf described in YAML file in SDC.
- [SDC] YamlTemplateServiceOnboardStep: Onboard service described in YAML file
  in SDC.
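The campaign driving these steps can be sketched as follows. This is a
simplified illustration, not the real test client: `basic_onboard` is a
placeholder in which each SDC step is simulated with a short sleep instead of
an actual SDC API call.

```python
import concurrent.futures as cf
import time

ONBOARD_STEPS = (
    "VendorOnboardStep",
    "YamlTemplateVspOnboardStep",
    "YamlTemplateVfOnboardStep",
    "YamlTemplateServiceOnboardStep",
)

def basic_onboard(run_id: int) -> dict:
    """Placeholder for the real basic_onboard test."""
    for _ in ONBOARD_STEPS:
        time.sleep(0.01)  # stands in for the real SDC call
    return {"run": run_id, "status": "PASS"}

def run_campaign(parallel: int = 5, runs_per_worker: int = 3) -> list:
    """Run `parallel` onboarding procedures simultaneously, as in the 24h test."""
    results = []
    with cf.ThreadPoolExecutor(max_workers=parallel) as pool:
        futures = [pool.submit(basic_onboard, i)
                   for i in range(parallel * runs_per_worker)]
        for fut in cf.as_completed(futures):
            results.append(fut.result())
    return results
```

In the real campaign the loop simply runs against the wall clock for 24h and
the per-run status and duration are recorded for the statistics shown below.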
112 The test has been initiated on the Istanbul weekly lab on the 14th of November.
As already observed in the daily/weekly/gating chains, we got race conditions
on some tests (https://jira.onap.org/browse/INT-1918).
The success rate is expected to be above 95% for the first 100 model uploads
and above 80% until more than 500 models have been onboarded.
We may also notice that the test duration increases continuously over time.
At the beginning a test takes about 200s; 24h later the same test takes
around 1000s.
Finally, after 36h, the SDC systematically responds with an HTTP 500 error
code, which explains the linear decrease of the success rate.
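The drift from roughly 200s to 1000s per test can be quantified with a simple
least-squares slope over the recorded durations. This is an illustrative
post-processing sketch, not part of the test suite: a slope near zero means a
stable system, while a clearly positive slope is the degradation signal
described above.

```python
def duration_slope(durations: list) -> float:
    """Least-squares slope of test duration vs. test index (seconds per test)."""
    n = len(durations)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(durations) / n
    # Standard least-squares estimate: covariance(x, y) / variance(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, durations))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var
```

Applied to the observed SDC figures, the slope stays strongly positive over
the whole 24h window, consistent with the ever-growing model list.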
The following graphs provide a good view of the SDC stability test.
128 .. image:: files/s3p/istanbul_sdc_stability.png
131 .. csv-table:: S3P Onboarding stability results
132 :file: ./files/csv/s3p-sdc.csv
The onboarding duration increases linearly with the number of on-boarded
models, which has already been reported and may be due to the fact that models
cannot be deleted: the test client has to retrieve a continuously growing list
of models. No limit tests have been performed.

However, 1085 on-boarded models is already a very high figure.

Moreover, the mean duration time is much lower in Istanbul, which explains
why it was possible to run 35% more tests within the same time frame.
149 Parallel instantiations stability test
150 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
152 The test is based on the single test (basic_vm) that can be described as follows:
154 - [SDC] VendorOnboardStep: Onboard vendor in SDC.
155 - [SDC] YamlTemplateVspOnboardStep: Onboard vsp described in YAML file in SDC.
156 - [SDC] YamlTemplateVfOnboardStep: Onboard vf described in YAML file in SDC.
- [SDC] YamlTemplateServiceOnboardStep: Onboard service described in YAML file
  in SDC.
159 - [AAI] RegisterCloudRegionStep: Register cloud region.
160 - [AAI] ComplexCreateStep: Create complex.
161 - [AAI] LinkCloudRegionToComplexStep: Connect cloud region with complex.
162 - [AAI] CustomerCreateStep: Create customer.
- [AAI] CustomerServiceSubscriptionCreateStep: Create customer's service
  subscription.
- [AAI] ConnectServiceSubToCloudRegionStep: Connect service subscription with
  cloud region.
- [SO] YamlTemplateServiceAlaCarteInstantiateStep: Instantiate service described
  in YAML using the SO a la carte method.
- [SO] YamlTemplateVnfAlaCarteInstantiateStep: Instantiate vnf described in YAML
  using the SO a la carte method.
- [SO] YamlTemplateVfModuleAlaCarteInstantiateStep: Instantiate VF module
  described in YAML using the SO a la carte method.
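The sequence above can be sketched as an ordered pipeline with a final cleanup
step, since the real test removes the instantiation references at the end of
each run. The step bodies here are simulated placeholders, not the real ONAP
API calls.

```python
import time

BASIC_VM_PIPELINE = (
    ("SDC", "VendorOnboardStep"),
    ("SDC", "YamlTemplateVspOnboardStep"),
    ("SDC", "YamlTemplateVfOnboardStep"),
    ("SDC", "YamlTemplateServiceOnboardStep"),
    ("AAI", "RegisterCloudRegionStep"),
    ("AAI", "ComplexCreateStep"),
    ("AAI", "LinkCloudRegionToComplexStep"),
    ("AAI", "CustomerCreateStep"),
    ("AAI", "CustomerServiceSubscriptionCreateStep"),
    ("AAI", "ConnectServiceSubToCloudRegionStep"),
    ("SO", "YamlTemplateServiceAlaCarteInstantiateStep"),
    ("SO", "YamlTemplateVnfAlaCarteInstantiateStep"),
    ("SO", "YamlTemplateVfModuleAlaCarteInstantiateStep"),
)

def basic_vm() -> dict:
    """Placeholder run: executes each step in order and always cleans up."""
    created = []
    try:
        for component, step in BASIC_VM_PIPELINE:
            time.sleep(0.001)  # stands in for the real API call
            created.append((component, step))
        return {"status": "PASS", "steps": len(created)}
    finally:
        # The references (module, VNF, service) are removed after every run,
        # so each iteration is expected to start from the same state.
        created.clear()
```

Because the cleanup runs after every iteration, each test starts from a
constant-size state, which is why the instantiation duration is expected to
stay almost flat over the campaign.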
10 instantiation attempts are performed simultaneously on the ONAP solution
for 24 hours.
176 The results can be described as follows:
178 .. image:: files/s3p/istanbul_instantiation_stability_10.png
181 .. csv-table:: S3P Instantiation stability results
182 :file: ./files/csv/s3p-instantiation.csv
The results are good, with a success rate above 95%. After 24h, more than 1300
VNFs have been created and deleted.
As for SDC, we can observe a linear increase of the test duration. This issue
has been reported since Guilin. For SDC, since it is not possible to delete
the models, the increase may be explained by the continuously growing model
database: the client has to retrieve an ever-larger list of models. For the
instantiations, however, this is not the case, as the references (module, VNF,
service) are cleaned at the end of each test and all the tests use the same
model. The duration of an instantiation test should therefore be almost
constant, which is not the case. Further investigations are needed.
The test has been executed with the mariadb-galera replicaset set to 1
(3 by default). With this configuration the results over 24h are very
good. When set to 3, the error rate is higher and after some hours
most of the instantiations fail.
However, even with the replicaset set to 1, a test on the Master weekly chain
showed that the system hits another limit after about 35h
(https://jira.onap.org/browse/SO-3791).