1 .. This work is licensed under a Creative Commons Attribution 4.0 International License.
2 .. http://creativecommons.org/licenses/by/4.0
10 ----------------------
14 - Heat/Helm/CDS models: `vFW_CNF_CDS Model`_
15 - Automation Scripts: `vFW_CNF_CDS Automation`_
This use case is a combination of the `vFW CDS Dublin`_ and `vFW EDGEX K8S`_ use cases. The aim is to continue improving Kubernetes-based network function (a.k.a. CNF) support in ONAP. The use case continues where `vFW EDGEX K8S`_ left off and brings CDS support into the picture, like `vFW CDS Dublin`_ did for the old vFW use case. The predecessor use case is also documented here: `vFW EDGEX K8S In ONAP Wiki`_.
This use case shows how to onboard Helm packages and instantiate them with the help of ONAP. The following improvements were made in the vFW CNF use case:
23 - vFW Kubernetes Helm charts support overrides (previously mostly hardcoded values)
- SDC accepts an onboarding package with many Helm packages, which allows keeping the decomposition of the service instance similar to `vFW CDS Dublin`_
- Compared to the `vFW EDGEX K8S`_ use case, the **MACRO** workflow in SO is used instead of the VNF a la carte workflow
- No VNF data preloading is used; instead, the resource-assignment feature of CDS is used
27 - CDS is used to resolve instantiation time parameters (Helm overrides)
- IP addresses are assigned with IPAM
29 - Unique names for resources with ONAP naming service
30 - CDS is used to create and upload **multicloud/k8s profile** as part of instantiation flow
- Combined all models (Heat, Helm, CBA) into the same git repository and created a single onboarding package `vFW_CNF_CDS Model`_
- The use case does not contain the Closed Loop part of the vFW demo.
34 All changes to related ONAP components and Use Case can be found in the following tickets:
**Since Guilin, ONAP supports Helm packages as native onboarding artifacts and SO orchestrates Helm packages natively, which brings significant advantages going forward. Also since this release, ONAP has the first mechanisms for monitoring the status of deployed CNF resources.**
The vFW CNF CDS use case shows how to instantiate multiple CNF instances in a similar way as VNFs, bringing CNFs closer to being first-class citizens in ONAP.
One of the biggest practical changes compared to the old demos (any ONAP demo) is that the whole network function content (user-provided content) is collected in one place, and more importantly in a git repository (`vFW_CNF_CDS Model`_) that provides version control, which is quite important. That is a very basic thing, but it addresses a common problem when running any ONAP demo: having to hunt for content across many different git repositories, with some files available only in the ONAP wiki.
The demo git directory also includes the `Data Dictionary`_ file (a CDS model-time resource).
Another founding idea from the start was to provide the complete content in a single onboarding package available directly from that git repository. This is not a revolutionary idea: it is the official package format ONAP supports, and all content for a single service is supposed to be in that same package, regardless of models, closed loops, configurations, etc.
The following table describes all the source models on which this demo is based.
=============== ================= ===========
Model           Git reference     Description
--------------- ----------------- -----------
Heat            `vFW_NextGen`_    Heat templates used in original vFW demo but split into multiple vf-modules
Helm            `vFW_Helm Model`_ Helm templates used in `vFW EDGEX K8S`_ demo
CDS model       `vFW CBA Model`_  CDS CBA model used in `vFW CDS Dublin`_ demo
=============== ================= ===========
.. note:: Since the Guilin release, `vFW_CNF_CDS Model`_ contains sources that allow modeling and instantiating the CNF both with the VNF/Heat orchestration approach (Frankfurt) and with the native Helm orchestration approach. Please follow the README.txt description and the documentation here to generate and select the appropriate onboarding package, which will leverage the corresponding SO orchestration path.
63 Modeling Onboarding Package/Helm
64 ................................
The starting point for this demo was a Helm package containing one Kubernetes application, see `vFW_Helm Model`_. In this demo we decided to follow the SDC/SO vf-module concept in the same way as the original vFW demo, which was split into multiple vf-modules instead of one (`vFW_NextGen`_). Accordingly, we split the Helm version of vFW into multiple Helm packages, each matching one dedicated vf-module.
The Guilin version of the `vFW_CNF_CDS Model`_ contains files required to create **VSP onboarding packages in two formats**: the **Dummy Heat** one (available already in the Frankfurt release), which associates each Helm package with dummy heat templates, and the **Native Helm** one, where each Helm package is standalone and consequently natively understood by SO. For both VSP variants, Helm packages are matched to the vf-module concept, so basically each Helm application is visible to ONAP as a separate vf-module after instantiation. The chosen onboarding format plays a **crucial** role in the orchestration approach applied later for Helm package instantiation. The **Dummy Heat** format results in orchestration through the **Openstack Adapter** component of SO, while **Native Helm** results in use of the **CNF Adapter**. Both approaches result in instantiation of the same CNF; however, the **Native Helm** approach will be enhanced in future releases, while the **Dummy Heat** approach will become deprecated.
The produced **Dummy Heat** VSP onboarding package (see `Creating Onboarding Package`_) has the following MANIFEST file (package_dummy/MANIFEST.json). The Helm package is delivered as a CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACT package through SDC and SO. Dummy heat templates are matched to Helm packages by the same file name prefix <vf_module_label>, which must be identical for the dummy Heat template and for the CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACT, like e.g. the *vpg* vf-module in the manifest file below. The name of the CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACT artifact is predefined and needs to match the pattern: <vf_module_label>_cloudtech_k8s_charts.tgz. More examples can be found in the `Modeling Onboarding Package/Helm`_ section.
75 "name": "virtualFirewall",
80 "type": "CONTROLLER_BLUEPRINT_ARCHIVE"
83 "file": "base_template.yaml",
88 "file": "base_template.env",
94 "file": "base_template_cloudtech_k8s_charts.tgz",
95 "type": "CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACT"
109 "file": "vfw_cloudtech_k8s_charts.tgz",
110 "type": "CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACT"
124 "file": "vpkg_cloudtech_k8s_charts.tgz",
125 "type": "CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACT"
139 "file": "vsn_cloudtech_k8s_charts.tgz",
140 "type": "CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACT"
The produced **Native Helm** VSP onboarding package (see `Creating Onboarding Package`_) has the following MANIFEST file (package_native/MANIFEST.json). The Helm package is delivered as a HELM package through SDC and SO. The *isBase* flag of a HELM artifact is ignored by SDC, but in the manifest one HELM or HEAT artifact must be defined with isBase = true. If both HEAT and HELM artifacts are present in the same manifest file, the base one must always be one of the HELM artifacts. Moreover, the name of a HELM type artifact must match the pattern *helm_<some_name>*, and HEAT type artifacts, if present in the same manifest, cannot contain the keyword *helm*. These limitations are a consequence of the current limitations of the SDC onboarding and VSP validation engine and will be addressed in future releases.
150 "name": "virtualFirewall",
155 "type": "CONTROLLER_BLUEPRINT_ARCHIVE"
158 "file": "helm_base_template.tgz",
163 "file": "helm_vfw.tgz",
168 "file": "helm_vpkg.tgz",
173 "file": "helm_vsn.tgz",
.. note:: The CDS model (CBA package) is delivered as the SDC-native type CONTROLLER_BLUEPRINT_ARCHIVE, but a current limitation of VSP onboarding forces the use of the artifact name *CBA.zip* so that the CBA is automatically recognized as a CONTROLLER_BLUEPRINT_ARCHIVE.
Creating the CDS model was the core of the use case work and also the most difficult and time-consuming part. The current template used by the use case should be easily reusable by anyone. Once the CDS GUI is fully working, we expect CBA development to become much easier. For CBA structure reference, please visit its documentation page `CDS Documentation`_.
At first the target was to keep the CDS model as close as possible to the `vFW_CNF_CDS Model`_ use case model and only add the smallest possible changes to also enable k8s usage. That is still the target, but in practice the model has already deviated from the original one, and time pressure pushed us to stop keeping them in sync. The end result could be streamlined considerably if it only had to be the smallest possible model working for K8S-based network functions.
As the K8S application was split into multiple Helm packages to match the vf-modules, CBA modeling follows the same pattern and each vf-module has its own template in the CBA package. The list of artifacts with the templates differs between the **Dummy Heat** and **Native Helm** approaches: in the latter, artifact names start with the *helm_* prefix, in the same way as the artifact names in the MANIFEST file of the VSP differ. The **Dummy Heat** artifacts' list is the following:
194 "base_template-template" : {
195 "type" : "artifact-template-velocity",
196 "file" : "Templates/base_template-template.vtl"
198 "base_template-mapping" : {
199 "type" : "artifact-mapping-resource",
200 "file" : "Templates/base_template-mapping.json"
203 "type" : "artifact-template-velocity",
204 "file" : "Templates/vpkg-template.vtl"
207 "type" : "artifact-mapping-resource",
208 "file" : "Templates/vpkg-mapping.json"
211 "type" : "artifact-template-velocity",
212 "file" : "Templates/vfw-template.vtl"
215 "type" : "artifact-mapping-resource",
216 "file" : "Templates/vfw-mapping.json"
219 "type" : "artifact-template-velocity",
220 "file" : "Templates/vnf-template.vtl"
223 "type" : "artifact-mapping-resource",
224 "file" : "Templates/vnf-mapping.json"
227 "type" : "artifact-template-velocity",
228 "file" : "Templates/vsn-template.vtl"
231 "type" : "artifact-mapping-resource",
232 "file" : "Templates/vsn-mapping.json"
The **Native Helm** artifacts' list is the following:
241 "helm_base_template-template" : {
242 "type" : "artifact-template-velocity",
243 "file" : "Templates/base_template-template.vtl"
245 "helm_base_template-mapping" : {
246 "type" : "artifact-mapping-resource",
247 "file" : "Templates/base_template-mapping.json"
249 "helm_vpkg-template" : {
250 "type" : "artifact-template-velocity",
251 "file" : "Templates/vpkg-template.vtl"
253 "helm_vpkg-mapping" : {
254 "type" : "artifact-mapping-resource",
255 "file" : "Templates/vpkg-mapping.json"
257 "helm_vfw-template" : {
258 "type" : "artifact-template-velocity",
259 "file" : "Templates/vfw-template.vtl"
261 "helm_vfw-mapping" : {
262 "type" : "artifact-mapping-resource",
263 "file" : "Templates/vfw-mapping.json"
266 "type" : "artifact-template-velocity",
267 "file" : "Templates/vnf-template.vtl"
270 "type" : "artifact-mapping-resource",
271 "file" : "Templates/vnf-mapping.json"
273 "helm_vsn-template" : {
274 "type" : "artifact-template-velocity",
275 "file" : "Templates/vsn-template.vtl"
277 "helm_vsn-mapping" : {
278 "type" : "artifact-mapping-resource",
279 "file" : "Templates/vsn-mapping.json"
Only the **resource-assignment** workflow of the CBA model is utilized in this demo. If the final CBA model also contains the **config-deploy** workflow, it is there just to keep parity with the original vFW CBA (for VMs). The same applies to the related template *Templates/nf-params-template.vtl* and its mapping file.
Another advance of the presented use case over the solution presented in the Dublin release is the possibility of automatically generating the RB profile content and uploading it to the multicloud/k8s plugin.
An RB profile can be used to enrich or modify the content of the original Helm package. A profile can also be used to add extra k8s Helm templates for Helm installation, or to
modify the existing k8s Helm templates for each created CNF instance. It opens another level of CNF customization, well beyond customizing the Helm package with override values.
values: "override_values.yaml"
296 - filepath: resources/deployment.yaml
297 chartpath: templates/deployment.yaml
Above is an example manifest file of the RB profile. Since Frankfurt, the *override_values.yaml* file does not need to be used, as instantiation values are passed to the plugin over the Instance API of the k8s plugin. In the example, the profile contains an additional k8s Helm template which will be added on demand to the Helm package during its installation. In our case, depending on the SO instantiation request input parameters, the vPGN Helm package can be enriched with an additional ssh service. Such a service is dynamically added to the profile by CDS, and later CDS uploads the whole custom RB profile to the multicloud/k8s plugin.
In order to support generation and upload of the profile, our vFW CBA model has an enhanced **resource-assignment** workflow which contains an additional step: **profile-upload**. It leverages dedicated functionality introduced in the Guilin release that can be used to upload a predefined profile or to generate and upload the profile content with the Velocity templating mechanism.
306 "resource-assignment": {
308 "resource-assignment": {
309 "description": "Resource Assign Workflow",
310 "target": "resource-assignment",
313 "call_operation": "ResourceResolutionComponent.process"
321 "description": "Generate and upload K8s Profile",
322 "target": "k8s-profile-upload",
325 "call_operation": "ComponentScriptExecutor.process"
.. note:: In the Frankfurt release, profile upload was implemented as a custom Kotlin script included in the CBA, responsible for uploading the K8S profile into the multicloud/k8s plugin. It is still a good example of the integration of Kotlin scripting into the CBA. For those interested in this functionality, we recommend looking into the `Frankfurt CBA Definition`_ and `Frankfurt CBA Script`_.
In our example, for the vPKG Helm package we may select the *vfw-cnf-cds-vpkg-profile* profile, which is included in the CBA as a folder. The profile generation step uses the Velocity template processing functionality embedded in CDS and resolves, on its basis, the ssh port number (specified in the SO request as *vpg-management-port*).
338 "name": "vpg-management-port",
340 "description": "The number of node port for ssh service of vpg",
344 "input-param": false,
345 "dictionary-name": "vpg-management-port",
346 "dictionary-source": "default",
*vpg-management-port* can be included directly in the Helm template, and such a template will be included in the vPKG Helm package at the time of its instantiation.
357 name: {{ .Values.vpg_name_0 }}-ssh-access
359 vnf-name: {{ .Values.vnf_name }}
360 vf-module-name: {{ .Values.vpg_name_0 }}
361 release: {{ .Release.Name }}
362 chart: {{ .Chart.Name }}
367 nodePort: ${vpg-management-port}
369 vf-module-name: {{ .Values.vpg_name_0 }}
370 release: {{ .Release.Name }}
371 chart: {{ .Chart.Name }}
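To illustrate what the templating step does with the ``${vpg-management-port}`` placeholder above, here is a minimal sketch. Real CDS uses the Velocity engine; the regex stand-in below only covers the simple ``${name}`` form used in this template, and the resolved port value is an example:

.. code-block:: python

    import re

    # Parameters resolved by CDS during resource assignment (example value)
    resolved = {"vpg-management-port": "30431"}

    with open("ssh-service-template.yaml.vtl") as f:
        template = f.read()

    # Substitute ${...} placeholders with their resolved values
    rendered = re.sub(r"\$\{([\w-]+)\}",
                      lambda m: resolved.get(m.group(1), m.group(0)),
                      template)

    # CDS drops the .vtl extension from the file name after templating
    with open("ssh-service.yaml", "w") as f:
        f.write(rendered)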
The mechanism of profile generation and upload requires a specific node template in the CBA definition. In our case it comes with the declaration of two profiles: a static one, *vfw-cnf-cds-base-profile*, in the form of an archive, and a complex one, *vfw-cnf-cds-vpkg-profile*, in the form of a folder for processing and profile generation.
378 "k8s-profile-upload": {
379 "type": "component-k8s-profile-upload",
381 "K8sProfileUploadComponent": {
385 "artifact-prefix-names": {
386 "get_input": "template-prefix"
388 "resource-assignment-map": {
390 "resource-assignment",
400 "vfw-cnf-cds-base-profile": {
401 "type": "artifact-k8sprofile-content",
402 "file": "Templates/k8s-profiles/vfw-cnf-cds-base-profile.tar.gz"
404 "vfw-cnf-cds-vpkg-profile": {
405 "type": "artifact-k8sprofile-content",
406 "file": "Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile"
408 "vfw-cnf-cds-vpkg-profile-mapping": {
409 "type": "artifact-mapping-resource",
410 "file": "Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/ssh-service-mapping.json"
The artifact file determines the location of the static profile or of the complex profile content. In the latter case we need a pair: a profile folder and a mapping file with a declaration of the parameters that CDS needs to resolve first, before the Velocity templating is applied to the .vtl files present in the profile content. After Velocity templating, the .vtl extensions are dropped from the file names. The embedded mechanism includes in the profile only the files present in the profile MANIFEST file, which needs to contain the list of final names of the files to be included in the profile. The figure below shows the idea of profile templating.
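As a rough sketch of the MANIFEST-driven packaging just described: after templating, only the manifest itself plus the files it references (under their final, .vtl-less names) end up in the profile archive. The manifest layout is taken from the RB profile excerpt shown earlier; the archive name and helper logic are illustrative only:

.. code-block:: python

    import tarfile

    import yaml  # PyYAML, assumed to be available

    profile_dir = "vfw-cnf-cds-vpkg-profile"
    with open(f"{profile_dir}/manifest.yaml") as f:
        manifest = yaml.safe_load(f)

    # Collect the final file names referenced by the manifest
    files = ["manifest.yaml"]
    rb_type = manifest.get("type", {})
    if rb_type.get("values"):
        files.append(rb_type["values"])
    files += [entry["filepath"] for entry in rb_type.get("configresource", [])]

    # Pack only those files into the profile archive to be uploaded
    with tarfile.open("profile.tar.gz", "w:gz") as tar:
        for name in files:
            tar.add(f"{profile_dir}/{name}", arcname=name)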
417 .. figure:: files/vFW_CNF_CDS/profile-templating.png
420 K8s Profile Templating
For instantiation, SO requires the name of the profile in the parameter *k8s-rb-profile-name*. The *component-k8s-profile-upload* that stands behind the profile uploading mechanism has input parameters that can be passed directly (checked first) or taken from the *resource-assignment-map* parameter, which can come from an associated *component-resource-resolution* result; in our case their values are resolved during vf-module level resource assignment. The *component-k8s-profile-upload* inputs are the following (an illustrative sketch follows the list):
- k8s-rb-profile-name – (mandatory) the name of the profile under which it will be created in the k8s plugin; the other parameters are required only when the profile must be uploaded
- k8s-rb-definition-name – the name under which the RB definition was created - **VF Module Model Invariant ID** in ONAP
- k8s-rb-definition-version – the version of the created RB definition - **VF Module Model Version ID** in ONAP
- k8s-rb-profile-namespace – the k8s namespace name associated with the profile being created
- k8s-rb-profile-source – the source of the profile content - the name of the profile artifact
- resource-assignment-map – the result of the associated resource assignment step
- artifact-prefix-names – (mandatory) the list of artifact prefixes, as for the resource-assignment step
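For illustration, the inputs above could be resolved for our vPKG module roughly as follows. All values here are placeholders or examples from this use case, not a fixed API:

.. code-block:: python

    # Illustrative component-k8s-profile-upload inputs for the vPKG module
    k8s_profile_upload_inputs = {
        "k8s-rb-profile-name": "vfw-cnf-cds-vpkg-profile",      # mandatory
        "k8s-rb-definition-name": "<vf-module model invariant id>",
        "k8s-rb-definition-version": "<vf-module model version id>",
        "k8s-rb-profile-namespace": "default",
        "k8s-rb-profile-source": "vfw-cnf-cds-vpkg-profile",    # artifact name
        "resource-assignment-map": {},  # filled from resource-assignment
        "artifact-prefix-names": ["vpkg"],                      # mandatory
    }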
In the SO request, the user can pass the parameter *k8s-rb-profile-name*, which in our case may have the value *vfw-cnf-cds-base-profile*, *vfw-cnf-cds-vpkg-profile* or *default*. The *default* profile does not contain any content and allows instantiation of the CNF without the need to define and upload any additional profiles. *vfw-cnf-cds-vpkg-profile* has been prepared to test the instantiation of the second, modified vFW CNF instance `Second Service Instance Instantiation`_.
The k8splugin allows specifying override parameters (similar to the --set behavior of the helm client) for instantiated resource bundles. This provides dynamic parameters to instantiated resources without the need to create new profiles for this purpose; it is intended for the *default* profile but may also be used with custom profiles. The overall flow of Helm override parameter processing is shown in the following figure.
436 .. figure:: files/vFW_CNF_CDS/helm-overrides.png
439 The overall flow of helm overrides
Finally, the `Data Dictionary`_ is also included in the demo git directory, so re-modeling and making changes to the model utilizing CDS model time / runtime is easier, as the DD used is also known.
.. note:: The CBA for this use case is already enriched, so there is no need to perform the enrichment process for it. It is also automatically uploaded into CDS at the time of model distribution from SDC.
445 Instantiation Overview
446 ----------------------
.. note:: Since the Guilin release, the use case is equipped with an automated method **<AUTOMATED>** using python scripts, replacing the Postman method **<MANUAL>** used in Frankfurt. The Postman collection is still useful for understanding the entire process, but it should be used **separately** from the automation scripts. **For the entire process use only the scripts or only the Postman collection**. Both options are described in the further steps of this instruction.
The figure below shows all the interactions that take place during vFW CNF instantiation. It does not describe the flow of actions (ordered steps) but rather the component dependencies.
452 .. figure:: files/vFW_CNF_CDS/Instantiation_topology.png
455 vFW CNF CDS Use Case Runtime interactions.
457 PART 1 - ONAP Installation
458 ~~~~~~~~~~~~~~~~~~~~~~~~~~
460 1-1 Deployment components
461 .........................
In order to run the vFW_CNF_CDS use case, we need the ONAP Guilin Release (or later) with at least the following components:
======================================================= ===========
ONAP Component name                                     Description
------------------------------------------------------- -----------
AAI                                                     Required for Inventory Cloud Owner, Customer, Owning Entity, Service, Generic VNF, VF Module
SDC                                                     VSP, VF and Service Modeling of the CNF
DMAAP                                                   Distribution of the onboarding package including CBA to all ONAP components
SO                                                      Required for Macro Orchestration using the generic building blocks
CDS                                                     Resolution of cloud parameters including Helm override parameters for the CNF. Creation of the multicloud/k8s profile for CNF instantiation.
SDNC (needs to include netbox and Naming Generation mS) Provides GENERIC-RESOURCE-API for cloud Instantiation orchestration via CDS.
Policy                                                  Used to Store Naming Policy
AAF                                                     Used for Authentication and Authorization of requests
Portal                                                  Required to access SDC.
MSB                                                     Exposes multicloud interfaces used by SO.
Multicloud                                              K8S plugin part used to pass SO instantiation requests to external Kubernetes cloud region.
Contrib                                                 Chart containing multiple external components. Out of those, we only use the Netbox utility in this use case for IPAM.
Robot                                                   Optional. Can be used for running automated tasks, like provisioning cloud customer, cloud region, service subscription, etc.
Shared Cassandra DB                                     Used as a shared storage for ONAP components that rely on Cassandra DB, like AAI
Shared Maria DB                                         Used as a shared storage for ONAP components that rely on Maria DB, like SDNC and SO
======================================================= ===========
In order to deploy such an instance, follow the `ONAP Deployment Guide`_.

As described in the guide, an override file can be used to customize the ONAP deployment without modifying the OOM folder. You can use the override file below, which enables the necessary components mentioned above.

The **override.yaml** file sets enabled: true for each component needed in the demo (by default all components are disabled).
Then deploy ONAP with Helm, using your override file.
531 helm deploy onap local/onap --namespace onap -f ~/override.yaml
In case redeployment is needed, `Helm Healer`_ can be a faster and more convenient way to redeploy.
537 helm-healer.sh -n onap -f ~/override.yaml -s /dockerdata-nfs --delete-all
Or redeploy (a clean re-deploy where data is also removed) just the wanted components (Helm releases), cds in this example.
543 helm-healer.sh -f ~/override.yaml -s /dockerdata-nfs/ -n onap -c onap-cds
There are many instructions in the ONAP wiki on how to follow your deployment status and whether it succeeded or not, mostly using Robot health checks. One way we used is to skip the outermost Robot wrapper and use ete-k8s.sh directly, to be able to select the checked components easily. The script is found in the OOM git repository: *oom/kubernetes/robot/ete-k8s.sh*.
551 for comp in {aaf,aai,dmaap,msb,multicloud,policy,portal,sdc,sdnc,so}; do
552 if ! ./ete-k8s.sh onap health-$comp; then
556 if [ -n "$failed" ]; then
557 echo "These components failed: $failed"
560 echo "Healthcheck successful"
And check the status of pods, deployments, jobs, etc.
568 kubectl -n onap get pods | grep -vie 'completed' -e 'running'
569 kubectl -n onap get deploy,sts,jobs
575 After completing the first part above, we should have a functional ONAP deployment for the Guilin Release.
577 We will need to apply a few modifications to the deployed ONAP Guilin instance in order to run the use case.
579 Retrieving logins and passwords of ONAP components
580 ++++++++++++++++++++++++++++++++++++++++++++++++++
Since the Frankfurt release, hardcoded passwords have mostly been removed and it is possible to configure the passwords of ONAP components at installation time. These passwords, with their associated logins, can be retrieved with kubectl. Below is the procedure, using the mariadb-galera DB component as an example.
586 kubectl get secret `kubectl get secrets | grep mariadb-galera-db-root-password | awk '{print $1}'` -o jsonpath="{.data.login}" | base64 --decode
587 kubectl get secret `kubectl get secrets | grep mariadb-galera-db-root-password | awk '{print $1}'` -o jsonpath="{.data.password}" | base64 --decode
In this case the login is empty, as the secret is dedicated to the root user.
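The same lookup can be scripted with the Kubernetes Python client. This is a sketch only, assuming a reachable kubeconfig and the onap namespace:

.. code-block:: python

    import base64

    from kubernetes import client, config  # pip install kubernetes (assumed)

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Find the mariadb-galera root-password secret and decode its fields
    for secret in v1.list_namespaced_secret("onap").items:
        if "mariadb-galera-db-root-password" in secret.metadata.name:
            data = secret.data or {}
            login = base64.b64decode(data.get("login", "")).decode()
            password = base64.b64decode(data.get("password", "")).decode()
            print(secret.metadata.name, login, password)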
592 Postman collection setup
593 ++++++++++++++++++++++++
In this demo we have intentionally implemented all the manual ONAP preparation steps (which in real life are automated) using Postman, so it is clear what exactly is needed. Some of the steps, like AAI population, are automated by Robot scripts in other ONAP demos (**./demo-k8s.sh onap init**), and the Robot scripts could also be used for many parts of this demo.
The Postman collection is also used to trigger instantiation using SO APIs.

The following steps are needed to set up Postman:
601 - Import this Postman collection zip
603 :download:`Postman collection <files/vFW_CNF_CDS/postman.zip>`
- Extract the zip and import the Postman collection into Postman. The environment file is provided for reference; it is better to create your own environment, providing the variables listed in the next chapter.
606 - `vFW_CNF_CDS.postman_collection.json`
607 - `vFW_CNF_CDS.postman_environment.json`
- For use case debugging purposes, to get external access from outside the Kubernetes cluster to the SO CatalogDB (GET operations only), change the SO CatalogDB service type to NodePort instead of ClusterIP. You may also create your own separate NodePort if you wish, but here we have just edited the service directly with kubectl.
613 kubectl -n onap edit svc so-catalog-db-adapter
614 - .spec.type: ClusterIP
615 + .spec.type: NodePort
616 + .spec.ports[0].nodePort: 30120
.. note:: The port number 30120 is used in the included Postman collection
- After SDC distribution, you may also want to verify that the CBA has been correctly delivered to CDS. Relevant calls for this are described later in this doc; however, since Frankfurt, CDS does not expose the blueprints-processor service as a NodePort. This is OPTIONAL, but if you would like to use these calls later, you need to expose the service in a similar way as so-catalog-db-adapter above:
624 kubectl edit -n onap svc cds-blueprints-processor-http
625 - .spec.type: ClusterIP
626 + .spec.type: NodePort
627 + .spec.ports[0].nodePort: 30499
.. note:: The port number 30499 is used in the included Postman collection
631 **Postman variables:**
Most of the Postman variables are automated by the Postman scripts and the provided environment file, but there are a few mandatory variables the user must fill in.
===================== ===================
Variable              Description
--------------------- -------------------
k8s                   ONAP Kubernetes host
sdnc_port             port of sdnc service for accessing MDSAL
service-name          name of service as defined in SDC
service-version       version of service defined in SDC (if service wasn't updated, it should be set to "1.0")
service-instance-name name of instantiated service (if ending with -{num}, will be autoincremented for each instantiation request)
===================== ===================
645 You can get the sdnc_port value with
649 kubectl -n onap get svc sdnc -o json | jq '.spec.ports[]|select(.port==8282).nodePort'
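If you prefer Python over kubectl/jq, here is a sketch of the equivalent lookup with the Kubernetes Python client:

.. code-block:: python

    from kubernetes import client, config  # pip install kubernetes (assumed)

    config.load_kube_config()
    svc = client.CoreV1Api().read_namespaced_service("sdnc", "onap")

    # Pick the NodePort that fronts SDNC's port 8282
    sdnc_port = next(p.node_port for p in svc.spec.ports if p.port == 8282)
    print(sdnc_port)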
651 Automation Environment Setup
652 ............................
The whole content of this use case is stored in a single git repository, which contains both the required onboarding information and the automation scripts for onboarding and instantiation of the use case.
658 git clone --single-branch --branch guilin "https://gerrit.onap.org/r/demo"
659 cd demo/heat/vFW_CNF_CDS/templates
In order to prepare the environment for onboarding and instantiation of the use case, make sure you have the *git*, *make*, *helm* and *pipenv* applications installed.

The automation scripts are based on the `Python SDK`_ and are adapted to automate the process of service onboarding, instantiation, deletion and cloud region registration. To configure them for further use:
667 cd demo/heat/vFW_CNF_CDS/automation
669 1. Install required packages with
pipenv install
674 2. Run virtual python environment
3. Add kubeconfig files: one for the ONAP cluster and one for the k8s cluster that will host the vFW

.. note:: Both files can be configured after creation of the k8s cluster for the vFW instance `2-1 Installation of Managed Kubernetes`_. Make sure that they have the external IP address configured properly. If any cluster uses self-signed certificates, set also the *insecure-skip-tls-verify* flag in the config file.
- artifacts/cluster_kubeconfig - IP address must be reachable by ONAP pods, especially the *multicloud-k8s* pod
685 - artifacts/onap_kubeconfig - IP address must be reachable by automation scripts
687 4. Modify config.py file
689 - NATIVE - when enabled **Native Helm** path will be used, otherwise **Dummy Heat** path will be used
690 - CLOUD_REGION - name of your k8s cluster from ONAP perspective
691 - GLOBAL_CUSTOMER_ID - identifier of customer in ONAP
692 - VENDOR - name of the Vendor in ONAP
693 - SERVICENAME - **Name of your service model in SDC**
- CUSTOMER_RESOURCE_DEFINITIONS - the list of CRDs to be installed on a non-KUD k8s cluster - should be used only when using some non-KUD cluster, e.g. the ONAP one, to test instantiation of the Helm package. For KUD it should be an empty list
.. note:: For the automation scripts it is necessary to modify only the NATIVE and SERVICENAME constants. Other constants may be modified if needed.
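For illustration, a minimal *config.py* along the lines described above could look as follows. The constant names come from the list; all values are placeholders or examples from this use case and must be adjusted to your environment:

.. code-block:: python

    NATIVE = True                       # Native Helm path (False = Dummy Heat)
    CLOUD_REGION = "k8sregionfour"
    GLOBAL_CUSTOMER_ID = "customer_cnf"
    VENDOR = "vendor_cnf"
    SERVICENAME = "vfw_cnf_cds_svc"     # must match your SDC service model name
    CUSTOMER_RESOURCE_DEFINITIONS = []  # keep empty for a KUD cluster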
Some basic entries are needed in ONAP AAI. These entries are needed once per ONAP installation and do not need to be repeated when running multiple demos based on the same definitions.

Create all these entries in AAI in this order. The Postman collection provided in this demo can be used for creating each entry.
708 Postman -> Initial ONAP setup -> Create
711 - Create Owning-entity
714 - Create Line Of Business
Corresponding GET operations in the "Check" folder in Postman can be used to verify the entries created. The Postman collection also includes some code that tests/verifies basic issues, e.g. it gives an error if an entry already exists.
720 This step is performed jointly with onboarding step `3-1 Onboarding`_
A naming policy is needed to generate unique names for all instance-time resources that are to be modeled with the naming policy. Those are normally VNF, VNFC and VF-module names, network names, etc. Naming is a general ONAP feature and not limited to this use case.

This use case leverages the default ONAP naming policy - "SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP".
To check that the naming policy is created and pushed properly, we can run the command below from inside any ONAP pod.
732 curl --silent -k --user 'healthcheck:zb!XztG34' -X GET "https://policy-api:6969/policy/api/v1/policytypes/onap.policies.Naming/versions/1.0.0/policies/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP/versions/1.0.0"
.. note:: Please adjust the credentials according to your installation. The required credentials can be retrieved with the instruction `Retrieving logins and passwords of ONAP components`_
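For reference, here is a Python equivalent of the curl check above (a sketch only; run it where *policy-api* is resolvable and substitute the credentials retrieved for your installation):

.. code-block:: python

    import requests  # assumed to be available

    resp = requests.get(
        "https://policy-api:6969/policy/api/v1/policytypes"
        "/onap.policies.Naming/versions/1.0.0/policies"
        "/SDNC_Policy.ONAP_NF_NAMING_TIMESTAMP/versions/1.0.0",
        auth=("healthcheck", "zb!XztG34"),  # replace with your credentials
        verify=False,  # the demo endpoint uses a self-signed certificate
    )
    print(resp.status_code)
    print(resp.json())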
736 PART 2 - Installation of managed Kubernetes cluster
737 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In this demo the target cloud region is a Kubernetes cluster of your choice, basically just like with Openstack. The ONAP platform is a bit too hard-wired to Openstack, and that is visible in many demos.
741 2-1 Installation of Managed Kubernetes
742 ......................................
In this demo we use the Kubernetes deployment used by the ONAP multicloud/k8s team to test their plugin features, see `KUD github`_. There are also some outdated instructions in the ONAP wiki: `KUD in Wiki`_.
The KUD deployment is fully automated and also used in ONAP's CI/CD to automatically verify all `Multicloud k8s gerrit`_ commits (see `KUD Jenkins ci/cd verification`_), which is quite a good (and rare) level of automated integration testing in ONAP. The KUD deployment is used because its installation is automated and it also includes a bunch of Kubernetes plugins used to test various k8s plugin features. In addition to the deployment, the KUD repository also contains test scripts to automatically test multicloud/k8s plugin features. Those scripts are run in CI/CD.
See `KUD subproject in github`_ for a list of additional plugins this Kubernetes deployment has. In this demo the tested CNF depends on the following plugins:
Follow the instructions in `KUD github`_ and install the target Kubernetes cluster on your favorite machine(s), the simplest setup being just one machine. Your cluster node(s) need to be accessible from the ONAP Kubernetes nodes. Make sure your installed *pip* is of **version < 21.0**; version 21 does not support python 2.7, which is used by the *aio.sh* script. Also, to avoid performance problems of your k8s cluster, make sure you install only the necessary plugins, and before running the *aio.sh* script execute the following command:
757 export KUD_ADDONS="virtlet ovn4nfv"
759 2-2 Cloud Registration
760 ......................
The managed Kubernetes cluster is registered here into ONAP as one cloud region. This is obviously done just once for this particular cloud. Cloud registration information is kept in AAI.
The Postman collection has a folder/entry for each step. Execute them in this order.
769 Postman -> K8s Cloud Region Registration -> Create
772 - Create Cloud Region
773 - Create Complex-Cloud Region Relationship
775 - Create Service Subscription
776 - Create Cloud Tenant
777 - Create Availability Zone
778 - Upload Connectivity Info
780 .. note:: For "Upload Connectivity Info" call you need to provide kubeconfig file of existing KUD cluster. You can find that kubeconfig on deployed KUD in the directory `~/.kube/config` and this file can be easily copied e.g. via SCP. Please ensure that kubeconfig contains external IP of K8s cluster in kubeconfig and correct it, if it's not.
The SO database needs to be (manually) modified for SO to know that this particular cloud region is to be handled by multicloud. The values we insert obviously need to match the ones we populated into AAI.
.. note:: Please adjust the credentials according to your installation. The required credentials can be retrieved with the instruction `Retrieving logins and passwords of ONAP components`_
788 kubectl -n onap exec onap-mariadb-galera-0 -it -- mysql -uroot -psecretpassword -D catalogdb
789 select * from cloud_sites;
790 insert into cloud_sites(ID, REGION_ID, IDENTITY_SERVICE_ID, CLOUD_VERSION, CLLI, ORCHESTRATOR) values("k8sregionfour", "k8sregionfour", "DEFAULT_KEYSTONE", "2.5", "clli2", "multicloud");
791 select * from cloud_sites;
.. note:: The configuration of the new k8s cloud site is also documented here: `K8s cloud site config`_
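The same cloud_sites insert can also be done from Python instead of the mysql CLI. A sketch, assuming the *pymysql* package and that the host and credentials match your installation (see the note on retrieving logins and passwords):

.. code-block:: python

    import pymysql  # pip install pymysql (assumed)

    conn = pymysql.connect(host="mariadb-galera", user="root",
                           password="secretpassword", database="catalogdb")
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO cloud_sites (ID, REGION_ID, IDENTITY_SERVICE_ID,"
            " CLOUD_VERSION, CLLI, ORCHESTRATOR)"
            " VALUES (%s, %s, %s, %s, %s, %s)",
            ("k8sregionfour", "k8sregionfour", "DEFAULT_KEYSTONE",
             "2.5", "clli2", "multicloud"))
    conn.commit()
    conn.close()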
Please copy the kubeconfig file of the existing KUD cluster to the automation/artifacts/cluster_kubeconfig location `Automation Environment Setup`_ - step **3**. You can find that kubeconfig on the deployed KUD in the directory `~/.kube/config`, and this file can be easily copied e.g. via SCP. Please ensure that the kubeconfig contains the external IP of the K8s cluster, and correct it if it does not.
802 python create_k8s_region.py
804 PART 3 - Execution of the Use Case
805 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
807 This part contains all the steps to run the use case by using ONAP GUIs, Postman or Python automation scripts.
The following pictures describe the overall sequential flow of the use case in two scenarios: the **Dummy Heat** path (with the Openstack adapter) and the **Native Helm** path (with the CNF Adapter).
811 .. figure:: files/vFW_CNF_CDS/Dummy_Heat_Flow.png
814 vFW CNF CDS Use Case sequence flow for *Dummy Heat* (Frankfurt) path.
816 .. figure:: files/vFW_CNF_CDS/Native_Helm_Flow.png
819 vFW CNF CDS Use Case sequence flow for *Native Helm* (Guilin) path.
.. note:: The **Native Helm** path has identified defects in the instantiation process and requires SO images in version 1.7.11 for successful instantiation of the CNF. Please monitor the `SO-3403`_ and `SO-3404`_ tickets to make sure that the necessary fixes have been delivered and that 1.7.11 SO images are available in your Guilin ONAP instance.
.. note:: Make sure you have performed the `Automation Environment Setup`_ steps before performing the actions here.
829 Creating Onboarding Package
830 +++++++++++++++++++++++++++
The content of the onboarding package can be created with the provided Makefile in the *templates* folder.

The complete content of both onboarding packages, for **Dummy Heat** and **Native Helm**, is packaged into the following VSP onboarding package files:
836 - **Dummy Heat** path: **vfw_k8s_demo.zip**
838 - **Native Helm** path: **native_vfw_k8s_demo.zip**
.. note:: The procedure requires the *make* and *helm* applications to be installed
844 git clone --single-branch --branch guilin "https://gerrit.onap.org/r/demo"
845 cd demo/heat/vFW_CNF_CDS/templates
The result of the make operation execution is the following:
852 make[1]: Entering directory '/mnt/c/Users/advnet/Desktop/SOURCES/demo/heat/vFW_CNF_CDS/templates'
853 rm -rf package_dummy/
854 rm -rf package_native/
856 rm -f vfw_k8s_demo.zip
857 rm -f native_vfw_k8s_demo.zip
858 make[1]: Leaving directory '/mnt/c/Users/advnet/Desktop/SOURCES/demo/heat/vFW_CNF_CDS/templates'
860 make[1]: Entering directory '/mnt/c/Users/advnet/Desktop/SOURCES/demo/heat/vFW_CNF_CDS/templates'
862 mkdir package_native/
864 make[2]: Entering directory '/mnt/c/Users/advnet/Desktop/SOURCES/demo/heat/vFW_CNF_CDS/templates/helm'
865 rm -f base_template-*.tgz
866 rm -f helm_base_template.tgz
867 rm -f base_template_cloudtech_k8s_charts.tgz
868 helm package base_template
869 Successfully packaged chart and saved it to: /mnt/c/Users/advnet/Desktop/SOURCES/demo/heat/vFW_CNF_CDS/templates/helm/base_template-0.2.0.tgz
870 mv base_template-*.tgz helm_base_template.tgz
871 cp helm_base_template.tgz base_template_cloudtech_k8s_charts.tgz
874 rm -f vpkg_cloudtech_k8s_charts.tgz
876 Successfully packaged chart and saved it to: /mnt/c/Users/advnet/Desktop/SOURCES/demo/heat/vFW_CNF_CDS/templates/helm/vpkg-0.2.0.tgz
877 mv vpkg-*.tgz helm_vpkg.tgz
878 cp helm_vpkg.tgz vpkg_cloudtech_k8s_charts.tgz
881 rm -f vfw_cloudtech_k8s_charts.tgz
883 Successfully packaged chart and saved it to: /mnt/c/Users/advnet/Desktop/SOURCES/demo/heat/vFW_CNF_CDS/templates/helm/vfw-0.2.0.tgz
884 mv vfw-*.tgz helm_vfw.tgz
885 cp helm_vfw.tgz vfw_cloudtech_k8s_charts.tgz
888 rm -f vsn_cloudtech_k8s_charts.tgz
890 Successfully packaged chart and saved it to: /mnt/c/Users/advnet/Desktop/SOURCES/demo/heat/vFW_CNF_CDS/templates/helm/vsn-0.2.0.tgz
891 mv vsn-*.tgz helm_vsn.tgz
892 cp helm_vsn.tgz vsn_cloudtech_k8s_charts.tgz
893 make[2]: Leaving directory '/mnt/c/Users/advnet/Desktop/SOURCES/demo/heat/vFW_CNF_CDS/templates/helm'
894 mv helm/helm_*.tgz package_native/
895 mv helm/*.tgz package_dummy/
896 cp base_dummy/* package_dummy/
897 cp base_native/* package_native/
899 sed -i 's/"helm_/"/g' cba_dummy/Definitions/vFW_CNF_CDS.json
900 cd cba_dummy/ && zip -r CBA.zip . -x pom.xml .idea/\* target/\*
901 adding: Definitions/ (stored 0%)
902 adding: Definitions/artifact_types.json (deflated 69%)
903 adding: Definitions/data_types.json (deflated 88%)
904 adding: Definitions/node_types.json (deflated 90%)
905 adding: Definitions/policy_types.json (stored 0%)
906 adding: Definitions/relationship_types.json (stored 0%)
907 adding: Definitions/resources_definition_types.json (deflated 94%)
908 adding: Definitions/vFW_CNF_CDS.json (deflated 87%)
909 adding: Scripts/ (stored 0%)
910 adding: Scripts/kotlin/ (stored 0%)
911 adding: Scripts/kotlin/README.md (stored 0%)
912 adding: Templates/ (stored 0%)
913 adding: Templates/base_template-mapping.json (deflated 89%)
914 adding: Templates/base_template-template.vtl (deflated 87%)
915 adding: Templates/k8s-profiles/ (stored 0%)
916 adding: Templates/k8s-profiles/vfw-cnf-cds-base-profile.tar.gz (stored 0%)
917 adding: Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/ (stored 0%)
918 adding: Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/manifest.yaml (deflated 35%)
919 adding: Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/override_values.yaml (stored 0%)
920 adding: Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/ssh-service-mapping.json (deflated 51%)
921 adding: Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/ssh-service-template.yaml.vtl (deflated 56%)
922 adding: Templates/nf-params-mapping.json (deflated 88%)
923 adding: Templates/nf-params-template.vtl (deflated 44%)
924 adding: Templates/vfw-mapping.json (deflated 89%)
925 adding: Templates/vfw-template.vtl (deflated 87%)
926 adding: Templates/vnf-mapping.json (deflated 89%)
927 adding: Templates/vnf-template.vtl (deflated 93%)
928 adding: Templates/vpkg-mapping.json (deflated 89%)
929 adding: Templates/vpkg-template.vtl (deflated 87%)
930 adding: Templates/vsn-mapping.json (deflated 89%)
931 adding: Templates/vsn-template.vtl (deflated 87%)
932 adding: TOSCA-Metadata/ (stored 0%)
933 adding: TOSCA-Metadata/TOSCA.meta (deflated 37%)
934 cd cba/ && zip -r CBA.zip . -x pom.xml .idea/\* target/\*
935 adding: Definitions/ (stored 0%)
936 adding: Definitions/artifact_types.json (deflated 69%)
937 adding: Definitions/data_types.json (deflated 88%)
938 adding: Definitions/node_types.json (deflated 90%)
939 adding: Definitions/policy_types.json (stored 0%)
940 adding: Definitions/relationship_types.json (stored 0%)
941 adding: Definitions/resources_definition_types.json (deflated 94%)
942 adding: Definitions/vFW_CNF_CDS.json (deflated 87%)
943 adding: Scripts/ (stored 0%)
944 adding: Scripts/kotlin/ (stored 0%)
945 adding: Scripts/kotlin/README.md (stored 0%)
946 adding: Templates/ (stored 0%)
947 adding: Templates/base_template-mapping.json (deflated 89%)
948 adding: Templates/base_template-template.vtl (deflated 87%)
949 adding: Templates/k8s-profiles/ (stored 0%)
950 adding: Templates/k8s-profiles/vfw-cnf-cds-base-profile.tar.gz (stored 0%)
951 adding: Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/ (stored 0%)
952 adding: Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/manifest.yaml (deflated 35%)
953 adding: Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/override_values.yaml (stored 0%)
954 adding: Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/ssh-service-mapping.json (deflated 51%)
955 adding: Templates/k8s-profiles/vfw-cnf-cds-vpkg-profile/ssh-service-template.yaml.vtl (deflated 56%)
956 adding: Templates/nf-params-mapping.json (deflated 88%)
957 adding: Templates/nf-params-template.vtl (deflated 44%)
958 adding: Templates/vfw-mapping.json (deflated 89%)
959 adding: Templates/vfw-template.vtl (deflated 87%)
960 adding: Templates/vnf-mapping.json (deflated 89%)
961 adding: Templates/vnf-template.vtl (deflated 93%)
962 adding: Templates/vpkg-mapping.json (deflated 89%)
963 adding: Templates/vpkg-template.vtl (deflated 87%)
964 adding: Templates/vsn-mapping.json (deflated 89%)
965 adding: Templates/vsn-template.vtl (deflated 87%)
966 adding: TOSCA-Metadata/ (stored 0%)
967 adding: TOSCA-Metadata/TOSCA.meta (deflated 37%)
968 mv cba/CBA.zip package_native/
969 mv cba_dummy/CBA.zip package_dummy/
970 cd package_dummy/ && zip -r vfw_k8s_demo.zip .
971 adding: base_template.env (deflated 22%)
972 adding: base_template.yaml (deflated 59%)
973 adding: base_template_cloudtech_k8s_charts.tgz (stored 0%)
974 adding: CBA.zip (stored 0%)
975 adding: MANIFEST.json (deflated 84%)
976 adding: vfw.env (deflated 23%)
977 adding: vfw.yaml (deflated 60%)
978 adding: vfw_cloudtech_k8s_charts.tgz (stored 0%)
979 adding: vpkg.env (deflated 13%)
980 adding: vpkg.yaml (deflated 59%)
981 adding: vpkg_cloudtech_k8s_charts.tgz (stored 0%)
982 adding: vsn.env (deflated 15%)
983 adding: vsn.yaml (deflated 59%)
984 adding: vsn_cloudtech_k8s_charts.tgz (stored 0%)
985 cd package_native/ && zip -r native_vfw_k8s_demo.zip .
986 adding: CBA.zip (stored 0%)
987 adding: helm_base_template.tgz (stored 0%)
988 adding: helm_vfw.tgz (stored 0%)
989 adding: helm_vpkg.tgz (stored 0%)
990 adding: helm_vsn.tgz (stored 0%)
991 adding: MANIFEST.json (deflated 71%)
992 mv package_dummy/vfw_k8s_demo.zip .
993 mv package_native/native_vfw_k8s_demo.zip .
Import this package into SDC and follow the onboarding steps.
998 Service Creation with SDC
999 +++++++++++++++++++++++++
Service creation in SDC is composed of the same steps that are performed by most other use cases. For reference, you can refer to the `vLB use-case`_.

- Remember to choose the "Network Package" onboarding procedure during VSP onboarding
Create the VF and the Service.
Then in Service -> Properties Assignment -> choose the VF (in the right box):
1012 - skip_post_instantiation_configuration - True
1013 - sdnc_artifact_name - vnf
1014 - sdnc_model_name - vFW_CNF_CDS
1015 - sdnc_model_version - 7.0.0
.. note:: The onboarding packages for the **Dummy Heat** and **Native Helm** paths contain different CBA packages, but with the same name and version. In consequence, when one VSP is distributed it replaces the CBA package of the other one, and you can instantiate the service only for the vFW CNF service model distributed last. If you want to instantiate the vFW CNF service, make sure you have a fresh distribution of the vFW CNF service model.
1022 python onboarding.py
1024 Distribution Of Service
1025 +++++++++++++++++++++++
Verify in the SDC UI whether the distribution was successful. In case of any errors (sometimes SO fails to accept CLOUD_TECHNOLOGY_SPECIFIC_ARTIFACT), try redistribution. You can also verify the distribution for a few components manually:
The SDC catalog database should now have our service defined.
1039 Postman -> LCM -> [SDC] Catalog Service
1044 "uuid": "64dd38f3-2307-4e0a-bc98-5c2cbfb260b6",
1045 "invariantUUID": "cd1a5c2d-2d4e-4d62-ac10-a5fe05e32a22",
1046 "name": "vfw_cnf_cds_svc",
1048 "toscaModelURL": "/sdc/v1/catalog/services/64dd38f3-2307-4e0a-bc98-5c2cbfb260b6/toscaModel",
1049 "category": "Network L4+",
1050 "lifecycleState": "CERTIFIED",
1051 "lastUpdaterUserId": "cs0008",
1052 "distributionStatus": "DISTRIBUTED"
The listing should contain an entry with our service name **vfw_cnf_cds_svc**.

.. note:: This is an example name; it depends on how your model is named during service design in SDC, and it must be kept in sync with the Postman variables.
The SO catalog database should now have our service NFs defined.
1065 Postman -> LCM -> [SO] Catalog DB Service xNFs
1073 "modelName": "vfw_cnf_cds_vsp",
1074 "modelUuid": "70edaca8-8c79-468a-aa76-8224cfe686d0",
1075 "modelInvariantUuid": "7901fc89-a94d-434a-8454-1e27b99dc0e2",
1076 "modelVersion": "1.0",
1077 "modelCustomizationUuid": "86dc8af4-aa17-4fc7-9b20-f12160d99718",
1078 "modelInstanceName": "vfw_cnf_cds_vsp 0"
1080 "toscaNodeType": "org.openecomp.resource.vf.VfwCnfCdsVsp",
1084 "nfNamingCode": null,
1085 "multiStageDesign": "false",
1086 "vnfcInstGroupOrder": null,
1087 "resourceInput": "TBD",
1091 "modelName": "VfwCnfCdsVsp..base_template..module-0",
1092 "modelUuid": "274f4bc9-7679-4767-b34d-1df51cdf2496",
1093 "modelInvariantUuid": "52842255-b7be-4a1c-ab3b-2bd3bd4a5423",
1094 "modelVersion": "1",
1095 "modelCustomizationUuid": "b27fad11-44da-4840-9256-7ed8a32fbe3e"
1098 "vfModuleLabel": "base_template",
1100 "hasVolumeGroup": false
1104 "modelName": "VfwCnfCdsVsp..vsn..module-1",
1105 "modelUuid": "0cbf558f-5a96-4555-b476-7df8163521aa",
1106 "modelInvariantUuid": "36f25e1b-199b-4de2-b656-c870d341cf0e",
1107 "modelVersion": "1",
1108 "modelCustomizationUuid": "4cac0584-c0d6-42a7-bdb3-29162792e07f"
1111 "vfModuleLabel": "vsn",
1113 "hasVolumeGroup": false
1117 "modelName": "VfwCnfCdsVsp..vpkg..module-2",
1118 "modelUuid": "011b5f61-6524-4789-bd9a-44cfbf321463",
1119 "modelInvariantUuid": "4e2b9975-5214-48b8-861a-5701c09eedfa",
1120 "modelVersion": "1",
1121 "modelCustomizationUuid": "4e7028a1-4c80-4d20-a7a2-a1fb3343d5cb"
1124 "vfModuleLabel": "vpkg",
1126 "hasVolumeGroup": false
1130 "modelName": "VfwCnfCdsVsp..vfw..module-3",
1131 "modelUuid": "0de4ed56-8b4c-4a2d-8ce6-85d5e269204f",
1132 "modelInvariantUuid": "9ffda670-3d77-4f6c-a4ad-fb7a09f19817",
1133 "modelVersion": "1",
1134 "modelCustomizationUuid": "1e123e43-ba40-4c93-90d7-b9f27407ec03"
1137 "vfModuleLabel": "vfw",
1139 "hasVolumeGroup": false
.. note:: For the **Native Helm** path, the modelName will have the *helm_* prefix, e.g. *helm_vfw*, and the vfModuleLabel will contain the *helm_* keyword, e.g. *VfwCnfCdsVsp..helm_vfw..module-3*
SDNC should have its database updated with the *sdnc_* properties that were set during service modeling.
.. note:: Please adjust the credentials according to your installation. The required credentials can be retrieved with the instruction `Retrieving logins and passwords of ONAP components`_
1158 kubectl -n onap exec onap-mariadb-galera-0 -it -- sh
1159 mysql -uroot -psecretpassword -D sdnctl
1160 MariaDB [sdnctl]> select sdnc_model_name, sdnc_model_version, sdnc_artifact_name from VF_MODEL WHERE customization_uuid = '86dc8af4-aa17-4fc7-9b20-f12160d99718';
1161 +-----------------+--------------------+--------------------+
1162 | sdnc_model_name | sdnc_model_version | sdnc_artifact_name |
1163 +-----------------+--------------------+--------------------+
1164 | vFW_CNF_CDS | 7.0.0 | vnf |
1165 +-----------------+--------------------+--------------------+
1166 1 row in set (0.00 sec)
.. note:: The customization_uuid value is the modelCustomizationUuid of the VNF (from the serviceVnfs response in the 2nd Postman call to the SO Catalog DB)
CDS should have onboarded the CBA uploaded as part of the VF.
1177 Postman -> Distribution Verification -> [CDS] List CBAs
1184 "id": "c505e516-b35d-4181-b1e2-bcba361cfd0a",
1185 "artifactUUId": null,
1186 "artifactType": "SDNC_MODEL",
1187 "artifactVersion": "7.0.0",
1188 "artifactDescription": "Controller Blueprint for vFW_CNF_CDS:7.0.0",
1189 "internalVersion": null,
1190 "createdDate": "2020-05-29T06:02:20.000Z",
1191 "artifactName": "vFW_CNF_CDS",
1193 "updatedBy": "Samuli Silvius <s.silvius@partner.samsung.com>",
1194 "tags": "Samuli Silvius, Lukasz Rajewski, vFW_CNF_CDS"
The list should have entries matching the SDNC database:
1201 - sdnc_model_name == artifactName
1202 - sdnc_model_version == artifactVersion
You can also use Postman to download the CBA for further verification, but this is fully optional.
1208 Postman -> Distribution Verification -> [CDS] CBA Download
The k8splugin should have onboarded 4 resource bundles related to the Helm resources:
1216 Postman -> Distribution Verification -> [K8splugin] List Resource Bundle Definitions
1222 "rb-name": "52842255-b7be-4a1c-ab3b-2bd3bd4a5423",
1223 "rb-version": "274f4bc9-7679-4767-b34d-1df51cdf2496",
1224 "chart-name": "base_template",
1227 "vnf_customization_uuid": "b27fad11-44da-4840-9256-7ed8a32fbe3e"
1231 "rb-name": "36f25e1b-199b-4de2-b656-c870d341cf0e",
1232 "rb-version": "0cbf558f-5a96-4555-b476-7df8163521aa",
1233 "chart-name": "vsn",
1236 "vnf_customization_uuid": "4cac0584-c0d6-42a7-bdb3-29162792e07f"
1240 "rb-name": "4e2b9975-5214-48b8-861a-5701c09eedfa",
1241 "rb-version": "011b5f61-6524-4789-bd9a-44cfbf321463",
1242 "chart-name": "vpkg",
1245 "vnf_customization_uuid": "4e7028a1-4c80-4d20-a7a2-a1fb3343d5cb"
1249 "rb-name": "9ffda670-3d77-4f6c-a4ad-fb7a09f19817",
1250 "rb-version": "0de4ed56-8b4c-4a2d-8ce6-85d5e269204f",
1251 "chart-name": "vfw",
1254 "vnf_customization_uuid": "1e123e43-ba40-4c93-90d7-b9f27407ec03"
Distribution is a part of the onboarding step and at this stage it has already been performed.
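The resource bundle listing above can also be fetched programmatically. A sketch, assuming the multicloud-k8s API is exposed via the usual NodePort; the node IP, port and path are assumptions to be adjusted to your deployment:

.. code-block:: python

    import requests  # assumed to be available

    K8S_NODE_IP = "<onap k8s node ip>"  # placeholder
    url = (f"http://{K8S_NODE_IP}:30280"
           "/api/multicloud-k8s/v1/v1/rb/definition")

    # Print the onboarded resource bundle definitions
    for rb in requests.get(url).json():
        print(rb["rb-name"], rb["rb-version"], rb["chart-name"])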
1263 3-2 CNF Instantiation
1264 .....................
This is the heart of the use case, and the core of it is that we can instantiate any number of instances of the same CNF, each running and working completely on its own. This is very basic functionality on the VM (VNF) side, but for Kubernetes and ONAP integration it is the first milestone towards the other normal use cases familiar from VNFs.
The Postman collection is automated to populate the needed parameters when queries are run in the correct order. If you did not already run the following 2 queries after distribution (to verify it), run them now:
1274 Postman -> LCM -> 1.[SDC] Catalog Service
1278 Postman -> LCM -> 2. [SO] Catalog DB Service xNFs
1280 Now actual instantiation can be triggered with:
1284 Postman -> LCM -> 3. [SO] Self-Serve Service Assign & Activate
The required inputs for the instantiation process are taken from the *config.py* file.
1291 python instantiation.py
Finally, follow the progress of the instantiation request with SO's GET request:
1300 Postman -> LCM -> 4. [SO] Infra Active Requests
1302 The successful reply payload in that query should start like this:
1307 "requestStatus": "COMPLETE",
1308 "statusMessage": "Macro-Service-createInstance request was executed correctly.",
1309 "flowStatus": "Successfully completed all Building Blocks",
1311 "startTime": 1590996766000,
1312 "endTime": 1590996945000,
1313 "source": "Postman",
1314 "vnfId": "93b3350d-ed6f-413b-9cc5-a158c1676eb0",
1316 "requestBody": "**REDACTED FOR READABILITY**",
1317 "lastModifiedBy": "CamundaBPMN",
1318 "modifyTime": "2020-06-01T07:35:45.000+0000",
1319 "cloudRegion": "k8sregionfour",
1320 "serviceInstanceId": "8ead0480-cf44-428e-a4c2-0e6ed10f7a72",
1321 "serviceInstanceName": "vfw-cnf-16",
1322 "requestScope": "service",
1323 "requestAction": "createInstance",
1324 "requestorId": "11c2ddb7-4659-4bf0-a685-a08dcbb5a099",
1325 "requestUrl": "http://infra:30277/onap/so/infra/serviceInstantiation/v7/serviceInstances",
1326 "tenantName": "k8stenant",
1327 "cloudApiRequests": [],
1328 "requestURI": "6a369c8e-d492-4ab5-a107-46804eeb7873",
1331 "href": "http://infra:30277/infraActiveRequests/6a369c8e-d492-4ab5-a107-46804eeb7873"
1333 "infraActiveRequests": {
1334 "href": "http://infra:30277/infraActiveRequests/6a369c8e-d492-4ab5-a107-46804eeb7873"
Progress can also be followed with the `SO Monitoring`_ dashboard.
1342 Service Instance Termination
1343 ++++++++++++++++++++++++++++
The service instance can be terminated with the following Postman call:
1350 Postman -> LCM -> 5. [SO] Service Delete
.. note:: The automated service deletion mechanism takes information about the instantiated service instance from the *config.py* file and the *SERVICE_INSTANCE_NAME* variable. If you modify this value before deleting an existing service instance, you will lose the opportunity to easily delete the already created service instance.
1359 Second Service Instance Instantiation
1360 +++++++++++++++++++++++++++++++++++++
To finally verify all the work done within this demo, it should be possible to successfully instantiate a second vFW instance.

Trigger a new instance creation. You can use the previous call or a separate one that utilizes the profile templating mechanism implemented in the CBA:
1369 Postman -> LCM -> 6. [SO] Self-Serve Service Assign & Activate - Second
Before the second instance of the service is created, you need to modify the *config.py* file, changing *SERVICENAME* and *SERVICE_INSTANCE_NAME* to different values and changing the value of the *k8s-rb-profile-name* parameter for the *vpg* module from *default* or *vfw-cnf-cds-base-profile* to *vfw-cnf-cds-vpkg-profile*, which will result in the instantiation of an additional ssh service for the *vpg* module. A second onboarding is required in the automated case due to existing limitations of the *python-sdk* libraries, which create the vf-module instance name based on the vf-module model name. For the manual Postman option, the vf-module instance name is set on a service instance name basis, which makes it unique.
1376 python onboarding.py
1377 python instantiation.py
1379 3-3 Results and Logs
1380 ....................
Now multiple instances of the Kubernetes variant of vFW are running in the target VIM (KUD deployment).
1384 .. figure:: files/vFW_CNF_CDS/vFW_Instance_In_Kubernetes.png
1387 vFW Instance In Kubernetes
To review the situation after instantiation from the different ONAP components, most of the info can be found using the provided Postman queries. For each query, example response payload(s) are saved and can be found in the top right corner of the Postman window.
Postman -> Instantiation verification
Execute the example Postman queries and check the examples section to see valid results.
========================== =================
Verify Target              Postman query
-------------------------- -----------------
Service Instances in AAI   **Postman -> Instantiation verification -> [AAI] List Service Instances**
Service Instances in MDSAL **Postman -> Instantiation verification -> [SDNC] GR-API MD-SAL Services**
K8S Instances in KUD       **Postman -> Instantiation verification -> [K8splugin] List Instances**
========================== =================
.. note:: The "[AAI] List vServers <Empty>" request won't return any vserver info from AAI, as such information is currently not provided during the instantiation process.
You can also query directly from the VIM:
1415 ubuntu@kud-host:~$ kubectl get pods,svc,networks,cm,network-attachment-definition,deployments
1416 NAME READY STATUS RESTARTS AGE
1417 pod/vfw-17f6f7d3-8424-4550-a188-cd777f0ab48f-7cfb9949d9-8b5vg 1/1 Running 0 22s
1418 pod/vfw-19571429-4af4-49b3-af65-2eb1f97bba43-75cd7c6f76-4gqtz 1/1 Running 0 11m
1419 pod/vpg-5ea0d3b0-9a0c-4e88-a2e2-ceb84810259e-f4485d485-pln8m 1/1 Running 0 11m
1420 pod/vpg-8581bc79-8eef-487e-8ed1-a18c0d638b26-6f8cff54d-dvw4j 1/1 Running 0 32s
1421 pod/vsn-8e7ac4fc-2c31-4cf8-90c8-5074c5891c14-5879c56fd-q59l7 2/2 Running 0 11m
1422 pod/vsn-fdc9b4ba-c0e9-4efc-8009-f9414ae7dd7b-5889b7455-96j9d 2/2 Running 0 30s
1424 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
1425 service/vpg-5ea0d3b0-9a0c-4e88-a2e2-ceb84810259e-management-api NodePort 10.244.43.245 <none> 2831:30831/TCP 11m
1426 service/vpg-8581bc79-8eef-487e-8ed1-a18c0d638b26-management-api NodePort 10.244.1.45 <none> 2831:31831/TCP 33s
1427 service/vsn-8e7ac4fc-2c31-4cf8-90c8-5074c5891c14-darkstat-ui NodePort 10.244.16.187 <none> 667:30667/TCP 11m
1428 service/vsn-fdc9b4ba-c0e9-4efc-8009-f9414ae7dd7b-darkstat-ui NodePort 10.244.20.229 <none> 667:31667/TCP 30s
1431 network.k8s.plugin.opnfv.org/55118b80-8470-4c99-bfdf-d122cd412739-management-network 40s
1432 network.k8s.plugin.opnfv.org/55118b80-8470-4c99-bfdf-d122cd412739-protected-network 40s
1433 network.k8s.plugin.opnfv.org/55118b80-8470-4c99-bfdf-d122cd412739-unprotected-network 40s
1434 network.k8s.plugin.opnfv.org/567cecc3-9692-449e-877a-ff0b560736be-management-network 11m
1435 network.k8s.plugin.opnfv.org/567cecc3-9692-449e-877a-ff0b560736be-protected-network 11m
1436 network.k8s.plugin.opnfv.org/567cecc3-9692-449e-877a-ff0b560736be-unprotected-network 11m
1439 configmap/vfw-17f6f7d3-8424-4550-a188-cd777f0ab48f-configmap 6 22s
1440 configmap/vfw-19571429-4af4-49b3-af65-2eb1f97bba43-configmap 6 11m
1441 configmap/vpg-5ea0d3b0-9a0c-4e88-a2e2-ceb84810259e-configmap 6 11m
1442 configmap/vpg-8581bc79-8eef-487e-8ed1-a18c0d638b26-configmap 6 33s
1443 configmap/vsn-8e7ac4fc-2c31-4cf8-90c8-5074c5891c14-configmap 2 11m
1444 configmap/vsn-fdc9b4ba-c0e9-4efc-8009-f9414ae7dd7b-configmap 2 30s
1447 networkattachmentdefinition.k8s.cni.cncf.io/55118b80-8470-4c99-bfdf-d122cd412739-ovn-nat 40s
1448 networkattachmentdefinition.k8s.cni.cncf.io/567cecc3-9692-449e-877a-ff0b560736be-ovn-nat 11m
1450 NAME READY UP-TO-DATE AVAILABLE AGE
1451 deployment.extensions/vfw-17f6f7d3-8424-4550-a188-cd777f0ab48f 1/1 1 1 22s
1452 deployment.extensions/vfw-19571429-4af4-49b3-af65-2eb1f97bba43 1/1 1 1 11m
1453 deployment.extensions/vpg-5ea0d3b0-9a0c-4e88-a2e2-ceb84810259e 1/1 1 1 11m
1454 deployment.extensions/vpg-8581bc79-8eef-487e-8ed1-a18c0d638b26 1/1 1 1 33s
1455 deployment.extensions/vsn-8e7ac4fc-2c31-4cf8-90c8-5074c5891c14 1/1 1 1 11m
1456 deployment.extensions/vsn-fdc9b4ba-c0e9-4efc-8009-f9414ae7dd7b 1/1 1 1 30s
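
As a quick sanity check, the NodePort services listed above can be probed directly from outside the cluster; for example, the *darkstat* traffic UI of the first *vsn* module should answer on its NodePort (the node IP is an assumption for your environment)::

    curl -s http://${K8S_NODE_IP}:30667 | head -5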
1459 Component Logs From The Execution
1460 +++++++++++++++++++++++++++++++++
All logs from the use case execution can be retrieved with the following command
1468 kubectl -n onap logs `kubectl -n onap get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep -m1 <COMPONENT_NAME>` -c <CONTAINER>
where <COMPONENT_NAME> and <CONTAINER> should be replaced with the following keyword pairs, respectively:
1472 - so-bpmn-infra, so-bpmn-infra
1473 - so-openstack-adapter, so-openstack-adapter
1474 - so-cnf-adapter, so-cnf-adapter
From karaf.log, all requests (payloads) sent to CDS can be found by searching for the following string (see the example invocations after this list):
1479 ``'Sending request below to url http://cds-blueprints-processor-http:8080/api/v1/execution-service/process'``
1481 - cds-blueprints-processor, cds-blueprints-processor
1482 - multicloud-k8s, multicloud-k8s
- network-name-gen, network-name-gen
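
For example, the following invocations fetch the SO BPMN logs and filter the SDNC log for the CDS request payloads mentioned above; a sketch, assuming the default OOM pod naming::

    # SO BPMN infra logs
    kubectl -n onap logs `kubectl -n onap get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep -m1 so-bpmn-infra` -c so-bpmn-infra

    # CDS request payloads logged by SDNC (karaf.log)
    kubectl -n onap logs `kubectl -n onap get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}' | grep -m1 sdnc` -c sdnc | grep 'Sending request below to url'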
In case more detailed logging is needed, here are instructions on how to set up DEBUG logging for a few components.
1493 kubectl -n onap exec -it onap-sdnc-0 -c sdnc /opt/opendaylight/bin/client log:set DEBUG
1496 - CDS Blueprint Processor
1501 kubectl -n onap edit configmap onap-cds-blueprints-processor-configmap
# Edit the logback.xml content: change the root logger level from info to debug.
1504 <root level="debug">
1505 <appender-ref ref="STDOUT"/>
1508 # Delete the Pods to make changes effective
1509 kubectl -n onap delete pods -l app=cds-blueprints-processor
1511 3-4 Verification of the CNF Status
1512 ..................................
The Guilin release introduces a new API for verification of the status of instantiated resources in the k8s cluster. The API gives a result similar to the *kubectl describe* operation for all the resources created for a particular *rb-definition*. The Status API can be used to verify the k8s resources after instantiation, but it can also be leveraged for synchronization of the information with external components, like AAI in the future. To use the Status API, call
1520 curl -i http://${K8S_NODE_IP}:30280/api/multicloud-k8s/v1/v1/instance/{rb-instance-id}/status
where {rb-instance-id} can be taken from the list of instances returned by the following call
1526 curl -i http://${K8S_NODE_IP}:30280/api/multicloud-k8s/v1/v1/instance/
or from the AAI *heat-stack-id* property of the created *vf-module* associated with each Helm package from the onboarded VSP; this property holds the *rb-instance-id* value.
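
For convenience, the instance identifiers can be extracted from that list; a minimal sketch, assuming *jq* is available and that each entry of the returned list carries an *id* field::

    curl -s http://${K8S_NODE_IP}:30280/api/multicloud-k8s/v1/v1/instance/ | jq -r '.[].id'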
Exemplary output of the Status API is shown below (the result for a test vFW CNF helm package). It shows the list of GVK resources created for the requested *rb-instance* (a Helm release and a vf-module at the same time), with the associated describe result for all of them.
.. note:: An example of how the Status API could be integrated into CDS can be found in the Frankfurt version of the k8s profile upload mechanism `Frankfurt CBA Definition`_ (*profile-upload* TOSCA node template), implemented inside the Kotlin script `Frankfurt CBA Script`_ for profile upload. This method shows how to integrate a multicloud-k8s API endpoint into a Kotlin script executed by CDS. For more details, please take a look at the Definition file of the 1.0.45 version of the CBA and at the Kotlin script used there for uploading the profile.
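
Since the full describe output is verbose, a short summary of the created resources can be derived from the Status API response; a minimal sketch with *jq*, relying on the *resourcesStatus*, *GVK* and *name* fields visible in the example output below::

    curl -s http://${K8S_NODE_IP}:30280/api/multicloud-k8s/v1/v1/instance/{rb-instance-id}/status \
      | jq -r '.resourcesStatus[] | "\(.GVK.Kind)/\(.name)"'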
1539 "rb-version": "plugin_test",
1540 "profile-name": "test_profile",
1542 "cloud-region": "kud",
1544 "testCaseName": "plugin_fw.sh"
1546 "override-values": {
1547 "global.onapPrivateNetworkName": "onap-private-net-test"
1552 "resourcesStatus": [
1554 "name": "sink-configmap",
1563 "protected_net_gw": "192.168.20.100",
1564 "protected_private_net_cidr": "192.168.10.0/24"
1566 "kind": "ConfigMap",
1568 "creationTimestamp": "2020-09-29T13:36:25Z",
1570 "k8splugin.io/rb-instance-id": "practical_nobel"
1572 "name": "sink-configmap",
1573 "namespace": "plugin-tests-namespace",
1574 "resourceVersion": "10720771",
1575 "selfLink": "/api/v1/namespaces/plugin-tests-namespace/configmaps/sink-configmap",
1576 "uid": "46c8bec4-980c-455b-9eb0-fb84ac8cc450"
1581 "name": "sink-service",
1591 "creationTimestamp": "2020-09-29T13:36:25Z",
1595 "k8splugin.io/rb-instance-id": "practical_nobel",
1596 "release": "test-release"
1598 "name": "sink-service",
1599 "namespace": "plugin-tests-namespace",
1600 "resourceVersion": "10720780",
1601 "selfLink": "/api/v1/namespaces/plugin-tests-namespace/services/sink-service",
1602 "uid": "789a14fe-1246-4cdd-ba9a-359240ba614f"
1605 "clusterIP": "10.244.2.4",
1606 "externalTrafficPolicy": "Cluster",
1617 "release": "test-release"
1619 "sessionAffinity": "None",
1628 "name": "test-release-sink",
1632 "Kind": "Deployment"
1635 "apiVersion": "apps/v1",
1636 "kind": "Deployment",
1639 "deployment.kubernetes.io/revision": "1"
1641 "creationTimestamp": "2020-09-29T13:36:25Z",
1646 "k8splugin.io/rb-instance-id": "practical_nobel",
1647 "release": "test-release"
1649 "name": "test-release-sink",
1650 "namespace": "plugin-tests-namespace",
1651 "resourceVersion": "10720857",
1652 "selfLink": "/apis/apps/v1/namespaces/plugin-tests-namespace/deployments/test-release-sink",
1653 "uid": "1f50eecf-c924-4434-be87-daf7c64b6506"
1656 "progressDeadlineSeconds": 600,
1658 "revisionHistoryLimit": 10,
1662 "release": "test-release"
1668 "maxUnavailable": "25%"
1670 "type": "RollingUpdate"
1675 "k8s.plugin.opnfv.org/nfn-network": "{ \"type\": \"ovn4nfv\", \"interface\": [ { \"name\": \"protected-private-net\", \"ipAddress\": \"192.168.20.3\", \"interface\": \"eth1\", \"defaultGateway\": \"false\" }, { \"name\": \"onap-private-net-test\", \"ipAddress\": \"10.10.100.4\", \"interface\": \"eth2\" , \"defaultGateway\": \"false\"} ]}",
1676 "k8s.v1.cni.cncf.io/networks": "[{\"name\": \"ovn-networkobj\", \"namespace\": \"default\"}]"
1678 "creationTimestamp": null,
1681 "k8splugin.io/rb-instance-id": "practical_nobel",
1682 "release": "test-release"
1691 "name": "sink-configmap"
1695 "image": "rtsood/onap-vfw-demo-sink:0.2.0",
1696 "imagePullPolicy": "IfNotPresent",
1699 "securityContext": {
1703 "terminationMessagePath": "/dev/termination-log",
1704 "terminationMessagePolicy": "File",
1708 "image": "electrocucaracha/darkstat:latest",
1709 "imagePullPolicy": "IfNotPresent",
1713 "containerPort": 667,
1719 "terminationMessagePath": "/dev/termination-log",
1720 "terminationMessagePolicy": "File",
1724 "dnsPolicy": "ClusterFirst",
1725 "restartPolicy": "Always",
1726 "schedulerName": "default-scheduler",
1727 "securityContext": {},
1728 "terminationGracePeriodSeconds": 30
1733 "availableReplicas": 1,
1736 "lastTransitionTime": "2020-09-29T13:36:33Z",
1737 "lastUpdateTime": "2020-09-29T13:36:33Z",
1738 "message": "Deployment has minimum availability.",
1739 "reason": "MinimumReplicasAvailable",
1744 "lastTransitionTime": "2020-09-29T13:36:25Z",
1745 "lastUpdateTime": "2020-09-29T13:36:33Z",
1746 "message": "ReplicaSet \"test-release-sink-6546c4f698\" has successfully progressed.",
1747 "reason": "NewReplicaSetAvailable",
1749 "type": "Progressing"
1752 "observedGeneration": 1,
1755 "updatedReplicas": 1
1760 "name": "onap-private-net-test",
1762 "Group": "k8s.plugin.opnfv.org",
1763 "Version": "v1alpha1",
1767 "apiVersion": "k8s.plugin.opnfv.org/v1alpha1",
1770 "creationTimestamp": "2020-09-29T13:36:25Z",
1776 "k8splugin.io/rb-instance-id": "practical_nobel"
1778 "name": "onap-private-net-test",
1779 "namespace": "plugin-tests-namespace",
1780 "resourceVersion": "10720825",
1781 "selfLink": "/apis/k8s.plugin.opnfv.org/v1alpha1/namespaces/plugin-tests-namespace/networks/onap-private-net-test",
1782 "uid": "43d413f1-f222-4d98-9ddd-b209d3ade106"
1785 "cniType": "ovn4nfv",
1789 "gateway": "10.10.0.1/16",
1791 "subnet": "10.10.0.0/16"
1801 "name": "protected-private-net",
1803 "Group": "k8s.plugin.opnfv.org",
1804 "Version": "v1alpha1",
1808 "apiVersion": "k8s.plugin.opnfv.org/v1alpha1",
1811 "creationTimestamp": "2020-09-29T13:36:25Z",
1817 "k8splugin.io/rb-instance-id": "practical_nobel"
1819 "name": "protected-private-net",
1820 "namespace": "plugin-tests-namespace",
1821 "resourceVersion": "10720827",
1822 "selfLink": "/apis/k8s.plugin.opnfv.org/v1alpha1/namespaces/plugin-tests-namespace/networks/protected-private-net",
1823 "uid": "75c98944-80b6-4158-afed-8efa7a1075e2"
1826 "cniType": "ovn4nfv",
1830 "gateway": "192.168.20.100/24",
1832 "subnet": "192.168.20.0/24"
1842 "name": "unprotected-private-net",
1844 "Group": "k8s.plugin.opnfv.org",
1845 "Version": "v1alpha1",
1849 "apiVersion": "k8s.plugin.opnfv.org/v1alpha1",
1852 "creationTimestamp": "2020-09-29T13:36:25Z",
1858 "k8splugin.io/rb-instance-id": "practical_nobel"
1860 "name": "unprotected-private-net",
1861 "namespace": "plugin-tests-namespace",
1862 "resourceVersion": "10720829",
1863 "selfLink": "/apis/k8s.plugin.opnfv.org/v1alpha1/namespaces/plugin-tests-namespace/networks/unprotected-private-net",
1864 "uid": "54995c10-bffd-4bb2-bbab-5de266af9456"
1867 "cniType": "ovn4nfv",
1871 "gateway": "192.168.10.1/24",
1873 "subnet": "192.168.10.0/24"
1883 "name": "test-release-sink-6546c4f698-dv529",
1892 "k8s.plugin.opnfv.org/nfn-network": "{ \"type\": \"ovn4nfv\", \"interface\": [ { \"name\": \"protected-private-net\", \"ipAddress\": \"192.168.20.3\", \"interface\": \"eth1\", \"defaultGateway\": \"false\" }, { \"name\": \"onap-private-net-test\", \"ipAddress\": \"10.10.100.4\", \"interface\": \"eth2\" , \"defaultGateway\": \"false\"} ]}",
1893 "k8s.plugin.opnfv.org/ovnInterfaces": "[{\"ip_address\":\"192.168.20.3/24\", \"mac_address\":\"00:00:00:13:40:87\", \"gateway_ip\": \"192.168.20.100\",\"defaultGateway\":\"false\",\"interface\":\"eth1\"},{\"ip_address\":\"10.10.100.4/16\", \"mac_address\":\"00:00:00:49:de:fc\", \"gateway_ip\": \"10.10.0.1\",\"defaultGateway\":\"false\",\"interface\":\"eth2\"}]",
1894 "k8s.v1.cni.cncf.io/networks": "[{\"name\": \"ovn-networkobj\", \"namespace\": \"default\"}]",
1895 "k8s.v1.cni.cncf.io/networks-status": "[{\n \"name\": \"cni0\",\n \"interface\": \"eth0\",\n \"ips\": [\n \"10.244.64.46\"\n ],\n \"mac\": \"0a:58:0a:f4:40:2e\",\n \"default\": true,\n \"dns\": {}\n},{\n \"name\": \"ovn4nfv-k8s-plugin\",\n \"interface\": \"eth2\",\n \"ips\": [\n \"192.168.20.3\",\n \"10.10.100.4\"\n ],\n \"mac\": \"00:00:00:49:de:fc\",\n \"dns\": {}\n}]"
1897 "creationTimestamp": "2020-09-29T13:36:25Z",
1898 "generateName": "test-release-sink-6546c4f698-",
1901 "k8splugin.io/rb-instance-id": "practical_nobel",
1902 "pod-template-hash": "6546c4f698",
1903 "release": "test-release"
1905 "name": "test-release-sink-6546c4f698-dv529",
1906 "namespace": "plugin-tests-namespace",
1907 "ownerReferences": [
1909 "apiVersion": "apps/v1",
1910 "blockOwnerDeletion": true,
1912 "kind": "ReplicaSet",
1913 "name": "test-release-sink-6546c4f698",
1914 "uid": "72c9da29-af3b-4b5c-a90b-06285ae83429"
1917 "resourceVersion": "10720854",
1918 "selfLink": "/api/v1/namespaces/plugin-tests-namespace/pods/test-release-sink-6546c4f698-dv529",
1919 "uid": "a4e24041-65c9-4b86-8f10-a27a4dba26bb"
1927 "name": "sink-configmap"
1931 "image": "rtsood/onap-vfw-demo-sink:0.2.0",
1932 "imagePullPolicy": "IfNotPresent",
1935 "securityContext": {
1939 "terminationMessagePath": "/dev/termination-log",
1940 "terminationMessagePolicy": "File",
1944 "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
1945 "name": "default-token-gsh95",
1951 "image": "electrocucaracha/darkstat:latest",
1952 "imagePullPolicy": "IfNotPresent",
1956 "containerPort": 667,
1962 "terminationMessagePath": "/dev/termination-log",
1963 "terminationMessagePolicy": "File",
1967 "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
1968 "name": "default-token-gsh95",
1974 "dnsPolicy": "ClusterFirst",
1975 "enableServiceLinks": true,
1976 "nodeName": "localhost",
1978 "restartPolicy": "Always",
1979 "schedulerName": "default-scheduler",
1980 "securityContext": {},
1981 "serviceAccount": "default",
1982 "serviceAccountName": "default",
1983 "terminationGracePeriodSeconds": 30,
1986 "effect": "NoExecute",
1987 "key": "node.kubernetes.io/not-ready",
1988 "operator": "Exists",
1989 "tolerationSeconds": 300
1992 "effect": "NoExecute",
1993 "key": "node.kubernetes.io/unreachable",
1994 "operator": "Exists",
1995 "tolerationSeconds": 300
2000 "name": "default-token-gsh95",
2003 "secretName": "default-token-gsh95"
2011 "lastProbeTime": null,
2012 "lastTransitionTime": "2020-09-29T13:36:25Z",
2014 "type": "Initialized"
2017 "lastProbeTime": null,
2018 "lastTransitionTime": "2020-09-29T13:36:33Z",
2023 "lastProbeTime": null,
2024 "lastTransitionTime": "2020-09-29T13:36:33Z",
2026 "type": "ContainersReady"
2029 "lastProbeTime": null,
2030 "lastTransitionTime": "2020-09-29T13:36:25Z",
2032 "type": "PodScheduled"
2035 "containerStatuses": [
2037 "containerID": "docker://87c9af78735400606d70ccd9cd85e2545e43cb3be9c30d4b4fe173da0062dda9",
2038 "image": "electrocucaracha/darkstat:latest",
2039 "imageID": "docker-pullable://electrocucaracha/darkstat@sha256:a6764fcc2e15f6156ac0e56f1d220b98970f2d4da9005bae99fb518cfd2f9c25",
2047 "startedAt": "2020-09-29T13:36:33Z"
2052 "containerID": "docker://a004f95e7c7a681c7f400852aade096e3ffd75b7efc64e12e65b4ce1fe326577",
2053 "image": "rtsood/onap-vfw-demo-sink:0.2.0",
2054 "imageID": "docker-pullable://rtsood/onap-vfw-demo-sink@sha256:15b7abb0b67a3804ea5f954254633f996fc99c680b09d86a6cf15c3d7b14ab16",
2062 "startedAt": "2020-09-29T13:36:32Z"
2067 "hostIP": "192.168.255.3",
2069 "podIP": "10.244.64.46",
2072 "ip": "10.244.64.46"
2075 "qosClass": "BestEffort",
2076 "startTime": "2020-09-29T13:36:25Z"
2083 PART 4 - Future improvements needed
2084 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2086 Future development areas for this use case:
2088 - Automated smoke use case.
- Include the Closed Loop part of the vFW demo.
- vFW service with an Openstack VNF and a Kubernetes CNF.
2092 Future development areas for CNF support:
- Validation of the Helm package and extraction of override values at the time of package onboarding.
- Post-instantiation configuration with Day 2 configuration APIs of multicloud/k8s.
- Synchronization of information about the CNF between AAI and K8s.
- Validation of the status and health of the CNF.
- Use of the multicloud/k8s API v2.
2100 Many features from the list above are covered by the Honolulu roadmap described in `REQ-458`_.
2103 .. _ONAP Deployment Guide: https://docs.onap.org/projects/onap-oom/en/guilin/oom_quickstart_guide.html
2104 .. _CDS Documentation: https://docs.onap.org/projects/onap-ccsdk-cds/en/guilin/index.html
2105 .. _vLB use-case: https://wiki.onap.org/pages/viewpage.action?pageId=71838898
2106 .. _vFW_CNF_CDS Model: https://git.onap.org/demo/tree/heat/vFW_CNF_CDS/templates?h=guilin
2107 .. _vFW_CNF_CDS Automation: https://git.onap.org/demo/tree/heat/vFW_CNF_CDS/automation?h=guilin
2108 .. _vFW CDS Dublin: https://wiki.onap.org/display/DW/vFW+CDS+Dublin
2109 .. _vFW CBA Model: https://git.onap.org/ccsdk/cds/tree/components/model-catalog/blueprint-model/service-blueprint/vFW?h=elalto
2110 .. _vFW_Helm Model: https://git.onap.org/multicloud/k8s/tree/kud/demo/firewall?h=elalto
2111 .. _vFW_NextGen: https://git.onap.org/demo/tree/heat/vFW_NextGen?h=elalto
2112 .. _vFW EDGEX K8S: https://docs.onap.org/en/elalto/submodules/integration.git/docs/docs_vfw_edgex_k8s.html
2113 .. _vFW EDGEX K8S In ONAP Wiki: https://wiki.onap.org/display/DW/Deploying+vFw+and+EdgeXFoundry+Services+on+Kubernets+Cluster+with+ONAP
2114 .. _KUD github: https://github.com/onap/multicloud-k8s/tree/master/kud/hosting_providers/baremetal
2115 .. _KUD in Wiki: https://wiki.onap.org/display/DW/Kubernetes+Baremetal+deployment+setup+instructions
2116 .. _Multicloud k8s gerrit: https://gerrit.onap.org/r/q/status:open+project:+multicloud/k8s
2117 .. _KUD subproject in github: https://github.com/onap/multicloud-k8s/tree/master/kud
2118 .. _Frankfurt CBA Definition: https://git.onap.org/demo/tree/heat/vFW_CNF_CDS/templates/cba/Definitions/vFW_CNF_CDS.json?h=frankfurt
2119 .. _Frankfurt CBA Script: https://git.onap.org/demo/tree/heat/vFW_CNF_CDS/templates/cba/Scripts/kotlin/KotlinK8sProfileUpload.kt?h=frankfurt
2120 .. _SO-3403: https://jira.onap.org/browse/SO-3403
2121 .. _SO-3404: https://jira.onap.org/browse/SO-3404
2122 .. _REQ-182: https://jira.onap.org/browse/REQ-182
2123 .. _REQ-341: https://jira.onap.org/browse/REQ-341
2124 .. _REQ-458: https://jira.onap.org/browse/REQ-458
2125 .. _Python SDK: https://docs.onap.org/projects/onap-integration/en/guilin/integration-tooling.html?highlight=python-sdk#python-onapsdk
2126 .. _KUD Jenkins ci/cd verification: https://jenkins.onap.org/job/multicloud-k8s-master-kud-deployment-verify-shell/
2127 .. _K8s cloud site config: https://docs.onap.org/en/guilin/guides/onap-operator/cloud_site/k8s/index.html
2128 .. _SO Monitoring: https://docs.onap.org/projects/onap-so/en/guilin/developer_info/Working_with_so_monitoring.html
2129 .. _Data Dictionary: https://git.onap.org/demo/tree/heat/vFW_CNF_CDS/templates/cba-dd.json?h=guilin
2130 .. _Helm Healer: https://git.onap.org/oom/offline-installer/tree/tools/helm-healer.sh?h=frankfurt
2131 .. _CDS UAT Testing: https://wiki.onap.org/display/DW/Modeling+Concepts
2132 .. _infra_workload: https://docs.onap.org/projects/onap-multicloud-framework/en/latest/specs/multicloud_infra_workload.html?highlight=multicloud