.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

===================================================
Container based network service/function deployment
===================================================

https://wiki.onap.org/pages/viewpage.action?pageId=16007890
This proposal is to implement a PoC in the Beijing release (R-2) in order to
gain experience and feedback for future progress.
ONAP currently supports only VM-based cloud infrastructure for VNFs. At the
same time, container technology is gaining momentum in the industry.
Increasing VNF density on each node and latency requirements are driving
container-based VNFs. This project enhances ONAP to support VNFs as
containers in addition to VNFs as VMs.
It is beneficial to support multiple container orchestration technologies
as cloud infrastructure:

* Allow VNFs to run as containers and support the same closed feedback loop
  as VM-based VIMs, e.g. OpenStack.
* Support co-existence of VNF VMs and VNF containers.
* Add container orchestration technology in addition to the traditional
  VM-based VIM or environment managed by ONAP.
* Support uniform network connectivity among VMs and containers.
NOTE: This is different from the OOM project `OOM`_. Its scope is to deploy
ONAP itself on k8s. Our scope is to deploy/manage VNFs on containers via a
container orchestration engine (COE). The first target is k8s. Other COEs
will also be addressed if someone steps up to support them.
Scope for Beijing release (R-2)
-------------------------------

* The first baby step is to support containers in a Kubernetes cluster via a
  Multicloud SBI / K8S Plugin.
  (Other COEs (Container Orchestration Engines) are out of Beijing scope;
  they are future scope.)
* Minimal implementation with zero impact on the MVP of the Multicloud
  Beijing work
* Sample VNFs (vFW and vDNS)
  (the vCPE use case is post-Beijing release)
  Both vFW and vDNS are targeted. Since custom TOSCA node definitions are
  used (please refer to the TOSCA section below), new TOSCA templates are
  needed for them. (Post-Beijing, this will be revised to share a common
  TOSCA template.)
In the Beijing release, several design aspects are compromised in order to
re-use the existing components/work flow with zero impact, primarily because
the long-term solution is not yet ready to use. It is acceptable that this
effort in the Beijing release is a PoC/experiment to demonstrate
functionality and to gain experience for the post-Beijing
design/architecture. For example, the use of a CSAR and new TOSCA node
definitions re-uses the existing code (i.e. the Amsterdam release). After
the Beijing release, those will be revised for the long-term solution along
with related topics, e.g. the model driven API and modeling, based on the
Beijing experience. Once we have figured out what the multicloud COE API
should look like and which adapters in other projects (SO, APP-C) are needed
(or not needed) in the long term, the inter-project discussion (mainly with
SO and APP-C) will start.
* Register/unregister k8s cluster instances which are already deployed.
  Dynamic deployment of k8s is out of scope. It is assumed that the admin
  knows all the necessary parameters.
* Onboard VNFD/NSD to use containers
* Instantiate/de-instantiate containerized VNFs through the K8S Plugin
* VNF configuration of the sample VNFs (vFW, vDNS) with the existing
  configuration interface (no change to the existing configuration
  interface)
REST API Impact and Base URL
----------------------------

Similar to other plugins (e.g. the OpenStack plugin), the k8s plugin has its
own API endpoint and base URL so that it doesn't affect the other multicloud
northbound APIs.

Base URL for the kubernetes plugin:

https://msb.onap.org:80/api/multicloud/v0/
NOTE: Each multicloud plugin has its own API endpoint (IP address), so the
plugin is distinguished by its endpoint IP address with MSB. The
"multicloud-kubernetes" namespace is used for MSB.
NOTE: Each COE support has its own API endpoint and namespace; the
namespaces will be "multicloud-<coe name>". With the model driven API, we
will have an API agnostic to the COE; in that case the namespace
"multicloud-coe" will be used.
In ONAP, a cloud-id has the format <cloudOwner>_<cloudRegion>.
Since k8s doesn't have the notion of a region, the cloud admin will assign a
unique string as cloudRegion and it will be used as part of the cloud-id.
APIs for VNF Lifecycle Management
---------------------------------

* PATH: /<cloud-id>/proxy/<resources>
* METHOD: All methods
Northbound components, e.g. APP-C, use these APIs for lifecycle management
of VNF resources within the Kubernetes cluster, e.g. pods. In essence, these
APIs provide simple proxy (or passthrough) functions, with authorization
adjustment, to the Kubernetes API Server, so that the relevant lifecycle
management operations are actually carried out by the Kubernetes cluster
itself. In other words, these API requests are proxied to
"{kubernetes api prefix}/<resources>" within the Kubernetes cluster without
any changes to the http/https request body.
The API endpoint is stored in A&AI and the API consumer obtains it from
A&AI.
For details of the Kubernetes API, please refer to
https://kubernetes.io/docs/reference/api-overview/
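As a minimal sketch of how a northbound consumer might call the proxy API
(the helper names and the cloud-id ``CloudOwner_kubernetes-1`` used below are
illustrative assumptions, not part of the plugin):

```python
import json
import urllib.request

MULTICLOUD_BASE = "https://msb.onap.org:80/api/multicloud/v0"

def proxy_url(cloud_id, resource_path):
    # The plugin forwards the request unchanged to
    # "{kubernetes api prefix}/<resources>" on the cluster's API server.
    return "%s/%s/proxy/%s" % (MULTICLOUD_BASE, cloud_id,
                               resource_path.lstrip("/"))

def list_pods(cloud_id, namespace):
    # Equivalent to GET /api/v1/namespaces/{namespace}/pods on the cluster,
    # proxied through the k8s plugin.
    url = proxy_url(cloud_id, "api/v1/namespaces/%s/pods" % namespace)
    with urllib.request.urlopen(url) as resp:  # network call, not run here
        return json.load(resp)
```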
NOTE: Kubernetes doesn't have the concepts of region and tenant at this
moment, so region and tenant_id aren't in the path.
NOTE: VF-C is an ETSI NFV orchestrator (NFV-O). In the Beijing release it
isn't addressed, because containers are out of scope of ETSI NFV at the time
of writing. Post-Beijing, this will be given consideration. The first target
is APP-C as it's easier.
API for VNF Deployment
----------------------

* PATH: /<cloud-id>/package
The media type of Content-Type and/or the filename of Content-Disposition
are used to specify the package type.

As a private media type, application/onap-multicloud-<coe name>-<type> is
used. More concretely, for the Beijing release the following media types are
used:

* Content-Type: application/onap-multicloud-kubernetes-csar
* Content-Type: application/onap-multicloud-kubernetes-helm

As a supplement, the filename is also used to guess the media type. The http
header Content-Disposition is used to pass the filename:

* Content-Disposition: attachment; filename="fname.tgz"

The media type is tried first and then the filename. If both are present,
the media type takes precedence.
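The selection logic can be sketched as follows (a hypothetical helper, not
actual plugin code; the suffix-to-type mapping is an assumption for
illustration):

```python
import re

def detect_package_type(content_type=None, filename=None):
    # The private media type application/onap-multicloud-kubernetes-<type>
    # takes precedence; the filename from Content-Disposition is a fallback.
    if content_type:
        m = re.match(r"application/onap-multicloud-kubernetes-(\w+)",
                     content_type)
        if m:
            return m.group(1)
    if filename:
        if filename.endswith(".csar"):
            return "csar"
        if filename.endswith((".tgz", ".tar.gz")):
            return "helm"
    return None
```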
This API provides northbound components, e.g. SO, with the function of
deploying a containerized VNF package into the Kubernetes cluster. The VNF
package is delivered as the payload of the HTTP request body in the API
call. The VNF package can be a CSAR or Helm charts.
A CSAR deployment package will include a yaml deployment file and other
artifacts.
This approach works for simple VNFs consisting of single PODs.
For VNFs comprising multiple PODs which depend on each other, a Helm based
approach is used. The VNF package is described as a Helm package consisting
of a set of Helm charts and k8s yamls for each constituent service that is
part of the VNF.
No change is required in the northbound API from MultiCloud for either CSAR
packages, Helm packages or any other package in the future. SO calls this
MultiVIM northbound API and sends the k8s package (e.g. csar or tgz) as
payload. The k8s plugin distinguishes package types based on the suffix and
interacts with the k8s cluster appropriately:
* For CSAR: the k8s yaml file is extracted from the CSAR, and the k8s REST
  API server is called to create k8s resources (e.g. pods), which is
  equivalent to "kubectl create -f <file.yaml>". The TOSCA file in the CSAR
  is expected to include the
  onap.multicloud.container.kubernetes.proxy.nodes.resources_yaml node,
  which is explained below. In other words, the Kubernetes yaml is stored as
  an artifact in the CSAR; it is extracted and then fed to the k8s API.

* For TGZ: call the Tiller API (gRPC-based) and pass through the Helm
  package.
The Kubernetes API Server (RESTful) and Helm Tiller Server (gRPC) URLs are
configured for the k8s plugin when the Kubernetes cluster is registered; the
Helm Tiller URL is kept in cloudExtraInfo.
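Since a CSAR is a zip-based archive, the yaml-extraction step of the CSAR
path can be sketched as follows (the artifact layout and helper name are
illustrative assumptions; real code would use the path stored in the TOSCA
node rather than scanning by suffix):

```python
import io
import zipfile

def extract_yaml_artifacts(csar_bytes):
    # Collect the kubernetes yaml artifacts from the CSAR (zip) archive;
    # each would then be fed to the k8s API, like "kubectl create -f".
    out = {}
    with zipfile.ZipFile(io.BytesIO(csar_bytes)) as z:
        for name in z.namelist():
            if name.endswith((".yaml", ".yml")):
                out[name] = z.read(name).decode("utf-8")
    return out

# minimal in-memory demo with a fake CSAR
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("Artifacts/pods.yaml", "kind: Pod\n")
    z.writestr("TOSCA-Metadata/TOSCA.meta", "CSAR-Version: 1.1\n")
artifacts = extract_yaml_artifacts(buf.getvalue())
```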
With this single API for packages, no extra code in SO is needed when new
package support is added in the future.
swagger.json for kubernetes API definitions
-------------------------------------------

Returns the swagger.json definitions of the k8s API, similar to other
multicloud plugins.
Internal APIs for Implementations
---------------------------------

Some internal APIs may be needed by the implementation of the above
northbound APIs. For example, when implementing the VNF Deployment API
above, we may need internal APIs to assist in calling the Helm Tiller Server
or the Kubernetes API Server, e.g. similar to "kubectl create -f xxx.yaml".

The internal APIs, if needed, will be handled in the implementation, which
is out of scope of this section of the document.
Test plan
=========

In this section the test plan is discussed. In the Beijing cycle, testing is
a minimal or stretch goal because the effort in Beijing is a PoC/experiment
to gain experience. The following classes of tests are planned:

* communication to the backends (K8S API server, Helm Tiller server)
* CSIT as an end-to-end test
Register/Unregister Kubernetes Cluster Instance
===============================================
This is done via A&AI ESR `ESR`_ to follow the existing multicloud approach.
Some attributes, e.g. region id, don't make sense for k8s; in that case a
predefined value, e.g. 'default', is used. The info for basic
authentication, i.e. the pair of (username, password), for the kubernetes
API is registered and stored in A&AI.

NOTE: ESR will call the registry API when registering a new VIM (k8s). We
need to make sure that this plugin provides that API and responds to it.
NOTE: HPA (kubernetes cluster features/capabilities) is out of scope for
Beijing. It is assumed that the K8s cluster instance is already
pre-built/deployed; dynamic instantiation is out of scope (for Beijing).
Attributes for A&AI ESR
-----------------------

This subsection describes how the attributes for VIM registration are
specified. For the actual definitions, please refer to `ESR`_.
Some attributes don't apply to kubernetes; such attributes are left
unspecified if optional, or given pre-defined constants if mandatory.
URI: /api/aai-esr-server/v1/vims
==================  =========  =======  ========================================
Attribute           Qualifier  Content  Description
==================  =========  =======  ========================================
cloudOwner          M          String   Any string as cloud owner
cloudRegionId       M          String   e.g. "kubernetes-<N>" as region doesn't
                                        apply to k8s. The cloud admin assigns a
                                        unique id.
cloudType           M          String   "kubernetes" (new type)
cloudRegionVersion  M          String   kubernetes version: "v1.9", "v1.8" ...
ownerDefinedType    O          String   None (not specified)
cloudZone           O          String   None (not specified) as kubernetes
                                        doesn't have the notion of zone
complexName         O          String   None (not specified) as kubernetes
                                        doesn't have the notion of complex
cloudExtraInfo      O          String   JSON string (dictionary) for necessary
                                        info; for now "{}" (empty dictionary).
                                        For Helm support, the URL of the Tiller
                                        server is stored here.
vimAuthInfos        M          [Obj]    Auth information of the cloud: a list
                                        of authInfoItem, described below
==================  =========  =======  ========================================
There are several constraints/assumptions on cloudOwner and cloudRegionId;
see `cloud-region`_. For k8s, cloudRegionId is (ab)used to specify the k8s
cluster instance. The ONAP admin has to assign a unique cloudRegionId as the
id for the k8s cluster instance.
NOTE: complexName: this will be revised post-Beijing. "complex" is used to
specify the (latitude, longitude) of a data center location for the purpose
of homing optimization. If those values can be obtained somehow, this should
be revisited.
authInfoItem
------------

Basic authentication is used for the k8s API server.
===========  =========  =======  ============================================
Attribute    Qualifier  Content  Description
===========  =========  =======  ============================================
cloudDomain  M          String   "kubernetes" as domain doesn't apply
userName     M          String   User name
password     M          String   Password
authUrl      M          String   URL of the kubernetes API server
sslCacert    O          String   CA file content if ssl is enabled on the
                                 kubernetes API server
sslInsecure  O          Boolean  Whether to verify the VIM's certificate
===========  =========  =======  ============================================
NOTE: Due to some issues `issue23`_, ESR should, if possible, support
authentication by bearer token for the Kubernetes cluster besides basic
authentication. Those extra values will be stored in cloudExtraInfo. This is
a stretch goal.
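Putting the two tables together, a registration body might be assembled as
below. This is a sketch only: the exact ESR JSON schema must be checked
against the `ESR`_ API definition, and the ``helm_tiller_url`` key inside
cloudExtraInfo is an assumption.

```python
import json

def k8s_vim_registration(owner, instance_id, version, username, password,
                         api_url, tiller_url=None):
    # Build an ESR VIM registration body for a k8s cluster instance.
    # cloudRegionId is (ab)used as the unique id of the cluster instance.
    extra = {"helm_tiller_url": tiller_url} if tiller_url else {}
    return {
        "cloudOwner": owner,
        "cloudRegionId": instance_id,
        "cloudType": "kubernetes",
        "cloudRegionVersion": version,
        "cloudExtraInfo": json.dumps(extra),
        "vimAuthInfos": [{
            "cloudDomain": "kubernetes",  # domain doesn't apply to k8s
            "userName": username,
            "password": password,
            "authUrl": api_url,
        }],
    }

example = k8s_vim_registration("CloudOwner", "kubernetes-1", "v1.9",
                               "admin", "pw", "https://10.0.0.1:6443")
```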
On boarding/packaging/instantiation
===================================
We shouldn't change the current existing work flow.
In the short term: use additional node types/capability types etc.
In the longer term: follow the ONAP community direction. At the moment, work
with the TOSCA community to add additional node types to express k8s.

NOTE: this packaging is a temporary workaround until the ONAP modeling and
the multicloud model driven API are available. Post-Beijing-release, the
packaging will be revised to follow the ONAP modeling and the multicloud
model driven API.
Packaging and on-boarding
-------------------------
Reuse CSAR so that the existing work flow doesn't need to change. For
Beijing, CSAR is used with its own TOSCA node definitions. In the longer
term, once the multicloud project has a model driven API, it will be
followed to align with modeling and SO.
TOSCA node definitions
----------------------
Introduce new nodes to wrap the k8s ingredients (k8s yaml, helm etc.). These
TOSCA node definitions are a short-term workaround to re-use the existing
components/workflow until the model driven API is defined/implemented.
For Beijing, humans will write this TOSCA by hand for the PoC. Post-Beijing,
the packaging needs to be revised to align with modeling and SO. SDC and
VNF-SDK also need to be addressed for creation.
* onap.multicloud.nodes.kubernetes.proxy

  onap.multicloud.container.kubernetes.proxy.nodes.resources_yaml:
    Paths to kubernetes yaml file
For VNFs that are packaged as a Helm package, there would be only one TOSCA
node in the TOSCA template, which would have a reference to the Helm
package.
* onap.multicloud.nodes.kubernetes.helm

  onap.multicloud.container.kubernetes.helm.nodes.helm_package:
    Paths to Helm package file
These TOSCA node definitions wrap the kubernetes yaml file or helm chart.
cloudify.nodes.Kubernetes isn't reused in order to avoid definition
conflicts.
The SO ARIA adaptor can be used (with a tweak so that SO talks to the
multicloud k8s plugin instead of ARIA) for instantiation.
NOTE: This is a temporary workaround for the Beijing release. Post-Beijing,
this will be revised.
With the Amsterdam release, SO has an ARIA adaptor which talks to the ARIA
orchestrator.
https://wiki.onap.org/download/attachments/16002054/Model%20Driven%20Service%20Orchestration%20-%20SO%20State%20of%20the%20Union.pptx
The work flow looks as follows::

   user request to instantiate VNF
                  |
   +--------------|-------+
   | SO           |       |
   |              V       |
   | +------------------+ |
   | | SO: ARIA adaptor | |
   | +------------+-----+ |
   +--------------|-------+
                  |
   +--------------|---------+
   | ARIA         |         |
   |              V         |
   | +--------------------+ |
   | | multicloud plugin  | | template as TOSCA artifact is
   | +------------+-------+ | extracted and requests to
   +--------------|---------+ multicloud are built
                  |
   +--------------|-------+
   | multicloud   |       |
   |              V       |
   | +------------------+ |
   | | openstack plugin | |
   | +------------+-----+ |
   +--------------|-------+
                  |
                  V
   +----------------------+
   | openstack API server |
   +----------------------+
This will be tweaked by configuration so that SO talks to the multicloud k8s
plugin::

   user request to instantiate VNF
                  |
   +--------------|-------+
   | SO           |       |
   |              V       |
   | +------------------+ |
   | | SO: ARIA adaptor | | configuration is tweaked to call
   | +------------+-----+ | the multicloud k8s API
   +--------------|-------+
                  |
   +--------------|-------+
   | multicloud   |       |
   |              V       |
   | +------------------+ | handle CSAR or TGZ (Helm Charts) file
   | | k8s plugin       | | e.g. extract k8s yaml from CSAR, and
   | +------------+-----+ | pass through requests to k8s/Helm API
   +--------------|-------+
                  |
                  V
   +----------------------+
   | k8s/Helm API server  |
   +----------------------+
NOTE: In this work flow, only the northbound deployment API endpoint is
needed for VNF deployment. The LCM APIs are only needed for lifecycle
management. Other internal APIs, e.g. a k8s YAML API, may be needed only for
internal implementation.
The SO ARIA multicloud plugin needs to be tweaked to call the k8s plugin.
The strategy is to keep the existing design of ONAP and to follow the ONAP
community direction. The key points of the interaction between SO and
multicloud are:

* SO decomposes the VNFD/NSD into single atomic resources
  (e.g. a VNF-C corresponding to a single VM or a single container/pod)
  and sends requests to create each resource via the deployment API.
* multicloud accepts each request for a single atomic resource and creates a
  single resource (e.g. a VM or a container/pod).
* multicloud doesn't do resource decomposition; the decomposition is the
  task of SO.
API work flow example and k8s API
---------------------------------
* Register the k8s cluster to A&AI ESR; a <cloud-id> is obtained.
* An ONAP northbound component generates a TOSCA template targeted for k8s.
* SO calls the Multicloud deployment API and passes the entire blueprint
  (as CSAR or TGZ) to the k8s plugin, e.g.
  POST https://msb.onap.org:80/api/multicloud/v0/<cloud-id>/package
* The k8s plugin handles the CSAR or TGZ accordingly and talks to the k8s
  API Server or Helm Tiller Server to deploy the containerized VNF, e.g.
  POST <k8s api server>/api/v1/namespaces/{namespace}/pods
  to create pods; a <pod id> is obtained.
* To de-instantiate the pod:
  DELETE https://msb.onap.org:80/api/multicloud/v0/<cloud-id>/proxy/api/v1/namespaces/{namespace}/pods/<pod id>
* To execute a script inside a pod, the following URL can be used:
  POST /api/v1/namespaces/{namespace}/pods/{name}/exec
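The sequence above can be captured as ordered (method, URL) pairs. This is a
hypothetical helper for illustration; the cloud-id ``CloudOwner_kubernetes-1``
and pod id ``pod-0`` are placeholders:

```python
def vnf_workflow_requests(cloud_id, namespace, pod_id):
    # Ordered requests of the example work flow: deploy the package,
    # optionally exec a script in the pod, then de-instantiate the pod.
    base = "https://msb.onap.org:80/api/multicloud/v0/%s" % cloud_id
    pod = "%s/proxy/api/v1/namespaces/%s/pods/%s" % (base, namespace, pod_id)
    return [
        ("POST", "%s/package" % base),  # deploy CSAR/TGZ payload
        ("POST", "%s/exec" % pod),      # run a script inside the pod
        ("DELETE", pod),                # delete the pod
    ]

steps = vnf_workflow_requests("CloudOwner_kubernetes-1", "default", "pod-0")
```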
Affected Projects and impact
============================
A new type to represent k8s/containers for cloud infrastructure will be
introduced as a workaround. Post-Beijing, an official value will be
discussed for inclusion.
Policy matching is done by OOF. For Beijing, enhancement to policy is a
stretch goal. Decomposing the service design (NSD, VNFD) from the VNF
package is done by SO.
The ARIA adaptor is re-used with a config tweak to avoid modification.
A new k8s plugin will be introduced. The details are discussed in this
document.
Kubernetes cluster authentication
=================================

For details of k8s authentication, please refer to
https://kubernetes.io/docs/admin/authentication
Because Kubernetes cluster installation is not covered here, we treat all
users as normal users when authenticating to the Kubernetes VIM. There are
several ways to authenticate to a Kubernetes cluster. For the Beijing
release, basic authentication will be supported; the username and password
are stored in ESR.
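A minimal sketch of building a basic-auth request to the k8s API server from
the ESR attributes (the helper name is an assumption; sslCacert and
sslInsecure map to the SSL context as commented):

```python
import base64
import ssl
import urllib.request

def k8s_basic_auth_request(api_url, username, password,
                           cacert_file=None, insecure=False):
    # Build an authenticated request plus SSL context for the k8s API
    # server, using the (username, password) pair stored in ESR.
    cred = base64.b64encode(("%s:%s" % (username, password)).encode()).decode()
    req = urllib.request.Request(api_url)
    req.add_header("Authorization", "Basic %s" % cred)
    ctx = ssl.create_default_context(cafile=cacert_file)  # sslCacert
    if insecure:  # sslInsecure=True: skip certificate verification
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return req, ctx

req, ctx = k8s_basic_auth_request("https://10.0.0.1:6443/api",
                                  "admin", "secret")
```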
Past presentations/proposals
----------------------------

.. _Munish proposal: https://schd.ws/hosted_files/onapbeijing2017/dd/Management%20of%20Cloud%20Native%20VNFs%20with%20ONAP%20PA5.pptx
.. _Isaku proposal: https://schd.ws/hosted_files/onapbeijing2017/9d/onap-kubernetes-arch-design-proposal.pdf
.. _Bin Hu proposal: https://wiki.onap.org/download/attachments/16007890/ONAP-SantaClara-BinHu-final.pdf?version=1&modificationDate=1513558701000&api=v2
.. _ESR: https://wiki.onap.org/pages/viewpage.action?pageId=11930343#A&AI:ExternalSystemOperationAPIDefinition-VIM
.. _AAI: https://wiki.onap.org/display/DW/Active+and+Available+Inventory+Project
.. _OOM: https://wiki.onap.org/display/DW/ONAP+Operations+Manager+Project
.. _ONAPREST: https://wiki.onap.org/display/DW/RESTful+API+Design+Specification
.. _kubernetes-python-client: https://github.com/kubernetes-client/python
.. _issue23: https://github.com/kubernetes/kubeadm/issues/23
.. _cloud-region: https://wiki.onap.org/download/attachments/25429038/HowToAddNewCloudRegionAndThoughts.pdf
* Isaku Yamahata <isaku.yamahata@intel.com> <isaku.yamahata@gmail.com>
* Bin Hu <bh526r@att.com>
* Munish Agarwal <munish.agarwal@ericsson.com>
* Phuoc Hoang <phuoc.hc@dcn.ssu.ac.kr>
This section is informative. It is out of Beijing scope and will be revised
after Beijing. The purpose of this appendix is to help readers understand
this proposal by giving future direction and considerations. At some point,
this appendix will be separated out into its own document for the long-term
design.
Model driven API and kubernetes model
-------------------------------------

Currently the discussion on the model driven API is ongoing. Once it is
usable, it will be followed and the above experimental API/code will be
revised.
The eventual work flow looks as follows::

   user request to instantiate VNF/NS
                  |
                  V
   +----------------------+         +-----+
   | SO                   |-------->| OOF | <--- policy to use
   |                      |<--------|     |      CoE instead of VM
   |                      |         +-----+      from A&AI
   | +------------------+ |
   | | SO: adaptor for  | | SO decomposes VNFD/NSD into atomic
   | | multicloud model | | resources (VDUs for VNF-C), asking OOF
   | | driven API       | | for placement. Then SO builds up
   | +------------+-----+ | requests to multicloud for instantiation.
   +--------------|-------+
                  |
                  V
   +--------------|-------+
   | multicloud   |       | So multicloud accepts a request for a single
   |              V       | resource of a VDU which corresponds to a
   | +------------------+ | VNF-C, which is mapped to a single
   | | model driven API | | container/pod. multicloud doesn't
   | +------------+-----+ | decompose a VDU into multiple containers.
   |              |       | CoE doesn't change such a work flow.
   |              V       |
   | +------------------+ |
   | | k8s plugin       | | convert the request (VDU of VNF-C) into
   | +------------+-----+ | kubernetes API calls
   +--------------|-------+
                  |
                  V
   +----------------------+
   |    k8s API server    |
   +----------------------+
Modeling/TOSCA to kubernetes conversion
---------------------------------------

In this section, the conversion from TOSCA to kubernetes is discussed so
that the reader can get an idea of the future direction.
Once the ONAP information/data model is usable, a similar conversion is
possible. The following are only examples; more node definitions would be
considered.
===========================  ================================
TOSCA node definition        k8s resource
===========================  ================================
tosca.nodes.Compute          (bare) single pod;
                             vcpu, memory -> k8s resources
tosca.nodes.nfv.VDU.Compute  (bare) single pod
===========================  ================================
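Such a mapping can be sketched in a few lines. This is a hypothetical
illustration of the table above, not translator code: the property names
follow TOSCA host capability naming (num_cpus, mem_size), and the image
default is an assumption.

```python
def tosca_compute_to_pod(node_name, properties):
    # Map a tosca.nodes.Compute node to a bare Kubernetes pod manifest
    # (dict); vcpu and memory become k8s resource requests/limits.
    resources = {
        "cpu": properties.get("num_cpus", 1),
        "memory": properties.get("mem_size", "512 MB"),
    }
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": node_name, "labels": {"name": node_name}},
        "spec": {
            "containers": [{
                "name": node_name,
                "image": properties.get("image", "ubuntu:16.04"),
                "resources": {"requests": dict(resources),
                              "limits": dict(resources)},
            }],
        },
    }

pod = tosca_compute_to_pod("my_server", {"num_cpus": 2, "mem_size": "512 MB"})
```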
This is just to show the idea. This example is from a very early phase and
contains hard-coded values.
Input TOSCA (excerpt)::

    type: tosca.nodes.Compute
    # Host container properties
    ...
    # Guest Operating System properties
    ...
    # host Operating System image properties
    ...

Generated kubernetes resource (excerpt)::

    $ PYTHONPATH=. python -m tosca_translator.shell -d --debug --template-file tosca_translator/tests/data/tosca_helloworld.yaml
    api_version: apps/v1beta1
    ...
    labels: {name: my_server}
    ...
    labels: {name: my_server}
    ...
    limits: {cpu: 2, ephemeral-storage: 10 GB, memory: 512 MB}
    requests: {cpu: 2, ephemeral-storage: 10 GB, memory: 512 MB}