The Demo repository contains the HEAT templates and scripts for the instantiation of the ONAP platform and use cases. The repository includes:

- README.md: this file.
- LICENSE.TXT: the license text.
- pom.xml: POM file used to build the software hosted in this repository.
- version.properties: current version number of the Demo repository. Format: MAJOR.MINOR.PATCH (e.g. 1.1.0)
- The "boot" directory contains the scripts that install and configure ONAP:
  - install.sh: sets up the host VM for specific components. This script runs only once, soon after the VM is created.
  - vm\_init.sh: contains component-specific configuration; downloads and runs docker containers. For some components, this script may either call a component-specific script (cloned from the Gerrit repository) or call docker-compose.
  - serv.sh: installed in /etc/init.d; calls vm\_init.sh at each VM (re)boot.
  - configuration files for the Bind DNS Server installed with ONAP. Currently, both the simpledemo.openecomp.org and simpledemo.onap.org domains are supported.
  - sdc\_ext\_volume_partitions.txt: file that contains external volume partitions for SDC.
- The "docker\_update\_scripts" directory contains scripts that update all the docker containers of an ONAP instance.
- The "heat" directory contains the following sub-directories:
  - ONAP: contains the HEAT files for the installation of the ONAP platform. NOTE: onap\_openstack.yaml AND onap\_openstack.env ARE THE HEAT TEMPLATE AND ENVIRONMENT FILE CURRENTLY SUPPORTED. onap\_openstack\_float.yaml, onap\_openstack\_float.env, onap\_openstack\_nofloat.yaml, onap\_openstack\_nofloat.env AND onap\_rackspace.yaml, onap\_rackspace.env AREN'T UPDATED AND THEIR USAGE IS DEPRECATED.
  - vCPE: contains sub-directories with HEAT templates for the installation of the vCPE infrastructure (Radius Server, DHCP, DNS, Web Server), vBNG, vBRG Emulator, vGMUX, and vGW.
  - vFW: contains the HEAT template for the instantiation of the vFirewall VNF (base\_vfw.yaml) and the environment file (base\_vfw.env). For the Amsterdam release, this template is used for testing and demonstrating VNF instantiation only (no closed-loop).
  - vFWCL: contains two sub-directories, one that hosts the HEAT template for the vFirewall and vSink (vFWSNK/base\_vfw.yaml), and one that hosts the HEAT template for the vPacketGenerator (vPKG/base\_vpkg.yaml). For the Amsterdam release, these templates are used for testing and demonstrating VNF instantiation and closed-loop.
  - vLB: contains the HEAT template for the instantiation of the vPacketGenerator/vLoadBalancer/vDNS VNF (base\_vlb.yaml) and the environment file (base\_vlb.env). The directory also contains the HEAT template for the DNS scaling-up scenario (dnsscaling.yaml) with its environment file (dnsscaling.env).
  - vVG: contains the HEAT template for the instantiation of a volume group (base\_vvg.yaml and base\_vvg.env).
- The "scripts" directory contains the deploy.sh script that uploads software artifacts to the Nexus repository during the build process.
- The "tosca" directory contains an example of the TOSCA model of the vCPE infrastructure.
- The "tutorials" directory contains tutorials for Clearwater\_IMS and for creating a Netconf mount point in APPC. The "VoLTE" sub-directory is currently not used.
- The "vagrant" directory contains the scripts that install ONAP using Vagrant.
- The "vnfs" directory contains the following sub-directories:
  - honeycomb_plugin: Honeycomb plugin that allows ONAP to change VNF configuration via the RESTCONF or NETCONF protocols.
  - vCPE: contains sub-directories with the scripts that install all the components of the vCPE use case.
  - VES: source code of the ONAP Vendor Event Listener (VES) Library. The VES library used here has been cloned from the GitHub repository at https://github.com/att/evel-library on February 1, 2017. (DEPRECATED SINCE AMSTERDAM RELEASE)
  - VESreporting_vFW: VES client for the vFirewall demo application. (DEPRECATED SINCE AMSTERDAM RELEASE)
  - VESreporting_vLB: VES client for the vLoadBalancer/vDNS demo application. (DEPRECATED SINCE AMSTERDAM RELEASE)
  - VES5.0: source code of the ONAP Vendor Event Listener (VES) Library, version 5.0. (SUPPORTED FOR AMSTERDAM AND BEIJING RELEASES)
  - VESreporting_vFW5.0: VES v5.0 client for the vFirewall demo application. (SUPPORTED FOR AMSTERDAM AND BEIJING RELEASES)
  - VESreporting_vLB5.0: VES v5.0 client for the vLoadBalancer/vDNS demo application. (SUPPORTED FOR AMSTERDAM AND BEIJING RELEASES)
  - vFW: scripts that download, install, and run packages for the vFirewall use case.
  - vLB: scripts that download, install, and run packages for the vLoadBalancer/vDNS use case.
  - vLBMS: scripts that download, install, and run packages for the vLoadBalancer/vDNS VNF used in the Manual Scale Out use case.
ONAP Installation in OpenStack Clouds via HEAT Template

The ONAP HEAT template spins up the entire ONAP platform in OpenStack-based clouds. The template, onap\_openstack.yaml, comes with an environment file, onap\_openstack.env, in which all the default values are defined.

NOTE: onap\_openstack.yaml AND onap\_openstack.env ARE THE HEAT TEMPLATE AND ENVIRONMENT FILE CURRENTLY SUPPORTED. onap\_openstack\_float.yaml, onap\_openstack\_float.env, onap\_openstack\_nofloat.yaml, onap\_openstack\_nofloat.env AND onap\_rackspace.yaml, onap\_rackspace.env AREN'T UPDATED AND THEIR USAGE IS DEPRECATED. As such, the following description refers to onap\_openstack.yaml and onap\_openstack.env.
The HEAT template is composed of two sections: (i) parameters and (ii) resources.

- The "parameters" section contains the declarations and descriptions of the parameters that will be used to spin up ONAP, such as the public network identifier, URLs of code and artifacts repositories, etc. The default values of these parameters can be found in the environment file.

- The "resources" section contains the definitions of:
  - ONAP Private Management Network, which is used by ONAP components to communicate with each other and with VNFs
  - ONAP Virtual Machines (VMs)
  - Public/private key pair used to access ONAP VMs
  - Virtual interfaces towards the ONAP Private Management Network

Each VM specification includes the Operating System image name, VM size (i.e. flavor), VM name, etc. Each VM has a virtual network interface with a private IP address in the ONAP Private Management Network and a floating IP that OpenStack assigns based on availability. Furthermore, each VM runs an install.sh script that downloads and installs software dependencies (e.g. Java JDK, gcc, make, Python, ...). Finally, install.sh calls vm\_init.sh, which downloads docker containers from remote repositories and runs them.
When the HEAT template is executed, the OpenStack HEAT engine creates the resources defined in the HEAT template, based on the parameter values defined in the environment file.

Before running HEAT, it is necessary to customize the environment file. Indeed, some parameters, namely public\_net\_id, pub\_key, openstack\_tenant\_id, openstack\_username, openstack\_api\_key, horizon\_url, and keystone\_url, need to be set depending on the user's environment:

    public_net_id: PUT YOUR NETWORK ID/NAME HERE
    pub_key: PUT YOUR PUBLIC KEY HERE
    openstack_tenant_id: PUT YOUR OPENSTACK PROJECT ID HERE
    openstack_username: PUT YOUR OPENSTACK USERNAME HERE
    openstack_api_key: PUT YOUR OPENSTACK PASSWORD HERE
    horizon_url: PUT THE HORIZON URL HERE
    keystone_url: PUT THE KEYSTONE URL HERE (do not include version number)
The openstack\_region parameter is set to RegionOne (the OpenStack default). If your OpenStack deployment uses a different region, please modify this parameter.

public\_net\_id is the unique identifier (UUID) or name of the public network of the cloud provider. To get the public\_net\_id, use the following OpenStack CLI command ("ext" is the name of the external network; replace it with the name of the external network of your installation):

    openstack network list | grep ext | awk '{print $2}'
pub\_key is the string value of the public key that will be installed in each ONAP VM. To create a public/private key pair in Linux, execute the following instruction:

    user@ubuntu:~$ ssh-keygen -t rsa

The key-pair creation produces output like the following:

    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/user/.ssh/id_rsa):
    Created directory '/home/user/.ssh'.
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in /home/user/.ssh/id_rsa.
    Your public key has been saved in /home/user/.ssh/id_rsa.pub.
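
The string to paste into pub\_key is the content of the generated .pub file, which can be printed with:

    cat ~/.ssh/id_rsa.pub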
openstack\_username, openstack\_tenant\_id, and openstack\_api\_key (the user's password) are the credentials used to access the OpenStack-based cloud.
Some global parameters used for all components are also required:

    ubuntu_1404_image: PUT THE UBUNTU 14.04 IMAGE NAME HERE
    ubuntu_1604_image: PUT THE UBUNTU 16.04 IMAGE NAME HERE
    flavor_small: PUT THE SMALL FLAVOR NAME HERE
    flavor_medium: PUT THE MEDIUM FLAVOR NAME HERE
    flavor_large: PUT THE LARGE FLAVOR NAME HERE
    flavor_xlarge: PUT THE XLARGE FLAVOR NAME HERE
To get the names of the images available in your OpenStack environment, use the following OpenStack CLI command:

    openstack image list | grep 'ubuntu'

To get the flavor names used in your OpenStack environment, use the following OpenStack CLI command:

    openstack flavor list
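
For example, the corresponding entries in the environment file might then look like the following (image and flavor names are deployment-specific; the values below are purely illustrative):

    ubuntu_1404_image: ubuntu-14-04-cloud-amd64
    ubuntu_1604_image: ubuntu-16-04-cloud-amd64
    flavor_small: m1.small
    flavor_medium: m1.medium
    flavor_large: m1.large
    flavor_xlarge: m1.xlarge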
Some network parameters must be configured:

    dns_list: PUT THE ADDRESS OF THE EXTERNAL DNS HERE (e.g. a comma-separated list of IP addresses in your /etc/resolv.conf in UNIX-based Operating Systems)
    external_dns: PUT THE FIRST ADDRESS OF THE EXTERNAL DNS LIST HERE (THIS WILL BE DEPRECATED SOON)
    dns_forwarder: PUT THE IP OF THE DNS FORWARDER FOR THE ONAP DEPLOYMENT'S OWN DNS SERVER
    oam_network_cidr: 10.0.0.0/16
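
As an illustration, a filled-in sketch of these parameters (the DNS addresses below are placeholders; use the resolvers and forwarder of your own environment):

    dns_list: 8.8.8.8
    external_dns: 8.8.8.8
    dns_forwarder: 10.12.25.5
    oam_network_cidr: 10.0.0.0/16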
ONAP installs a DNS server used to resolve IP addresses in the ONAP OAM private network. Unlike the Amsterdam Release, ONAP Beijing does not require OpenStack Designate DNS support for the DCAE platform. For the Beijing Release, in fact, all the DCAE containers are installed in a single VM that has access to the OAM network. Originally, dns\_list and external\_dns were both used to circumvent some limitations of older OpenStack versions. In future releases, the DNS settings and parameters in HEAT will be consolidated.

Due to the new DCAE installation methodology, the following parameters are deprecated and no longer needed for DCAE instantiation:

    dcae_keystone_url: PUT THE MULTIVIM PROVIDED KEYSTONE API URL HERE
    dcae_centos_7_image: PUT THE CENTOS7 VM IMAGE NAME HERE FOR DCAE LAUNCHED CENTOS7 VM
    dcae_domain: PUT THE NAME OF DOMAIN THAT DCAE VMS REGISTER UNDER
    dcae_public_key: PUT THE PUBLIC KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS
    dcae_private_key: PUT THE SECRET KEY OF A KEYPAIR HERE TO BE USED BETWEEN DCAE LAUNCHED VMS
    dnsaas_config_enabled: PUT WHETHER TO USE PROXIED DESIGNATE
    dnsaas_region: PUT THE DESIGNATE PROVIDING OPENSTACK'S REGION HERE
    dnsaas_keystone_url: PUT THE DESIGNATE PROVIDING OPENSTACK'S KEYSTONE URL HERE
    dnsaas_tenant_name: PUT THE TENANT NAME IN THE DESIGNATE PROVIDING OPENSTACK HERE (FOR R1 USE THE SAME AS openstack_tenant_name)
    dnsaas_username: PUT THE DESIGNATE PROVIDING OPENSTACK'S USERNAME HERE
    dnsaas_password: PUT THE DESIGNATE PROVIDING OPENSTACK'S PASSWORD HERE
For the Beijing Release, DCAE requires a new parameter called dcae\_deployment\_profile. It accepts one of the following values:
- R2MVP: installs only the basic DCAE functionalities that support the vFW/vDNS, vCPE, and vVoLTE use cases;
- R2: full DCAE installation;
- R2PLUS: deploys the DCAE R2 stretch goal service components.

The recommended DCAE profile for the Beijing Release is R2. For more information about DCAE deployment with HEAT, please refer to the ONAP documentation: https://onap.readthedocs.io/en/latest/submodules/dcaegen2.git/docs/sections/installation_heat.html
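
In the environment file, this is a single entry, for example:

    dcae_deployment_profile: R2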
The ONAP platform can be instantiated via Horizon (the OpenStack dashboard) or the command line.

Instantiation via Horizon:

- Log in to the Horizon URL with your personal credentials
- Click "Stacks" from the "Orchestration" menu
- Click "Launch Stack"
- Paste or manually upload the HEAT template file (onap\_openstack.yaml) in the "Template Source" form
- Paste or manually upload the HEAT environment file (onap\_openstack.env) in the "Environment Source" form
- Specify a name in the "Stack Name" form
- Provide the password in the "Password" form
Instantiation via the command line:

- Install the HEAT client on your machine, e.g. in Ubuntu (ref. http://docs.openstack.org/user-guide/common/cli-install-openstack-command-line-clients.html):

      apt-get install python-dev python-pip
      pip install python-heatclient        # Install the heat client
      pip install python-openstackclient   # Install the OpenStack client to support multiple services

- Create a file (named e.g. ~/openstack/openrc) that sets all the environment variables required to access the OpenStack platform:

      export OS_AUTH_URL=INSERT THE AUTH URL HERE
      export OS_USERNAME=INSERT YOUR USERNAME HERE
      export OS_TENANT_ID=INSERT YOUR TENANT ID HERE
      export OS_REGION_NAME=INSERT THE REGION HERE
      export OS_PASSWORD=INSERT YOUR PASSWORD HERE

  Alternatively, you can download the OpenStack RC file from the dashboard: Compute -> Access & Security -> API Access -> Download RC File

- Source the script or RC file from the command line:

      source ~/openstack/openrc

- In order to install the ONAP platform, type:

      openstack stack create -t PATH_TO_HEAT_TEMPLATE(YAML FILE) -e PATH_TO_ENV_FILE STACK_NAME   # New OpenStack client, OR
      heat stack-create STACK_NAME -f PATH_TO_HEAT_TEMPLATE(YAML FILE) -e PATH_TO_ENV_FILE        # Old HEAT client
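
For example, assuming the template and environment files are in the current directory and "onap" is used as a hypothetical stack name:

    openstack stack create -t onap_openstack.yaml -e onap_openstack.env onap
    openstack stack list          # check that the stack reaches CREATE_COMPLETE
    openstack stack show onap     # inspect stack details and outputs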
vFirewall Use Case

The use case is composed of three virtual functions (VFs): packet generator, firewall, and traffic sink. These VFs run in three separate VMs. The packet generator sends packets to the traffic sink through the firewall. The firewall reports the volume of traffic passing through to the ONAP DCAE collector. To check the traffic volume that lands at the sink VM, you can access the link http://sink\_ip\_address:667 through your browser and enable automatic page refresh by clicking the "Off" button. You can see the traffic volume in the charts.

The packet generator includes a script that periodically generates different volumes of traffic. The closed-loop policy has been configured to re-adjust the traffic volume when the high-water or low-water marks are crossed.

__Closed-Loop for vFirewall demo:__

Through the ONAP Portal's Policy Portal, we can find the configuration and operation policies that are currently enabled for the vFirewall use case:

- The configuration policy sets the thresholds for generating an onset event from DCAE to the Policy engine. Currently, the high-water mark is set to 700 packets while the low-water mark is set to 300 packets. The measurement interval is set to 10 seconds.
- When a threshold is crossed (i.e. the number of received packets is below 300 packets or above 700 packets per 10 seconds), the Policy engine executes the operational policy to request APPC to adjust the traffic volume to 500 packets per 10 seconds.
- APPC sends a request to the packet generator to adjust the traffic volume.
- Changes to the traffic volume can be observed through the link http://sink\_ip\_address:667.
__Adjust packet generator:__

The packet generator contains 10 streams: fw\_udp1, fw\_udp2, ..., fw\_udp10. Each stream generates 100 packets per 10 seconds. A script in /opt/run\_traffic\_fw\_demo.sh on the packet generator VM starts automatically and alternates high traffic (i.e. 10 active streams at the same time) and low traffic (1 active stream) every 5 minutes.

To adjust the traffic volume produced by the packet generator, run the following command in a shell, replacing PacketGen_IP in the HTTP argument with localhost (if you run it in the packet generator VM) or with the packet generator IP address. To enable a stream, include *{"id":"fw_udp1", "is-enabled":"true"}* in the *pg-stream* bracket:

    curl -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"pg-streams":{"pg-stream": [{"id":"fw_udp1", "is-enabled":"true"},{"id":"fw_udp2", "is-enabled":"true"},{"id":"fw_udp3", "is-enabled":"true"},{"id":"fw_udp4", "is-enabled":"true"},{"id":"fw_udp5", "is-enabled":"true"}]}}' "http://PacketGen_IP:8183/restconf/config/sample-plugin:sample-plugin/pg-streams"

The command above enables 5 streams.
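
Since a RESTCONF PUT replaces the target resource, the list sent in the request body becomes the complete set of enabled streams. For instance, to fall back to low traffic with a single active stream:

    curl -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"pg-streams":{"pg-stream": [{"id":"fw_udp1", "is-enabled":"true"}]}}' "http://PacketGen_IP:8183/restconf/config/sample-plugin:sample-plugin/pg-streams"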
vLoadBalancer/vDNS Use Case

The use case is composed of three VFs: packet generator, load balancer, and DNS server. These VFs run in three separate VMs. The packet generator issues DNS lookup queries that reach the DNS server via the load balancer. DNS replies reach the packet generator via the load balancer as well. The load balancer reports the average amount of traffic per DNS server over a time interval to the DCAE collector. When the average amount of traffic per DNS server crosses a predefined threshold, the closed-loop is triggered and a new DNS server is instantiated.

To test the application, you can run a DNS query from the packet generator VM:

    dig @vLoadBalancer_IP host1.dnsdemo.onap.org

The output below means that the load balancer has been set up correctly, has forwarded the DNS queries to a DNS instance, and the packet generator has received the DNS reply message.
    ; <<>> DiG 9.10.3-P4-Ubuntu <<>> @192.168.9.111 host1.dnsdemo.onap.org
    ;; global options: +cmd
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31892
    ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 2
    ;; WARNING: recursion requested but not available

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 4096

    ;; QUESTION SECTION:
    ;host1.dnsdemo.onap.org.        IN      A

    ;; ANSWER SECTION:
    host1.dnsdemo.onap.org. 604800  IN      A       10.0.100.101

    ;; AUTHORITY SECTION:
    dnsdemo.onap.org.       604800  IN      NS      dnsdemo.onap.org.

    ;; ADDITIONAL SECTION:
    dnsdemo.onap.org.       604800  IN      A       10.0.100.100

    ;; Query time: 0 msec
    ;; SERVER: 192.168.9.111#53(192.168.9.111)
    ;; WHEN: Fri Nov 10 17:39:12 UTC 2017
__Closed-Loop for vLoadBalancer/vDNS demo:__

Through the Policy Portal (accessible via the ONAP Portal), we can find the configuration and operation policies that are currently enabled for the vLoadBalancer/vDNS application:

+ The configuration policy sets the thresholds for generating an onset event from DCAE to the Policy engine. Currently, the threshold is set to 200 packets, while the measurement interval is set to 10 seconds.
+ Once the threshold is crossed (i.e. the number of received packets is above 200 packets per 10 seconds), the Policy engine executes the operational policy. The Policy engine queries A&AI to fetch the VNF UUID and sends a request to SO to spin up a new DNS instance for the VNF identified by that UUID.
+ SO spins up a new DNS instance.

To change the volume of queries generated by the packet generator, run the following command in a shell, replacing PacketGen_IP in the HTTP argument with localhost (if you run it in the packet generator VM) or with the packet generator IP address:

    curl -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"pg-streams":{"pg-stream": [{"id":"dns1", "is-enabled":"true"}]}}' "http://PacketGen_IP:8183/restconf/config/sample-plugin:sample-plugin/pg-streams"

+ *{"id":"dns1", "is-enabled":"true"}* shows that the stream *dns1* is enabled. The packet generator sends requests at a rate of 100 packets per 10 seconds.
+ To increase the amount of traffic, you can enable more streams. The packet generator has 10 streams, *dns1*, *dns2*, *dns3* to *dns10*. Each of them generates 100 packets per 10 seconds. To enable more streams, add *{"id":"dnsX", "is-enabled":"true"}* to the pg-stream bracket of the curl command, where *X* is the stream ID.

For example, if you want to enable 3 streams, the curl command will be:

    curl -X PUT -H "Authorization: Basic YWRtaW46YWRtaW4=" -H "Content-Type: application/json" -H "Cache-Control: no-cache" -d '{"pg-streams":{"pg-stream": [{"id":"dns1", "is-enabled":"true"}, {"id":"dns2", "is-enabled":"true"},{"id":"dns3", "is-enabled":"true"}]}}' "http://PacketGen_IP:8183/restconf/config/sample-plugin:sample-plugin/pg-streams"

When the VNF starts, the packet generator is automatically configured to run 5 streams.
vVolumeGroup Use Case

The vVG directory contains the HEAT template (base\_vvg.yaml) and environment file (base\_vvg.env) used to spin up a volume group in OpenStack and attach it to an existing ONAP instance.

The HEAT environment file contains two parameters:

    volume_size: 100
    nova_instance: 1234456

volume\_size is the size (in gigabytes) of the volume group, while nova\_instance is the name or UUID of the VM to which the volume group will be attached. Both parameters should be set appropriately for the target environment.
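
To find the UUID of the target VM, you can list the servers visible to your tenant with the OpenStack CLI:

    openstack server list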
VNF component Auto Scale Out with Manual Trigger use case via VID and APPC

The Auto Scale Out with Manual Trigger use case shows how users/network operators can add capacity to an existing VNF. The ONAP Beijing release supports scale out of VNF components in two ways, so as to demonstrate the flexibility of the ONAP platform and the use case itself. One way involves triggering the scale out operations via the Virtual Infrastructure Deployment (VID) GUI and uses the Application Controller (APPC) as a generic VNF Manager. This is demonstrated against the vLB/vDNS VNFs. The second way involves triggering scale out operations from the Usecase UI (UUI) and uses the Virtual Function Controller (VF-C) as a generic VNF Manager. This is demonstrated against VoLTE VNFs (MME, SAE-GW, CSCF, TAS). Both scale out blueprints use the Service Orchestrator (SO) as the workflow execution engine.

This repository hosts the source code and scripts that implement the vLB/vDNS VNFs for the scale out blueprint that uses VID, SO, and APPC. At a high level, the use case works as follows:
- The user/network operator triggers the scale out operation from the VID portal. VID translates the operation into a call to SO;
- SO instantiates a new VNF component and sends APPC a request for reconfiguring the VNF;
- APPC reconfigures the VNF, without interrupting the service.

For this use case, we created a modified version of the vLB/vDNS, contained in vnfs/vLBMS. Unlike the vLB/vDNS VNF described before, in this modified version the vLB and the vDNS do not run any automated discovery service. Instead, the vLB exposes a Northbound API that allows an upstream system (e.g. ONAP) to change the internal configuration by updating the list of active vDNS instances. The Northbound API framework has been built using FD.io-based Honeycomb 1707, and supports both the RESTCONF and NETCONF protocols. Below is an example of vDNS instances contained in the vLB, in JSON format:
    "vlb-business-vnf-onap-plugin": {
        "vdns-instances": {
            "vdns-instance": [
                {
                    "ip-addr": "192.168.10.211",
                    "oam-ip-addr": "10.0.150.2",
                    "enabled": true
                },
                {
                    "ip-addr": "192.168.10.212",
                    "oam-ip-addr": "10.0.150.4",
                    "enabled": true
                }
            ]
        }
    }
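
To read the current vDNS list from the vLB at run time, you can query its RESTCONF interface; a minimal sketch, assuming the plugin's configuration is exposed under the module name shown above and on the same Honeycomb port used by the packet generator API:

    curl -u admin:admin "http://vLB_IP:8183/restconf/config/vlb-business-vnf-onap-plugin:vlb-business-vnf-onap-plugin"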
According to the flow described above, during an execution of the use case against the vLB/vDNS VNF:
- The user/network operator triggers the instantiation of a new vDNS from the VID GUI;
- VID sends the request to SO, which spins up a new vDNS and sends APPC the details about the new vDNS (i.e. ip-addr, oam-ip-addr, enabled);
- APPC runs a NETCONF operation against the vLB to update the list of vDNS instances with the vDNS just created.

Although the VNF supports the update of multiple vDNS records in the same call, for the Beijing release APPC updates a single vDNS instance at a time.

The use case includes design-time and run-time operations. For the Beijing release, APPC has a new component called Controller Design Tool (CDT), a design-time tool that allows users to create and on-board VNF templates into APPC. The template describes which control operations can be executed against the VNF (e.g. scale out, health check, modify configuration, etc.), the protocols that the VNF supports, port numbers, VNF APIs, and credentials for authentication. Being VNF agnostic, APPC uses these templates to "learn" about specific VNFs and the supported operations.

CDT requires two inputs: 1) the list of parameters that APPC will receive (ip-addr, oam-ip-addr, enabled in the example above); 2) the VNF API that APPC will use to reconfigure the VNF.

Below is an example of the parameters file (YAML format), which we call parameters.yaml.
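
A minimal sketch of such a file, assuming CDT's parameter-definition layout with one entry per parameter (the field values here are illustrative placeholders):

    version: V1
    vnf-parameter-list:
    - name: ip-addr
      type: null
      description: null
      required: "true"
      default: null
      source: Manual
    - name: oam-ip-addr
      type: null
      description: null
      required: "true"
      default: null
      source: Manual
    - name: enabled
      type: null
      description: null
      required: "true"
      default: null
      source: Manual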
Here is an example of the API template for the vLB VNF used in this use case. We name the file after the vnf-type contained in SDNC (i.e. Vloadbalancerms..base_vlb..module-0.xml):
    <vlb-business-vnf-onap-plugin xmlns="urn:opendaylight:params:xml:ns:yang:vlb-business-vnf-onap-plugin">
        <vdns-instances>
            <vdns-instance>
                <ip-addr>${ip-addr}</ip-addr>
                <oam-ip-addr>${oam-ip-addr}</oam-ip-addr>
                <enabled>${enabled}</enabled>
            </vdns-instance>
        </vdns-instances>
    </vlb-business-vnf-onap-plugin>
To create the VNF template in CDT, the following steps are required:
- Connect to the CDT GUI: http://APPC-IP:8080 (in HEAT-based ONAP deployments) or http://ANY-K8S-IP:30289 (in OOM/K8S-based ONAP deployments)
- Click the "My VNF" tab. Create your user ID, if necessary
- Click "Create new VNF", entering the VNF type as reported in VID or A&AI, e.g. vLoadBalancerMS/vLoadBalancerMS 0
- Select the "ConfigScaleOut" action
- Create a new template identifier using the vnf-type name in SDNC as the template name, e.g. Vloadbalancerms..base_vlb..module-0
- Select the protocol (Netconf-XML), VNF username (admin), and VNF port number (2831 for NETCONF)
- Click the "Parameter Definition" tab and upload the parameters (.yaml) file
- Click the "Template" tab and upload the API template (.xml) file
- Click the "Reference Data" tab
- Click "Save All to APPC"
Finally, log into the APPC controller container and set the VNF password (ConfigScaleOut.password) in /opt/onap/appc/data/properties/appc_southbound.properties to admin. Note that in an ONAP instance created with OOM, APPC may use redundancy to make the controller resilient to failures; in that case, APPC has 3 replicas. For Beijing, CDT updates only one replica (typically APPC-0), so in a multi-replica environment the property file should be copied over to the other replicas (APPC-1 and APPC-2). This will be addressed in future ONAP releases.
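
As a minimal sketch of this step in a HEAT-based deployment (the container name follows the naming convention used for the SDNC container later in this document and is assumed here):

    # enter the APPC controller container
    docker exec -it appc_controller_container bash
    # verify, then edit, the ConfigScaleOut.password property
    grep ConfigScaleOut.password /opt/onap/appc/data/properties/appc_southbound.properties
    vi /opt/onap/appc/data/properties/appc_southbound.properties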
To trigger the scale out workflow, the user/network operator can log into VID from the ONAP Portal (demo/demo123456! as username/password), select "VNF Changes" and then the "New (+)" button. The user/network operator needs to fill in the "VNF Change Form" by selecting the Subscriber, Service Type, NF Role, Model Version, VNF, "Scale Out" from the Workflow drop-down window, and APPC from the Controller drop-down window. After clicking "Next", in the following window the user/network operator has to select the VF Module to scale by clicking on the VNF and then on the appropriate VF Module checkbox. Finally, by clicking the "Schedule" button, the scale out use case will run as described above.
ONAP Use Cases HEAT Templates

USE CASE VNFs SHOULD BE INSTANTIATED VIA ONAP. THE USER IS NOT SUPPOSED TO DOWNLOAD THE HEAT TEMPLATES AND RUN THEM MANUALLY.

The vFWCL directory contains two HEAT templates, one for creating a packet generator (vPKG/base\_vpkg.yaml) and one for creating a firewall and a packet sink (vFWSNK/base\_vfw.yaml). This use case supports VNF onboarding, instantiation, and closed-loop. The vFW directory, instead, contains a single HEAT template (base\_vfw.yaml) that spins up the three VFs. This use case supports VNF onboarding and instantiation only (no support for closed-loop). For the Amsterdam Release, the HEAT templates in vFWCL are recommended, so that users can test and demonstrate the entire ONAP end-to-end flow.

The vLB directory contains a base HEAT template (base\_vlb.yaml) that installs a packet generator, a load balancer, and a DNS instance, plus another HEAT template (dnsscaling.yaml) for the DNS scaling scenario, in which another DNS server is instantiated.
Before onboarding the VNFs in SDC, the user should set the following values in the HEAT environment files:

    image_name: PUT THE VM IMAGE NAME HERE
    flavor_name: PUT THE VM FLAVOR NAME HERE
    public_net_id: PUT THE PUBLIC NETWORK ID HERE
    dcae_collector_ip: PUT THE ADDRESS OF THE DCAE COLLECTOR HERE (NOTE: this is not required for vFWCL/vPKG/base_vpkg.env)
    pub_key: PUT YOUR KEY HERE
    cloud_env: PUT openstack OR rackspace HERE

image\_name, flavor\_name, public\_net\_id, and pub\_key can be obtained as described in the ONAP section above. For deployment in OpenStack, cloud\_env must be openstack.
The DNS scaling HEAT environment file for the vLoadBalancer use case also requires you to specify the private IP of the load balancer, so that the DNS can connect to the vLB:

    vlb_private_ip_1: PUT THE PRIVATE ADDRESS OF THE VLB IN THE ONAP NETWORK SPACE HERE

As an alternative, it is possible to set the HEAT environment variables after the VNF is onboarded via SDC by appropriately preloading data into SDNC. That data will be fetched and used by SO to overwrite the default parameters in the HEAT environment file before the VNF is instantiated. For further information about SDNC data preload, please visit the wiki page: https://wiki.onap.org/display/DW/Tutorial_vIMS+%3A+SDNC+Updates
Each VNF has a MANIFEST.json file associated with its HEAT templates. During VNF onboarding, SDC reads the MANIFEST.json file to understand the role of each HEAT template that is part of the VNF (e.g. base template vs. non-base template). VNF onboarding requires users to create a zip file that contains all the HEAT templates and the MANIFEST file. To create the zip file, you can run the following commands from a shell:

    cd VNF_FOLDER   # this is the folder that contains the HEAT templates and the MANIFEST file
    zip ZIP_FILE_NAME.zip *
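
For example, to package the vFWSNK VNF from this repository (the zip file name is arbitrary, and the directory is assumed to contain its MANIFEST.json as described above):

    cd vFWCL/vFWSNK
    zip vfwsnk.zip *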
For information about VNF onboarding via the SDC portal, please refer to the wiki page: https://wiki.onap.org/display/DW/Design
NF Change Management use case

For the Beijing release, we focused on in-place software upgrades, with vendor-specific details encapsulated in Ansible scripts provided by the NF vendors. In-place software upgrades use direct communication between the controller (SDNC or APPC) and the NF instance to trigger the software upgrade, with the upgrade executing on the instance without relinquishing any of the physical resources. Both L1-L3 and L4+ NFs are supported in the Beijing release, via SDNC and APPC respectively.

The change management workflow is defined as a composition of building blocks that include locking and unlocking the NF instance, executing health checks, and executing the software upgrade:

- The CM workflow for the in-place software upgrade is defined and executed by the Service Orchestrator (SO).
- A&AI is used to lock/unlock the NF instance.
- The pre/post health checks and software upgrade execution are implemented in APPC (L4+ NFs) and SDNC (L1-L3 NFs) by leveraging Ansible services to communicate with the NF instances.
- The user (or operator) interfaces with the CM workflow using ONAP's VID. SO communicates with A&AI using a REST API and with the controllers SDNC/APPC via DMaaP.
We set up the use case demonstration for the software upgrade on the virtual gateway (vGW) as part of the vCPE use case in ONAP's Beijing release.

The main script for invoking the SO in-place software upgrade workflow is in [demo.git]/vnfs/vCPE/scripts/inPlaceSoftwareUpgrade\_vGW.txt. The workflow can be tested without VID by using this script. To execute the script, the user/operator logs into the SO container and copies/pastes the script. You may have to install vim to edit the script and curl to execute commands within the script:

    apt-get install vim
    apt-get install curl

Check in VID for the available instances - service ID and instance ID - and replace those IDs in the script. Since the use case is for the vGW, the controller type is SDNC.
Next, the user/operator logs into the SDNC container and appropriately configures the Ansible playbooks:

- Add the ssh key of the vGW to the Ansible server
- Update the VNF IP in the DG config:
  - docker exec -it sdnc_controller_container bash
  - Change the following line in /opt/onap/sdnc/data/properties/lcm-dg.properties to the IP of the VNF: ansible.nodelist=['10.12.5.85']
- Update the VNF IP on the Ansible server:
  - docker exec -ti sdnc_ansible_container /bin/bash
  - Add the VNF IP in /opt/onap/sdnc/Playbooks/Ansible_inventory
- Update the playbooks in /opt/onap/sdnc/Playbooks:
  - ansible_upgradesw@0.00.yml
  - ansible_precheck@0.00.yml
  - ansible_postcheck@0.00.yml