+Here are the main steps to run the use case in the Integration lab environment, where the vCPE script is pre-installed on the Rancher node under /root/integration/test/vcpe:
+
+1. Run the Robot script from the Rancher node to onboard VNFs and to create and distribute the models for the four vCPE infrastructure services, i.e. infrastructure, brg, bng and gmux
+
+::
+
+ demo-k8s.sh onap init
+
+2. Add a route on the SDNC cluster VM node, i.e. the cluster VM node where pod sdnc-sdnc-0 is running. This will allow ONAP SDNC to configure the BRG later on.
+
+::
+
+ ip route add 10.3.0.0/24 via 10.0.101.10 dev ens3
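+
+To identify the cluster VM node on which the SDNC pod is running, one way is to query its node with kubectl from the k8s control node (a sketch; the pod name may carry a release prefix, e.g. dev-sdnc-sdnc-0 as in step 6, and the onap namespace is assumed as elsewhere in this guide):
+
+::
+
+ kubectl -n onap get pod -o wide | grep sdnc-sdnc-0
+
+The NODE column of the output gives the cluster VM node where the route should be added.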
+
+
+3. Install Python and the required Python libraries
+
+::
+
+ integration/test/vcpe/bin/setup.sh
+
+
+4. Set up the vcpe scripts by adjusting the relevant parts of the provided vcpeconfig.yaml config file, most importantly the OpenStack environment parameters shown below. Issue 'vcpe.py --help' for detailed usage info.
+
+::
+
+ cloud_name: 'xxxxxxxx'
+
+ common_preload_config:
+ 'oam_onap_net': 'xxxxxxxx'
+ 'oam_onap_subnet': 'xxxxxxxxxx'
+ 'public_net': 'xxxxxxxxx'
+ 'public_net_id': 'xxxxxxxxxxxxx'
+
+"cloud_name" should be set to the OpenStack cloud name from clouds.yaml. By default this file is looked up in the ~/.config/openstack directory; if it is located in the scripts directory it takes precedence over the aforementioned one. An example clouds.yaml.example file is provided.
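+
+clouds.yaml follows the standard OpenStack client configuration format. A minimal sketch is shown below; the cloud name 'mycloud' and all auth values are placeholders that must match your OpenStack deployment:
+
+::
+
+ clouds:
+   mycloud:
+     auth:
+       auth_url: 'xxxxxxxx'
+       username: 'xxxxxxxx'
+       password: 'xxxxxxxx'
+       project_name: 'xxxxxxxx'
+       user_domain_name: 'Default'
+       project_domain_name: 'Default'
+     region_name: 'RegionOne'
+
+With the sketch above, "cloud_name" in vcpeconfig.yaml would be set to 'mycloud'.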
+
+5. Run Robot to create and distribute the model for the vCPE customer service. This step assumes that step 1 has successfully distributed all vCPE models except the customer service model
+
+::
+
+ ete-k8s.sh onap distributevCPEResCust
+
+6. If running with oom_mode=False, initialize the SDNC IP pool by running the command below from the k8s control node. Otherwise this is done automatically.
+
+::
+
+ kubectl -n onap exec -it dev-sdnc-sdnc-0 -- /opt/sdnc/bin/addIpAddresses.sh VGW 10.5.0 22 250
+
+7. Initialize vcpe
+
+::
+
+ vcpe.py init
+
+8. If running with oom_mode=False, run the command printed at the end of the previous step from the k8s control node to insert the vCPE customer service workflow entry into the SO catalogdb. Otherwise this is done automatically.
+
+
+9. Instantiate the vCPE infrastructure services
+
+::
+
+ vcpe.py infra
+
+10. From the Rancher node, run the vcpe healthcheck command to check connectivity from SDNC to the BRG and GMUX, and the VPP configuration of the BRG and GMUX.
+
+::
+
+ healthcheck-k8s.py --namespace <namespace name> --environment <env name>
+
+11. Instantiate vCPE customer service.
+
+::
+
+ vcpe.py customer
+
+12. Update libevel.so in the vGMUX VM and restart the VM. This allows the vGMUX to send events to the VES collector in the closed loop test. See the tutorial wiki for details.
+
+13. Run heatbridge. The heatbridge command usage is: demo-k8s.sh <namespace> heatbridge <stack_name> <service_instance_id> <service> <oam-ip-address>. Please refer to the vCPE tutorial page on how to fill in those parameters. An example follows:
+
+::
+
+ ~/integration/test/vcpe# ~/oom/kubernetes/robot/demo-k8s.sh onap heatbridge vcpe_vfmodule_e2744f48729e4072b20b_201811262136 d8914ef3-3fdb-4401-adfe-823ee75dc604 vCPEvGMUX 10.0.101.21
+
+14. Start the closed loop test by triggering a packet drop VES event, and monitor whether the vGMUX is restarting. You may need to run the command twice if the first run fails
+
+::
+
+ vcpe.py loop
+
+