+* Check if all the pods are up in the edge1 namespace
+* Check the Prometheus UI by port-forwarding port 9090 (the default port of the Prometheus service); see the port-forward example after the output below
+```
+$ kubectl get pods -n edge1
+NAME                                    READY   STATUS    RESTARTS   AGE
+cp-cadvisor-8rk2b                       1/1     Running   0          15s
+cp-cadvisor-nsjr6                       1/1     Running   0          15s
+cp-collectd-h5krd                       1/1     Running   0          23s
+cp-collectd-jc9m2                       1/1     Running   0          23s
+cp-prometheus-node-exporter-blc6p       1/1     Running   0          17s
+cp-prometheus-node-exporter-qbvdx       1/1     Running   0          17s
+prometheus-cp-prometheus-prometheus-0   4/4     Running   1          33s
+
+$ kubectl get svc -n edge1
+NAME                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)
+cadvisor                        NodePort    10.43.53.122   <none>        80:30091/TCP
+collectd                        ClusterIP   10.43.222.34   <none>        9103/TCP
+cp13-prometheus-node-exporter   ClusterIP   10.43.17.242   <none>        9100/TCP
+cp13-prometheus-prometheus      NodePort    10.43.26.155   <none>        9090:30090/TCP
+prometheus-operated             ClusterIP   None           <none>        9090/TCP
+```
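+
+To reach the Prometheus UI mentioned above, you can port-forward the Prometheus service. A minimal sketch, assuming the service name from the sample output above (cp13-prometheus-prometheus); adjust the service name and namespace to match your deployment:
+```bash
+kubectl port-forward -n edge1 svc/cp13-prometheus-prometheus 9090:9090
+# Then open http://localhost:9090 in a browser.
+# Alternatively, the NodePort 30090 shown above can be reached directly on any node IP.
+```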
+#### Configure Collectd Plugins
+1. Using the sample [collectdglobal.yaml](microservices/collectd-operator/examples/collectd/collectdglobal.yaml), configure the CollectdGlobal CR.
+2. If there are additional types.db files to load, copy them to the resources folder.
+3. Create a ConfigMap to load the types.db files and update the configMap setting with the name of the ConfigMap created.
+4. Create and configure the required CollectdPlugin CRs. Use these samples as a reference: [cpu_collectdplugin_cr.yaml](microservices/collectd-operator/examples/collectd/cpu_collectdplugin_cr.yaml), [prometheus_collectdplugin_cr.yaml](microservices/collectd-operator/examples/collectd/prometheus_collectdplugin_cr.yaml).
+5. Use the same namespace where the collection package was installed.
+6. Assuming it is edge1, create the config resources that are applicable. Run the following commands in the given order.
+```bash
+# Note:
+## 1. Creating the ConfigMap is optional and required only if additional types.db files need to be mounted.
+## 2. Add/remove the --from-file arguments accordingly, using the correct file names for your setup.
+kubectl create configmap typesdb-configmap -n edge1 --from-file ./resource/[FILE_NAME1] --from-file ./resource/[FILE_NAME2]
+kubectl create -n edge1 -f collectdglobal.yaml
+kubectl create -n edge1 -f [PLUGIN_NAME1]_collectdplugin_cr.yaml
+kubectl create -n edge1 -f [PLUGIN_NAME2]_collectdplugin_cr.yaml
+kubectl create -n edge1 -f [PLUGIN_NAME3]_collectdplugin_cr.yaml
+...
+```
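+
+As an optional sanity check, you can list the CRs just created. A small sketch, assuming the collectd-operator registers the CRD resource names as collectdglobals and collectdplugins; adjust if your CRD names differ:
+```bash
+kubectl get collectdglobals,collectdplugins -n edge1
+```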
+
+## Install visualization package
+```bash
+Default (For custom Grafana dashboards skip this section)
+=======
+cd $DA_WORKING_DIR/visualization
+helm install -n viz . -f values.yaml -f grafana-values.yaml
+
+Custom Grafana dashboards
+=========================
+1. Place the custom dashboard definition into the folder $DA_WORKING_DIR/visualization/charts/grafana/dashboards
+ Example dashboard definition can be found at $DA_WORKING_DIR/visualization/charts/grafana/dashboards/dashboard1.json
+2. Create a configmap.yaml that imports the dashboard.json file created above as config, and copy that configmap.yaml to $DA_WORKING_DIR/visualization/charts/grafana/templates/
+ Example configmap can be found at $DA_WORKING_DIR/visualization/charts/grafana/templates/configmap-add-dashboard.yaml
+3. Add custom dashboard configuration to values.yaml or an overriding values.yaml.
+ Example configuration can be found in the "dashboardProviders" section of grafana-values.yaml
+
+4. cd $DA_WORKING_DIR/visualization
+5. For a fresh install of the visualization package, do "helm install"
+ e.g., helm install -n viz . -f values.yaml -f grafana-values.yaml
+ If the custom dashboard is being added to an already running Grafana, do "helm upgrade"
+ e.g., helm upgrade -n viz . -f values.yaml -f grafana-values.yaml -f ......
+```
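+
+If you added a custom dashboard, you can optionally confirm that its ConfigMap was rendered by the chart. A hedged sketch, assuming the Grafana chart labels its resources with app=grafana and release=viz (the same selector used for the pod in the "Login to Grafana" step); adjust the selector if your chart labels differ:
+```bash
+kubectl get configmaps -l app=grafana,release=viz
+```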
+
+#### Verify Visualization package
+Check if the visualization pod is up
+```
+$ kubectl get pods
+NAME                          READY   STATUS    RESTARTS   AGE
+viz-grafana-78dcffd75-sxnjv   1/1     Running   0          52m
+```
+
+### Login to Grafana
+```
+1. Get your 'admin' user password by running:
+ kubectl get secret --namespace default viz-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
+
+2. Get the Grafana URL to visit by running these commands in the same shell:
+ export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana,release=viz" -o jsonpath="{.items[0].metadata.name}")
+ kubectl --namespace default port-forward $POD_NAME 3000
+
+3. Visit http://localhost:3000 and log in with the username admin and the password from step 1
+```
+
+#### Configure Grafana Datasources
+Using the sample [prometheus_grafanadatasource_cr.yaml](microservices/visualization-operator/examples/grafana/prometheus_grafanadatasource_cr.yaml), configure the GrafanaDataSource CRs by running the commands below
+```bash
+kubectl create -f [DATASOURCE_NAME1]_grafanadatasource_cr.yaml
+kubectl create -f [DATASOURCE_NAME2]_grafanadatasource_cr.yaml
+...
+```
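+
+As with the collectd CRs, you can optionally verify that the datasource CRs exist. A sketch, assuming the visualization-operator registers the CRD resource name as grafanadatasources; adjust the resource name and namespace if they differ in your deployment:
+```bash
+kubectl get grafanadatasources
+```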
+
+## Install Minio Model repository
+* Prerequisite: A dynamic storage provisioner needs to be enabled, either rook-ceph ($DA_WORKING_DIR/00-init) or an alternate provisioner.
+```bash
+cd $DA_WORKING_DIR/minio
+
+Edit the values.yaml to set the credentials to access the minio UI.
+Default values are
+accessKey: "onapdaas"
+secretKey: "onapsecretdaas"
+
+helm install -n minio . -f values.yaml --namespace=edge1
+```
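+
+After the chart deploys, you can check the Minio pod and, if desired, port-forward to its UI. A sketch, assuming the release exposes a service named minio on the usual Minio port 9000; the actual service name and port may differ in your deployment:
+```bash
+kubectl get pods -n edge1 | grep minio
+kubectl port-forward -n edge1 svc/minio 9000:9000
+# Then open http://localhost:9000 and log in with the accessKey/secretKey from values.yaml.
+```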
+
+## Install Messaging platform
+
+We currently support the Strimzi-based Kafka operator.
+Navigate to the ```$DA_WORKING_DIR/deploy/messaging/charts/strimzi-kafka-operator``` directory.
+Use the command below:
+```
+helm install . -f values.yaml --name sko --namespace=test
+```
+
+NOTE: Make changes in the values.yaml if required.
+
+Once the Strimzi operator is ready, you should see a pod like this:
+
+```
+strimzi-cluster-operator-5cf7648b8c-zgxv7 1/1 Running 0 53m
+```
+
+Once this is done, install the Kafka package like any other Helm chart.
+Navigate to the ```$DA_WORKING_DIR/deploy/messaging``` directory and use the command:
+```
+helm install --name kafka-cluster charts/kafka/
+```
+
+Once this is done, you should have the following pods up and running.
+
+```
+kafka-cluster-entity-operator-b6557fc6c-hlnkm   3/3   Running   0   47m
+kafka-cluster-kafka-0                           2/2   Running   0   48m
+kafka-cluster-kafka-1                           2/2   Running   0   48m
+kafka-cluster-kafka-2                           2/2   Running   0   48m
+kafka-cluster-zookeeper-0                       2/2   Running   0   49m
+kafka-cluster-zookeeper-1                       2/2   Running   0   49m
+kafka-cluster-zookeeper-2                       2/2   Running   0   49m
+```
+
+You should see the following services when you do a ```kubectl get svc```:
+
+```
+kafka-cluster-kafka-bootstrap    ClusterIP   10.XX.YY.ZZ   <none>   9091/TCP,9092/TCP,9093/TCP   53m
+kafka-cluster-kafka-brokers      ClusterIP   None          <none>   9091/TCP,9092/TCP,9093/TCP   53m
+kafka-cluster-zookeeper-client   ClusterIP   10.XX.YY.ZZ   <none>   2181/TCP                     55m
+kafka-cluster-zookeeper-nodes    ClusterIP   None          <none>   2181/TCP,2888/TCP,3888/TCP   55m
+```
+#### Testing messaging
+
+You can test your Kafka brokers by creating a simple producer and consumer.
+
+Producer:
+```
+kubectl run kafka-producer -ti --image=strimzi/kafka:0.12.2-kafka-2.2.1 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list kafka-cluster-kafka-bootstrap:9092 --topic my-topic
+```
+
+Consumer:
+```
+kubectl run kafka-consumer -ti --image=strimzi/kafka:0.12.2-kafka-2.2.1 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server kafka-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
+```
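+
+To confirm that the topic exists, you can also list topics using the same Strimzi image. This is a sketch; the zookeeper client service name is taken from the ```kubectl get svc``` output above and the tooling matches Kafka 2.2.1:
+```
+kubectl run kafka-topics -ti --image=strimzi/kafka:0.12.2-kafka-2.2.1 --rm=true --restart=Never -- bin/kafka-topics.sh --zookeeper kafka-cluster-zookeeper-client:2181 --list
+```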
+
+## Install Training Package
+
+#### Install M3DB (Time series Data lake)
+##### Pre-requisites
+1. A Kubernetes cluster with at least 3 nodes
+2. Etcd operator and M3DB operator
+3. Nodes labelled with zone and region (see the commands below)
+
+```bash
+## Default region is us-west1; default zone labels are us-west1-a, us-west1-b, us-west1-c.
+## If these are changed, the isolationGroups in training-core/charts/m3db/values.yaml need to be updated.
+NODES=($(kubectl get nodes --output=jsonpath={.items..metadata.name}))
+
+kubectl label node/${NODES[0]} failure-domain.beta.kubernetes.io/region=us-west1
+kubectl label node/${NODES[1]} failure-domain.beta.kubernetes.io/region=us-west1
+kubectl label node/${NODES[2]} failure-domain.beta.kubernetes.io/region=us-west1
+
+kubectl label node/${NODES[0]} failure-domain.beta.kubernetes.io/zone=us-west1-a --overwrite=true
+kubectl label node/${NODES[1]} failure-domain.beta.kubernetes.io/zone=us-west1-b --overwrite=true
+kubectl label node/${NODES[2]} failure-domain.beta.kubernetes.io/zone=us-west1-c --overwrite=true
+```
+```bash
+cd $DA_WORKING_DIR/training-core/charts/m3db
+helm install -n m3db . -f values.yaml --namespace training
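+
+# Optional check (assuming the M3DB pods come up in the "training" namespace used above):
+kubectl get pods -n training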