### Build Docker images
#### collectd-operator
```bash
-cd $DA_WORKING_DIR/../microservices/collectd-operator
+cd $DA_WORKING_DIR/../microservices
## Note: The image tag and repository in the Collectd-operator helm charts need to match the IMAGE_NAME
IMAGE_NAME=dcr.cluster.local:32644/collectd-operator:latest
-./build/build_image.sh $IMAGE_NAME
+./build_image.sh collectd-operator $IMAGE_NAME
```
+#### visualization-operator
+```bash
+cd $DA_WORKING_DIR/../microservices
+
+## Note: The image tag and repository in the Visualization-operator helm charts need to match the IMAGE_NAME
+IMAGE_NAME=dcr.cluster.local:32644/visualization-operator:latest
+./build_image.sh visualization-operator $IMAGE_NAME
+```
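The contents of `build_image.sh` are not shown here. Assuming it wraps a standard docker build and push against the local registry, the equivalent steps can be sketched as below; the helper prints the commands instead of executing them, and the docker invocations are an assumption, not the script's verified contents.

```shell
# Hypothetical sketch of what build_image.sh may do; the component name and
# image tag come from the steps above, the docker commands are assumed.
build_image() {
  local component="$1" image="$2"
  # Print rather than run, so the sketch is side-effect free.
  echo "docker build -t ${image} ./${component}"
  echo "docker push ${image}"
}

build_image collectd-operator dcr.cluster.local:32644/collectd-operator:latest
```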
+
### Install the Operator Package
```bash
cd $DA_WORKING_DIR/operator
5. Edit the values.yaml and change the image repository and tag using
COLLECTD_IMAGE_NAME appropriately.
6. Place the collectd.conf in
- $DA_WORKING_DIR/collection/charts/collectd/resources/config
+ $DA_WORKING_DIR/collection/charts/collectd/resources
7. cd $DA_WORKING_DIR/collection
8. helm install -n cp . -f values.yaml --namespace=edge1
cp13-prometheus-prometheus NodePort 10.43.26.155 <none> 9090:30090/TCP
prometheus-operated ClusterIP None <none> 9090/TCP
```
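The sample service listing above shows Prometheus exposed on NodePort 30090 and also reachable in-cluster on port 9090. For local access, a port-forward command can be assembled as below; the service name is taken from the sample output, so treat it as an assumption that may differ per install.

```shell
# Build (but do not run) a kubectl port-forward command for the Prometheus
# service shown in the sample output above; NS and SVC are assumptions.
NS=edge1
SVC=cp13-prometheus-prometheus
echo "kubectl --namespace=${NS} port-forward svc/${SVC} 9090:9090"
```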
+#### Configure Collectd Plugins
+1. Using the sample [collectdglobal.yaml](microservices/collectd-operator/examples/collectd/collectdglobal.yaml), configure the CollectdGlobal CR.
+2. If there are additional types.db files to update, copy them to the resources folder.
+3. Create a ConfigMap to load the types.db files, and update the CR with the name of the ConfigMap created.
+4. Create and configure the required CollectdPlugin CRs. Use these samples as a reference: [cpu_collectdplugin_cr.yaml](microservices/collectd-operator/examples/collectd/cpu_collectdplugin_cr.yaml), [prometheus_collectdplugin_cr.yaml](microservices/collectd-operator/examples/collectd/prometheus_collectdplugin_cr.yaml).
+5. Use the same namespace where the collection package was installed.
+6. Assuming it is edge1, create the config resources that are applicable by applying the following commands in order.
+```bash
+# Note:
+## 1. Create Configmap is optional and required only if additional types.db file needs to be mounted.
+## 2. Add/Remove --from-file accordingly. Use the correct file name based on the context.
+kubectl create configmap typesdb-configmap --from-file ./resources/[FILE_NAME1] --from-file ./resources/[FILE_NAME2]
+kubectl create -n edge1 -f collectdglobal.yaml
+kubectl create -n edge1 -f [PLUGIN_NAME1]_collectdplugin_cr.yaml
+kubectl create -n edge1 -f [PLUGIN_NAME2]_collectdplugin_cr.yaml
+kubectl create -n edge1 -f [PLUGIN_NAME3]_collectdplugin_cr.yaml
+...
+```
+
+### Install the visualization package
+#### Default (for custom Grafana dashboards, skip this section)
+```bash
+cd $DA_WORKING_DIR/visualization
+helm install -n viz . -f values.yaml -f grafana-values.yaml
+```
+
+#### Custom Grafana dashboards
+1. Place the custom dashboard definition into the folder $DA_WORKING_DIR/visualization/charts/grafana/dashboards
+   An example dashboard definition can be found at $DA_WORKING_DIR/visualization/charts/grafana/dashboards/dashboard1.json
+2. Create a configmap.yaml that imports the dashboard.json file created above as config, and copy that configmap.yaml to $DA_WORKING_DIR/visualization/charts/grafana/templates/
+   An example configmap can be found at $DA_WORKING_DIR/visualization/charts/grafana/templates/configmap-add-dashboard.yaml
+3. Add the custom dashboard configuration to values.yaml or an overriding values.yaml.
+   Example configuration can be found in the "dashboardProviders" section of grafana-values.yaml
+4. cd $DA_WORKING_DIR/visualization
+5. For a fresh install of the visualization package, run "helm install", e.g.,
+   helm install -n viz . -f values.yaml -f grafana-values.yaml
+   If the custom dashboard is being added to an already running Grafana, run "helm upgrade" instead, e.g.,
+   helm upgrade -n viz . -f values.yaml -f grafana-values.yaml -f ......
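Step 2 above can be sketched as a minimal ConfigMap template that embeds the dashboard JSON shipped in the chart's dashboards/ folder. The resource name and the `.Files.Get` wiring here are assumptions; the configmap-add-dashboard.yaml example in the chart is the authoritative reference.

```yaml
# Hypothetical ConfigMap template; names are assumptions, not the chart's
# verified contents.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-custom-dashboard
data:
  dashboard1.json: |-
{{ .Files.Get "dashboards/dashboard1.json" | indent 4 }}
```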
+
+#### Verify Visualization package
+Check if the visualization pod is up
+```
+$ kubectl get pods
+ NAME READY STATUS RESTARTS AGE
+ viz-grafana-78dcffd75-sxnjv 1/1 Running 0 52m
+```
+
+### Login to Grafana
+```
+1. Get your 'admin' user password by running:
+ kubectl get secret --namespace default viz-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
+
+2. Get the Grafana URL to visit by running these commands in the same shell:
+ export POD_NAME=$(kubectl get pods --namespace default -l "app=grafana,release=viz" -o jsonpath="{.items[0].metadata.name}")
+ kubectl --namespace default port-forward $POD_NAME 3000
+
+3. Visit the URL : http://localhost:3000 and login with the password from step 1 and the username: admin
+```
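The password lookup in step 1 simply base64-decodes the `admin-password` field of the secret. The decode half of that pipeline can be exercised standalone; the secret value below is made up for illustration.

```shell
# Round-trip a made-up password through base64, mirroring how the kubectl
# jsonpath output is decoded in step 1 above.
ENCODED=$(printf 'not-a-real-password' | base64)
printf '%s' "$ENCODED" | base64 --decode ; echo
```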
+
+#### Configure Grafana Datasources
+Using the sample [prometheus_grafanadatasource_cr.yaml](microservices/visualization-operator/examples/grafana/prometheus_grafanadatasource_cr.yaml), configure the GrafanaDataSource CR by running the commands below
+```bash
+kubectl create -f [DATASOURCE_NAME1]_grafanadatasource_cr.yaml
+kubectl create -f [DATASOURCE_NAME2]_grafanadatasource_cr.yaml
+...
+```
## Install Minio Model repository
* Prerequisite: a dynamic storage provisioner must be enabled, either rook-ceph ($DA_WORKING_DIR/00-init) or an alternate provisioner.