# Distributed Analytics Framework
## Install

#### Pre-requisites
| Required   | Version |
|------------|---------|
| Kubernetes | 1.12.3+ |
| Docker CE  | 18.09+  |
| Helm       | 2.12.1+ |
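The version requirements above can be checked with a small helper before installing. This is only a sketch: `version_ge` is a hypothetical helper (not part of the framework), and it relies on `sort -V`, which assumes GNU coreutils.

```shell
# version_ge VERSION MINIMUM — succeeds when VERSION >= MINIMUM.
# Relies on `sort -V` (GNU coreutils) for version-aware ordering.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

# Example (requires the tool to be installed; version output formats vary):
# version_ge "$(helm version --client --short | grep -oE '[0-9]+\.[0-9]+\.[0-9]+')" 2.12.1
```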
#### Download Framework
```bash
git clone https://github.com/onap/demo.git
DA_WORKING_DIR=$PWD/demo/vnfs/DAaaS/deploy
cd $DA_WORKING_DIR
```

#### Install Rook-Ceph for Persistent Storage
Note: the Flex volume path can differ from the default value, although this is unusual. values.yaml is configured with the most common flexvolume path. If you see flexvolume-related errors, refer to https://rook.io/docs/rook/v0.9/flexvolume.html#configuring-the-flexvolume-path to find the appropriate flexvolume path and set it in values.yaml.
```bash
cd 00-init/rook-ceph
helm install -n rook . -f values.yaml --namespace=rook-ceph-system
```
Check the status of the pods in the rook-ceph namespace. Once all pods are in the Ready state, move on to the next section.
```bash
$ kubectl -n rook-ceph get pod
NAME                                   READY   STATUS      RESTARTS   AGE
rook-ceph-agent-4zkg8                  1/1     Running     0          140s
rook-ceph-mgr-a-d9dcf5748-5s9ft        1/1     Running     0          77s
rook-ceph-mon-a-7d8f675889-nw5pl       1/1     Running     0          105s
rook-ceph-mon-b-856fdd5cb9-5h2qk       1/1     Running     0          94s
rook-ceph-mon-c-57545897fc-j576h       1/1     Running     0          85s
rook-ceph-osd-0-7cbbbf749f-j8fsd       1/1     Running     0          25s
rook-ceph-osd-1-7f67f9646d-44p7v       1/1     Running     0          25s
rook-ceph-osd-2-6cd4b776ff-v4d68       1/1     Running     0          25s
rook-ceph-osd-prepare-vx2rz            0/2     Completed   0          60s
rook-discover-dhkb8                    1/1     Running     0          140s
```
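If you'd rather script the wait than eyeball the pod list, the readiness check can be sketched as below. `all_ready` is a hypothetical helper, not part of the framework; it parses the `kubectl get pod` output shown above.

```shell
# all_ready reads `kubectl get pod` output on stdin and succeeds only when
# every listed pod is Completed, or Running with all containers ready (x/x).
all_ready() {
  awk 'NR > 1 && NF > 0 {
    split($2, r, "/")
    if ($3 != "Completed" && !($3 == "Running" && r[1] == r[2])) exit 1
  }'
}

# With a live cluster, poll until the rook-ceph pods settle:
# until kubectl -n rook-ceph get pod | all_ready; do sleep 5; done
```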

#### Install Operator package
```bash
cd $DA_WORKING_DIR/operator
helm install -n operator . -f values.yaml --namespace=operator
```
Check the status of the pods in the operator namespace and verify that the operator pods, including the Prometheus operator, are in the Ready state.
```bash
kubectl get pods -n operator
NAME                                                      READY   STATUS    RESTARTS
m3db-operator-0                                           1/1     Running   0
op-etcd-operator-etcd-backup-operator-6cdc577f7d-ltgsr    1/1     Running   0
op-etcd-operator-etcd-operator-79fd99f8b7-fdc7p           1/1     Running   0
op-etcd-operator-etcd-restore-operator-855f7478bf-r7qxp   1/1     Running   0
op-prometheus-operator-operator-5c9b87965b-wjtw5          1/1     Running   1
op-sparkoperator-6cb4db884c-75rcd                         1/1     Running   0
strimzi-cluster-operator-5bffdd7b85-rlrvj                 1/1     Running   0
```

#### Install Collection package
Note: collectd.conf is available in the $DA_WORKING_DIR/collection/charts/collectd/resources/config directory. Any valid collectd.conf can be placed here.

##### Default (skip this section if using a custom collectd)
```bash
cd $DA_WORKING_DIR/collection
helm install -n cp . -f values.yaml --namespace=edge1
```

##### Custom collectd
Build the image and set the image name with its tag as COLLECTD_IMAGE_NAME. Push the image to the docker registry, edit values.yaml to set the image name to the value of COLLECTD_IMAGE_NAME, and place your collectd.conf in the $DA_WORKING_DIR/collection/charts/collectd/resources/config directory.
```bash
docker push dcr.default.svc.local:32000/${COLLECTD_IMAGE_NAME}
cd $DA_WORKING_DIR/collection
helm install -n cp . -f values.yaml --namespace=edge1
```
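The custom-image workflow can be sketched as follows. The image name and tag are placeholders, `image_ref` is a hypothetical helper, and the build assumes you have a Dockerfile for your collectd image; only the registry address comes from this README.

```shell
# image_ref NAME:TAG — prints the fully qualified image reference for the
# in-cluster registry used in this README.
image_ref() {
  echo "dcr.default.svc.local:32000/$1"
}

COLLECTD_IMAGE_NAME=my-collectd:latest   # placeholder name:tag

# Build, tag, and push (guarded: requires docker and a Dockerfile in cwd):
if command -v docker >/dev/null && [ -f Dockerfile ]; then
  docker build -t "$COLLECTD_IMAGE_NAME" .
  docker tag "$COLLECTD_IMAGE_NAME" "$(image_ref "$COLLECTD_IMAGE_NAME")"
  docker push "$(image_ref "$COLLECTD_IMAGE_NAME")"
fi
```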

#### Verify Collection package
```
TODO
1. Check if all pods are up in edge1 namespace
2. Check the Prometheus UI using port 30090
```
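As a sketch of the checks above: the Prometheus HTTP API (`/api/v1/targets`) can confirm that collectd endpoints are being scraped. The helper name is hypothetical; 30090 is the port mentioned in this README.

```shell
# prom_targets_url HOST — Prometheus HTTP API endpoint listing scrape targets.
prom_targets_url() {
  echo "http://$1:30090/api/v1/targets"
}

# With a live cluster (replace <node-ip> with a node address):
# kubectl get pods -n edge1
# curl -s "$(prom_targets_url <node-ip>)"
```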
#### Onboard an Inference Application
```
TODO
```