# Distributed Analytics Framework
| Component  | Version |
|------------|---------|
| Kubernetes | 1.12.3+ |
#### Download Framework
```
git clone https://github.com/onap/demo.git
DA_WORKING_DIR=$PWD/demo/vnfs/DAaaS/deploy
```
#### Install Rook-Ceph for Persistent Storage
Note: In rare cases, the Flexvolume plugin path can differ from the default value. values.yaml is configured with the most common Flexvolume path. If you encounter Flexvolume-related errors, refer to https://rook.io/docs/rook/v0.9/flexvolume.html#configuring-the-flexvolume-path to find the appropriate flexvolume path and set it in values.yaml.
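For reference, overriding the Flexvolume path in values.yaml might look like the fragment below. The key name and path shown are assumptions; verify them against the chart's own values.yaml before use:

```yaml
# Illustrative values.yaml fragment -- key name and path are assumptions;
# confirm against the rook-ceph chart's values.yaml.
agent:
  flexVolumeDirPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/
```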
```
cd $DA_WORKING_DIR/00-init/rook-ceph
helm install -n rook . -f values.yaml --namespace=rook-ceph-system
```
Check the status of the pods in the rook-ceph-system and rook-ceph namespaces. Once all pods are in the Ready state, move on to the next section.
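Rather than eyeballing the listing, readiness can be checked with a small helper that parses the `kubectl get pods` output. This is a convenience sketch, not part of the framework; the `not_ready` function name is our own:

```shell
# not_ready: count pods that are not fully Ready in `kubectl get pods` output.
# Reads the listing on stdin; pods in Completed state (one-shot jobs such as
# rook-ceph-osd-prepare) are not counted as failures.
not_ready() {
  awk 'NR > 1 {
         split($2, r, "/")                       # READY column, e.g. "1/1"
         if (r[1] != r[2] && $3 != "Completed") n++
       }
       END { print n + 0 }'
}

# Usage against a live cluster (0 means every pod is Ready):
#   kubectl get pods -n rook-ceph-system | not_ready
```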
```
$ kubectl get pods -n rook-ceph-system
NAME                                 READY   STATUS    RESTARTS   AGE
rook-ceph-agent-9wszf                1/1     Running   0          121s
rook-ceph-agent-xnbt8                1/1     Running   0          121s
rook-ceph-operator-bc77d6d75-ltwww   1/1     Running   0          158s
rook-discover-bvj65                  1/1     Running   0          133s
rook-discover-nbfrp                  1/1     Running   0          133s
```
```
$ kubectl -n rook-ceph get pod
NAME                               READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-d9dcf5748-5s9ft    1/1     Running     0          77s
rook-ceph-mon-a-7d8f675889-nw5pl   1/1     Running     0          105s
rook-ceph-mon-b-856fdd5cb9-5h2qk   1/1     Running     0          94s
rook-ceph-mon-c-57545897fc-j576h   1/1     Running     0          85s
rook-ceph-osd-0-7cbbbf749f-j8fsd   1/1     Running     0          25s
rook-ceph-osd-1-7f67f9646d-44p7v   1/1     Running     0          25s
rook-ceph-osd-2-6cd4b776ff-v4d68   1/1     Running     0          25s
rook-ceph-osd-prepare-vx2rz        0/2     Completed   0          60s
rook-ceph-tools-5bd5cdb949-j68kk   1/1     Running     0          53s
```
#### Install Operator package
```
cd $DA_WORKING_DIR/operator
helm install -n operator . -f values.yaml --namespace=operator
```
Check the status of the pods in the operator namespace and verify that the Prometheus operator pods are in the Ready state.
```
kubectl get pods -n operator
NAME                                                      READY   STATUS    RESTARTS
m3db-operator-0                                           1/1     Running   0
op-etcd-operator-etcd-backup-operator-6cdc577f7d-ltgsr    1/1     Running   0
op-etcd-operator-etcd-operator-79fd99f8b7-fdc7p           1/1     Running   0
op-etcd-operator-etcd-restore-operator-855f7478bf-r7qxp   1/1     Running   0
op-prometheus-operator-operator-5c9b87965b-wjtw5          1/1     Running   1
op-sparkoperator-6cb4db884c-75rcd                         1/1     Running   0
strimzi-cluster-operator-5bffdd7b85-rlrvj                 1/1     Running   0
```
#### Install Collection package
Note: collectd.conf is available in the $DA_WORKING_DIR/collection/charts/collectd/resources/config directory. Any valid collectd.conf can be placed here.
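As an illustration, a minimal collectd.conf exposing metrics on port 9103 (the port the collectd service publishes) could look like the sketch below. The plugin selection here is an assumption, not the framework's shipped default:

```
# Minimal illustrative collectd.conf -- plugin choice is an assumption,
# not the framework's shipped default configuration.
LoadPlugin cpu
LoadPlugin memory
LoadPlugin write_prometheus

<Plugin write_prometheus>
  Port "9103"
</Plugin>
```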
Default (for custom collectd, skip this section):
```
cd $DA_WORKING_DIR/collection
helm install -n cp . -f values.yaml --namespace=edge1
```
Custom collectd:
1. Build the custom collectd image.
2. Set COLLECTD_IMAGE_NAME with the appropriate image_repository:tag.
3. Push the image to the docker registry: `docker push ${COLLECTD_IMAGE_NAME}`
4. Edit values.yaml and change the image repository and tag using COLLECTD_IMAGE_NAME appropriately.
5. Place the collectd.conf in $DA_WORKING_DIR/collection/charts/collectd/resources/config
6. `cd $DA_WORKING_DIR/collection`
7. `helm install -n cp . -f values.yaml --namespace=edge1`
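Editing the image repository and tag in values.yaml typically amounts to a change along these lines. The key names and registry below are illustrative assumptions; match them to the collectd chart's actual values.yaml:

```yaml
# Illustrative values.yaml fragment -- key names and registry are assumptions;
# match them to the collectd chart's actual values.yaml.
image:
  repository: registry.example.com/custom-collectd   # your COLLECTD_IMAGE_NAME repository
  tag: "1.0"                                         # your COLLECTD_IMAGE_NAME tag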
#### Verify Collection package
* Check if all pods are up in the edge1 namespace
* Check the Prometheus UI by port-forwarding port 9090 (the default for the Prometheus service)
```
$ kubectl get pods -n edge1
NAME                                    READY   STATUS    RESTARTS   AGE
cp-cadvisor-8rk2b                       1/1     Running   0          15s
cp-cadvisor-nsjr6                       1/1     Running   0          15s
cp-collectd-h5krd                       1/1     Running   0          23s
cp-collectd-jc9m2                       1/1     Running   0          23s
cp-prometheus-node-exporter-blc6p       1/1     Running   0          17s
cp-prometheus-node-exporter-qbvdx       1/1     Running   0          17s
prometheus-cp-prometheus-prometheus-0   4/4     Running   1          33s
```
```
$ kubectl get svc -n edge1
NAME                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)
cadvisor                        NodePort    10.43.53.122   <none>        80:30091/TCP
collectd                        ClusterIP   10.43.222.34   <none>        9103/TCP
cp13-prometheus-node-exporter   ClusterIP   10.43.17.242   <none>        9100/TCP
cp13-prometheus-prometheus      NodePort    10.43.26.155   <none>        9090:30090/TCP
prometheus-operated             ClusterIP   None           <none>        9090/TCP
```
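If you prefer NodePort access over port-forwarding, the Prometheus NodePort can be pulled out of the service listing with a small helper. This is a convenience sketch; the function name and the `prometheus-prometheus` name match are our own assumptions tied to the helm release name:

```shell
# prom_nodeport: print the NodePort of the Prometheus service from
# `kubectl get svc` output read on stdin. The service name prefix depends on
# the helm release name; here we match the "...-prometheus-prometheus" suffix.
prom_nodeport() {
  awk '$1 ~ /prometheus-prometheus$/ && $2 == "NodePort" {
         split($5, p, /[:\/]/)          # PORT(S) column, e.g. "9090:30090/TCP"
         print p[2]
       }'
}

# Usage against a live cluster:
#   kubectl get svc -n edge1 | prom_nodeport
```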
#### Install Minio Model repository
* Prerequisite: A dynamic storage provisioner needs to be enabled. Either rook-ceph ($DA_WORKING_DIR/00-init) or another alternative provisioner must be enabled.
```
cd $DA_WORKING_DIR/minio
```
Edit values.yaml to set the credentials used to access the Minio UI:
```
accessKey: "onapdaas"
secretKey: "onapsecretdaas"
```
```
helm install -n minio . -f values.yaml --namespace=edge1
```
#### Onboard an Inference Application