1 # Distributed Analytics Framework
6 |------------|---------|
7 | Kubernetes | 1.12.3+ |
10 #### Download Framework
```bash
git clone https://github.com/onap/demo.git
DA_WORKING_DIR=$PWD/demo/vnfs/DAaaS/deploy
```
16 #### Install Rook-Ceph for Persistent Storage
Note: The Flex volume path on your nodes may differ from the default. values.yaml ships with the most common flexvolume path preconfigured. If you hit flexvolume-related errors, refer to https://rook.io/docs/rook/v0.9/flexvolume.html#configuring-the-flexvolume-path to find the appropriate flexvolume path and set it in values.yaml.
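For example, you can check whether the kubelet was started with an explicit volume plugin directory and mirror that in values.yaml. The `agent.flexVolumeDirPath` key shown in the comment is an assumption based on the rook-ceph chart's values layout; verify it against your chart version:

```bash
# Print the kubelet's volume plugin directory, if explicitly configured
# (empty output means the default path is in use).
ps -ef | grep kubelet | grep -o -- '--volume-plugin-dir=[^ ]*'
# Then set the corresponding key in values.yaml, e.g.:
#   agent:
#     flexVolumeDirPath: /var/lib/kubelet/volume-plugins
```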
```bash
cd $DA_WORKING_DIR/00-init/rook-ceph
helm install -n rook . -f values.yaml --namespace=rook-ceph-system
```
Check the status of the pods in the rook-ceph-system and rook-ceph namespaces. Once all pods are in the Ready state, move on to the next section.
```
$ kubectl get pods -n rook-ceph-system
NAME                                 READY   STATUS    RESTARTS   AGE
rook-ceph-agent-9wszf                1/1     Running   0          121s
rook-ceph-agent-xnbt8                1/1     Running   0          121s
rook-ceph-operator-bc77d6d75-ltwww   1/1     Running   0          158s
rook-discover-bvj65                  1/1     Running   0          133s
rook-discover-nbfrp                  1/1     Running   0          133s
```
```
$ kubectl -n rook-ceph get pod
NAME                               READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-d9dcf5748-5s9ft    1/1     Running     0          77s
rook-ceph-mon-a-7d8f675889-nw5pl   1/1     Running     0          105s
rook-ceph-mon-b-856fdd5cb9-5h2qk   1/1     Running     0          94s
rook-ceph-mon-c-57545897fc-j576h   1/1     Running     0          85s
rook-ceph-osd-0-7cbbbf749f-j8fsd   1/1     Running     0          25s
rook-ceph-osd-1-7f67f9646d-44p7v   1/1     Running     0          25s
rook-ceph-osd-2-6cd4b776ff-v4d68   1/1     Running     0          25s
rook-ceph-osd-prepare-vx2rz        0/2     Completed   0          60s
rook-ceph-tools-5bd5cdb949-j68kk   1/1     Running     0          53s
```
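As an optional sanity check, you can exec into the rook-ceph-tools pod and confirm cluster health. This is a sketch assuming the chart's default `app=rook-ceph-tools` label; verify it on your install:

```bash
# Find the toolbox pod and ask Ceph for its health summary.
TOOLS_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec -it "$TOOLS_POD" -- ceph status
```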
47 #### Troubleshooting Rook-Ceph installation
If Rook was previously installed on your machine, successfully or not, a fresh installation of the Rook operator may run into leftover state. The steps below help you clean it up.
* First, check whether any Rook CRDs already exist:
```bash
kubectl get crds | grep rook
```
If this returns results like:
```
otc@otconap7 /var/lib/rook $ kc get crds | grep rook
cephblockpools.ceph.rook.io         2019-07-19T18:19:05Z
cephclusters.ceph.rook.io           2019-07-19T18:19:05Z
cephfilesystems.ceph.rook.io        2019-07-19T18:19:05Z
cephobjectstores.ceph.rook.io       2019-07-19T18:19:05Z
cephobjectstoreusers.ceph.rook.io   2019-07-19T18:19:05Z
volumes.rook.io                     2019-07-19T18:19:05Z
```
then delete these pre-existing Rook CRDs by generating a delete manifest with the commands below and applying it:
```bash
helm template -n rook . -f values.yaml > ~/delete.yaml
kubectl delete -f ~/delete.yaml
```
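You can confirm the cleanup worked by re-running the earlier check; it should now print nothing:

```bash
# Should produce no output once all Rook CRDs are removed.
kubectl get crds | grep rook
```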
After this, delete the directory below on all nodes.
```bash
sudo rm -rf /var/lib/rook/
```
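If you have SSH access to the nodes, a small loop can do this in one shot; `node1`..`node3` below are placeholders for your actual hosts:

```bash
# Hypothetical sketch: wipe the leftover Rook state on every node over SSH.
for node in node1 node2 node3; do
  ssh "$node" 'sudo rm -rf /var/lib/rook/'
done
```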
Now attempt the installation again:

```bash
helm install -n rook . -f values.yaml --namespace=rook-ceph-system
```
83 #### Install Operator package
```bash
cd $DA_WORKING_DIR/operator
helm install -n operator . -f values.yaml --namespace=operator
```
Check the status of the pods in the operator namespace and confirm that the Prometheus operator pods are in the Ready state.
```
kubectl get pods -n operator
NAME                                                      READY   STATUS    RESTARTS
m3db-operator-0                                           1/1     Running   0
op-etcd-operator-etcd-backup-operator-6cdc577f7d-ltgsr    1/1     Running   0
op-etcd-operator-etcd-operator-79fd99f8b7-fdc7p           1/1     Running   0
op-etcd-operator-etcd-restore-operator-855f7478bf-r7qxp   1/1     Running   0
op-prometheus-operator-operator-5c9b87965b-wjtw5          1/1     Running   1
op-sparkoperator-6cb4db884c-75rcd                         1/1     Running   0
strimzi-cluster-operator-5bffdd7b85-rlrvj                 1/1     Running   0
```
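Rather than polling manually, you can block until everything is Ready; `kubectl wait` is available in kubectl 1.11 and later:

```bash
# Wait up to 5 minutes for every pod in the operator namespace to become Ready.
kubectl wait --for=condition=Ready pods --all -n operator --timeout=300s
```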
100 #### Install Collection package
Note: collectd.conf is available in the $DA_WORKING_DIR/collection/charts/collectd/resources/config directory. Any valid collectd.conf can be placed here.
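For illustration, a minimal collectd.conf exposing metrics on port 9103 (the port the collectd service uses, per the service listing further below) might look like this; the plugin selection is an assumption, not part of the framework:

```bash
# Hypothetical minimal collectd.conf; adjust the plugin list to your needs.
cat > $DA_WORKING_DIR/collection/charts/collectd/resources/config/collectd.conf <<'EOF'
LoadPlugin cpu
LoadPlugin memory
LoadPlugin write_prometheus
<Plugin write_prometheus>
  Port "9103"
</Plugin>
EOF
```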
Default (for custom collectd, skip this section):

```bash
cd $DA_WORKING_DIR/collection
helm install -n cp . -f values.yaml --namespace=edge1
```
Custom collectd (the full sequence is sketched below the list):

1. Build the custom collectd image.
2. Set COLLECTD_IMAGE_NAME to the appropriate image_repository:tag.
3. Push the image to your Docker registry: `docker push ${COLLECTD_IMAGE_NAME}`.
4. Edit values.yaml and change the image repository and tag to match COLLECTD_IMAGE_NAME.
5. Place the collectd.conf in `$DA_WORKING_DIR/collection/charts/collectd/resources/config`.
6. Run `cd $DA_WORKING_DIR/collection`.
7. Run `helm install -n cp . -f values.yaml --namespace=edge1`.
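A consolidated sketch of those steps; the registry, image name, and tag are placeholders:

```bash
# Hypothetical end-to-end flow for a custom collectd image.
export COLLECTD_IMAGE_NAME=registry.example.com/custom-collectd:1.0
docker build -t ${COLLECTD_IMAGE_NAME} .   # build from your custom collectd Dockerfile
docker push ${COLLECTD_IMAGE_NAME}         # publish so the cluster can pull it
# Edit values.yaml so the collectd image repository/tag match ${COLLECTD_IMAGE_NAME},
# then drop your collectd.conf into the chart's config directory:
cp collectd.conf $DA_WORKING_DIR/collection/charts/collectd/resources/config/
cd $DA_WORKING_DIR/collection
helm install -n cp . -f values.yaml --namespace=edge1
```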
123 #### Verify Collection package
* Check if all pods are up in the edge1 namespace.
* Check the Prometheus UI via port-forwarding on port 9090 (the default for the Prometheus service); see the example after the service listing below.
```
$ kubectl get pods -n edge1
NAME                                    READY   STATUS    RESTARTS   AGE
cp-cadvisor-8rk2b                       1/1     Running   0          15s
cp-cadvisor-nsjr6                       1/1     Running   0          15s
cp-collectd-h5krd                       1/1     Running   0          23s
cp-collectd-jc9m2                       1/1     Running   0          23s
cp-prometheus-node-exporter-blc6p       1/1     Running   0          17s
cp-prometheus-node-exporter-qbvdx       1/1     Running   0          17s
prometheus-cp-prometheus-prometheus-0   4/4     Running   1          33s
```
```
$ kubectl get svc -n edge1
NAME                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)
cadvisor                      NodePort    10.43.53.122   <none>        80:30091/TCP
collectd                      ClusterIP   10.43.222.34   <none>        9103/TCP
cp-prometheus-node-exporter   ClusterIP   10.43.17.242   <none>        9100/TCP
cp-prometheus-prometheus      NodePort    10.43.26.155   <none>        9090:30090/TCP
prometheus-operated           ClusterIP   None           <none>        9090/TCP
```
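For example, to reach the Prometheus UI from your workstation (service name taken from the listing above):

```bash
# Forward the Prometheus service to localhost, then browse http://localhost:9090.
kubectl port-forward -n edge1 svc/cp-prometheus-prometheus 9090:9090
```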
146 #### Install Minio Model repository
* Prerequisite: a dynamic storage provisioner must be enabled, either rook-ceph ($DA_WORKING_DIR/00-init) or an alternative provisioner.
```bash
cd $DA_WORKING_DIR/minio
```
Edit values.yaml to set the credentials used to access the Minio UI:
```yaml
accessKey: "onapdaas"
secretKey: "onapsecretdaas"
```
```bash
helm install -n minio . -f values.yaml --namespace=edge1
```
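Once the pods are up, you can reach the Minio UI the same way as Prometheus. The service name below is an assumption based on the release name, so confirm it with `kubectl get svc -n edge1`:

```bash
# Forward the Minio service locally and log in with the accessKey/secretKey set above.
kubectl port-forward -n edge1 svc/minio 9000:9000
```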
159 #### Onboard an Inference Application