# Distributed Analytics Framework
| Component  | Version |
|------------|---------|
| Kubernetes | 1.12.3+ |
#### Download Framework
```bash
git clone https://github.com/onap/demo.git
DA_WORKING_DIR=$PWD/demo/vnfs/DAaaS/deploy
```
#### Install Rook-Ceph for Persistent Storage
Note: In rare cases, the Flex volume path can differ from the default value. values.yaml has the most common flexvolume path configured. In case of errors related to flexvolume, refer to https://rook.io/docs/rook/v0.9/flexvolume.html#configuring-the-flexvolume-path to find the appropriate flexvolume-path and set it in values.yaml.
```bash
cd $DA_WORKING_DIR/00-init/rook-ceph
helm install -n rook . -f values.yaml --namespace=rook-ceph-system
```
Check the status of the pods in the rook-ceph-system and rook-ceph namespaces. Once all pods are in the Ready state, move on to the next section.
```
$ kubectl get pods -n rook-ceph-system
NAME                                 READY   STATUS    RESTARTS   AGE
rook-ceph-agent-9wszf                1/1     Running   0          121s
rook-ceph-agent-xnbt8                1/1     Running   0          121s
rook-ceph-operator-bc77d6d75-ltwww   1/1     Running   0          158s
rook-discover-bvj65                  1/1     Running   0          133s
rook-discover-nbfrp                  1/1     Running   0          133s
```
```
$ kubectl -n rook-ceph get pod
NAME                               READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-d9dcf5748-5s9ft    1/1     Running     0          77s
rook-ceph-mon-a-7d8f675889-nw5pl   1/1     Running     0          105s
rook-ceph-mon-b-856fdd5cb9-5h2qk   1/1     Running     0          94s
rook-ceph-mon-c-57545897fc-j576h   1/1     Running     0          85s
rook-ceph-osd-0-7cbbbf749f-j8fsd   1/1     Running     0          25s
rook-ceph-osd-1-7f67f9646d-44p7v   1/1     Running     0          25s
rook-ceph-osd-2-6cd4b776ff-v4d68   1/1     Running     0          25s
rook-ceph-osd-prepare-vx2rz        0/2     Completed   0          60s
rook-ceph-tools-5bd5cdb949-j68kk   1/1     Running     0          53s
```
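Instead of polling `kubectl get pods` by hand, you can block until the pods report Ready. This is a sketch: the `app=rook-ceph-osd` label is an assumption based on the labels rook v0.9 applies, and the completed `osd-prepare` job pod never becomes Ready, so it is excluded.

```shell
# Block until the rook operator pods report Ready (or time out after 5 minutes)
kubectl wait --for=condition=Ready pods --all -n rook-ceph-system --timeout=300s

# In the rook-ceph namespace the osd-prepare job pods end up Completed, not
# Ready, so wait on the long-running OSD daemons by label instead
kubectl wait --for=condition=Ready pods -l app=rook-ceph-osd -n rook-ceph --timeout=300s
```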
#### Troubleshooting Rook-Ceph installation

If rook was previously installed on your machine, successfully or not, a fresh installation of the rook operator may run into issues. The steps below help you clean up.

* First, check whether any rook CRDs already exist:
```bash
kubectl get crds | grep rook
```
If this returns results like:
```
otc@otconap7 /var/lib/rook $ kc get crds | grep rook
cephblockpools.ceph.rook.io         2019-07-19T18:19:05Z
cephclusters.ceph.rook.io           2019-07-19T18:19:05Z
cephfilesystems.ceph.rook.io        2019-07-19T18:19:05Z
cephobjectstores.ceph.rook.io       2019-07-19T18:19:05Z
cephobjectstoreusers.ceph.rook.io   2019-07-19T18:19:05Z
volumes.rook.io                     2019-07-19T18:19:05Z
```
then delete these pre-existing rook CRDs by generating a delete manifest and applying it:
```bash
helm template -n rook . -f values.yaml > ~/delete.yaml
kubectl delete -f ~/delete.yaml
```
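If the chart directory used for the original install is no longer available, the leftover CRDs can also be removed directly. This is a sketch: it deletes every CRD whose name matches `rook`, which also removes any remaining custom resources of those types.

```shell
# Delete all rook CRDs by name, bypassing the helm-generated manifest
kubectl get crds -o name | grep rook | xargs kubectl delete
```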
After this, delete the below directory on all the nodes.
```bash
sudo rm -rf /var/lib/rook/
```
Then attempt the rook installation again:
```bash
helm install -n rook . -f values.yaml --namespace=rook-ceph-system
```
#### Install Operator package
```bash
cd $DA_WORKING_DIR/operator
helm install -n operator . -f values.yaml --namespace=operator
```
Check the status of the pods in the operator namespace and verify that the Prometheus operator pods are in the Ready state.
```
kubectl get pods -n operator
NAME                                                      READY   STATUS    RESTARTS
m3db-operator-0                                           1/1     Running   0
op-etcd-operator-etcd-backup-operator-6cdc577f7d-ltgsr    1/1     Running   0
op-etcd-operator-etcd-operator-79fd99f8b7-fdc7p           1/1     Running   0
op-etcd-operator-etcd-restore-operator-855f7478bf-r7qxp   1/1     Running   0
op-prometheus-operator-operator-5c9b87965b-wjtw5          1/1     Running   1
op-sparkoperator-6cb4db884c-75rcd                         1/1     Running   0
strimzi-cluster-operator-5bffdd7b85-rlrvj                 1/1     Running   0
```
#### Install Collection package
Note: collectd.conf is available in the $DA_WORKING_DIR/collection/charts/collectd/resources/config directory. Any valid collectd.conf can be placed here.

Default (for custom collectd, skip this section):
```bash
cd $DA_WORKING_DIR/collection
helm install -n cp . -f values.yaml --namespace=edge1
```
Custom collectd:
1. Build the custom collectd image.
2. Set COLLECTD_IMAGE_NAME with the appropriate image_repository:tag.
3. Push the image to the docker registry: `docker push ${COLLECTD_IMAGE_NAME}`
4. Edit values.yaml and change the image repository and tag using COLLECTD_IMAGE_NAME appropriately.
5. Place the collectd.conf in `$DA_WORKING_DIR/collection/charts/collectd/resources/config`.
6. `cd $DA_WORKING_DIR/collection`
7. `helm install -n cp . -f values.yaml --namespace=edge1`
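The build-and-push steps above can be sketched as follows. The registry, repository, and tag are placeholders; substitute your own values.

```shell
# Hypothetical registry/repo:tag -- replace with your own
export COLLECTD_IMAGE_NAME=registry.example.com/custom-collectd:1.0

# Build from the directory containing your custom collectd Dockerfile,
# then push so the edge cluster can pull it
docker build -t ${COLLECTD_IMAGE_NAME} .
docker push ${COLLECTD_IMAGE_NAME}
```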
#### Verify Collection package
* Check if all pods are up in the edge1 namespace.
* Check the Prometheus UI by port-forwarding port 9090 (the default for the Prometheus service).
```
$ kubectl get pods -n edge1
NAME                                    READY   STATUS    RESTARTS   AGE
cp-cadvisor-8rk2b                       1/1     Running   0          15s
cp-cadvisor-nsjr6                       1/1     Running   0          15s
cp-collectd-h5krd                       1/1     Running   0          23s
cp-collectd-jc9m2                       1/1     Running   0          23s
cp-prometheus-node-exporter-blc6p       1/1     Running   0          17s
cp-prometheus-node-exporter-qbvdx       1/1     Running   0          17s
prometheus-cp-prometheus-prometheus-0   4/4     Running   1          33s
```
```
$ kubectl get svc -n edge1
NAME                            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)
cadvisor                        NodePort    10.43.53.122   <none>        80:30091/TCP
collectd                        ClusterIP   10.43.222.34   <none>        9103/TCP
cp13-prometheus-node-exporter   ClusterIP   10.43.17.242   <none>        9100/TCP
cp13-prometheus-prometheus      NodePort    10.43.26.155   <none>        9090:30090/TCP
prometheus-operated             ClusterIP   None           <none>        9090/TCP
```
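To reach the Prometheus UI, forward local port 9090 to the Prometheus service. The service name below is taken from the listing above; the `cp13-` prefix comes from the helm release name and may differ in your deployment.

```shell
# Forward localhost:9090 to the Prometheus service in the edge1 namespace,
# then browse to http://localhost:9090
kubectl port-forward -n edge1 svc/cp13-prometheus-prometheus 9090:9090
```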
#### Install Minio Model repository
* Prerequisite: A dynamic storage provisioner needs to be enabled, either rook-ceph ($DA_WORKING_DIR/00-init) or an alternate provisioner.
```bash
cd $DA_WORKING_DIR/minio
```
Edit values.yaml to set the credentials used to access the minio UI.
```yaml
accessKey: "onapdaas"
secretKey: "onapsecretdaas"
```
```bash
helm install -n minio . -f values.yaml --namespace=edge1
```
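Once the chart is up, the minio UI can be reached via a port-forward and the credentials configured above. This is a sketch: the service name `minio` is an assumption based on the chart's default naming and may differ in your release.

```shell
# Forward localhost:9000 to the minio service; log in with accessKey/secretKey
kubectl port-forward -n edge1 svc/minio 9000:9000
```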
#### Onboard messaging platform

We currently support the strimzi-based kafka operator.
Navigate to the ```$DA_WORKING_DIR/deploy/messaging/charts/strimzi-kafka-operator``` directory.
Use the below command:
```bash
helm install . -f values.yaml --name sko --namespace=test
```
NOTE: Make changes in values.yaml if required.

Once the strimzi operator is ready, you should see a pod like:
```
strimzi-cluster-operator-5cf7648b8c-zgxv7   1/1   Running   0   53m
```
Once this is done, install the kafka package like any other helm chart.
Navigate to ```$DA_WORKING_DIR/deploy/messaging``` and use the command:
```bash
helm install --name kafka-cluster charts/kafka/
```
Once this is done, you should have the following pods up and running:
```
kafka-cluster-entity-operator-b6557fc6c-hlnkm   3/3   Running   0   47m
kafka-cluster-kafka-0                           2/2   Running   0   48m
kafka-cluster-kafka-1                           2/2   Running   0   48m
kafka-cluster-kafka-2                           2/2   Running   0   48m
kafka-cluster-zookeeper-0                       2/2   Running   0   49m
kafka-cluster-zookeeper-1                       2/2   Running   0   49m
kafka-cluster-zookeeper-2                       2/2   Running   0   49m
```
You should see the following services when you do a ```kubectl get svc```:
```
NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
kafka-cluster-kafka-bootstrap    ClusterIP   10.XX.YY.ZZ   <none>        9091/TCP,9092/TCP,9093/TCP   53m
kafka-cluster-kafka-brokers     ClusterIP   None          <none>        9091/TCP,9092/TCP,9093/TCP   53m
kafka-cluster-zookeeper-client   ClusterIP   10.XX.YY.ZZ   <none>        2181/TCP                     55m
kafka-cluster-zookeeper-nodes    ClusterIP   None          <none>        2181/TCP,2888/TCP,3888/TCP   55m
```
#### Testing messaging

You can test your kafka brokers by creating a simple producer and consumer.

Producer:
```bash
kubectl run kafka-producer -ti --image=strimzi/kafka:0.12.2-kafka-2.2.1 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list kafka-cluster-kafka-bootstrap:9092 --topic my-topic
```
Consumer:
```bash
kubectl run kafka-consumer -ti --image=strimzi/kafka:0.12.2-kafka-2.2.1 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server kafka-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
```
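After producing and consuming, you can confirm the topic exists on the cluster. This sketch uses the same strimzi image as above and the `kafka-cluster-zookeeper-client` service from the earlier listing; `my-topic` should appear in the output once the producer has run.

```shell
# List topics known to the cluster via the zookeeper client service
kubectl run kafka-topics -ti --image=strimzi/kafka:0.12.2-kafka-2.2.1 --rm=true --restart=Never -- bin/kafka-topics.sh --zookeeper kafka-cluster-zookeeper-client:2181 --list
```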
#### Onboard an Inference Application