X-Git-Url: https://gerrit.onap.org/r/gitweb?a=blobdiff_plain;f=vnfs%2FDAaaS%2FREADME.md;h=3e1b7b2ddebfb8053fbf761475bca551b5521983;hb=3d1e26dc3e9af8cc066bef9971bb870b841c903b;hp=4b6fcf50cdcdb4190270cc9ef1ff9fc9939591e2;hpb=1bbe6436eeb56a8a091fc15ca3a023fcba31d24e;p=demo.git

diff --git a/vnfs/DAaaS/README.md b/vnfs/DAaaS/README.md
index 4b6fcf50..3e1b7b2d 100644
--- a/vnfs/DAaaS/README.md
+++ b/vnfs/DAaaS/README.md
@@ -44,6 +44,42 @@ rook-ceph-osd-prepare-vx2rz              0/2     Completed   0     60s
 rook-ceph-tools-5bd5cdb949-j68kk         1/1     Running     0     53s
 ```
 
+#### Troubleshooting Rook-Ceph installation
+
+If rook was previously installed on your machine (successfully or not) and you are
+attempting a fresh installation of the rook operator, you may run into issues.
+The steps below help you recover.
+
+* First, check whether any rook CRDs already exist:
+```
+kubectl get crds | grep rook
+```
+If this returns results such as:
+```
+otc@otconap7 /var/lib/rook $ kc get crds | grep rook
+cephblockpools.ceph.rook.io           2019-07-19T18:19:05Z
+cephclusters.ceph.rook.io             2019-07-19T18:19:05Z
+cephfilesystems.ceph.rook.io          2019-07-19T18:19:05Z
+cephobjectstores.ceph.rook.io         2019-07-19T18:19:05Z
+cephobjectstoreusers.ceph.rook.io     2019-07-19T18:19:05Z
+volumes.rook.io                       2019-07-19T18:19:05Z
+```
+then delete these pre-existing rook CRDs by generating a delete manifest with the
+commands below and applying it:
+```
+helm template -n rook . -f values.yaml > ~/delete.yaml
+kubectl delete -f ~/delete.yaml
+```
+
+After this, delete the directory below on all the nodes:
+```
+sudo rm -rf /var/lib/rook/
+```
+Now attempt the installation again:
+```
+helm install -n rook . -f values.yaml --namespace=rook-ceph-system
+```
+
 #### Install Operator package
 ```bash
 cd $DA_WORKING_DIR/operator
@@ -120,6 +156,64 @@ secretKey: "onapsecretdaas"
 ```
 helm install -n minio . -f values.yaml --namespace=edge1
 ```
+#### Onboard messaging platform
+
+We currently support the Strimzi-based Kafka operator.
+Navigate to the ```$DA_WORKING_DIR/deploy/messaging/charts/strimzi-kafka-operator``` directory.
+Use the command below:
+```
+helm install . -f values.yaml --name sko --namespace=test
+```
+
+NOTE: Make changes in values.yaml if required.
+
+Once the strimzi operator is ready, you should see a pod like this:
+
+```
+strimzi-cluster-operator-5cf7648b8c-zgxv7   1/1   Running   0   53m
+```
+
+Once this is done, install the kafka package like any other helm chart.
+Navigate to the ```$DA_WORKING_DIR/deploy/messaging``` directory and use the command:
+```
+helm install --name kafka-cluster charts/kafka/
+```
+
+Once this is done, you should have the following pods up and running:
+
+```
+kafka-cluster-entity-operator-b6557fc6c-hlnkm   3/3   Running   0   47m
+kafka-cluster-kafka-0                           2/2   Running   0   48m
+kafka-cluster-kafka-1                           2/2   Running   0   48m
+kafka-cluster-kafka-2                           2/2   Running   0   48m
+kafka-cluster-zookeeper-0                       2/2   Running   0   49m
+kafka-cluster-zookeeper-1                       2/2   Running   0   49m
+kafka-cluster-zookeeper-2                       2/2   Running   0   49m
+```
+
+You should see the following services when you do a ```kubectl get svc```:
+
+```
+kafka-cluster-kafka-bootstrap    ClusterIP   10.XX.YY.ZZ   9091/TCP,9092/TCP,9093/TCP   53m
+kafka-cluster-kafka-brokers     ClusterIP   None          9091/TCP,9092/TCP,9093/TCP   53m
+kafka-cluster-zookeeper-client   ClusterIP   10.XX.YY.ZZ   2181/TCP                     55m
+kafka-cluster-zookeeper-nodes    ClusterIP   None          2181/TCP,2888/TCP,3888/TCP   55m
+```
+#### Testing messaging
+
+You can test your kafka brokers by creating a simple producer and consumer.
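+
+If producing to a topic that does not exist yet fails in your setup (for example, if
+automatic topic creation is disabled on the brokers), you can pre-create the test topic
+through a Strimzi `KafkaTopic` custom resource before running the producer below. The
+manifest that follows is a minimal sketch and not part of the original charts: the file
+name `my-topic.yaml`, the partition and replica counts, and the target namespace are
+illustrative assumptions; the `strimzi.io/cluster` label must match the Kafka cluster
+name used above (`kafka-cluster`), and the resource should be created in the namespace
+where that cluster runs.
+```
+# my-topic.yaml -- hypothetical example manifest, adjust to your environment
+apiVersion: kafka.strimzi.io/v1beta1
+kind: KafkaTopic
+metadata:
+  name: my-topic
+  labels:
+    # must match the name of the Kafka cluster installed above
+    strimzi.io/cluster: kafka-cluster
+spec:
+  partitions: 3
+  replicas: 3
+```
+Apply it with ```kubectl apply -f my-topic.yaml``` and verify it with ```kubectl get kafkatopics```.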
+
+Producer:
+```
+kubectl run kafka-producer -ti --image=strimzi/kafka:0.12.2-kafka-2.2.1 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list kafka-cluster-kafka-bootstrap:9092 --topic my-topic
+```
+Consumer:
+```
+kubectl run kafka-consumer -ti --image=strimzi/kafka:0.12.2-kafka-2.2.1 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server kafka-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
+```
+
 #### Onboard an Inference Application
 ```
 TODO