.. This work is licensed under a
   Creative Commons Attribution 4.0 International License.
   http://creativecommons.org/licenses/by/4.0
.. _strimzi-policy-label:
Policy Framework with Strimzi-Kafka communication
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This page explains how to set up a local Kubernetes cluster and a minimal Helm setup to run and deploy the Policy Framework on a single host.
The rationale for this page is to spin up a development environment quickly and efficiently, without the hassle of setting up the multi-node cluster and network file share that are required in a full deployment.

These instructions are for development purposes only. We are using the lightweight `microk8s <https://microk8s.io/>`_ as our Kubernetes environment.

Troubleshooting tips are included for issues you may encounter during installation.
Prerequisites
-------------

- One VM running Ubuntu 20.04 LTS (it should also work on 18.04), with internet access to download charts/containers and the OOM repo
- Sufficient RAM, depending on how many components you want to deploy; around 20G of RAM allows for a few components, and the minimal setup requires AAF, Policy, and Strimzi-Kafka
This guide covers the following steps:

- Install/remove Microk8s with the appropriate version
- Install/remove Helm with the appropriate version
- Install the required Helm plugins
- Install ChartMuseum as a local helm repo
- Build all OOM charts and store them in the chart repo
- Fine tune the deployment based on your VM capacity and component needs
- Deploy/undeploy charts
- Enable communication over Kafka
Install/Upgrade Microk8s with appropriate version
-------------------------------------------------
Microk8s is a lightweight bundled version of Kubernetes maintained by Canonical. It has the advantage of being well integrated with snap on Ubuntu, which makes it very easy to manage, upgrade and work with.

More info on: https://microk8s.io/docs

There are two things to know about microk8s:

1) It is wrapped by snap, which is nice, but you need to understand that it is not exactly the same as a proper k8s installation (more info below on some specific commands).

2) It does not use docker as the container runtime; it uses containerd. This is not an issue, just be aware of it, as you won't see the containers with classic docker commands.
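For example, to list the running containers you go through the bundled containerd CLI (or kubectl) rather than docker. A quick illustration; note that kubelet-managed containers live in containerd's ``k8s.io`` namespace:

.. code-block:: bash

   # containers are managed by containerd, so use the bundled ctr/kubectl instead of docker
   sudo microk8s ctr --namespace k8s.io containers list
   sudo microk8s kubectl get pods --all-namespaces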
If you have a previous version of microk8s, you first need to uninstall it. An upgrade is possible, but it is not recommended between major versions, so uninstalling is the fast and safe path:

.. code-block:: bash

   sudo snap remove microk8s

You need to select the appropriate version to install. To see all available versions, and then install the one you need, run:

.. code-block:: bash

   sudo snap info microk8s
   sudo snap install microk8s --classic --channel=1.19/stable
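You can then wait for the cluster to come up and verify its state:

.. code-block:: bash

   sudo microk8s status --wait-ready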
You may need to change your firewall configuration to allow pod-to-pod and pod-to-internet communication, and you should enable the dns and storage add-ons:

.. code-block:: bash

   sudo ufw allow in on cni0 && sudo ufw allow out on cni0
   sudo ufw default allow routed
   sudo microk8s enable dns storage
Install/remove Helm with appropriate version
--------------------------------------------
Helm is the package manager for k8s. We require a specific version for each ONAP release, so it is best to look at the OOM guides to see which one is required: `<https://helm.sh>`_

For the Honolulu release we need Helm 3. A significant improvement with Helm 3 is that it does not require a dedicated pod running in the Kubernetes cluster (no more Tiller pod).

As Helm is self contained, it's pretty straightforward to install/upgrade, and we can also use snap to install the right version:

.. code-block:: bash

   sudo snap install helm --classic --channel=3.5/stable
Note: You may encounter some log issues when installing helm with snap.

Normally the helm logs are available in ``~/.local/share/helm/plugins/deploy/cache/onap/logs``; if you notice that the log files are all of size 0, you can uninstall helm with snap and reinstall it manually:

.. code-block:: bash

   wget https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz
   tar xvfz helm-v3.5.4-linux-amd64.tar.gz
   sudo mv linux-amd64/helm /usr/local/bin/helm
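Either way, a quick sanity check confirms the installed version:

.. code-block:: bash

   helm version --short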
Tweak Microk8s
--------------

The tweaks below are not strictly necessary, but they help in making the setup simpler and more flexible.
A) Increase the max number of pods & add privileged config

As ONAP may deploy a significant number of pods, we need to inform kubelet to allow more than the default (as we plan an all-in-one-box setup). If you only plan to run a limited number of components, this is not needed.

To change the max number of pods, we need to add a parameter to the startup line of kubelet.

1. Edit the file located at:

.. code-block:: bash

   sudo nano /var/snap/microk8s/current/args/kubelet

add the following line at the end (250 is an example limit; adjust it to your needs):

.. code-block:: none

   --max-pods=250

save the file and restart kubelet to apply the change:

.. code-block:: bash

   sudo service snap.microk8s.daemon-kubelet restart
2. Edit the file located at:

.. code-block:: bash

   sudo nano /var/snap/microk8s/current/args/kube-apiserver

add the following line at the end:

.. code-block:: none

   --allow-privileged=true

save the file and restart the API server to apply the change:

.. code-block:: bash

   sudo service snap.microk8s.daemon-apiserver restart
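To confirm the new pod limit is active, you can query the node capacity with the bundled kubectl (the local kubectl client is only installed in the next step):

.. code-block:: bash

   # prints the maximum number of pods the node will accept
   sudo microk8s kubectl get node -o jsonpath='{.items[0].status.capacity.pods}'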
B) Run a local copy of kubectl

Microk8s comes bundled with kubectl, and you can interact with it by doing:

.. code-block:: bash

   sudo microk8s kubectl describe node

To make things simpler, as we will most likely interact a lot with kubectl, let's install a local copy of kubectl so we can interact with the Kubernetes cluster in a more straightforward way.

We need kubectl 1.19 to match the cluster we have installed. Let's again use snap to quickly choose and install the version we need:

.. code-block:: bash

   sudo snap install kubectl --classic --channel=1.19/stable
Now we need to provide our local kubectl client with a proper config file so that it can access the cluster. Microk8s allows you to retrieve the cluster config very easily.

Simply create a .kube folder in your home directory and dump the config there:

.. code-block:: bash

   cd ~
   mkdir -p .kube
   cd .kube
   sudo microk8s.config > config
   chmod 600 config

The last line restricts the permissions on the config file, which avoids helm complaining about too open permissions.
You should now have helm and kubectl ready to interact with the cluster. You can verify this by trying:

.. code-block:: bash

   kubectl version

This should output both the local client and server version:

.. code-block:: none

   Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
   Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.7-34+02d22c9f4fb254", GitCommit:"02d22c9f4fb2545422b2b28e2152b1788fc27c2f", GitTreeState:"clean", BuildDate:"2021-02-11T20:13:16Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Clone the OOM repo
------------------

The Policy Kubernetes chart is located in the `OOM repository <https://github.com/onap/oom/tree/master/kubernetes/policy>`_.
This chart includes the different policy components, referred to as <policy-component-name> below.

Please refer to the `OOM documentation <https://docs.onap.org/projects/onap-oom/en/latest>`_ on how to install and deploy ONAP.

.. code-block:: bash

   git clone "https://gerrit.onap.org/r/oom"
Install the needed Helm plugins
-------------------------------

ONAP deployments use the deploy and undeploy plugins for helm.

To install them, just run:

.. code-block:: bash

   helm plugin install ./oom/kubernetes/helm/plugins/undeploy/
   helm plugin install ./oom/kubernetes/helm/plugins/deploy/

or copy them directly into helm's local plugin folder:

.. code-block:: bash

   cp -R ~/oom/kubernetes/helm/plugins/ ~/.local/share/helm/plugins

This copies the plugins into your home directory's helm plugin folder and makes them available as helm commands.

Another plugin we need is the push plugin; with Helm 3 there is no longer an embedded repo to use:

.. code-block:: bash

   helm plugin install https://github.com/chartmuseum/helm-push.git --version 0.10.0

Once all plugins are installed, you should see them listed when doing:
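.. code-block:: bash

   helm plugin list

The deploy, undeploy and cm-push plugins should appear in the list.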
Install the Strimzi Kafka Operator
----------------------------------

Add the strimzi helm repo:

.. code-block:: bash

   helm repo add strimzi https://strimzi.io/charts/

Install the operator:

.. code-block:: bash

   helm install strimzi-kafka-operator strimzi/strimzi-kafka-operator --namespace strimzi-system --version 0.28.0 --set watchAnyNamespace=true --create-namespace
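Before moving on, you can check that the operator pod is up and running:

.. code-block:: bash

   kubectl get pods -n strimzi-system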
Install the chartmuseum repository
----------------------------------

Download the chartmuseum binary and run it as a background task:

.. code-block:: bash

   curl -LO https://s3.amazonaws.com/chartmuseum/release/latest/bin/linux/amd64/chartmuseum
   chmod +x ./chartmuseum
   sudo mv ./chartmuseum /usr/local/bin
   /usr/local/bin/chartmuseum --port=8080 --storage="local" --storage-local-rootdir="~/chartstorage" &

You should see the chartmuseum repo starting locally; you can press enter to return to your terminal.
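Optionally, you can confirm chartmuseum is serving requests; its health endpoint should return ``{"healthy":true}``:

.. code-block:: bash

   curl -s http://localhost:8080/health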
You can now inform helm that a local repo is available for use:

.. code-block:: bash

   helm repo add local http://localhost:8080
Tip: If you hit the error below while adding the local repo, remove the repo, update, and add it again:

.. code-block:: none

   Error: repository name (local) already exists, please specify a different name

.. code-block:: console

   # helm repo remove local
   "local" has been removed from your repositories
   # helm repo update
   Hang tight while we grab the latest from your chart repositories...
   ...Successfully got an update from the "stable" chart repository
   Update Complete. ⎈Happy Helming!⎈
   # helm repo add local http://localhost:8080
   2022-09-24T11:43:29.777+0100 INFO [1] Request served {"path": "/index.yaml", "comment": "", "clientIP": "127.0.0.1", "method": "GET", "statusCode": 200, "latency": "4.107325ms", "reqID": "bd5d6089-b921-4086-a88a-13bd608a4135"}
   "local" has been added to your repositories
Build all OOM charts and store them in the chart repo
-----------------------------------------------------

You should now be ready to build all helm charts. Ensure you have "make" installed:

.. code-block:: bash

   sudo apt install make

Go into the oom/kubernetes folder and run a full make. You can speed it up by skipping the linting of the charts:

.. code-block:: bash

   cd ~/oom/kubernetes
   make all -e SKIP_LINT=TRUE; make onap -e SKIP_LINT=TRUE

You'll notice quite a few messages popping up in the terminal running chartmuseum, showing that it accepts and stores the generated charts; that's normal. If you want, just open another terminal to run the helm commands.

Once the build completes, you should be ready to deploy ONAP.
Fine tune deployment based on your VM capacity and component needs
------------------------------------------------------------------

Edit onap/values.yaml to select the components to deploy. For this use case, we set the components below to true:

.. code-block:: yaml

   policy:
     enabled: true
   strimzi:
     enabled: true

Save the file and we are all set to DEPLOY.
Installing or Upgrading Policy Components
=========================================

The assumption is that you have cloned the charts from the OOM repository into a local directory.

**Step 1** Go to the policy charts and edit the properties in the values.yaml files to make any changes to a particular policy component, if required:

.. code-block:: bash

   cd oom/kubernetes/policy/components/<policy-component-name>
**Step 2** Build the charts:

.. code-block:: bash

   make SKIP_LINT=TRUE policy

SKIP_LINT is only used to reduce the "make" time.
**Step 3** Undeploy the already deployed policy components.

After undeploying the policy components, keep monitoring the policy pods until they go away:

.. code-block:: bash

   helm uninstall <my-helm-release>-<policy-component-name> -n <namespace>
   kubectl get pods -n <namespace> | grep <policy-component-name>
**Step 4** Make sure there is no orphan database persistent volume or claim.

First, find out whether there is an orphan database PV or PVC with the following commands:

.. code-block:: bash

   kubectl get pvc -n <namespace> | grep <policy-component-name>
   kubectl get pv -n <namespace> | grep <policy-component-name>

If there are any orphan resources, delete them with:

.. code-block:: bash

   kubectl delete pvc <orphan-policy-pvc-name>
   kubectl delete pv <orphan-policy-pv-name>
**Step 5** Delete NFS persisted data for the policy components.

Connect to the machine where the file system is persisted and then execute the command below:

.. code-block:: bash

   rm -fr /dockerdata-nfs/<my-helm-release>/<policy-component-name>
**Step 6** Re-deploy the policy pods.

First you need to ensure that the onap namespace exists (it must now be created prior to deployment):

.. code-block:: bash

   kubectl create namespace onap

After deploying policy, keep monitoring the policy pods until they come up:

.. code-block:: bash

   helm deploy dev local/onap -n onap --create-namespace --set global.masterPassword=test -f ./onap/values.yaml --verbose --debug
   kubectl get pods -n <namespace> | grep <policy-component-name>

You should see all pods starting up, and you should be able to see logs using kubectl, dive into containers, etc.
Restarting a faulty component
=============================

Each policy component can be restarted independently by issuing the following command:

.. code-block:: bash

   kubectl delete pod <policy-component-pod-name> -n <namespace>

Some handy commands and tips for troubleshooting:

.. code-block:: bash

   kubectl logs dev-policy-api-7bb656d67f-qqmtk
   kubectl describe pod dev-policy-api-7bb656d67f-qqmtk
   kubectl exec -it <podname> -- ifconfig
   kubectl exec -it <podname> -- pwd
   kubectl exec -it <podname> -- sh
TIP: https://kubernetes.io/docs/reference/kubectl/cheatsheet/

TIP: If only the policy pods need to be brought down and back up:

.. code-block:: bash

   helm uninstall dev-policy
   make policy -e SKIP_LINT=TRUE
   helm install dev-policy local/policy -n onap --set global.masterPassword=test --debug
TIP: If there is an error like "dev-strimzi-entity-operator not found. Retry 60/60" while bringing things up, point the cluster at its own kube-dns service. First look up the CLUSTER-IP of kube-dns:

.. code-block:: bash

   kubectl -n kube-system get svc kube-dns

Then:

1. Stop the microk8s cluster with the ``microk8s stop`` command.
2. Edit the kubelet configuration file /var/snap/microk8s/current/args/kubelet and add the following lines, replacing <IPAddress> with the CLUSTER-IP found above:

   .. code-block:: none

      --cluster-dns=<IPAddress>
      --cluster-domain=cluster.local

3. Start the microk8s cluster with the ``microk8s start`` command.
4. Check the status of the microk8s cluster with the ``microk8s status`` command.
How to undeploy and start fresh
===============================

The easiest way is to use kubectl; you can clean up the cluster with a few commands:

.. code-block:: bash

   kubectl delete namespace onap
   kubectl delete pv --all

or, for a more thorough cleanup:

.. code-block:: bash

   kubectl delete pvc --all; kubectl delete pv --all; kubectl delete cm --all; kubectl delete deploy --all; kubectl delete secret --all; kubectl delete jobs --all; kubectl delete pod --all
   rm -rvI /dockerdata-nfs/dev/
   rm -rf ~/.cache/helm/repository/local-*
   rm -rf ~/.cache/helm/repository/policy-11.0.0.tgz
   rm -rf ~/.cache/helm/repository/onap-11.0.0.tgz
   rm -rf /dockerdata-nfs/*
   helm repo remove local

Don't forget to create the namespace again before deploying (helm won't complain if it is not there, but you'll end up with an empty cluster after it finishes).

Note: you could also reset the K8S cluster by using the microk8s feature: ``microk8s reset``
Enable communication over Kafka
-------------------------------

To build a custom Kafka cluster, set UseStrimziKafka in policy/values.yaml to false, or do not have any Strimzi-Kafka policy configuration in oom/kubernetes/policy.

The following commands create a simple custom Kafka cluster. This cluster is not an ONAP-based Strimzi Kafka cluster; it is set up with the ready-to-use commands from https://strimzi.io/quickstarts/

.. code-block:: bash

   kubectl create namespace kafka
After that, we feed Strimzi with a simple Custom Resource, which will give you a small persistent Apache Kafka cluster with one node each for Apache Zookeeper and Apache Kafka:

.. code-block:: bash

   # Apply the `Kafka` Cluster CR file
   kubectl apply -f https://strimzi.io/examples/latest/kafka/kafka-persistent-single.yaml -n kafka

We now need to wait while Kubernetes starts the required pods, services, and so on:

.. code-block:: bash

   kubectl wait kafka/my-cluster --for=condition=Ready --timeout=300s -n kafka

The above command might time out if you're downloading images over a slow connection. If that happens, you can always run it again.
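While waiting, you can watch the pods come up in a second terminal (purely for convenience):

.. code-block:: bash

   kubectl get pods -n kafka -w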
Once the cluster is running, you can run a simple producer to send messages to a Kafka topic (the topic will be created automatically):

.. code-block:: bash

   kubectl -n kafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.31.1-kafka-3.2.3 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic

And to receive them in a different terminal, you can run:

.. code-block:: bash

   kubectl -n kafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.31.1-kafka-3.2.3 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic my-topic --from-beginning
NOTE: If targeting an ONAP-based Strimzi Kafka cluster with security certs, set UseStrimziKafka to true.
By doing this, a policy-kafka-user and policy-kafka-topics are created in Strimzi Kafka.
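For reference, this corresponds to the ``config.useStrimziKafka`` flag listed in the chart properties table at the end of this page; a minimal values override looks like:

.. code-block:: yaml

   config:
     useStrimziKafka: true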
In the case of a custom Kafka cluster, topics have to be either created manually with the command below, or created programmatically with "allow.auto.create.topics = true" in the consumer config properties. Replace the topic below in the code block and create as many topics as needed for the component:
.. code-block:: bash

   cat << EOF | kubectl create -n kafka -f -
   apiVersion: kafka.strimzi.io/v1beta2
   kind: KafkaTopic
   metadata:
     name: policy-acruntime-participant
     labels:
       strimzi.io/cluster: "my-cluster"
   spec:
     partitions: 3
     replicas: 1
   EOF
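You can confirm that the topic resource has been created and is ready:

.. code-block:: bash

   kubectl get kafkatopics -n kafka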
Policy application properties need to be modified for communication over Kafka.
Modify the configuration of the topic properties for the components that need to communicate over Kafka.

Topic properties for an ONAP-based Strimzi Kafka cluster:

.. code-block:: yaml

   topicSources:
     - topic: policy-acruntime-participant
       servers:
         - dev-strimzi-kafka-bootstrap:9092
       topicCommInfrastructure: kafka

   # Consumer properties
   group-id: policy-group
   key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
   value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
   partition.assignment.strategy: org.apache.kafka.clients.consumer.RoundRobinAssignor
   enable.auto.commit: false
   auto.offset.reset: earliest
   security.protocol: SASL_PLAINTEXT
   properties.sasl.mechanism: SCRAM-SHA-512
   properties.sasl.jaas.config: ${JAASLOGIN}

   topicSinks:
     - topic: policy-acruntime-participant
       servers:
         - dev-strimzi-kafka-bootstrap:9092
       topicCommInfrastructure: kafka

   # Producer properties
   key.serializer: org.apache.kafka.common.serialization.StringSerializer
   value.serializer: org.apache.kafka.common.serialization.StringSerializer
   retry.backoff.ms: 150
   security.protocol: SASL_PLAINTEXT
   properties.sasl.mechanism: SCRAM-SHA-512
   properties.sasl.jaas.config: ${JAASLOGIN}
Note: security.protocol can simply be PLAINTEXT if targeting a custom Kafka cluster.

Topic properties for a custom Kafka cluster:

.. code-block:: yaml

   topicSources:
     - topic: policy-acruntime-participant
       servers:
         - my-cluster-kafka-bootstrap.mykafka.svc:9092
       topicCommInfrastructure: kafka

   # Consumer properties
   group-id: policy-group
   key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
   value.deserializer: org.apache.kafka.common.serialization.StringDeserializer
   partition.assignment.strategy: org.apache.kafka.clients.consumer.RoundRobinAssignor
   enable.auto.commit: false
   auto.offset.reset: earliest
   security.protocol: PLAINTEXT

   topicSinks:
     - topic: policy-acruntime-participant
       servers:
         - my-cluster-kafka-bootstrap.mykafka.svc:9092
       topicCommInfrastructure: kafka

   # Producer properties
   key.serializer: org.apache.kafka.common.serialization.StringSerializer
   value.serializer: org.apache.kafka.common.serialization.StringSerializer
   retry.backoff.ms: 150
   security.protocol: PLAINTEXT
Ensure the strimzi and policy pods are running and the topics are created, with the commands below:

.. code-block:: none

   $ kubectl get kafka -n onap
   NAME          DESIRED KAFKA REPLICAS   DESIRED ZK REPLICAS   READY   WARNINGS
   dev-strimzi   2                        2                     True    True

   $ kubectl get kafkatopics -n onap
   NAME                                                                                               CLUSTER       PARTITIONS   REPLICATION FACTOR   READY
   consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a                                        dev-strimzi   50           2                    True
   policy-acruntime-participant                                                                       dev-strimzi   10           2                    True
   policy-heartbeat                                                                                   dev-strimzi   10           2                    True
   policy-notification                                                                                dev-strimzi   10           2                    True
   policy-pdp-pap                                                                                     dev-strimzi   10           2                    True
   strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55                                     dev-strimzi   1            2                    True
   strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b   dev-strimzi   1            2                    True
For a custom Kafka cluster:

.. code-block:: none

   $ kubectl get kafkatopics -n mykafka
   NAME                                                                                               CLUSTER      PARTITIONS   REPLICATION FACTOR   READY
   strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55                                     my-cluster   1            1                    True
   strimzi-topic-operator-kstreams-topic-store-changelog---b75e702040b99be8a9263134de3507fc0cc4017b   my-cluster   1            1                    True
   consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a                                        my-cluster   50           1                    True
   policy-acruntime-participant                                                                       my-cluster   3            1                    True
   policy-pdp-pap                                                                                     my-cluster   3            1                    True
   policy-heartbeat                                                                                   my-cluster   3            1                    True
   policy-notification                                                                                my-cluster   3            1                    True
The following commands execute a quick check to see whether the Kafka producer and Kafka consumer are working, with the given bootstrap server and topic:

.. code-block:: bash

   kubectl -n mykafka run kafka-producer -ti --image=quay.io/strimzi/kafka:0.31.1-kafka-3.2.3 --rm=true --restart=Never -- bin/kafka-console-producer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic policy-acruntime-participant

   kubectl -n mykafka run kafka-consumer -ti --image=quay.io/strimzi/kafka:0.31.1-kafka-3.2.3 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server my-cluster-kafka-bootstrap:9092 --topic policy-acruntime-participant
The following table lists some properties that can be specified as Helm chart values:
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
| Property                              | Description                                                                                             | Default Value                 |
+=======================================+=========================================================================================================+===============================+
| config.useStrimziKafka                | Set to false if targeting a custom Kafka cluster                                                        | true                          |
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
| bootstrap-servers                     | Kafka hostname and port                                                                                 | ``<kafka-bootstrap>:9092``    |
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
| consumer.client-id                    | Kafka consumer client id                                                                                |                               |
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
| security.protocol                     | Kafka security protocol.                                                                                | ``SASL_PLAINTEXT``            |
|                                       | Some possible values are:                                                                               |                               |
|                                       |                                                                                                         |                               |
|                                       | * ``PLAINTEXT``                                                                                         |                               |
|                                       | * ``SASL_PLAINTEXT``, for authentication                                                                |                               |
|                                       | * ``SASL_SSL``, for authentication and encryption                                                       |                               |
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
| sasl.mechanism                        | Kafka security SASL mechanism. Required for SASL_PLAINTEXT and SASL_SSL protocols.                      | Not defined                   |
|                                       | Some possible values are:                                                                               |                               |
|                                       |                                                                                                         |                               |
|                                       | * ``PLAIN``, for PLAINTEXT                                                                              |                               |
|                                       | * ``SCRAM-SHA-512``, for SSL                                                                            |                               |
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
| sasl.jaas.config                      | Kafka security SASL JAAS configuration. Required for SASL_PLAINTEXT and SASL_SSL protocols.             | Not defined                   |
|                                       | Some possible values are:                                                                               |                               |
|                                       |                                                                                                         |                               |
|                                       | * ``org.apache.kafka.common.security.plain.PlainLoginModule required username="..." password="...";``, |                               |
|                                       |   for PLAINTEXT                                                                                         |                               |
|                                       | * ``org.apache.kafka.common.security.scram.ScramLoginModule required username="..." password="...";``, |                               |
|                                       |   for SSL                                                                                               |                               |
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
| ssl.trust-store-type                  | Kafka security SASL SSL store type. Required for SASL_SSL protocol.                                     | Not defined                   |
|                                       | Some possible values are:                                                                               |                               |
|                                       |                                                                                                         |                               |
|                                       | * ``JKS``                                                                                               |                               |
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
| ssl.trust-store-location              | Kafka security SASL SSL store file location. Required for SASL_SSL protocol.                            | Not defined                   |
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
| ssl.trust-store-password              | Kafka security SASL SSL store password. Required for SASL_SSL protocol.                                 | Not defined                   |
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
| ssl.endpoint.identification.algorithm | Kafka security SASL SSL broker hostname identification verification. Required for SASL_SSL protocol.   | Not defined                   |
|                                       | Possible value is:                                                                                      |                               |
|                                       |                                                                                                         |                               |
|                                       | * ``""``, empty string to disable                                                                       |                               |
+---------------------------------------+---------------------------------------------------------------------------------------------------------+-------------------------------+
If you have deployed the robot pod or have a local robot installation, you can perform some tests using the scripts provided in the OOM repo.

Browse to the test suite you have started, open the folder, and click report.html to see the robot test results.