.. This work is licensed under a Creative Commons Attribution 4.0
.. International License. http://creativecommons.org/licenses/by/4.0
.. Copyright © 2017-2020 Aarna Networks, Inc.

.. _Elastic Stack: https://www.elastic.co/products
.. _Elasticsearch: https://www.elastic.co/elasticsearch
.. _Kibana Discover: https://www.elastic.co/guide/en/kibana/current/discover.html
The purpose of the ONAP Log Analytics project is to provide standardized logs across all ONAP components, using the ELK framework for log capture, indexing, and presentation/search.

ONAP uses the ELK stack (Elasticsearch, Logstash, Kibana) for centralized logging of all ONAP components. The aggregated logging framework uses Filebeat to ship logs from each pod to the ELK stack, where they are processed and stored. This requires each pod to run an additional container for Filebeat and to make the relevant log files accessible between containers. Filebeat runs as a sidecar container in each ONAP component pod and pushes logs to the Logstash NodePort.
|logging-architecture|
Deploy ONAP Logging Component
=============================

The following are the detailed steps to install the logging component and access the ONAP components' logs via the centralized dashboard UI, Kibana Discover.
The logging analytics stack (Elasticsearch, Logstash, Kibana) is provided as part of the OOM deployment via the ``log`` component.
1) Check if the logging component is deployed
---------------------------------------------

Run the command below to check whether the logging component is installed.

.. code-block:: bash

  helm list | grep -i log
  dev-log        1    Thu Dec 10 10:47:05 2020    DEPLOYED    log-6.0.0    onap

If the command returns nothing, the log component is not deployed.
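The check above can be scripted. The sketch below runs against a sample line mirroring the ``helm list`` output shown in this section, so it can be tried without a cluster; in practice you would pipe the real ``helm list`` output instead.

```shell
# Sample line mirroring the `helm list` output above (assumed format)
releases='dev-log        1    Thu Dec 10 10:47:05 2020    DEPLOYED    log-6.0.0    onap'

# The release is considered present when a log-* chart shows status DEPLOYED
if echo "$releases" | grep -q 'DEPLOYED[[:space:]]*log-'; then
    echo "log component deployed"
else
    echo "log component missing"
fi
```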
2) Deploy log component (if it is not deployed)
-----------------------------------------------

a) Get the helm chart for the log component
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

  helm search | grep log
  local/log        6.0.0        ONAP Logging ElasticStack

b) Deploy the log component
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

  helm upgrade --install dev-log local/log --namespace onap --timeout 900 --set 'flavor=unlimited'
3) Verify if log component is deployed now
------------------------------------------

a) Verify the log component deployment status
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

  helm list | grep -i log
  dev-log        1    Thu Dec 10 10:47:05 2020    DEPLOYED    log-6.0.0    onap

The status should show DEPLOYED.
b) Verify that all the logging pods are up and running
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

  kubectl get pod -n onap | grep log-
  dev-log-elasticsearch-6c8f844446-rjpvs          1/1     Running     0     10d
  dev-log-kibana-6d57c74667-t6hm2                 1/1     Running     0     10d
  dev-log-logstash-7fb656b4c9-2mttc               1/1     Running     0     10d
  dev-log-logstash-7fb656b4c9-jdkdf               1/1     Running     0     10d
  dev-log-logstash-7fb656b4c9-zmtl7               1/1     Running     0     10d
  dev-modeling-etsicatalog-744b5b5955-5khg8       2/2     Running     1     12d
  dev-so-catalog-db-adapter-988fb5db4-qzgss       1/1     Running     0     11d

All the logging pods should be up and running. (The last two pods above match ``log-`` only because their names contain ``catalog``.)
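Readiness can also be checked programmatically. The sketch below parses sample lines mirroring the ``kubectl get pod`` output above (in practice you would pipe the live output); a pod is healthy when its READY column shows *n/n* and its STATUS is Running.

```shell
# Sample lines mirroring the `kubectl get pod` output above
pods='dev-log-elasticsearch-6c8f844446-rjpvs   1/1   Running   0   10d
dev-log-kibana-6d57c74667-t6hm2                1/1   Running   0   10d
dev-log-logstash-7fb656b4c9-2mttc              1/1   Running   0   10d'

# Print the name of any pod that is not fully ready or not Running
not_ready=$(echo "$pods" | awk '{split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") print $1}')

if [ -z "$not_ready" ]; then
    echo "all log pods ready"
else
    echo "not ready: $not_ready"
fi
```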
c) Verify that all the log services are exposed
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: bash

  kubectl get svc -n onap | grep log-
  log-es           NodePort    10.43.85.25      <none>    9200:30254/TCP    10d
  log-es-tcp       ClusterIP   10.43.26.175     <none>    9300/TCP          10d
  log-kibana       NodePort    10.43.189.12     <none>    5601:30253/TCP    10d
  log-ls           NodePort    10.43.160.207    <none>    5044:30255/TCP    10d
  log-ls-http      ClusterIP   10.43.208.52     <none>    9600/TCP          10d

The Elasticsearch, Kibana, and Logstash services are exposed externally as NodePorts; the remaining services are cluster-internal.
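The external ports can be pulled out of this listing with a little text processing. In the sketch below (run against sample lines mirroring the output above), the PORT(S) column is field 5, formatted as ``<port>:<nodePort>/<protocol>`` for NodePort services.

```shell
# Sample lines mirroring the `kubectl get svc` output above
svcs='log-es       NodePort    10.43.85.25      <none>    9200:30254/TCP    10d
log-kibana   NodePort    10.43.189.12     <none>    5601:30253/TCP    10d
log-ls       NodePort    10.43.160.207    <none>    5044:30255/TCP    10d'

# For each NodePort service, print its name and externally reachable port
echo "$svcs" | awk '$2 == "NodePort" {split($5, p, ":"); split(p[2], q, "/"); print $1, q[1]}'
```

This prints one ``<service> <nodePort>`` pair per line (e.g. ``log-kibana 30253``).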
ONAP components with Filebeat containers
========================================

As mentioned earlier, the ONAP logging framework uses Filebeat to send logs from each pod to the ELK stack, where they are processed and stored. Below is the list of ONAP components with Filebeat containers.
.. code-block:: bash

  kubectl get pods -n onap | grep 2/2
  dep-dcae-hv-ves-collector-868f7b7ffc-9mgk6        2/2     Running     0     12d
  dep-dcae-prh-8499df6dcf-x2bz7                     2/2     Running     0     12d
  dep-dcae-tca-analytics-cc44cb89-8qg2p             2/2     Running     0     12d
  dep-dcae-tcagen2-6d59df6fb4-85qtr                 2/2     Running     0     12d
  dep-dcae-ves-collector-55f5b4f469-jd5xd           2/2     Running     0     12d
  dev-aaf-sms-vault-0                               2/2     Running     0     12d
  dev-aai-babel-79d8d4f674-9l4h6                    2/2     Running     0     12d
  dev-aai-data-router-66d8897bc6-6vw77              2/2     Running     0     12d
  dev-aai-graphadmin-7664654967-t78kb               2/2     Running     0     12d
  dev-aai-modelloader-7486f7c665-8bmvp              2/2     Running     0     12d
  dev-aai-resources-5c996776fd-nvdbq                2/2     Running     0     12d
  dev-aai-schema-service-c9464576-5z4nh             2/2     Running     0     12d
  dev-aai-search-data-6c899c7466-8qvkx              2/2     Running     0     12d
  dev-aai-sparky-be-8f5569986-j88kv                 2/2     Running     0     12d
  dev-aai-traversal-6b89655c6d-r8kkf                2/2     Running     0     12d
  dev-dcae-dashboard-7c4d647c68-hpqqr               2/2     Running     0     12d
  dev-dcae-deployment-handler-68ff4db5d5-jk62q      2/2     Running     0     12d
  dev-dcae-inventory-api-6d584b55d5-lzh8m           2/2     Running     0     12d
  dev-dcae-policy-handler-587bb84c49-9gpd4          2/2     Running     0     12d
  dev-dcaemod-genprocessor-78c588cfb5-9v5q2         2/2     Running     0     12d
  dev-dmaap-dr-node-0                               2/2     Running     0     12d
  dev-dmaap-dr-prov-745f65979c-vw97l                2/2     Running     0     12d
  dev-esr-server-759ccd4fcd-tvq48                   2/2     Running     0     12d
  dev-modeling-etsicatalog-744b5b5955-5khg8         2/2     Running     1     12d
  dev-msb-discovery-5fb7c77c97-khl72                2/2     Running     0     12d
  dev-msb-eag-bdff68dbf-7rprj                       2/2     Running     0     12d
  dev-msb-iag-5cd9744464-nq5dz                      2/2     Running     0     12d
  dev-multicloud-6b6d7f9f4c-szbsx                   2/2     Running     0     12d
  dev-multicloud-azure-56d85dfbf-jshpp              2/2     Running     0     12d
  dev-multicloud-k8s-5498c868b4-2vzw8               2/2     Running     0     12d
  dev-multicloud-pike-6697844fb5-5ckj7              2/2     Running     0     12d
  dev-multicloud-vio-69d6cb7cfd-g87xh               2/2     Running     0     12d
  dev-pdp-0                                         2/2     Running     0     12d
  dev-policy-5f85767b74-c5btk                       2/2     Running     0     12d
  dev-portal-app-6f5cbdbf6f-z5w9g                   2/2     Running     0     12d
  dev-portal-sdk-79ffcff9d5-56xj8                   2/2     Running     0     12d
  dev-sdc-be-68b4dddf69-qz9d6                       2/2     Running     0     9d
  dev-sdc-dcae-be-95dcd7ccf-kk9pc                   2/2     Running     0     9d
  dev-sdc-dcae-dt-6c8568db54-4jvgv                  2/2     Running     0     9d
  dev-sdc-dcae-fe-66894f8765-dx2t6                  2/2     Running     0     9d
  dev-sdc-dcae-tosca-lab-59d6f8b74f-2985g           2/2     Running     0     9d
  dev-sdc-fe-59977f556d-qmszf                       2/2     Running     0     9d
  dev-sdc-onboarding-be-679c4df66c-4kskk            2/2     Running     0     9d
  dev-sdc-wfd-fe-54f8596994-zvpgp                   2/2     Running     0     9d
  dev-sdnc-0                                        2/2     Running     0     12d
  dev-sdnrdb-coordinating-only-544c5bc596-49gw7     2/2     Running     0     12d
  dev-so-6cb779c78b-fqrkx                           2/2     Running     0     11d
  dev-so-bpmn-infra-6b8cdb54f7-vcm5f                2/2     Running     0     11d
  dev-so-openstack-adapter-7584878db6-srpjs         2/2     Running     0     11d
  dev-so-sdc-controller-747f4485f9-tjwhb            2/2     Running     0     11d
  dev-so-sdnc-adapter-5c5f98bf7f-cbd2c              2/2     Running     0     11d
  dev-vfc-generic-vnfm-driver-7f459b74cf-2kcq9      2/2     Running     0     12d
  dev-vfc-huawei-vnfm-driver-5b57557467-5j87x       2/2     Running     0     12d
  dev-vfc-juju-vnfm-driver-6455bd954b-zbfwh         2/2     Running     0     12d
  dev-vfc-nslcm-6d96959f5f-9fpdm                    2/2     Running     0     12d
  dev-vfc-resmgr-7768d6889d-rlw87                   2/2     Running     0     12d
  dev-vfc-vnflcm-86f65c4459-gz9q7                   2/2     Running     0     12d
  dev-vfc-vnfmgr-5cb6467fdd-wbcfb                   2/2     Running     0     12d
  dev-vfc-vnfres-5c5c69885b-bh59q                   2/2     Running     1     12d
  dev-vfc-zte-vnfm-driver-66c978dfc7-l57vq          2/2     Running     0     12d
  dev-vid-688f46488f-ctlwh                          2/2     Running     0     12d
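The component behind each pod can be read off the pod name. The sketch below is a heuristic only (not an official ONAP naming rule): it strips the ``dev-`` release prefix and the ReplicaSet/pod hash suffixes that Kubernetes appends; StatefulSet pods such as ``dev-sdnc-0`` end in an ordinal instead and are left untouched by the second pattern.

```shell
# Heuristic sketch: recover the component name from a Deployment pod name.
# The suffix pattern assumes an 8-10 char ReplicaSet hash and a 5 char pod hash.
pod='dev-so-bpmn-infra-6b8cdb54f7-vcm5f'
component=$(echo "$pod" | sed -E 's/^dev-//; s/-[a-z0-9]{8,10}-[a-z0-9]{5}$//')
echo "$component"   # so-bpmn-infra
```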
For example, let's look at the SO component and check its Filebeat container details.
.. code-block:: bash

  kubectl get pods -n onap | grep 'dev-so-'
  dev-so-6cb779c78b-fqrkx                        2/2     Running       0     11d
  dev-so-appc-orchestrator-5df5cc4f9b-2mwgm      1/1     Running       0     11d
  dev-so-bpmn-infra-6b8cdb54f7-vcm5f             2/2     Running       0     11d
  dev-so-catalog-db-adapter-988fb5db4-qzgss      1/1     Running       0     11d
  dev-so-mariadb-config-job-zqp56                0/1     Completed     0     12d
  dev-so-monitoring-c69f6bdf8-ldxn8              1/1     Running       0     11d
  dev-so-nssmf-adapter-6fdbbbbf57-6prlq          1/1     Running       0     11d
  dev-so-openstack-adapter-7584878db6-srpjs      2/2     Running       0     11d
  dev-so-request-db-adapter-8467c89c76-g4hgt     1/1     Running       0     11d
  dev-so-sdc-controller-747f4485f9-tjwhb         2/2     Running       0     11d
  dev-so-sdnc-adapter-5c5f98bf7f-cbd2c           2/2     Running       0     11d
  dev-so-ve-vnfm-adapter-7fb59df855-98n98        1/1     Running       0     11d
  dev-so-vfc-adapter-5cd8454bb6-gvklj            1/1     Running       0     11d
  dev-so-vnfm-adapter-678f655bff-q9cr9           1/1     Running       0     11d
In the above output, notice that five pods show a READY status of 2/2, which means each of those pods runs two containers; these are the pods that carry a Filebeat sidecar container.
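That count can be verified mechanically: the READY column is field 2, so filtering for ``2/2`` and counting lines gives the number of sidecar-equipped pods. The sketch below uses sample lines mirroring the SO listing; in practice you would pipe the live ``kubectl get pods`` output.

```shell
# Sample lines mirroring the SO pod listing (name and READY column)
pods='dev-so-6cb779c78b-fqrkx                      2/2   Running     0   11d
dev-so-appc-orchestrator-5df5cc4f9b-2mwgm    1/1   Running     0   11d
dev-so-bpmn-infra-6b8cdb54f7-vcm5f           2/2   Running     0   11d
dev-so-openstack-adapter-7584878db6-srpjs    2/2   Running     0   11d
dev-so-sdc-controller-747f4485f9-tjwhb       2/2   Running     0   11d
dev-so-sdnc-adapter-5c5f98bf7f-cbd2c         2/2   Running     0   11d'

# Count pods whose READY column is exactly 2/2 (app container + Filebeat)
echo "$pods" | awk '$2 == "2/2"' | wc -l
```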
Now use the following command to check the Filebeat container details for one of the SO pods.
.. code-block:: bash

  kubectl describe pod -n onap dev-so-6cb779c78b-fqrkx | grep -i filebeat
  Image:          docker.elastic.co/beats/filebeat:5.5.0
  Image ID:       docker-pullable://docker.elastic.co/beats/filebeat@sha256:fe7602b641ed8ee288f067f7b31ebde14644c4722d9f7960f176d621097a5942
  /usr/share/filebeat/data from dev-so-data-filebeat (rw)
  /usr/share/filebeat/filebeat.yml from dev-so-filebeat-conf (rw,path="filebeat.yml")
  dev-so-filebeat-conf:
    Name:       dev-so-so-filebeat-configmap
  dev-so-data-filebeat:
So, we have seen that the ELK stack and the Filebeat containers are deployed and configured as part of the OOM deployment.
Access Kibana UI to visualize the ONAP components' logs
============================================================

The frontend UI, Kibana, can be accessed from a web browser on port 30253. The command below shows that the log-kibana service is exposed on NodePort 30253.

.. code-block:: bash

  kubectl get svc -n onap | grep log-kibana
  log-kibana       NodePort    10.43.189.12     <none>    5601:30253/TCP    10d
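The Kibana URL can be assembled from any node's IP address plus the NodePort parsed out of the service line above. In the sketch below, ``NODE_IP`` is a hypothetical placeholder; substitute the IP of any node in your cluster, and pipe the live ``kubectl get svc`` output in place of the sample line.

```shell
# Hypothetical node address; substitute the IP of any Kubernetes node
NODE_IP="10.0.0.5"

# Sample line mirroring `kubectl get svc -n onap | grep log-kibana`
svc='log-kibana       NodePort    10.43.189.12     <none>    5601:30253/TCP    10d'

# PORT(S) is field 5, formatted <port>:<nodePort>/<proto>; extract the NodePort
nodeport=$(echo "$svc" | awk '{split($5, p, ":"); split(p[2], q, "/"); print q[1]}')
echo "http://${NODE_IP}:${nodeport}"   # http://10.0.0.5:30253
```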
Follow the steps below to access the Kibana UI and view the logs
------------------------------------------------------------------
1) To launch the Kibana UI, navigate to ``http://<vm-ip-address>:30253`` in your browser. Refer to the screenshot below.

|image1|

2) Provide the index name or pattern as ``logstash-*`` and the time filter field as ``timestamp``, then click the Create button.

|image2|

3) Click Visualize in the left pane, then click "Create a visualization".

|image3|

4) Now click Data table as shown below.

|image4|

5) Now choose the search source. ``logstash-*`` is available in the list; click on it.

|image5|

6) Select the bucket type "split rows" as shown in the screenshot below.

|image6|

7) Provide the following details.

|image7|

8) Click Options, enter a value for "per page", and click "apply changes".

|image8|

9) Click the clock icon in the top right corner and select the time range "Last 7 days". Click Auto-refresh, select the desired auto-refresh frequency, and then click the "apply changes" button.

|image9|

10) Now you can see that the list of all ONAP component logs has been populated.

|image10|

11) Click Discover in the left pane and it will populate the logs.

|image11|

12) You can search the logs of any component by entering its name in the search field. For example, enter SO.

|image12|

13) Now you can see the log details by clicking the expand arrow for any particular date and time.

|image13|

|image14|
.. |logging-architecture| image:: ../media/logging-architecture.png
.. |image1| image:: ../media/kibana_ui.png
.. |image2| image:: ../media/configure-index-pattern.png
.. |image3| image:: ../media/visualization.png
.. |image4| image:: ../media/data-table.png
.. |image5| image:: ../media/search-source-logstash.png
.. |image6| image:: ../media/select-bucket-type.png
.. |image7| image:: ../media/split-row-data.png
.. |image8| image:: ../media/visualize-options.png
.. |image9| image:: ../media/time-range.png
.. |image10| image:: ../media/onap-component-logs.png
.. |image11| image:: ../media/onap-discover.png
.. |image12| image:: ../media/search-component-logs.png
.. |image13| image:: ../media/log-expand.png
.. |image14| image:: ../media/log-details.png