.. This work is licensed under a Creative Commons Attribution 4.0 International License.
.. http://creativecommons.org/licenses/by/4.0

.. contents:: Table of Contents
   :depth: 2

Types of Users and Usage Instructions:
======================================

+-------+-----------------------------+------------------------------------+
| Sr.No | User                        | Usage Instructions                 |
+=======+=============================+====================================+
| 1.    | Developers who are looking  | - Access the Nifi Web UI URL       |
|       | to onboard their mS         |   provided to you                  |
|       |                             | - Follow steps 3.c to the end of   |
|       |                             |   the document                     |
|       |                             | - You should be able to see your   |
|       |                             |   microservice in the Nifi Web     |
|       |                             |   UI by dragging the 'Processor'   |
|       |                             |   icon onto the canvas and         |
|       |                             |   searching for the name of your   |
|       |                             |   microservice/component/processor.|
+-------+-----------------------------+------------------------------------+
| 2.    | Designers who are building  | - Access the Nifi Web UI URL       |
|       | flows through the UI and    |   provided to you                  |
|       | triggering distribution     | - Follow steps 4 to the end of     |
|       |                             |   the document                     |
+-------+-----------------------------+------------------------------------+
| 3.    | Infrastructure/Admins who   | - Follow the document from start   |
|       | want to stand up DCAE MOD   |   to end                           |
|       | and validate it             |                                    |
+-------+-----------------------------+------------------------------------+

1. Pre-requisite for DCAE MOD Deployment
========================================

With the completion of the DCAE Helm transformation in the Jakarta
release, DCAE MOD has been enhanced to support Helm chart generation for
onboarded microservices. In order to support the Helm flow through MOD,
the following dependencies must be met:

- An accessible ChartMuseum registry (internal or external)

- As the provided registry is used both to pull required dependencies
  and to push newly generated charts, all common charts used by DCAE
  components must be available in this registry.

By default, the MOD charts are set to use the local ChartMuseum
registry. This can be modified by updating the `RuntimeAPI charts
deployment <https://git.onap.org/oom/tree/kubernetes/dcaemod/components/dcaemod-runtime-api/values.yaml#n44>`__.

ONAP deployments (gating) include a ChartMuseum installation within the
ONAP cluster (charts hosted at
https://github.com/onap/oom/tree/master/kubernetes/platform/components/chartmuseum).

Dependent charts such as dcaegen2-services-common, readinessCheck,
common, repositoryGenerator, postgres, mongo, serviceAccount and
certInitializer should be preloaded into this registry, as MOD retrieves
them while creating and linting the Helm charts for a new microservice.
To support the registry initialization, the following scripts have been
introduced:

- https://github.com/onap/oom/blob/master/kubernetes/contrib/tools/registry-initialize.sh

- https://github.com/onap/oom/blob/master/kubernetes/robot/demo-k8s.sh

Note: ChartMuseum is a platform component; it has to be enabled on
demand and is not available in a generic ONAP installation.

Follow the steps below to set up ChartMuseum and pre-load the required
charts.

Chartmuseum Installation
------------------------

Clone the OOM repository and deploy the optional Chartmuseum component.

**Chartmuseum Deployment**

.. code-block:: bash

    # git clone -b <BRANCH> http://gerrit.onap.org/r/oom --recurse-submodules
    cd ~/oom/kubernetes/platform/components/chartmuseum
    helm install dev-chartmuseum -n onap . -f ~/onap-1-override.yaml --set global.masterPassword=test1 --set global.pullPolicy=IfNotPresent

This instance of the ChartMuseum registry is deployed inside the ONAP
cluster and is different from the registry set up as part of the `OOM
deployment <https://docs.onap.org/projects/onap-oom/en/latest/oom_quickstart_guide.html>`__,
where a local Helm server is set up to serve charts and to pull/push the
charts generated by the make process.

Chartmuseum initialization
--------------------------

As noted earlier, there are two scripts available for the pre-load. The
`registry-initialize.sh <https://github.com/onap/oom/blob/master/kubernetes/contrib/tools/registry-initialize.sh>`__
script retrieves the ChartMuseum credentials from a secret and loads the
charts individually, based on its parameters (by default, with no
parameters, it loads all DCAE service charts and their dependencies).
`demo-k8s.sh <https://github.com/onap/oom/blob/master/kubernetes/robot/demo-k8s.sh>`__
is a wrapper script used in gating, which invokes
`registry-initialize.sh <https://github.com/onap/oom/blob/master/kubernetes/contrib/tools/registry-initialize.sh>`__
with the required parameters.

**Chartmuseum initialization via demo-k8s.sh**

.. code-block:: bash

    cd ~/oom/kubernetes/robot
    ./demo-k8s.sh onap registrySynch

**Chartmuseum initialization via registry-initialize script**

.. code-block:: bash

    cd ~/oom/kubernetes/contrib/tools
    ./registry-initialize.sh -d ../../dcaegen2-services/charts/ -n onap -r dev-chartmuseum
    ./registry-initialize.sh -d ../../dcaegen2-services/charts/ -n onap -r dev-chartmuseum -p common
    ./registry-initialize.sh -h repositoryGenerator -n onap -r dev-chartmuseum
    ./registry-initialize.sh -h readinessCheck -n onap -r dev-chartmuseum
    ./registry-initialize.sh -h dcaegen2-services-common -n onap -r dev-chartmuseum
    ./registry-initialize.sh -h postgres -n onap -r dev-chartmuseum
    ./registry-initialize.sh -h serviceAccount -n onap -r dev-chartmuseum
    ./registry-initialize.sh -h certInitializer -n onap -r dev-chartmuseum
    ./registry-initialize.sh -h mongo -n onap -r dev-chartmuseum

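After the pre-load, you may want a quick sanity check that every dependency chart actually landed in the registry. The sketch below is not part of the official OOM scripts; the registry URL and the idea of passing the ``/api/charts`` JSON listing in as an argument (so the check itself can be exercised offline) are assumptions to adapt to your cluster.

```shell
# Sketch (not an official OOM script): verify that the dependency charts
# pre-loaded above are visible in a ChartMuseum /api/charts listing.
REQUIRED_CHARTS="common repositoryGenerator readinessCheck dcaegen2-services-common postgres mongo serviceAccount certInitializer"

check_charts() {
  # $1 is the JSON body returned by ChartMuseum's /api/charts endpoint,
  # which is a JSON object keyed by chart name.
  local listing="$1" missing=""
  for chart in $REQUIRED_CHARTS; do
    printf '%s' "$listing" | grep -q "\"$chart\"" || missing="$missing $chart"
  done
  if [ -z "$missing" ]; then
    echo "all required charts present"
  else
    echo "missing:$missing"
  fi
}
```

Against a live registry (URL assumed) this could be invoked as ``check_charts "$(curl -s http://chart-museum:80/api/charts)"``.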
2. Deployment of DCAE MOD components via Helm charts
====================================================

The DCAE MOD components are deployed using the standard ONAP OOM
deployment process. When deploying ONAP using the helm deploy command,
DCAE MOD components are deployed when the dcaemod.enabled flag is set to
true, either via a --set option on the command line or by an entry in an
overrides file. In this respect, DCAE MOD is no different from any other
ONAP subsystem.

The default DCAE MOD deployment relies on an nginx ingress controller
being available in the Kubernetes cluster where DCAE MOD is being
deployed. The Rancher RKE installation process sets up a suitable
ingress controller. In order to enable the use of the ingress
controller, it is necessary to override the OOM default global settings
for ingress configuration. Specifically, the installation needs to set
the following configuration in an override file:

.. code-block:: yaml

    global:
      ingress:
        enabled: true
        virtualhost:
          baseurl: "simpledemo.onap.org"

When DCAE MOD is deployed with an ingress controller, several endpoints
are exposed outside the cluster at the ingress controller's external IP
address and port. (In the case of a Rancher RKE installation, there is
an ingress controller on every worker node, listening on the standard
HTTP port (80).) These exposed endpoints are needed by users working on
machines outside the Kubernetes cluster.

+--------------+--------------------------------------------------+--------------------------+
| **Endpoint** | **Routes to (cluster-internal address)**         | **Description**          |
+==============+==================================================+==========================+
| /nifi        | http://dcaemod-designtool:8080/nifi              | Design tool Web UI       |
+--------------+--------------------------------------------------+--------------------------+
| /nifi-api    | http://dcaemod-designtool:8080/nifi-api          | Design tool API          |
+--------------+--------------------------------------------------+--------------------------+
| /nifi-jars   | http://dcaemod-nifi-registry:18080/nifi-jars     | Flow registry listing of |
|              |                                                  | JAR files built from     |
|              |                                                  | component specs          |
+--------------+--------------------------------------------------+--------------------------+
| /onboarding  | http://dcaemod-onboarding-api:8080/onboarding    | Onboarding API           |
+--------------+--------------------------------------------------+--------------------------+
| /distributor | http://dcaemod-distributor-api:8080/distributor  | Distributor API          |
+--------------+--------------------------------------------------+--------------------------+

| To access the design Web UI, for example, a user would use the URL:
  http://*ingress_controller_address:ingress_controller_port*/nifi.
| *ingress_controller_address* is the IP address or DNS FQDN of the
  ingress controller and
| *ingress_controller_port* is the port on which the ingress controller
  is listening for HTTP requests. (If the port is 80, the HTTP default,
  then there is no need to specify a port.)

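The port-80 rule above can be captured in a tiny helper; ``nifi_url`` is a hypothetical name used only for illustration.

```shell
# Hypothetical helper: build the design tool URL from the ingress
# controller address and port, omitting the port when it is the HTTP
# default (80), as described above.
nifi_url() {
  local addr="$1" port="$2"
  if [ "$port" = "80" ]; then
    echo "http://$addr/nifi"
  else
    echo "http://$addr:$port/nifi"
  fi
}

# Examples:
#   nifi_url worker1.example.com 80    ->  http://worker1.example.com/nifi
#   nifi_url worker1.example.com 8080  ->  http://worker1.example.com:8080/nifi
```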
There are two additional *internal* endpoints that users need to know,
in order to configure a registry client and a distribution target in the
design tool's controller settings.

+------------------------+--------------------------------------------+
| **Configuration Item** | **Endpoint URL**                           |
+========================+============================================+
| Registry client        | http://dcaemod-nifi-registry:18080         |
+------------------------+--------------------------------------------+
| Distribution target    | http://dcaemod-runtime-api:9090            |
+------------------------+--------------------------------------------+

With the Guilin release, the OOM ingress template has been updated to
enable virtual hosts by default. All MOD API and UI access via the
ingress should use dcaemod.simpledemo.onap.org.

In order to access the design UI from a local machine, add an entry for
dcaemod.simpledemo.onap.org to /etc/hosts with the correct IP address
(any Kubernetes node IP can be specified).

Example deployment using a generic override file:

.. code-block:: bash

    helm install dev-dcaemod local/dcaemod --namespace onap -f ~/onap-override.yaml --set global.masterPassword=test1 --set global.pullPolicy=IfNotPresent

Using DCAE MOD without an Ingress Controller
--------------------------------------------

Not currently supported.

3. Configuring DCAE MOD
=======================

**a. Configure the Nifi Registry URL**

Next, check the Nifi settings by selecting the hamburger button in the
Nifi UI. It should lead you to the Nifi Settings screen.

Add a registry client. The registry client URL will be
http://dcaemod-nifi-registry:18080

**b. Add a distribution target, which will be the runtime API URL**

Set the distribution target in the controller settings.

The distribution target URL will be
`http://dcaemod-runtime-api:9090 <http://dcaemod-runtime-api:9090/>`__

Now let's access the Nifi (DCAE designer) UI:
http://dcaemod.simpledemo.onap.org/nifi

The IP address behind dcaemod.simpledemo.onap.org is the host address
or the DNS FQDN, if there is one, for one of the Kubernetes nodes.

**c. Get the artifacts to test and onboard.**

The MOD components have been upgraded to use the v3 specification to
support the Helm flow.

**Component Spec for DCAE-VES-Collector:** https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/spec/vescollector-componentspec-v3.json

**VES 5.28.4 Data Format:** https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/data-formats/VES-5.28.4-dataformat.json

**VES 7.30.2.1 Data Format:** https://git.onap.org/dcaegen2/collectors/ves/tree/etc/CommonEventFormat_30.2.1_ONAP.json

**VES Collector Response Data Format:** https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/data-formats/ves-response.json

**Component Spec for DCAE-TCAgen2:** https://git.onap.org/dcaegen2/collectors/ves/tree/dpo/spec/vescollector-componentspec.json

**TCA CL Data Format:** https://git.onap.org/dcaegen2/analytics/tca-gen2/tree/dcae-analytics/dpo/dcaeCLOutput.json

**TCA DMAAP Format:** https://git.onap.org/dcaegen2/analytics/tca-gen2/tree/dcae-analytics/dpo/dmaap.json

**TCA AAI Data Format:** https://git.onap.org/dcaegen2/analytics/tca-gen2/tree/dcae-analytics/dpo/aai.json

For the purpose of onboarding, a sample request body should be of the
form::

    { "owner": "<some value>", "spec": <some json object> }

where the JSON object inside the spec field can be a component spec
JSON.

Request bodies of this type will be used in the onboarding requests you
make using curl or the onboarding Swagger interface.

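Wrapping a spec file into this request body can be scripted; ``wrap_spec`` below is a hypothetical helper (not part of MOD), shown only to illustrate the body structure.

```shell
# Hypothetical helper: wrap a component spec (or data format) file in
# the onboarding request body shown above. The owner value is arbitrary.
wrap_spec() {
  local owner="$1" specfile="$2"
  # Emit: { "owner": "<owner>", "spec": <file contents> }
  printf '{ "owner": "%s", "spec": %s }' "$owner" "$(cat "$specfile")"
}

# Usage (output is suitable for: curl ... -d @-):
#   wrap_spec demo-user vescollector-componentspec-v3.json
```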
**The prepared sample request body for the dcae-ves-collector component
looks like so -**

See :download:`VES Collector Spec <./Sample-Input-Files/Request-body-of-Sample-Component_v3.json>`

**The prepared sample request body for a sample data format looks like
so -**

See :download:`VES data Format <./Sample-Input-Files/Request-body-of-Sample-Data-Format.json>`

Similar updates should be made for the other specification and
data-format files.

**d. Onboard data format and component-spec**

Each component has a description that tells what it does.

These requests will be of the form::

    curl -X POST http://<onboardingapi host>/onboarding/dataformats -H "Content-Type: application/json" -d @<filepath to request>

    curl -X POST http://<onboardingapi host>/onboarding/components -H "Content-Type: application/json" -d @<filepath to request>

For example::

    curl -X POST http://dcaemod.simpledemo.onap.org/onboarding/dataformats -H "Content-Type: application/json" -d @<filepath to request>

    curl -X POST http://dcaemod.simpledemo.onap.org/onboarding/components -H "Content-Type: application/json" -d @<filepath to request>

**Onboard Specs and DF**

.. code-block:: bash

    HOST=dcaemod.simpledemo.onap.org
    curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @ves-4.27.2-df.json
    curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @ves-5.28.4-df.json
    curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @ves-response-df.json
    curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @VES-7.30.2_ONAP-dataformat_onboard.json
    curl -X POST http://$HOST/onboarding/components -H "Content-Type: application/json" -d @vescollector-componentspec-v3-mod.json

    curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @dcaeCLOutput-resp.json
    curl -X POST http://$HOST/onboarding/dataformats -H "Content-Type: application/json" -d @aai-resp.json
    curl -X POST http://$HOST/onboarding/components -H "Content-Type: application/json" -d @tcagen2-componentspec-v3-mod.json

You can download the Component Specifications and Data Formats used for
the demo from here:
`demo.zip <https://wiki.onap.org/download/attachments/128713665/demo.zip?version=1&modificationDate=1646673042000&api=v2>`__

**e. Verify the resources were created using**

.. code-block:: bash

    curl -X GET http://dcaemod.simpledemo.onap.org/onboarding/dataformats

    curl -X GET http://dcaemod.simpledemo.onap.org/onboarding/components

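To spot your artifacts in the listing quickly, the JSON response can be filtered for its name fields; ``list_names`` is a hypothetical helper, and the ``"name"`` field is an assumption about the response shape - check the raw output first.

```shell
# Hypothetical helper: pull the "name" values out of an onboarding API
# JSON listing so you can spot your component quickly. The "name" field
# is an assumption about the response shape.
list_names() {
  grep -o '"name"[[:space:]]*:[[:space:]]*"[^"]*"' | sed 's/.*:[[:space:]]*"//; s/"$//'
}

# Usage:
#   curl -s http://dcaemod.simpledemo.onap.org/onboarding/components | list_names
```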
**f. Verify that the genprocessor (which periodically polls onboarding to convert component specs to Nifi processors) has converted the component**

Open http://dcaemod.simpledemo.onap.org/nifi-jars in a browser.

These jars should now be available for you to use in the Nifi UI as
processors.

4. Design & Distribution Flow
=============================

**a.** To start creating flows, we need to create a process group
first. The name of the process group will be the name of the flow. Drag
and drop the 'Process Group' icon from the DCAE Designer bar onto the
canvas.

Now enter the process group by double-clicking it.

You can now drag and drop the 'Processor' icon from the top DCAE
Designer tab onto the canvas. You can search for a particular component
in the search box that appears when you attempt to drag the 'Processor'
icon onto the canvas.

If the Nifi registry linking worked, you should see the "Import" button
when you try to add a Processor or Process Group to the Nifi canvas.

By clicking the Import button, we can import already-created, saved and
version-controlled flows from the Nifi registry, if they are available.

We can save created flows by version-controlling them like so, starting
with a right click anywhere on the canvas:

Ideally you would give the flow and the process group the same name,
because functionally they are similar.

When the flow is checked in, the bar at the bottom shows a green
checkmark.

Note: even if you only move a component around on the canvas, so that
its position changes, this is recognized as a change, and the flow will
have to be recommitted.

You can add additional components to your flow and connect them:
DcaeVesCollector connects to DockerTcagen2.

Along the way, you also need to provide topic names in the Settings
section. These can be arbitrary names.

To recap, see how DcaeVesCollector connects to DockerTcagen2. Look at
the connection relationships. Currently there is no way to validate
these relationships. Notice how it is required to name the topics.

The complete flow after joining our components looks like so:

**b. Submit/Distribute the flow:**

Once your flow is complete and saved in the Nifi registry, you can
choose to submit it for distribution.

If the flow was submitted successfully to the runtime API, you should
get a pop-up with a success message like so -

At this step, the design has been packaged and sent to the runtime API.

The runtime API generates the Helm charts for the components involved
in the flow and pushes them to the configured registry. For a successful
distribution, the RuntimeAPI logs should look like the following (they
can be viewed with the ``kubectl logs -f`` command).

**MOD/RuntimeAPI Console logs**

.. code-block:: text

    2022-03-07 18:13:25.865 INFO 1 --- [nio-9090-exec-8] o.o.d.r.web.controllers.GraphController : org.onap.dcae.runtime.web.models.GraphRequest@65efc9d3
    2022-03-07 18:13:26.119 INFO 1 --- [nio-9090-exec-1] o.o.d.r.web.controllers.GraphController : [org.onap.dcae.runtime.web.models.Action@335a6cff, org.onap.dcae.runtime.web.models.Action@291687dd, org.onap.dcae.runtime.web.models.Action@36d57691]
    2022-03-07 18:13:26.142 INFO 1 --- [nio-9090-exec-1] o.o.d.platform.helmchartgenerator.Utils : cloning dir/file at : /tmp/chart17927059362260733428
    2022-03-07 18:13:26.158 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : running: helm dep up /tmp/chart17927059362260733428
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈

    Downloading common from repo http://chart-museum:80
    Downloading repositoryGenerator from repo http://chart-museum:80
    Downloading readinessCheck from repo http://chart-museum:80
    Downloading dcaegen2-services-common from repo http://chart-museum:80
    Downloading postgres from repo http://chart-museum:80
    Downloading serviceAccount from repo http://chart-museum:80
    Downloading mongo from repo http://chart-museum:80
    Deleting outdated charts
    2022-03-07 18:13:26.273 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : running: helm lint /tmp/chart17927059362260733428
    2022-03-07 18:13:30.641 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : ==> Linting /tmp/chart17927059362260733428
    2022-03-07 18:13:30.642 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : [INFO] Chart.yaml: icon is recommended
    2022-03-07 18:13:30.642 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl :
    2022-03-07 18:13:30.642 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : 1 chart(s) linted, 0 chart(s) failed
    2022-03-07 18:13:30.646 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : running: helm package -d /tmp/chart13832736430918913290 /tmp/chart17927059362260733428
    2022-03-07 18:13:30.737 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : Successfully packaged chart and saved it to: /tmp/chart13832736430918913290/dcae-ves-collector-1.10.1.tgz
    2022-03-07 18:13:30.836 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.d.ChartMuseumDistributor : {"saved":true}
    2022-03-07 18:13:30.857 INFO 1 --- [nio-9090-exec-1] o.o.d.platform.helmchartgenerator.Utils : cloning dir/file at : /tmp/chart7638328545634423550
    2022-03-07 18:13:30.870 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : running: helm dep up /tmp/chart7638328545634423550
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "local" chart repository
    Update Complete. ⎈Happy Helming!⎈

    Downloading common from repo http://chart-museum:80
    Downloading repositoryGenerator from repo http://chart-museum:80
    Downloading readinessCheck from repo http://chart-museum:80
    Downloading dcaegen2-services-common from repo http://chart-museum:80
    Downloading postgres from repo http://chart-museum:80
    Downloading serviceAccount from repo http://chart-museum:80
    Downloading mongo from repo http://chart-museum:80
    Deleting outdated charts
    2022-03-07 18:13:31.022 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : running: helm lint /tmp/chart7638328545634423550
    2022-03-07 18:13:35.142 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : ==> Linting /tmp/chart7638328545634423550
    2022-03-07 18:13:35.143 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : [INFO] Chart.yaml: icon is recommended
    2022-03-07 18:13:35.143 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl :
    2022-03-07 18:13:35.143 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : 1 chart(s) linted, 0 chart(s) failed
    2022-03-07 18:13:35.148 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : running: helm package -d /tmp/chart14389934160290252569 /tmp/chart7638328545634423550
    2022-03-07 18:13:35.238 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.chartbuilder.HelmClientImpl : Successfully packaged chart and saved it to: /tmp/chart14389934160290252569/dcae-tcagen2-1.3.1.tgz
    2022-03-07 18:13:35.303 INFO 1 --- [nio-9090-exec-1] o.o.d.p.h.d.ChartMuseumDistributor : {"saved":true}

5. Validation & Deployment
==========================

**Verify that the charts are pushed into the registry**

The charts distributed by MOD/Runtime can be verified on the ChartMuseum
registry: http://chart-museum:80/api/charts

Refer to the supported APIs in the `Chartmuseum Docs <https://chartmuseum.com/docs/>`__.

Once the charts are retrieved, they can be installed using the helm
install command.

.. code-block:: bash

    curl -X GET http://<registry:port>/charts/dcae-tcagen2-1.3.1.tgz -u onapinitializer:demo123456! -o dcae-tcagen2-1.3.1.tgz
    helm install dev-dcaegen2-services -n onap dcae-tcagen2-1.3.1.tgz --set global.masterPassword=test1 --set global.pullPolicy=Always --set mongo.enabled=true

6. Environment Cleanup
======================

.. code-block:: bash

    helm delete -n onap dev-chartmuseum        # To remove the Chartmuseum setup completely
    helm delete -n onap dev-dcaegen2-services  # To remove the TCAGen2 service
    helm delete -n onap dev-dcaemod            # To undeploy DCAE MOD

    # Use the DELETE method on Chartmuseum to remove any specific chart package - examples below
    curl -X DELETE http://<registry:port>/api/charts/dcae-ves-collector/1.10.1 -u onapinitializer:demo123456!
    curl -X DELETE http://<registry:port>/api/charts/dcae-tcagen2/1.3.1 -u onapinitializer:demo123456!

**Also remove any persistence directories under /dockerdata-nfs/onap/ associated with chartmuseum and dcaemod.**

.. |image0| image:: ../images/1.png
.. |image1| image:: ../images/2.png
.. |image2| image:: ../images/3.png
.. |image3| image:: ../images/4.png
.. |image4| image:: ../images/5.png
.. |image5| image:: ../images/6.png
.. |image6| image:: ../images/7.png
.. |image7| image:: ../images/8.png
.. |image8| image:: ../images/9.png
.. |image9| image:: ../images/10.png
.. |image10| image:: ../images/11.png
.. |image11| image:: ../images/12.png
.. |image12| image:: ../images/13.png
.. |image13| image:: ../images/14.png
.. |image14| image:: ../images/15.png
.. |image15| image:: ../images/16.png
.. |image16| image:: ../images/17.png
.. |image17| image:: ../images/18.png
.. |image18| image:: ../images/19.png
.. |image19| image:: ../images/20.png