- move doc8 dependency from docs/requirements-docs.txt to tox.ini
since it is not needed by other tox profiles
- disable doc8 tox profile voting since line-length issues are not
completely fixed yet
- fix a few line-length issues
Issue-ID: DOC-765
Signed-off-by: guillaume.lambert <guillaume.lambert@orange.com>
Change-Id: I704fc2ee8087ffbb8a83d693f6dc5a5f7c992b10
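A sketch of the resulting `tox.ini` profile after the move (the exact
`commands` line is an assumption; only the dependency move is stated by this
change)::

    [testenv:doc8]
    deps = doc8
    commands = doc8 docs/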
===== ===== ====== ====================
.. note::
- Kubernetes supports a maximum of 110 pods per node - configurable in the --max-pods=n setting off the
- "additional kubelet flags" box in the kubernetes template window described in 'ONAP Development - 110 pod limit Wiki'
- - this limit does not need to be modified . The use of many small
- nodes is preferred over a few larger nodes (for example 14x16GB - 8 vCores each).
+ Kubernetes supports a maximum of 110 pods per node - configurable in the
+ --max-pods=n setting off the "additional kubelet flags" box in the kubernetes
+ template window described in 'ONAP Development - 110 pod limit Wiki'
+ - this limit does not need to be modified. The use of many small nodes is
+ preferred over a few larger nodes (for example 14x16GB - 8 vCores each).
Subsets of ONAP may still be deployed on a single node.
Cloud Installation
└── configs
The common section of charts consists of a set of templates that assist with
-parameter substitution (`_name.tpl`, `_namespace.tpl` and others) and a set of charts
-for components used throughout ONAP. When the common components are used by other charts they
-are instantiated each time or we can deploy a shared instances for several components.
+parameter substitution (`_name.tpl`, `_namespace.tpl` and others) and a set of
+charts for components used throughout ONAP. When the common components are used
+by other charts, they are instantiated each time, or a shared instance can be
+deployed for several components.
All of the ONAP components have charts that follow the pattern shown below:
to suit your deployment with items like the OpenStack tenant information.
.. note::
- Standard and example override files (e.g. `onap-all.yaml`, `openstack.yaml`) can be found in
- the `oom/kubernetes/onap/resources/overrides/` directory.
+ Standard and example override files (e.g. `onap-all.yaml`, `openstack.yaml`)
+ can be found in the `oom/kubernetes/onap/resources/overrides/` directory.
a. You may want to selectively enable or disable ONAP components by changing
the Robot Helm charts or Robot section of `openstack.yaml`
- c. Encrypt the OpenStack password using the java based script for SO Helm charts
- or SO section of `openstack.yaml`.
+ c. Encrypt the OpenStack password using the Java-based script for SO Helm
+ charts or SO section of `openstack.yaml`.
d. Update the OpenStack parameters that will be used by Robot, SO and APPC Helm
charts or use an override file to replace them.
- e. Add in the command line a value for the global master password (global.masterPassword).
+ e. Add in the command line a value for the global master password
+ (global.masterPassword).
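Taken together, the steps above could translate into a deployment command such
as the following sketch (the release name `dev` and the override file path are
assumptions)::

    > helm deploy dev local/onap --namespace onap \
        -f ~/oom/kubernetes/onap/resources/overrides/openstack.yaml \
        --set global.masterPassword=YourMasterPassword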
> kubectl get pods -n onap -o=wide
.. note::
- While all pods may be in a Running state, it is not a guarantee that all components are running fine.
+ While all pods may be in a Running state, this does not guarantee that all
+ components are running correctly.
- Launch the healthcheck tests using Robot to verify that the components are healthy::
+ Launch the healthcheck tests using Robot to verify that the components are
+ healthy::
> ~/oom/kubernetes/robot/ete-k8s.sh onap health
-More examples of using the deploy and undeploy plugins can be found here: https://wiki.onap.org/display/DW/OOM+Helm+%28un%29Deploy+plugins
+More examples of using the deploy and undeploy plugins can be found here:
+https://wiki.onap.org/display/DW/OOM+Helm+%28un%29Deploy+plugins
Binaries can be found here for Linux and Mac: https://github.com/rancher/rke/releases/tag/v1.0.6
.. note::
- There are several ways to install RKE. Further parts of this documentation assumes that you have rke command available.
+ There are several ways to install RKE. The following parts of this
+ documentation assume that the rke command is available.
If you don't know how to install RKE you may follow the below steps:
* chmod +x ./rke_linux-amd64
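The remaining steps could be sketched as follows (the download URL is derived
from the release link above; the install location is an assumption)::

    > wget https://github.com/rancher/rke/releases/download/v1.0.6/rke_linux-amd64
    > chmod +x ./rke_linux-amd64
    > sudo mv ./rke_linux-amd64 /usr/local/bin/rke
    > rke --version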
| Alternatives Considered:
- - Kubernetes port forwarding was considered but discarded as it would require
- the end user to run a script that opens up port forwarding tunnels to each of
- the pods that provides a portal application widget.
+ - Kubernetes port forwarding was considered but discarded as it would
+ require the end user to run a script that opens up port forwarding tunnels
+ to each of the pods that provide a portal application widget.
- Reverting to a VNC server similar to what was deployed in the Amsterdam
- release was also considered but there were many issues with resolution, lack
- of volume mount, /etc/hosts dynamic update, file upload that were a tall order
- to solve in time for the Beijing release.
+ release was also considered, but there were many issues (resolution, lack
+ of volume mounts, dynamic /etc/hosts updates, file upload) that were a tall
+ order to solve in time for the Beijing release.
- - If you are not using floating IPs in your Kubernetes deployment and directly attaching
- a public IP address (i.e. by using your public provider network) to your K8S Node
- VMs' network interface, then the output of 'kubectl -n onap get services | grep "portal-app"'
+ - If you are not using floating IPs in your Kubernetes deployment and
+ directly attaching a public IP address (i.e. by using your public provider
+ network) to your K8S Node VMs' network interface, then the output of
+ 'kubectl -n onap get services | grep "portal-app"'
will show your public IP instead of the private network's IP. Therefore,
- you can grab this public IP directly (as compared to trying to find the floating
- IP first) and map this IP in /etc/hosts.
+ you can grab this public IP directly (as compared to trying to find the
+ floating IP first) and map this IP in /etc/hosts.
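As an illustration, the mapping could be done as follows (the portal hostname
is taken from the standard ONAP demo setup and may differ in your deployment)::

    > kubectl -n onap get services | grep "portal-app"
    > echo '<public-or-floating-IP> portal.api.simpledemo.onap.org' | sudo tee -a /etc/hosts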
.. figure:: oomLogoV2-Monitor.png
:align: right
[testenv:doc8]
deps = -rdocs/requirements-docs.txt
[testenv:docs]
deps = -rdocs/requirements-docs.txt