create K8S cluster by TOSCA 55/28455/6
author    Jun Hu <jh245g@att.com>
          Wed, 17 Jan 2018 22:07:36 +0000 (17:07 -0500)
committer Jun (Nicolas) Hu <jh245g@att.com>
          Thu, 15 Feb 2018 15:19:32 +0000 (15:19 +0000)
Issue-ID: OOM-63
Change-Id: I1506e856328c5fd973a0de140982d8b1bbbac546
Signed-off-by: Nicolas Hu <jh245g@att.com>
12 files changed:
TOSCA/kubernetes-cluster-TOSCA/LICENSE [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/README.md [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/imports/cloud-config.yaml [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/imports/kubernetes.yaml [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/openstack-blueprint.yaml [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/policies/scale.clj [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/scripts/create.py [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/scripts/kubernetes_master/configure.py [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/scripts/kubernetes_master/start.py [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/scripts/kubernetes_node/configure.py [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/scripts/nfs.sh [new file with mode: 0644]
TOSCA/kubernetes-cluster-TOSCA/scripts/tasks.py [new file with mode: 0644]

diff --git a/TOSCA/kubernetes-cluster-TOSCA/LICENSE b/TOSCA/kubernetes-cluster-TOSCA/LICENSE
new file mode 100644 (file)
index 0000000..696f3d0
--- /dev/null
@@ -0,0 +1,17 @@
+ ============LICENSE_START==========================================
+ ===================================================================
+ Copyright © 2018 AT&T
+ All rights reserved.
+ ===================================================================
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+        http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+ ============LICENSE_END============================================
\ No newline at end of file
diff --git a/TOSCA/kubernetes-cluster-TOSCA/README.md b/TOSCA/kubernetes-cluster-TOSCA/README.md
new file mode 100644 (file)
index 0000000..8bc097f
--- /dev/null
@@ -0,0 +1,73 @@
+[![Build Status](https://circleci.com/gh/cloudify-examples/simple-kubernetes-blueprint.svg?style=shield&circle-token=:circle-token)](https://circleci.com/gh/cloudify-examples/simple-kubernetes-blueprint)
+
+
+##  Kubernetes Cluster Example
+
+This blueprint creates an example Kubernetes cluster; it is intended for demonstration purposes only. The underlying Kubernetes configuration method is [Kubeadm](https://kubernetes.io/docs/admin/kubeadm/), which is not considered production-ready.
+
+Regardless of your infrastructure choice, this blueprint installs and configures the following on each VM:
+- the Kubernetes yum repository
+- Docker, version 1.12.6-28.git1398f24.el7.centos
+- kubelet, version 1.8.6-0
+- kubeadm, version 1.8.6-0
+- kubernetes-cni, version 0.5.1-1
+- Weave (pod network)
+
+
+## Prerequisites
+
+You will need a *Cloudify Manager* running in AWS, Azure, or OpenStack. The Cloudify Manager should be set up using the [Cloudify environment setup](https://github.com/cloudify-examples/cloudify-environment-setup) - that's how we test this blueprint. The following are therefore assumed:
+* You have uploaded all of the required plugins to your manager. (See the imports section of the blueprint.yaml file to check that you are using the correct plugins and their respective versions.)
+* You have created all of the required secrets on your manager. (See the Secrets section below.)
+* A CentOS 7.x image is available. If you are running in AWS or OpenStack, the image must support [cloud-init](https://cloudinit.readthedocs.io/en/latest/).
+
+
+#### Secrets
+
+* Common Secrets:
+  * agent_key_private
+  * agent_key_public
+
+* OpenStack Secrets:
+  * external_network_name: The network in your OpenStack deployment that represents the internet gateway network.
+  * public_network_name: An OpenStack network. (Inbound is expected, outbound is required.)
+  * public_subnet_name: A subnet on the public network.
+  * private_network_name: An OpenStack network. (Inbound is not expected, outbound is required.)
+  * private_subnet_name: A subnet on the private network. (Inbound is not expected, outbound is required.)
+  * router_name: A router attached to the subnets designated in the public_subnet_name and private_subnet_name secrets.
+  * region: Your Keystone V2 region.
+  * keystone_url: Your Keystone V2 auth URL.
+  * keystone_tenant_name: Your Keystone V2 tenant name.
+  * keystone_password: Your Keystone V2 password.
+  * keystone_username: Your Keystone V2 username.
+
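+These secrets can be created up front with the `cfy` CLI. A minimal sketch, assuming a configured manager profile; the key paths and values below are illustrative placeholders:
+
+```shell
+# Agent SSH key pair (paths are placeholders)
+cfy secrets create agent_key_private -s "$(cat ~/.ssh/id_rsa)"
+cfy secrets create agent_key_public -s "$(cat ~/.ssh/id_rsa.pub)"
+
+# OpenStack values (one `cfy secrets create` per secret listed above)
+cfy secrets create external_network_name -s external
+cfy secrets create keystone_username -s demo-user
+```
+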
+
+### Step 1: Install the Kubernetes cluster
+
+#### For OpenStack:
+
+Please follow the instructions on the wiki:
+https://wiki.onap.org/display/DW/ONAP+on+Kubernetes+on+Cloudify#ONAPonKubernetesonCloudify-OpenStack
+
+
+### Step 2: Verify that the demo installed and started.
+
+Once the workflow execution is complete, verify that these secrets were created:
+
+
+```shell
+(Incubator)UNICORN:Projects trammell$ cfy secrets list
+Listing all secrets...
+
+Secrets:
++------------------------------------------+--------------------------+--------------------------+------------+----------------+------------+
+|                   key                    |        created_at        |        updated_at        | permission |  tenant_name   | created_by |
++------------------------------------------+--------------------------+--------------------------+------------+----------------+------------+
+| kubernetes-admin_client_certificate_data | 2017-08-09 14:58:06.421  | 2017-08-09 14:58:06.421  |            | default_tenant |   admin    |
+|     kubernetes-admin_client_key_data     | 2017-08-09 14:58:06.513  | 2017-08-09 14:58:06.513  |            | default_tenant |   admin    |
+|  kubernetes_certificate_authority_data   | 2017-08-09 14:58:06.327  | 2017-08-09 14:58:06.327  |            | default_tenant |   admin    |
+|           kubernetes_master_ip           | 2017-08-09 14:56:12.359  | 2017-08-09 14:56:12.359  |            | default_tenant |   admin    |
+|          kubernetes_master_port          | 2017-08-09 14:56:12.452  | 2017-08-09 14:56:12.452  |            | default_tenant |   admin    |
++------------------------------------------+--------------------------+--------------------------+------------+----------------+------------+
+```
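+
+An individual secret can be read back with `cfy secrets get`, e.g.:
+
+```shell
+cfy secrets get kubernetes_master_ip
+```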
+
diff --git a/TOSCA/kubernetes-cluster-TOSCA/imports/cloud-config.yaml b/TOSCA/kubernetes-cluster-TOSCA/imports/cloud-config.yaml
new file mode 100644 (file)
index 0000000..1376816
--- /dev/null
@@ -0,0 +1,76 @@
+# ============LICENSE_START==========================================
+# ===================================================================
+# Copyright © 2017 AT&T
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#============LICENSE_END============================================
+
+# This is the cloud-init configuration. It installs the required packages and performs basic configuration on every VM.
+
+node_templates:
+
+  cloudify_host_cloud_config:
+    type: cloudify.nodes.CloudInit.CloudConfig
+    properties:
+      resource_config:
+        groups:
+          - docker
+        users:
+          - name: { get_input: agent_user }
+            primary-group: wheel
+            groups: docker
+            shell: /bin/bash
+            sudo: ['ALL=(ALL) NOPASSWD:ALL']
+            ssh-authorized-keys:
+              - { get_secret: agent_key_public }
+        write_files:
+          - path: /etc/yum.repos.d/kubernetes.repo
+            owner: root:root
+            permissions: '0444'
+            content: |
+              # installed by cloud-init
+              [kubernetes]
+              name=Kubernetes
+              baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
+              enabled=1
+              gpgcheck=1
+              repo_gpgcheck=1
+              gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
+                     https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
+
+          - path: /etc/sysctl.d/k8s.conf
+            owner: root:root
+            permissions: '0444'
+            content: |
+              # installed by cloud-init
+              net.bridge.bridge-nf-call-ip6tables = 1
+              net.bridge.bridge-nf-call-iptables = 1
+
+        packages:
+          - [docker, 1.12.6]
+          - [kubelet, 1.8.6-0]
+          - [kubeadm, 1.8.6-0]
+          - [kubectl, 1.8.6-0]
+          - [kubernetes-cni, 0.5.1-1]
+          - [nfs-utils]
+        runcmd:
+          - [ setenforce, 0 ]
+          - [ sysctl , '--system' ]
+          - [ systemctl, enable, docker ]
+          - [ systemctl, start, docker ]
+          - [ systemctl, enable, kubelet ]
+          - [ systemctl, start, kubelet ]
+          - [ mkdir, '-p', /tmp/data ]
+          - [ chcon, '-Rt', svirt_sandbox_file_t, /tmp/data ]
+          - [ mkdir, '-p', /dockerdata-nfs ]
+          - [ chmod, 777, /dockerdata-nfs ]
\ No newline at end of file
diff --git a/TOSCA/kubernetes-cluster-TOSCA/imports/kubernetes.yaml b/TOSCA/kubernetes-cluster-TOSCA/imports/kubernetes.yaml
new file mode 100644 (file)
index 0000000..4467fc4
--- /dev/null
@@ -0,0 +1,216 @@
+# ============LICENSE_START==========================================
+# ===================================================================
+# Copyright © 2017 AT&T
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#============LICENSE_END============================================
+
+inputs:
+
+  labels:
+    default: {}
+
+node_types:
+
+  cloudify.nodes.Kubernetes:
+    derived_from: cloudify.nodes.Root
+    interfaces:
+      cloudify.interfaces.lifecycle:
+        create:
+          implementation: scripts/create.py
+
+  cloudify.nodes.Kubernetes.Master:
+    derived_from: cloudify.nodes.Root
+    interfaces:
+      cloudify.interfaces.lifecycle:
+        create:
+          implementation: scripts/create.py
+        configure:
+          implementation: scripts/kubernetes_master/configure.py
+        start:
+          implementation: scripts/kubernetes_master/start.py
+
+  cloudify.nodes.Kubernetes.Node:
+    derived_from: cloudify.nodes.Root
+    interfaces:
+      cloudify.interfaces.lifecycle:
+        create:
+          implementation: scripts/create.py
+        configure:
+          implementation: scripts/kubernetes_node/configure.py
+        start:
+          implementation: fabric.fabric_plugin.tasks.run_task
+          inputs:
+            tasks_file:
+              default: scripts/tasks.py
+            task_name:
+              default: label_node
+            task_properties:
+              default:
+                hostname: { get_attribute: [ SELF, hostname ] }
+                labels: { get_input: labels }
+            fabric_env:
+              default:
+                host_string: { get_attribute: [ kubernetes_master_host, ip ] }
+                user: { get_input: agent_user }
+                key: { get_secret: agent_key_private }
+        stop:
+          implementation: fabric.fabric_plugin.tasks.run_task
+          inputs:
+            tasks_file:
+              default: scripts/tasks.py
+            task_name:
+              default: stop_node
+            task_properties:
+              default:
+                hostname: { get_attribute: [ SELF, hostname ] }
+            fabric_env:
+              default:
+                host_string: { get_attribute: [ kubernetes_master_host, ip ] }
+                user: { get_input: agent_user }
+                key: { get_secret: agent_key_private }
+        delete:
+          implementation: fabric.fabric_plugin.tasks.run_task
+          inputs:
+            tasks_file:
+              default: scripts/tasks.py
+            task_name:
+              default: delete_node
+            task_properties:
+              default:
+                hostname: { get_attribute: [ SELF, hostname ] }
+            fabric_env:
+              default:
+                host_string: { get_attribute: [ kubernetes_master_host, ip ] }
+                user: { get_input: agent_user }
+                key: { get_secret: agent_key_private }
+
+node_templates:
+
+  kubernetes_master:
+    type: cloudify.nodes.Kubernetes.Master
+    relationships:
+      - type: cloudify.relationships.contained_in
+        target: kubernetes_master_host
+
+  kubernetes_node:
+    type: cloudify.nodes.Kubernetes.Node
+    relationships:
+      - type: cloudify.relationships.contained_in
+        target: kubernetes_node_host
+      - type: cloudify.relationships.depends_on
+        target: kubernetes_master
+
+outputs:
+
+  kubernetes_cluster_bootstrap_token:
+    value: { get_attribute: [ kubernetes_master, bootstrap_token ] }
+
+  kubernetes_cluster_master_ip:
+    value: { get_attribute: [ kubernetes_master, master_ip ] }
+
+  kubernetes-admin_client_certificate_data:
+    value: { get_attribute: [ kubernetes_master, kubernetes-admin_client_certificate_data ] }
+
+  kubernetes-admin_client_key_data:
+    value: { get_attribute: [ kubernetes_master, kubernetes-admin_client_key_data ] }
+
+  kubernetes_certificate_authority_data:
+    value: { get_attribute: [ kubernetes_master, kubernetes_certificate_authority_data ] }
+
+policy_types:
+  scale_policy_type:
+    source: policies/scale.clj
+    properties:
+      policy_operates_on_group:
+        default: true
+      service_selector:
+        description: regular expression that selects the metric to be measured
+        default: ".*"
+      moving_window_size:
+        description: the moving window for individual sources in secs
+        default: 10
+      scale_threshold:
+        description: the value to trigger scaling over aggregated moving values
+      scale_limit:
+        description: scaling limit
+        default: 10
+      scale_direction:
+        description: scale up ('<') or scale down ('>')
+        default: '<'
+      cooldown_time:
+        description: the time to wait before evaluating again after a scale
+        default: 60
+
+groups: {}
+
+#  scale_up_group:
+#    members: [kubernetes_node_host]
+#    policies:
+#      auto_scale_up:
+#        type: scale_policy_type
+#        properties:
+#          policy_operates_on_group: true
+#          scale_limit: 6
+#          scale_direction: '<'
+#          scale_threshold: 30
+#          service_selector: .*kubernetes_node_host.*cpu.total.user
+#          cooldown_time: 60
+#        triggers:
+#          execute_scale_workflow:
+#            type: cloudify.policies.triggers.execute_workflow
+#            parameters:
+#              workflow: scale
+#              workflow_parameters:
+#                delta: 1
+#                scalable_entity_name: kubernetes_node_host
+
+#  scale_down_group:
+#    members: [kubernetes_node_host]
+#    policies:
+#      auto_scale_down:
+#        type: scale_policy_type
+#        properties:
+#          policy_operates_on_group: true
+#          scale_limit: 6
+#          scale_direction: '<'
+#          scale_threshold: 30
+#          service_selector: .*kubernetes_node_host.*cpu.total.user
+#          cooldown_time: 60
+#        triggers:
+#          execute_scale_workflow:
+#            type: cloudify.policies.triggers.execute_workflow
+#            parameters:
+#              workflow: scale
+#              workflow_parameters:
+#                delta: 1
+#                scalable_entity_name: kubernetes_node_host
+
+#  heal_group:
+#    members: [kubernetes_node_host]
+#    policies:
+#      simple_autoheal_policy:
+#        type: cloudify.policies.types.host_failure
+#        properties:
+#          service:
+#            - .*kubernetes_node_host.*.cpu.total.system
+#            - .*kubernetes_node_host.*.process.hyperkube.cpu.percent
+#          interval_between_workflows: 60
+#        triggers:
+#          auto_heal_trigger:
+#            type: cloudify.policies.triggers.execute_workflow
+#            parameters:
+#              workflow: heal
+#              workflow_parameters:
+#                node_instance_id: { 'get_property': [ SELF, node_id ] }
+#                diagnose_value: { 'get_property': [ SELF, diagnose ] }
diff --git a/TOSCA/kubernetes-cluster-TOSCA/openstack-blueprint.yaml b/TOSCA/kubernetes-cluster-TOSCA/openstack-blueprint.yaml
new file mode 100644 (file)
index 0000000..5c348e9
--- /dev/null
@@ -0,0 +1,307 @@
+# ============LICENSE_START==========================================
+# ===================================================================
+# Copyright © 2017 AT&T
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#============LICENSE_END============================================
+
+tosca_definitions_version: cloudify_dsl_1_3
+
+description: >
+  This blueprint creates a Kubernetes Cluster.
+  It is based on this documentation: https://kubernetes.io/docs/getting-started-guides/kubeadm/
+
+imports:
+  - https://raw.githubusercontent.com/cloudify-cosmo/cloudify-manager/4.1/resources/rest-service/cloudify/types/types.yaml
+  - https://raw.githubusercontent.com/cloudify-cosmo/cloudify-openstack-plugin/2.2.0/plugin.yaml
+  - https://raw.githubusercontent.com/cloudify-incubator/cloudify-utilities-plugin/1.2.5/plugin.yaml
+  - https://raw.githubusercontent.com/cloudify-cosmo/cloudify-fabric-plugin/1.5/plugin.yaml
+  - https://raw.githubusercontent.com/cloudify-cosmo/cloudify-diamond-plugin/1.3.5/plugin.yaml
+  - imports/cloud-config.yaml
+  - imports/kubernetes.yaml
+
+inputs:
+
+  image:
+    description: Image to be used when launching agent VMs
+    default: { get_secret: centos_core_image }
+
+  flavor:
+    description: Flavor of the agent VMs
+    default: { get_secret: large_image_flavor }
+
+  agent_user:
+    description: >
+      User for connecting to agent VMs
+    default: centos
+
+dsl_definitions:
+
+  openstack_config: &openstack_config
+    username: { get_secret: keystone_username }
+    password: { get_secret: keystone_password }
+    tenant_name: { get_secret: keystone_tenant_name }
+    auth_url: { get_secret: keystone_url }
+    region: { get_secret: region }
+
+node_templates:
+
+  nfs_server:
+    type: cloudify.nodes.SoftwareComponent
+    interfaces:
+      cloudify.interfaces.lifecycle:
+        start:
+          implementation: fabric.fabric_plugin.tasks.run_script
+          inputs:
+            script_path: scripts/nfs.sh
+            use_sudo: true
+            process:
+              args:
+            fabric_env:
+              host_string: { get_attribute: [ kubernetes_master_host, ip ] }
+              user: { get_input: agent_user }
+              key: { get_secret: agent_key_private }
+    relationships:
+      - type: cloudify.relationships.contained_in
+        target: kubernetes_master_host
+
+  kubernetes_master_host:
+    type: cloudify.openstack.nodes.Server
+    properties:
+      openstack_config: *openstack_config
+      agent_config:
+          user: { get_input: agent_user }
+          install_method: remote
+          port: 22
+          key: { get_secret: agent_key_private }
+      server:
+        key_name: ''
+        image: ''
+        flavor: ''
+      management_network_name: { get_property: [ public_network, resource_id ] }
+    interfaces:
+      cloudify.interfaces.lifecycle:
+        create:
+          inputs:
+            args:
+              image: { get_input: image }
+              flavor: { get_input: flavor }
+              userdata: { get_attribute: [ cloudify_host_cloud_config, cloud_config ] }
+    relationships:
+      - target: kubernetes_master_port
+        type: cloudify.openstack.server_connected_to_port
+      - type: cloudify.relationships.depends_on
+        target: cloudify_host_cloud_config
+
+  kubernetes_node_host:
+    type: cloudify.openstack.nodes.Server
+    properties:
+      openstack_config: *openstack_config
+      agent_config:
+          user: { get_input: agent_user }
+          install_method: remote
+          port: 22
+          key: { get_secret: agent_key_private }
+      server:
+        key_name: ''
+        image: ''
+        flavor: ''
+      management_network_name: { get_property: [ private_network, resource_id ] }
+    relationships:
+      - type: cloudify.relationships.contained_in
+        target: k8s_node_scaling_tier
+      - target: kubernetes_node_port
+        type: cloudify.openstack.server_connected_to_port
+    interfaces:
+      cloudify.interfaces.lifecycle:
+        create:
+          inputs:
+            args:
+              image: { get_input: image }
+              flavor: { get_input: flavor }
+              userdata: { get_attribute: [ cloudify_host_cloud_config, cloud_config ] }
+      cloudify.interfaces.monitoring_agent:
+          install:
+            implementation: diamond.diamond_agent.tasks.install
+            inputs:
+              diamond_config:
+                interval: 1
+          start: diamond.diamond_agent.tasks.start
+          stop: diamond.diamond_agent.tasks.stop
+          uninstall: diamond.diamond_agent.tasks.uninstall
+      cloudify.interfaces.monitoring:
+          start:
+            implementation: diamond.diamond_agent.tasks.add_collectors
+            inputs:
+              collectors_config:
+                CPUCollector: {}
+                MemoryCollector: {}
+                LoadAverageCollector: {}
+                DiskUsageCollector:
+                  config:
+                    devices: x?vd[a-z]+[0-9]*$
+                NetworkCollector: {}
+                ProcessResourcesCollector:
+                  config:
+                    enabled: true
+                    unit: B
+                    measure_collector_time: true
+                    cpu_interval: 0.5
+                    process:
+                      hyperkube:
+                        name: hyperkube
+
+  kubernetes_security_group:
+    type: cloudify.openstack.nodes.SecurityGroup
+    properties:
+      openstack_config: *openstack_config
+      security_group:
+        name: kubernetes_security_group
+        description: kubernetes master security group
+      rules:
+      - remote_ip_prefix: 0.0.0.0/0
+        port_range_min: 1
+        port_range_max: 65535
+        protocol: tcp
+        direction: ingress
+        ethertype: IPv4
+      - remote_ip_prefix: 0.0.0.0/0
+        port_range_min: 1
+        port_range_max: 65535
+        protocol: tcp
+        direction: egress
+        ethertype: IPv4
+      - remote_ip_prefix: 0.0.0.0/0
+        port_range_min: 1
+        port_range_max: 65535
+        protocol: udp
+        direction: ingress
+        ethertype: IPv4
+      - remote_ip_prefix: 0.0.0.0/0
+        port_range_min: 1
+        port_range_max: 65535
+        protocol: udp
+        direction: egress
+        ethertype: IPv4
+
+  kubernetes_master_port:
+    type: cloudify.openstack.nodes.Port
+    properties:
+      openstack_config: *openstack_config
+    relationships:
+      - type: cloudify.relationships.contained_in
+        target: public_network
+      - type: cloudify.relationships.depends_on
+        target: public_subnet
+      - type: cloudify.openstack.port_connected_to_security_group
+        target: kubernetes_security_group
+      - type: cloudify.openstack.port_connected_to_floating_ip
+        target: kubernetes_master_ip
+
+  kubernetes_node_port:
+    type: cloudify.openstack.nodes.Port
+    properties:
+      openstack_config: *openstack_config
+    relationships:
+      - type: cloudify.relationships.contained_in
+        target: k8s_node_scaling_tier
+      - type: cloudify.relationships.connected_to
+        target: private_network
+      - type: cloudify.relationships.depends_on
+        target: private_subnet
+      - type: cloudify.openstack.port_connected_to_security_group
+        target: kubernetes_security_group
+
+  private_subnet:
+    type: cloudify.openstack.nodes.Subnet
+    properties:
+      openstack_config: *openstack_config
+      use_external_resource: true
+      resource_id: { get_secret: private_subnet_name }
+    relationships:
+      - target: private_network
+        type: cloudify.relationships.contained_in
+
+  private_network:
+    type: cloudify.openstack.nodes.Network
+    properties:
+      openstack_config: *openstack_config
+      use_external_resource: true
+      resource_id: { get_secret: private_network_name }
+
+  public_subnet:
+    type: cloudify.openstack.nodes.Subnet
+    properties:
+      openstack_config: *openstack_config
+      use_external_resource: true
+      resource_id: { get_secret: public_subnet_name }
+    relationships:
+      - target: public_network
+        type: cloudify.relationships.contained_in
+      - target: router
+        type: cloudify.openstack.subnet_connected_to_router
+
+  public_network:
+    type: cloudify.openstack.nodes.Network
+    properties:
+      openstack_config: *openstack_config
+      use_external_resource: true
+      resource_id: { get_secret: public_network_name }
+
+  router:
+    type: cloudify.openstack.nodes.Router
+    properties:
+      openstack_config: *openstack_config
+      use_external_resource: true
+      resource_id: { get_secret: router_name }
+    relationships:
+      - target: external_network
+        type: cloudify.relationships.connected_to
+
+  external_network:
+    type: cloudify.openstack.nodes.Network
+    properties:
+      openstack_config: *openstack_config
+      use_external_resource: true
+      resource_id: { get_secret: external_network_name }
+
+  k8s_node_scaling_tier:
+    type: cloudify.nodes.Root
+
+  kubernetes_master_ip:
+    type: cloudify.openstack.nodes.FloatingIP
+    properties:
+      openstack_config: *openstack_config
+      floatingip:
+        floating_network_name: { get_property: [ external_network, resource_id ] }
+
+groups:
+
+  k8s_node_group:
+    members:
+      - kubernetes_node_host
+      - kubernetes_node_port
+
+policies:
+
+  kubernetes_node_vms_scaling_policy:
+    type: cloudify.policies.scaling
+    properties:
+      default_instances: 6
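+      # Initial number of node instances created at install time; the scale
+      # workflow adjusts this count at runtime.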
+    targets: [k8s_node_group]
+
+outputs:
+
+  kubernetes_master_public_ip:
+    value: { get_attribute: [ kubernetes_master_ip, floating_ip_address ] }
diff --git a/TOSCA/kubernetes-cluster-TOSCA/policies/scale.clj b/TOSCA/kubernetes-cluster-TOSCA/policies/scale.clj
new file mode 100644 (file)
index 0000000..369239a
--- /dev/null
@@ -0,0 +1,66 @@
+;;;; ============LICENSE_START==========================================
+;;;; ===================================================================
+;;;; Copyright © 2017 AT&T
+;;;;
+;;;; Licensed under the Apache License, Version 2.0 (the "License");
+;;;; you may not use this file except in compliance with the License.
+;;;; You may obtain a copy of the License at
+;;;;
+;;;;         http://www.apache.org/licenses/LICENSE-2.0
+;;;;
+;;;; Unless required by applicable law or agreed to in writing, software
+;;;; distributed under the License is distributed on an "AS IS" BASIS,
+;;;; WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+;;;; See the License for the specific language governing permissions and
+;;;; limitations under the License.
+;;;;============LICENSE_END============================================
+
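+;; This Riemann policy averages each host's metric over a moving window, then
+;; compares the cluster-wide mean ("conns") against {{scale_threshold}}. When
+;; the comparison in {{scale_direction}} holds, the host count is within
+;; {{scale_limit}}, and no cooldown flag is indexed, it fires the policy
+;; triggers (the scale workflow) and writes a "scaling/suspended" event with a
+;; {{cooldown_time}} TTL as the cooldown flag.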
+(where (service #"{{service_selector}}")
+  #(info "got event: " %)
+
+  (where (not (expired? event))
+    (moving-time-window {{moving_window_size}}
+      (fn [events]
+        (let [
+               hostmap (atom {})
+               hostcnt (atom {})
+             ]
+          (do
+            (doseq [m events]
+              (if (nil? (@hostmap (m :host)))
+                (do
+                  (swap! hostmap assoc (m :host) (m :metric))
+                  (swap! hostcnt assoc (m :host) 1)
+                )
+                (do
+                  (swap! hostmap assoc (m :host) (+ (m :metric) (@hostmap (m :host))))
+                  (swap! hostcnt assoc (m :host) (inc (@hostcnt (m :host))))
+                )
+              )
+            )
+            (doseq [entry @hostmap]
+              (swap! hostmap assoc (key entry) (/ (val entry) (@hostcnt (key entry))))
+            )
+
+            (let
+              [ hostcnt (count @hostmap)
+                conns (/ (apply + (map (fn [a] (val a)) @hostmap)) hostcnt)
+                cooling (not (nil? (riemann.index/lookup index "scaling" "suspended")))]
+
+              (do
+                (info "cooling=" cooling " scale_direction={{scale_direction}} hostcnt=" hostcnt " scale_threshold={{scale_threshold}} conns=" conns)
+                (if (and (not cooling) ({{scale_direction}} hostcnt {{scale_limit}}) ({{scale_direction}} {{scale_threshold}} conns))
+                  (do
+                    (info "=== SCALE ===" "{{scale_direction}}")
+                    (process-policy-triggers {})
+                    (riemann.index/update index {:host "scaling" :service "suspended" :time (unix-time) :description "cooldown flag" :metric 0 :ttl {{cooldown_time}} :state "ok"})
+                  )
+                )
+              )
+            )
+          )
+        )
+      )
+    )
+  )
+)
diff --git a/TOSCA/kubernetes-cluster-TOSCA/scripts/create.py b/TOSCA/kubernetes-cluster-TOSCA/scripts/create.py
new file mode 100644 (file)
index 0000000..4bb3710
--- /dev/null
@@ -0,0 +1,93 @@
+#!/usr/bin/env python
+
+# ============LICENSE_START==========================================
+# ===================================================================
+# Copyright © 2017 AT&T
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#============LICENSE_END============================================
+
+# This task is triggered after the VM is created. It checks whether Docker is up and running and whether cloud-init has finished.
+
+import subprocess
+from cloudify import ctx
+from cloudify.exceptions import OperationRetry
+
+
+def check_command(command):
+
+    try:
+        process = subprocess.Popen(
+            command.split(),
+            stdout=subprocess.PIPE,
+            stderr=subprocess.PIPE
+        )
+    except OSError:
+        return False
+
+    output, error = process.communicate()
+
+    ctx.logger.debug('command: {0} '.format(command))
+    ctx.logger.debug('output: {0} '.format(output))
+    ctx.logger.debug('error: {0} '.format(error))
+    ctx.logger.debug('process.returncode: {0} '.format(process.returncode))
+
+    if process.returncode:
+        ctx.logger.error('Running `{0}` returns error.'.format(command))
+        return False
+
+    return True
+
+
+def execute_command(_command):
+
+    ctx.logger.debug('_command {0}.'.format(_command))
+
+    subprocess_args = {
+        'args': _command.split(),
+        'stdout': subprocess.PIPE,
+        'stderr': subprocess.PIPE
+    }
+
+    ctx.logger.debug('subprocess_args {0}.'.format(subprocess_args))
+
+    process = subprocess.Popen(**subprocess_args)
+    output, error = process.communicate()
+
+    ctx.logger.debug('command: {0} '.format(_command))
+    ctx.logger.debug('error: {0} '.format(error))
+    ctx.logger.debug('process.returncode: {0} '.format(process.returncode))
+
+    if process.returncode:
+        ctx.logger.error('Running `{0}` returns error.'.format(_command))
+        return False
+
+    return output
+
+
+if __name__ == '__main__':
+
+    # Check if Docker PS works
+    docker = check_command('docker ps')
+    if not docker:
+        raise OperationRetry(
+            'Docker is not present on the system.')
+    ctx.logger.info('Docker is present on the system.')
+
+    # Next, check whether cloud-init is still running.
+    ps = execute_command('ps -ef')
+    if ps is False:
+        raise OperationRetry('Unable to list processes while checking cloud-init.')
+    for line in ps.split('\n'):
+        if '/usr/bin/python /usr/bin/cloud-init modules' in line:
+            raise OperationRetry(
+                'You provided a Cloud-init Cloud Config to configure instances. '
+                'Waiting for Cloud-init to complete.')
+    ctx.logger.info('Cloud-init finished.')
diff --git a/TOSCA/kubernetes-cluster-TOSCA/scripts/kubernetes_master/configure.py b/TOSCA/kubernetes-cluster-TOSCA/scripts/kubernetes_master/configure.py
new file mode 100644 (file)
index 0000000..7d5dffc
--- /dev/null
@@ -0,0 +1,175 @@
+#!/usr/bin/env python
+
+# ============LICENSE_START==========================================
+# ===================================================================
+# Copyright © 2017 AT&T
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#============LICENSE_END============================================
+
+# This script will be executed on Kubernetes master host. It will initialize the master, and install a pod network.
+
+import pwd
+import grp
+import os
+import re
+import getpass
+import subprocess
+from cloudify import ctx
+from cloudify.exceptions import OperationRetry
+from cloudify_rest_client.exceptions import CloudifyClientError
+
+JOIN_COMMAND_REGEX = r'^kubeadm join[\sA-Za-z0-9\.\:\-\_]*'
+BOOTSTRAP_TOKEN_REGEX = r'[a-z0-9]{6}\.[a-z0-9]{16}'
+IP_PORT_REGEX = r'[0-9]+(?:\.[0-9]+){3}:[0-9]+'
+NOT_SHA_REGEX = r'^(?!.*sha256)'
+JCRE_COMPILED = re.compile(JOIN_COMMAND_REGEX)
+BTRE_COMPILED = re.compile(BOOTSTRAP_TOKEN_REGEX)
+IPRE_COMPILED = re.compile(IP_PORT_REGEX)
+SHA_COMPILED = re.compile(NOT_SHA_REGEX)
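+
+# Illustrative (hypothetical) join hint emitted by `kubeadm init`:
+#   kubeadm join --token 123456.0123456789abcdef 10.0.0.10:6443
+# BOOTSTRAP_TOKEN_REGEX captures the token, IP_PORT_REGEX the endpoint, and
+# NOT_SHA_REGEX filters out any sha256 discovery-hash argument from newer kubeadm.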
+
+def execute_command(_command):
+
+    ctx.logger.debug('_command {0}.'.format(_command))
+
+    subprocess_args = {
+        'args': _command.split(),
+        'stdout': subprocess.PIPE,
+        'stderr': subprocess.PIPE
+    }
+
+    ctx.logger.debug('subprocess_args {0}.'.format(subprocess_args))
+
+    process = subprocess.Popen(**subprocess_args)
+    output, error = process.communicate()
+
+    ctx.logger.debug('command: {0} '.format(_command))
+    ctx.logger.debug('output: {0} '.format(output))
+    ctx.logger.debug('error: {0} '.format(error))
+    ctx.logger.debug('process.returncode: {0} '.format(process.returncode))
+
+    if process.returncode:
+        ctx.logger.error('Running `{0}` returns error.'.format(_command))
+        return False
+
+    return output
+
+
+def cleanup_and_retry():
+    reset_cluster_command = 'sudo kubeadm reset'
+    output = execute_command(reset_cluster_command)
+    ctx.logger.info('{0} output: {1}'.format(reset_cluster_command, output))
+    raise OperationRetry('Restarting kubernetes because of a problem.')
+
+
+def configure_admin_conf():
+    # Add the kubeadmin config to environment
+    agent_user = getpass.getuser()
+    uid = pwd.getpwnam(agent_user).pw_uid
+    gid = grp.getgrnam('docker').gr_gid
+    admin_file_dest = os.path.join(os.path.expanduser('~'), 'admin.conf')
+
+    execute_command('sudo cp {0} {1}'.format('/etc/kubernetes/admin.conf', admin_file_dest))
+    execute_command('sudo chown {0}:{1} {2}'.format(uid, gid, admin_file_dest))
+
+    with open(os.path.join(os.path.expanduser('~'), '.bashrc'), 'a') as outfile:
+        outfile.write('\nexport KUBECONFIG=$HOME/admin.conf\n')
+    os.environ['KUBECONFIG'] = admin_file_dest
+
+
+def setup_secrets(_split_master_port, _bootstrap_token):
+    master_ip = _split_master_port[0]
+    master_port = _split_master_port[1]
+    ctx.instance.runtime_properties['master_ip'] = master_ip
+    ctx.instance.runtime_properties['master_port'] = master_port
+    ctx.instance.runtime_properties['bootstrap_token'] = _bootstrap_token
+    from cloudify import manager
+    cfy_client = manager.get_rest_client()
+
+    _secret_key = 'kubernetes_master_ip'
+    if cfy_client and not len(cfy_client.secrets.list(key=_secret_key)) == 1:
+        cfy_client.secrets.create(key=_secret_key, value=master_ip)
+    else:
+        cfy_client.secrets.update(key=_secret_key, value=master_ip)
+    ctx.logger.info('Set secret: {0}.'.format(_secret_key))
+
+    _secret_key = 'kubernetes_master_port'
+    if cfy_client and not len(cfy_client.secrets.list(key=_secret_key)) == 1:
+        cfy_client.secrets.create(key=_secret_key, value=master_port)
+    else:
+        cfy_client.secrets.update(key=_secret_key, value=master_port)
+    ctx.logger.info('Set secret: {0}.'.format(_secret_key))
+
+    _secret_key = 'bootstrap_token'
+    if cfy_client and not len(cfy_client.secrets.list(key=_secret_key)) == 1:
+        cfy_client.secrets.create(key=_secret_key, value=_bootstrap_token)
+    else:
+        cfy_client.secrets.update(key=_secret_key, value=_bootstrap_token)
+    ctx.logger.info('Set secret: {0}.'.format(_secret_key))
+
+
+if __name__ == '__main__':
+
+    ctx.instance.runtime_properties['KUBERNETES_MASTER'] = True
+    # Ensure bridged traffic traverses iptables (a kubeadm prerequisite).
+    subprocess.Popen(["sudo", "sysctl", "net.bridge.bridge-nf-call-iptables=1"], stdout=subprocess.PIPE)
+    # Start Kubernetes Master
+    ctx.logger.info('Attempting to start Kubernetes master.')
+    start_master_command = 'sudo kubeadm init'
+    start_output = execute_command(start_master_command)
+    ctx.logger.debug('start_master_command output: {0}'.format(start_output))
+    # Check if start succeeded.
+    if start_output is False or not isinstance(start_output, basestring):
+        ctx.logger.error('Kubernetes master failed to start.')
+        cleanup_and_retry()
+    ctx.logger.info('Kubernetes master started successfully.')
+
+    # Slice and dice the start_master_command start_output.
+    ctx.logger.info('Attempting to retrieve Kubernetes cluster information.')
+    split_start_output = \
+        [line.strip() for line in start_output.split('\n') if line.strip()]
+    del line
+
+    ctx.logger.debug(
+        'Kubernetes master start output, split and stripped: {0}'.format(
+            split_start_output))
+    split_join_command = ''
+    for li in split_start_output:
+        ctx.logger.debug('li in split_start_output: {0}'.format(li))
+        if re.match(JCRE_COMPILED, li):
+            split_join_command = re.split('\s', li)
+    del li
+    ctx.logger.info('split_join_command: {0}'.format(split_join_command))
+
+    if not split_join_command:
+        ctx.logger.error('No join command found in start output: {0}'.format(split_start_output))
+        cleanup_and_retry()
+
+    for li in split_join_command:
+        ctx.logger.info('Sorting bits and pieces: li: {0}'.format(li))
+        if (re.match(BTRE_COMPILED, li) and re.match(SHA_COMPILED, li)):
+            bootstrap_token = li
+        elif re.match(IPRE_COMPILED, li):
+            split_master_port = li.split(':')
+    setup_secrets(split_master_port, bootstrap_token)
+    configure_admin_conf()
+
+    # Install the Weave pod network add-on, pinned to the detected kubectl version.
+    weave_version = subprocess.Popen(["kubectl", "version"], stdout=subprocess.PIPE)
+    weave_b64 = subprocess.Popen(["base64"], stdin=weave_version.stdout, stdout=subprocess.PIPE)
+    kubever = weave_b64.communicate()[0].replace('\n', '').replace('\r', '')
+    ctx.logger.info('kubever: {0}'.format(kubever))
+    weave_url = 'https://cloud.weave.works/k8s/net?k8s-version={0}'.format(kubever)
+    ctx.logger.info('weaveURL: {0}'.format(weave_url))
+    weave_apply = subprocess.Popen(["kubectl", "apply", "-f", weave_url], stdout=subprocess.PIPE)
+    weave_result = weave_apply.communicate()[0]
+    ctx.logger.info('weaveResult: {0}'.format(weave_result))
diff --git a/TOSCA/kubernetes-cluster-TOSCA/scripts/kubernetes_master/start.py b/TOSCA/kubernetes-cluster-TOSCA/scripts/kubernetes_master/start.py
new file mode 100644 (file)
index 0000000..bbc166b
--- /dev/null
@@ -0,0 +1,153 @@
+#!/usr/bin/env python
+
+# ============LICENSE_START==========================================
+# ===================================================================
+# Copyright © 2017 AT&T
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#============LICENSE_END============================================
+
+# This script will be executed on the master host. It checks whether kube-dns is running and stores cluster secrets in Cloudify.
+
+import os
+import subprocess
+import pip
+try:
+    import yaml
+except ImportError:
+    pip.main(['install', 'pyyaml'])
+    import yaml
+
+from cloudify import ctx
+from cloudify.exceptions import RecoverableError
+from cloudify import manager
+
+
+def execute_command(_command):
+
+    ctx.logger.debug('_command {0}.'.format(_command))
+
+    subprocess_args = {
+        'args': _command.split(),
+        'stdout': subprocess.PIPE,
+        'stderr': subprocess.PIPE
+    }
+
+    ctx.logger.debug('subprocess_args {0}.'.format(subprocess_args))
+
+    process = subprocess.Popen(**subprocess_args)
+    output, error = process.communicate()
+
+    ctx.logger.debug('command: {0} '.format(_command))
+    ctx.logger.debug('output: {0} '.format(output))
+    ctx.logger.debug('error: {0} '.format(error))
+    ctx.logger.debug('process.returncode: {0} '.format(process.returncode))
+
+    if process.returncode:
+        ctx.logger.error('Running `{0}` returns error.'.format(_command))
+        return False
+
+    return output
+
+
+def check_kubedns_status(_get_pods):
+
+    ctx.logger.debug('get_pods: {0} '.format(_get_pods))
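+    # Each data line of `kubectl get pods --all-namespaces` carries six columns,
+    # e.g. (illustrative): kube-system  kube-dns-86f4d74b45-x2z9k  3/3  Running  0  5m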
+
+    for pod_line in _get_pods.split('\n'):
+        ctx.logger.debug('pod_line: {0} '.format(pod_line))
+        try:
+            _namespace, _name, _ready, _status, _restarts, _age = pod_line.split()
+        except ValueError:
+            pass
+        else:
+            if 'kube-dns' in _name and 'Running' not in _status:
+                return False
+            elif 'kube-dns' in _name and 'Running' in _status:
+                return True
+    return False
+
+
+if __name__ == '__main__':
+
+    cfy_client = manager.get_rest_client()
+
+    # Checking if the Kubernetes DNS service is running (last step).
+    admin_file_dest = os.path.join(os.path.expanduser('~'), 'admin.conf')
+    os.environ['KUBECONFIG'] = admin_file_dest
+    get_pods = execute_command('kubectl get pods --all-namespaces')
+    if not check_kubedns_status(get_pods):
+        raise RecoverableError('kube-dns not Running')
+
+    # Storing the K master configuration.
+    kubernetes_master_config = {}
+    with open(admin_file_dest, 'r') as outfile:
+        try:
+            kubernetes_master_config = yaml.safe_load(outfile)
+        except yaml.YAMLError as e:
+            raise RecoverableError(
+                'Unable to read Kubernetes Admin file: {0}: {1}'.format(
+                    admin_file_dest, str(e)))
+    ctx.instance.runtime_properties['configuration_file_content'] = \
+        kubernetes_master_config
+
+    clusters = kubernetes_master_config.get('clusters')
+    _clusters = {}
+    for cluster in clusters:
+        __name = cluster.get('name')
+        _cluster = cluster.get('cluster', {})
+        _secret_key = '%s_certificate_authority_data' % __name
+        if cfy_client and not len(cfy_client.secrets.list(key=_secret_key)) == 1:
+            cfy_client.secrets.create(key=_secret_key, value=_cluster.get('certificate-authority-data'))
+            ctx.logger.info('Set secret: {0}.'.format(_secret_key))
+        else:
+            cfy_client.secrets.update(key=_secret_key, value=_cluster.get('certificate-authority-data'))
+        ctx.instance.runtime_properties['%s_certificate_authority_data' % __name] = _cluster.get('certificate-authority-data')
+        _clusters[__name] = _cluster
+    del __name
+
+    contexts = kubernetes_master_config.get('contexts')
+    _contexts = {}
+    for context in contexts:
+        __name = context.get('name')
+        _context = context.get('context', {})
+        _contexts[__name] = _context
+    del __name
+
+    users = kubernetes_master_config.get('users')
+    _users = {}
+    for user in users:
+        __name = user.get('name')
+        _user = user.get('user', {})
+        _secret_key = '%s_client_certificate_data' % __name
+        if cfy_client and not len(cfy_client.secrets.list(key=_secret_key)) == 1:
+            cfy_client.secrets.create(key=_secret_key, value=_user.get('client-certificate-data'))
+            ctx.logger.info('Set secret: {0}.'.format(_secret_key))
+        else:
+            cfy_client.secrets.update(key=_secret_key, value=_user.get('client-certificate-data'))
+        _secret_key = '%s_client_key_data' % __name
+        if cfy_client and not len(cfy_client.secrets.list(key=_secret_key)) == 1:
+            cfy_client.secrets.create(key=_secret_key, value=_user.get('client-key-data'))
+            ctx.logger.info('Set secret: {0}.'.format(_secret_key))
+        else:
+            cfy_client.secrets.update(key=_secret_key, value=_user.get('client-key-data'))
+        ctx.instance.runtime_properties['%s_client_certificate_data' % __name] = _user.get('client-certificate-data')
+        ctx.instance.runtime_properties['%s_client_key_data' % __name] = _user.get('client-key-data')
+        _users[__name] = _user
+    del __name
+
+    ctx.instance.runtime_properties['kubernetes'] = {
+        'clusters': _clusters,
+        'contexts': _contexts,
+        'users': _users
+    }
diff --git a/TOSCA/kubernetes-cluster-TOSCA/scripts/kubernetes_node/configure.py b/TOSCA/kubernetes-cluster-TOSCA/scripts/kubernetes_node/configure.py
new file mode 100644 (file)
index 0000000..69faaa8
--- /dev/null
@@ -0,0 +1,88 @@
+#!/usr/bin/env python
+
+# ============LICENSE_START==========================================
+# ===================================================================
+# Copyright © 2017 AT&T
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#============LICENSE_END============================================
+
+# After the K8s master is up and running, this script is triggered on each worker node. It joins the node to the cluster and mounts the NFS directory.
+
+import subprocess
+from cloudify import ctx
+from cloudify.exceptions import NonRecoverableError
+
+START_COMMAND = 'sudo kubeadm join --token {0} {1}:{2}'
+
+
+def execute_command(_command):
+
+    ctx.logger.debug('_command {0}.'.format(_command))
+
+    subprocess_args = {
+        'args': _command.split(),
+        'stdout': subprocess.PIPE,
+        'stderr': subprocess.PIPE
+    }
+
+    ctx.logger.debug('subprocess_args {0}.'.format(subprocess_args))
+
+    process = subprocess.Popen(**subprocess_args)
+    output, error = process.communicate()
+
+    ctx.logger.debug('command: {0} '.format(_command))
+    ctx.logger.debug('output: {0} '.format(output))
+    ctx.logger.debug('error: {0} '.format(error))
+    ctx.logger.debug('process.returncode: {0} '.format(process.returncode))
+
+    if process.returncode:
+        ctx.logger.error('Running `{0}` returns error.'.format(_command))
+        return False
+
+    return output
+
+
+if __name__ == '__main__':
+
+    hostname = execute_command('hostname')
+    ctx.instance.runtime_properties['hostname'] = hostname.rstrip('\n')
+
+    # Get the master cluster info.
+    masters = \
+        [x for x in ctx.instance.relationships if
+         x.target.instance.runtime_properties.get(
+             'KUBERNETES_MASTER', False)]
+    if len(masters) != 1:
+        raise NonRecoverableError(
+            'Currently, a Kubernetes node must have a '
+            'dependency on one Kubernetes master.')
+    master = masters[0]
+    bootstrap_token = \
+        master.target.instance.runtime_properties['bootstrap_token']
+    master_ip = \
+        master.target.instance.runtime_properties['master_ip']
+    master_port = \
+        master.target.instance.runtime_properties['master_port']
+
+    # Join the cluster.
+    # Ensure bridged traffic traverses iptables (a kubeadm prerequisite).
+    subprocess.Popen(["sudo", "sysctl", "net.bridge.bridge-nf-call-iptables=1"], stdout=subprocess.PIPE)
+    join_command = START_COMMAND.format(bootstrap_token, master_ip, master_port)
+    execute_command(join_command)
+
+    # Mount the master's NFS share locally.
+    mount_command = \
+        'sudo mount -t nfs -o proto=tcp,port=2049 {0}:/dockerdata-nfs /dockerdata-nfs'.format(master_ip)
+    execute_command(mount_command)
\ No newline at end of file
diff --git a/TOSCA/kubernetes-cluster-TOSCA/scripts/nfs.sh b/TOSCA/kubernetes-cluster-TOSCA/scripts/nfs.sh
new file mode 100644 (file)
index 0000000..2d59acd
--- /dev/null
@@ -0,0 +1,29 @@
+#!/bin/sh
+
+# ============LICENSE_START==========================================
+# ===================================================================
+# Copyright © 2017 AT&T
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#============LICENSE_END============================================
+
+# This script sets up the NFS server on the k8s master.
+
+mkdir -p /dockerdata-nfs
+chmod 777 /dockerdata-nfs
+yum -y install nfs-utils
+systemctl enable nfs-server.service
+systemctl start nfs-server.service
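+# Export options below: rw (read-write), no_root_squash (client root stays root),
+# no_subtree_check (skip subtree verification on each request).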
+echo "/dockerdata-nfs *(rw,no_root_squash,no_subtree_check)" |sudo tee --append /etc/exports
+echo "/home/centos/dockerdata-nfs /dockerdata-nfs    none    bind  0  0" |sudo tee --append /etc/fstab
+exportfs -a
\ No newline at end of file
diff --git a/TOSCA/kubernetes-cluster-TOSCA/scripts/tasks.py b/TOSCA/kubernetes-cluster-TOSCA/scripts/tasks.py
new file mode 100644 (file)
index 0000000..7680fac
--- /dev/null
@@ -0,0 +1,43 @@
+#!/usr/bin/env python
+
+# ============LICENSE_START==========================================
+# ===================================================================
+# Copyright © 2017 AT&T
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#============LICENSE_END============================================
+
+# Fabric tasks that run kubectl on the Kubernetes master to label, drain, and delete worker nodes.
+
+from fabric.api import run
+
+
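+# A hypothetical example: labels={'disktype': 'ssd'} with hostname 'node-1' runs
+# `kubectl label nodes node-1 disktype=ssd` on the master.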
+def label_node(labels, hostname):
+    if labels:
+        label_list = []
+        for key, value in labels.items():
+            label_pair_string = '%s=%s' % (key, value)
+            label_list.append(label_pair_string)
+        label_string = ' '.join(label_list)
+        command = 'kubectl label nodes %s %s' % (hostname, label_string)
+        run(command)
+
+
+def stop_node(hostname):
+    command = 'kubectl drain %s' % (hostname)
+    run(command)
+
+
+def delete_node(hostname):
+    command = 'kubectl delete no %s' % (hostname)
+    run(command)