This folder contains all the test scripts. Most of these scripts can be executed simply with `bash <script-name>`.
* [Multus CNI](#Multus-CNI)
  1. [define the additional network custom resource definition(CRD)](#Define-custom-resource-definition)
     * [bridge CNI](#bridge-CNI)
     * [macvlan CNI](#macvlan-CNI)
     * [ipvlan CNI](#ipvlan-CNI)
     * [ptp CNI](#ptp-CNI)
  2. [Create a pod with the previously CRD annotation](#Create-a-pod-with-the-previously-CRD-annotation)
  3. [Verify the additional interface was configured](#Verify-the-additional-interface-was-configured)
* [SRIOV plugin](#SRIOV-plugin)
  1. [define SRIOV network CRD](#define-SRIOV-network-CRD)
  2. [Create a pod with single/multiple VF interface](#Create-a-pod-with-single/multiple-VF-interface)
     * [single VF allocated](#single-VF-allocated)
     * [multiple VF allocated](#multiple-VF-allocated)
  3. [Verify the VF interface was allocated](#Verify-the-VF-interface-was-allocated)
* [NFD](#NFD)
  1. [Create a pod to run on particular node](#Create-a-pod-to-run-on-particular-node)
     * [nodeSelector](#nodeSelector)
     * [node affinity](#node-affinity)
  2. [Verify pod created status](#Verify-pod-created-status)
## Multus CNI
[Multus CNI](https://github.com/intel/multus-cni) is a container network interface (CNI) plugin for Kubernetes that enables attaching multiple network interfaces to pods. Typically, in Kubernetes each pod only has one network interface (apart from a loopback) -- with Multus you can create a multi-homed pod that has multiple interfaces. This is accomplished by Multus acting as a "meta-plugin", a CNI plugin that can call multiple other CNI plugins.
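The meta-plugin idea can be sketched with a Multus CNI configuration file: Multus's own config carries a `delegates` list, and each delegate is an ordinary CNI plugin configuration that Multus invokes in turn. The file path, the flannel delegate, and the kubeconfig location below are illustrative assumptions, not taken from this repository.

```shell
# Sketch of a Multus "meta-plugin" configuration (illustrative values):
# the default cluster network is just the first entry in "delegates".
cat << 'CONF' > /tmp/00-multus.conf
{
  "cniVersion": "0.3.1",
  "name": "multus-cni-network",
  "type": "multus",
  "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig",
  "delegates": [
    {
      "type": "flannel",
      "delegate": { "isDefaultGateway": true }
    }
  ]
}
CONF
# Two plugin types appear: multus itself, plus one delegate.
grep -c '"type"' /tmp/00-multus.conf
```

On a real node this file would live under `/etc/cni/net.d/` so the kubelet picks Multus as the CNI entry point.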
### Define custom resource definition
#### bridge CNI
With the bridge plugin, all containers (on the same host) are plugged into a bridge (virtual switch) that resides in the host network namespace. Please refer to [the bridge cni](https://github.com/containernetworking/plugins/tree/master/plugins/main/bridge) for details.
##### Example configuration
```bash
cat << NET | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-conf
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "bridge",
    "ipam": {
      "type": "host-local",
      "subnet": "$multus_private_net_cidr"
    }
  }'
NET
```
58 ##### Network configuration reference
60 * `name` (string, required): the name of the network.
61 * `type` (string, required): "bridge".
62 * `bridge` (string, optional): name of the bridge to use/create. Defaults to "cni0".
63 * `isGateway` (boolean, optional): assign an IP address to the bridge. Defaults to false.
64 * `isDefaultGateway` (boolean, optional): Sets isGateway to true and makes the assigned IP the default route. Defaults to false.
65 * `forceAddress` (boolean, optional): Indicates if a new IP address should be set if the previous value has been changed. Defaults to false.
66 * `ipMasq` (boolean, optional): set up IP Masquerade on the host for traffic originating from this network and destined outside of it. Defaults to false.
67 * `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel.
68 * `hairpinMode` (boolean, optional): set hairpin mode for interfaces on the bridge. Defaults to false.
69 * `ipam` (dictionary, required): IPAM configuration to be used for this network. For L2-only network, create empty dictionary.
70 * `promiscMode` (boolean, optional): set promiscuous mode on the bridge. Defaults to false.
71 * `vlan` (int, optional): assign VLAN tag. Defaults to none.
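As the `ipam` entry above notes, an L2-only network uses an empty dictionary. A minimal sketch of such an attachment, assuming the same NetworkAttachmentDefinition style as the example above (the name `bridge-l2-conf` and bridge `br0` are illustrative):

```shell
# L2-only bridge attachment sketch: no IPAM, so no IP address is assigned.
cat << 'NET' > /tmp/bridge-l2-conf.yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-l2-conf
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "bridge",
    "bridge": "br0",
    "ipam": {}
  }'
NET
# The empty ipam dictionary is what makes this L2-only.
grep -c '"ipam": {}' /tmp/bridge-l2-conf.yaml
```

On a live cluster you would apply it with `kubectl apply -f /tmp/bridge-l2-conf.yaml`.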
#### macvlan CNI
macvlan functions like a switch that is already connected to the host interface. A host interface gets "enslaved", with the virtual interfaces sharing the physical device but having distinct MAC addresses. Please refer to [the macvlan cni](https://github.com/containernetworking/plugins/tree/master/plugins/main/macvlan) for details.
##### Example configuration
```bash
cat << NET | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "macvlan",
    "master": "$master_name",
    "ipam": {
      "type": "host-local",
      "subnet": "$multus_private_net_cidr"
    }
  }'
NET
```
104 ##### Network configuration reference
106 * `name` (string, required): the name of the network
107 * `type` (string, required): "macvlan"
108 * `master` (string, optional): name of the host interface to enslave. Defaults to default route interface.
109 * `mode` (string, optional): one of "bridge", "private", "vepa", "passthru". Defaults to "bridge".
110 * `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel. The value must be \[0, master's MTU\].
111 * `ipam` (dictionary, required): IPAM configuration to be used for this network. For interface only without ip address, create empty dictionary.
#### ipvlan CNI
ipvlan is a new addition to the Linux kernel. It virtualizes the host interface. Please refer to [the ipvlan cni](https://github.com/containernetworking/plugins/tree/master/plugins/main/ipvlan) for details.
##### Example configuration
```bash
cat << NET | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ipvlan-conf
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "ipvlan",
    "master": "$master_name",
    "ipam": {
      "type": "host-local",
      "subnet": "$multus_private_net_cidr"
    }
  }'
NET
```
142 ##### Network configuration reference
144 * `name` (string, required): the name of the network.
145 * `type` (string, required): "ipvlan".
146 * `master` (string, required unless chained): name of the host interface to enslave.
147 * `mode` (string, optional): one of "l2", "l3", "l3s". Defaults to "l2".
148 * `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel.
149 * `ipam` (dictionary, required unless chained): IPAM configuration to be used for this network.
#### ptp CNI
The ptp plugin creates a point-to-point link between a container and the host by using a veth device. Please refer to [the ptp cni](https://github.com/containernetworking/plugins/tree/master/plugins/main/ptp) for details.
##### Example network configuration
```bash
cat << NET | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ptp-conf
spec:
  config: '{
    "cniVersion": "0.3.0",
    "type": "ptp",
    "ipam": {
      "type": "host-local",
      "subnet": "$multus_private_net_cidr"
    }
  }'
NET
```
##### Network configuration reference
* `name` (string, required): the name of the network.
* `type` (string, required): "ptp".
* `ipMasq` (boolean, optional): set up IP Masquerade on the host for traffic originating from an IP of this network and destined outside of this network. Defaults to false.
* `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel.
* `ipam` (dictionary, required): IPAM configuration to be used for this network.
* `dns` (dictionary, optional): DNS information to return as described in the result.
#### Create a pod with the previously CRD annotation
```bash
cat << DEPLOYMENT | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $multus_deployment_name
  labels:
    app: multus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multus
  template:
    metadata:
      labels:
        app: multus
      annotations:
        k8s.v1.cni.cncf.io/networks: bridge-conf
    spec:
      containers:
        - name: $multus_deployment_name
          image: busybox
          command: ["top"]
          stdin: true
          tty: true
DEPLOYMENT
```
> You can add more interfaces to a pod by creating more custom resources and then referring to them in pod's annotation. You can also reuse configurations, so for example, to attach a bridge interface and a macvlan interface to a pod, you could create a pod like so:
```bash
cat << DEPLOYMENT | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $multus_deployment_name
  labels:
    app: multus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: multus
  template:
    metadata:
      labels:
        app: multus
      annotations:
        k8s.v1.cni.cncf.io/networks: bridge-conf, macvlan-conf
    spec:
      containers:
        - name: $multus_deployment_name
          image: busybox
          command: ["top"]
          stdin: true
          tty: true
DEPLOYMENT
```
> Note that the annotation now reads `k8s.v1.cni.cncf.io/networks: bridge-conf, macvlan-conf`, where two network attachment definitions are referenced, separated by a comma.
If you were to create another custom resource with the name `foo`, you could use it as `k8s.v1.cni.cncf.io/networks: foo,macvlan-conf`, and use any number of attachments.
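Per the Multus documentation, the `networks` annotation also accepts a JSON list, which additionally lets you pick the interface name each attachment gets inside the pod. A sketch of such an annotation (the interface name `net5` is an illustrative assumption):

```shell
# JSON-list form of the networks annotation (sketch); "interface" selects
# the in-pod interface name for that attachment.
cat << 'ANNOTATION' > /tmp/networks-annotation.yaml
k8s.v1.cni.cncf.io/networks: '[
  { "name": "bridge-conf" },
  { "name": "macvlan-conf", "interface": "net5" }
]'
ANNOTATION
# Two attachments are referenced.
grep -c '"name"' /tmp/networks-annotation.yaml
```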
#### Verify the additional interface was configured
We can verify the additional interface by running the command shown below.
```bash
kubectl exec -it $deployment_pod -- ip a
```
The output should look like the following.
```
===== multus-deployment-688659b564-79dth details =====
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if543: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 0a:58:0a:f4:44:1e brd ff:ff:ff:ff:ff:ff
    inet 10.244.68.30/24 scope global eth0
       valid_lft forever preferred_lft forever
5: net1@if544: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 46:9d:68:90:f1:eb brd ff:ff:ff:ff:ff:ff
    inet 10.20.0.12/16 scope global net1
       valid_lft forever preferred_lft forever
```
You should note that a new interface named `net1` is attached.
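A test script can automate this check by extracting the interface names from `ip` output. A sketch using a hard-coded sample; in the real check you would pipe `kubectl exec $deployment_pod -- ip -o link` instead:

```shell
# Sample one-line-per-interface output (hard-coded for illustration).
sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
3: eth0@if543: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450
5: net1@if544: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500'
# Take the second ": "-separated field and drop any "@ifNN" peer suffix.
echo "$sample" | awk -F': ' '{split($2, a, "@"); print a[1]}'
```

If `net1` is missing from the printed list, the additional attachment was not configured.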
>For further test information please refer to the file [`./multus.sh`](./multus.sh).
## SRIOV plugin
### define SRIOV network CRD
##### Example CRD configuration
```bash
cat << NET | kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-conf
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/intel_sriov_700
spec:
  config: '{
    "type": "sriov",
    "cniVersion": "0.3.1",
    "ipam": {
      "type": "host-local",
      "subnet": "10.56.206.0/24",
      "routes": [
        { "dst": "0.0.0.0/0" }
      ],
      "gateway": "10.56.206.1"
    }
  }'
NET
```
### Create a pod with single/multiple VF interface
#### single VF allocated
```bash
cat << POD | kubectl create -f - --validate=false
apiVersion: v1
kind: Pod
metadata:
  name: $deployment_pod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-conf
spec:
  containers:
    - name: $deployment_pod
      image: docker.io/centos/tools:latest
      command:
        - /sbin/init
      resources:
        requests:
          intel.com/intel_sriov_700: '1'
        limits:
          intel.com/intel_sriov_700: '1'
POD
```
#### multiple VF allocated
```bash
cat << POD | kubectl create -f - --validate=false
apiVersion: v1
kind: Pod
metadata:
  name: $deployment_pod
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-conf, sriov-conf
spec:
  containers:
    - name: $deployment_pod
      image: docker.io/centos/tools:latest
      command:
        - /sbin/init
      resources:
        requests:
          intel.com/intel_sriov_700: '2'
        limits:
          intel.com/intel_sriov_700: '2'
POD
```
### Verify the VF interface was allocated
We can verify the additional VF interfaces by running the command shown below.
```bash
kubectl exec -it $deployment_pod -- ip a
```
The output should look like the following.
```
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if429: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
    link/ether 0a:58:0a:f4:40:09 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.244.64.9/24 scope global eth0
       valid_lft forever preferred_lft forever
413: net2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 36:57:78:8b:e1:3b brd ff:ff:ff:ff:ff:ff
    inet 10.56.206.4/24 brd 10.56.206.255 scope global net2
       valid_lft forever preferred_lft forever
414: net1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether de:d0:73:53:08:66 brd ff:ff:ff:ff:ff:ff
    inet 10.56.206.3/24 brd 10.56.206.255 scope global net1
       valid_lft forever preferred_lft forever
```
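Since the pod requested `intel.com/intel_sriov_700: '2'`, a script can assert that exactly two `netN` interfaces appeared. A sketch over a hard-coded sample of the output above; in practice the sample would come from `kubectl exec $deployment_pod -- ip -o link`:

```shell
# Sample interface lines (hard-coded for illustration).
sample='3: eth0@if429: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450
413: net2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500
414: net1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500'
# Count interfaces whose name matches netN; should equal the VF request.
echo "$sample" | grep -cE '^[0-9]+: net[0-9]+'
```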
>For further test information please refer to the file [`./sriov.sh`](./sriov.sh).
## NFD
Node Feature Discovery ([NFD](https://github.com/kubernetes-sigs/node-feature-discovery)) detects hardware features available on each node in a Kubernetes cluster, and advertises those features using node labels.
### Create a pod to run on particular node
#### nodeSelector
`nodeSelector` is a field of PodSpec. It specifies a map of key-value pairs. For the pod to be eligible to run on a node, the node must have each of the indicated key-value pairs as labels (it can have additional labels as well). The most common usage is one key-value pair.
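The matching rule above can be sketched as a tiny shell check: a node is eligible only if every selector pair appears among the node's labels (the label values here are illustrative):

```shell
# Labels advertised by a node (space-separated key=value pairs, illustrative).
node_labels='feature.node.kubernetes.io/kernel-version.major=4 kubernetes.io/os=linux'
# The pod's nodeSelector, in the same key=value form.
selector='feature.node.kubernetes.io/kernel-version.major=4'
eligible=true
for pair in $selector; do
  case " $node_labels " in
    *" $pair "*) : ;;        # this pair is present on the node
    *) eligible=false ;;     # any missing pair disqualifies the node
  esac
done
echo "$eligible"
```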
##### To create pod configuration with `nodeSelector`
```bash
cat << POD > $HOME/$pod_name.yaml
apiVersion: v1
kind: Pod
metadata:
  name: $pod_name
spec:
  nodeSelector:
    feature.node.kubernetes.io/kernel-version.major: '4'
  containers:
    - name: with-node-affinity
      image: gcr.io/google_containers/pause:2.0
POD
kubectl create -f $HOME/$pod_name.yaml
```
#### node affinity
`nodeAffinity` is conceptually similar to `nodeSelector`: it allows you to constrain which nodes your pod is eligible to be scheduled on, based on labels on the node.
##### To create pod configuration with `nodeAffinity`
```bash
cat << POD > $HOME/$pod_name.yaml
apiVersion: v1
kind: Pod
metadata:
  name: $pod_name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: "feature.node.kubernetes.io/kernel-version.major"
                operator: Gt
                values:
                  - "3"
  containers:
    - name: with-node-affinity
      image: gcr.io/google_containers/pause:2.0
POD
kubectl create -f $HOME/$pod_name.yaml
```
>For further information on how to configure the nodeAffinity `operator` field please refer to the file [`./nfd.sh`](./nfd.sh).
### Verify pod created status
Verify the NFD pod by running the command shown below.
```bash
kubectl get pods -A | grep $pod_name
```
If the output shows the pod's `STATUS` field as `Running`, the pod has been scheduled successfully.
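This check can also be scripted. A sketch using a hard-coded sample line of `kubectl get pods -A` output (the namespace, restart count, and age are illustrative); `STATUS` is the fourth column of that output:

```shell
# Sample `kubectl get pods -A` line: NAMESPACE NAME READY STATUS RESTARTS AGE
sample='default   nfd-pod   1/1   Running   0   30s'
# Extract the STATUS column and print it.
status=$(echo "$sample" | awk '{print $4}')
echo "$status"
```

A real script would fail the test whenever the printed value is anything other than `Running`.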