@@ -5,6 +5,7 @@ Table of Contents
* [Introduction](#introduction)
* [Installation](#installation)
* [Upgrade](#upgrade)
+ * [Limiting Supported Devices](#limiting-supported-devices)
* [Known issues](#known-issues)
## Introduction
@@ -16,6 +17,12 @@ administrators.
## Installation

+ The operator deployment depends on NFD and cert-manager. These components must be installed in the cluster before the operator can be deployed.
+
+ > **Note**: The operator can also be installed via Helm charts. See [INSTALL.md](../../INSTALL.md) for details.
+
+ ### NFD
+
Install NFD (if it's not already installed) and node labelling rules (requires NFD v0.10+):
```
@@ -38,7 +45,7 @@ nfd-worker-qqq4h 1/1 Running 0 25h
Note that labelling is not performed immediately. Give NFD a minute to pick up the rules and label the nodes.

As a result, all discovered devices should have corresponding labels, e.g. for Intel DLB devices the label is
- intel.feature.node.kubernetes.io/dlb:
+ `intel.feature.node.kubernetes.io/dlb`:

```
$ kubectl get no -o json | jq .items[].metadata.labels | grep intel.feature.node.kubernetes.io/dlb
"intel.feature.node.kubernetes.io/dlb": "true",
@@ -55,6 +62,8 @@ deployments/operator/samples/deviceplugin_v1_fpgadeviceplugin.yaml: intel.fea
deployments/operator/samples/deviceplugin_v1_dsadeviceplugin.yaml: intel.feature.node.kubernetes.io/dsa: 'true'
```

+ ### Cert-Manager
+
The default operator deployment depends on [cert-manager](https://cert-manager.io/) running in the cluster.
See installation instructions [here](https://cert-manager.io/docs/installation/kubectl/).
@@ -68,45 +77,7 @@ cert-manager-cainjector-87c85c6ff-59sb5 1/1 Running 0 21d
cert-manager-webhook-64dc9fff44-29cfc 1/1 Running 0 21d
```

- Also if your cluster operates behind a corporate proxy make sure that the API
- server is configured not to send requests to cluster services through the
- proxy. You can check that with the following command:
-
- ```bash
- $ kubectl describe pod kube-apiserver --namespace kube-system | grep -i no_proxy | grep "\.svc"
- ```
-
- In case there's no output and your cluster was deployed with `kubeadm` open
- `/etc/kubernetes/manifests/kube-apiserver.yaml` at the control plane nodes and
- append `.svc` and `.svc.cluster.local` to the `no_proxy` environment variable:
-
- ```yaml
- apiVersion: v1
- kind: Pod
- metadata:
-   ...
- spec:
-   containers:
-   - command:
-     - kube-apiserver
-     - --advertise-address=10.237.71.99
-     ...
-     env:
-     - name: http_proxy
-       value: http://proxy.host:8080
-     - name: https_proxy
-       value: http://proxy.host:8433
-     - name: no_proxy
-       value: 127.0.0.1,localhost,.example.com,10.0.0.0/8,.svc,.svc.cluster.local
-     ...
- ```
-
- **Note:** To build clusters using `kubeadm` with the right `no_proxy` settings from the very beginning,
- set the cluster service names to `$no_proxy` before `kubeadm init`:
-
- ```
- $ export no_proxy=$no_proxy,.svc,.svc.cluster.local
- ```
+ ### Device Plugin Operator

Finally deploy the operator itself:
@@ -117,7 +88,7 @@ $ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes
Now you can deploy the device plugins by creating corresponding custom resources.
The samples for them are available [here](/deployments/operator/samples/).

- ## Usage
+ ### Device Plugin Custom Resource

Deploy your device plugin by applying its custom resource, e.g.
`GpuDevicePlugin` with
@@ -134,8 +105,23 @@ NAME DESIRED READY NODE SELECTOR AGE
gpudeviceplugin-sample 1 1 5s
```
+ ## Upgrade
+
+ The deployed plugins can be upgraded by simply installing a new release of the operator.
+
+ The operator auto-upgrades operator-managed plugins (CR images and thus the corresponding deployed daemonsets) to the current release of the operator.
+
+ During the upgrade only the tag in the image path is updated (e.g. docker.io/intel/intel-sgx-plugin:`tag`), while the rest of the path is left intact.
+
+ No upgrade is done for:
+
+ - Non-operator managed deployments
+ - Operator deployments without numeric tags
+
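+ To illustrate, here is a hypothetical operator-managed CR before and after an operator upgrade (the resource name and image tags below are made-up examples, not actual releases):
+
+ ```yaml
+ # Hypothetical SgxDevicePlugin CR managed by the operator; the tags
+ # here are illustrative only.
+ apiVersion: deviceplugin.intel.com/v1
+ kind: SgxDevicePlugin
+ metadata:
+   name: sgxdeviceplugin-sample
+ spec:
+   image: docker.io/intel/intel-sgx-plugin:0.26.0
+ # After upgrading the operator to, say, 0.27.0 only the tag is rewritten,
+ # leaving registry, namespace and image name intact:
+ #   image: docker.io/intel/intel-sgx-plugin:0.27.0
+ # A non-numeric tag (e.g. "devel") would be left untouched.
+ ```
+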
+ ## Limiting Supported Devices
+
In order to limit the deployment to a specific device type,
- use one of kustomizations under deployments/operator/device.
+ use one of the kustomizations under `deployments/operator/device`.

For example, to limit the deployment to FPGA, use:
@@ -148,20 +134,51 @@ In this case, create a new kustomization with the necessary resources
that passes the desired device types to the operator using the `--device`
command line argument multiple times.
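
A minimal sketch of what such a kustomization could look like, assuming a kustomize JSON patch that appends `--device` arguments to the operator Deployment (the base path and Deployment name below are assumptions for illustration, not taken from the repository):

```yaml
# Hypothetical kustomization.yaml; the resources path and the
# Deployment name are illustrative assumptions.
resources:
  - ../../default
patches:
  - target:
      kind: Deployment
      name: inteldeviceplugins-controller-manager
    patch: |-
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --device=fpga
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --device=gpu
```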

- ## Upgrade
+ ## Known issues

- The upgrade of the deployed plugins can be done by simply installing a new release of the operator.
+ ### Cluster behind a proxy
+
- The operator auto-upgrades operator-managed plugins (CR images and thus corresponding deployed daemonsets) to the current release of the operator.
+ If your cluster operates behind a corporate proxy, make sure that the API
+ server is configured not to send requests to cluster services through the
+ proxy. You can check that with the following command:

- The [registry-url]/[namespace]/[image] are kept intact on the upgrade.
+ ```bash
+ $ kubectl describe pod kube-apiserver --namespace kube-system | grep -i no_proxy | grep "\.svc"
+ ```

- No upgrade is done for:
+ In case there's no output and your cluster was deployed with `kubeadm`, open
+ `/etc/kubernetes/manifests/kube-apiserver.yaml` on the control plane nodes and
+ append `.svc` and `.svc.cluster.local` to the `no_proxy` environment variable:

- - Non-operator managed deployments
- - Operator deployments without numeric tags
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+   ...
+ spec:
+   containers:
+   - command:
+     - kube-apiserver
+     - --advertise-address=10.237.71.99
+     ...
+     env:
+     - name: http_proxy
+       value: http://proxy.host:8080
+     - name: https_proxy
+       value: http://proxy.host:8433
+     - name: no_proxy
+       value: 127.0.0.1,localhost,.example.com,10.0.0.0/8,.svc,.svc.cluster.local
+     ...
+ ```

- ## Known issues
+ **Note:** To build clusters using `kubeadm` with the right `no_proxy` settings from the very beginning,
+ set the cluster service names to `$no_proxy` before `kubeadm init`:
+
+ ```
+ $ export no_proxy=$no_proxy,.svc,.svc.cluster.local
+ ```
+
+ ### Leader election enabled

When the operator is run with leader election enabled, that is with the option
`--leader-elect`, make sure the cluster is not overloaded with excessive