
Commit 4e81190

tkatila and eero-t committed
gpu: restructure readme
Split readme into smaller chunks, show only one "easy installation" and hide the rest. Add some notes about tile resources.

Co-authored-by: Eero Tamminen <[email protected]>
Signed-off-by: Tuomas Katila <[email protected]>
1 parent cc1fca3 commit 4e81190

File tree: 5 files changed, +208 -190 lines changed


cmd/gpu_plugin/README.md

Lines changed: 44 additions & 190 deletions
@@ -5,30 +5,20 @@ Table of Contents
 * [Introduction](#introduction)
 * [Modes and Configuration Options](#modes-and-configuration-options)
 * [Operation modes for different workload types](#operation-modes-for-different-workload-types)
+* [Installing driver and firmware for Intel GPUs](#installing-driver-and-firmware-for-intel-gpus)
+* [Pre-built Images](#pre-built-images)
 * [Installation](#installation)
-* [Prerequisites](#prerequisites)
-* [Drivers for discrete GPUs](#drivers-for-discrete-gpus)
-* [Kernel driver](#kernel-driver)
-* [Intel DKMS packages](#intel-dkms-packages)
-* [Upstream kernel](#upstream-kernel)
-* [GPU Version](#gpu-version)
-* [GPU Firmware](#gpu-firmware)
-* [User-space drivers](#user-space-drivers)
-* [Drivers for older (integrated) GPUs](#drivers-for-older-integrated-gpus)
-* [Pre-built Images](#pre-built-images)
-* [Install to all nodes](#install-to-all-nodes)
-* [Install to nodes with Intel GPUs with NFD](#install-to-nodes-with-intel-gpus-with-nfd)
-* [Install to nodes with NFD, Monitoring and Shared-dev](#install-to-nodes-with-nfd-monitoring-and-shared-dev)
-* [Install to nodes with Intel GPUs with Fractional resources](#install-to-nodes-with-intel-gpus-with-fractional-resources)
-* [Fractional resources details](#fractional-resources-details)
+* [Install with NFD](#install-with-nfd)
+* [Install with Operator](#install-with-operator)
 * [Verify Plugin Registration](#verify-plugin-registration)
 * [Testing and Demos](#testing-and-demos)
-* [Labels created by GPU plugin](#labels-created-by-gpu-plugin)
-* [SR-IOV use with the plugin](#sr-iov-use-with-the-plugin)
-* [Issues with media workloads on multi-GPU setups](#issues-with-media-workloads-on-multi-gpu-setups)
+* [Notes](#notes)
+* [Running GPU plugin as non-root](#running-gpu-plugin-as-non-root)
+* [Labels created by GPU plugin](#labels-created-by-gpu-plugin)
+* [SR-IOV use with the plugin](#sr-iov-use-with-the-plugin)
+* [Issues with media workloads on multi-GPU setups](#issues-with-media-workloads-on-multi-gpu-setups)
 * [Workaround for QSV and VA-API](#workaround-for-qsv-and-va-api)
 
-
 ## Introduction
 
 Intel GPU plugin facilitates Kubernetes workload offloading by providing access to
@@ -51,7 +41,7 @@ backend libraries can offload compute operations to GPU.
 | Flag | Argument | Default | Meaning |
 |:---- |:-------- |:------- |:------- |
 | -enable-monitoring | - | disabled | Enable 'i915_monitoring' resource that provides access to all Intel GPU devices on the node |
-| -resource-manager | - | disabled | Enable fractional resource management, [see also dependencies](#fractional-resources) |
+| -resource-manager | - | disabled | Enable fractional resource management, [see use](./fractional.md) |
 | -shared-dev-num | int | 1 | Number of containers that can share the same GPU device |
 | -allocation-policy | string | none | 3 possible values: balanced, packed, none. For shared-dev-num > 1: _balanced_ mode spreads workloads among GPU devices, _packed_ mode fills one GPU fully before moving to next, and _none_ selects first available device from kubelet. Default is _none_. Allocation policy does not have an effect when resource manager is enabled. |

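As a quick illustration of the flags table above (not something this commit adds), the sketch below patches extra flags into an already deployed GPU plugin DaemonSet. The DaemonSet name `intel-gpu-plugin` and the presence of an existing `args` list are assumptions; the kustomize overlays and the Operator CR referenced later in the diff set the same options declaratively.

```bash
# Illustrative sketch only; DaemonSet name and existing args list are assumptions.
$ kubectl patch daemonset intel-gpu-plugin --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "-shared-dev-num=2"},
  {"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "-allocation-policy=balanced"}
]'
```
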
@@ -60,145 +50,45 @@ Please use the -h option to see the complete list of logging related options.
 
 ## Operation modes for different workload types
 
+<img src="usage-scenarios.png"/>
+
 Intel GPU-plugin supports a few different operation modes. Depending on the workloads the cluster is running, some modes make more sense than others. Below is a table that explains the differences between the modes and suggests workload types for each mode. Mode selection applies to the whole GPU plugin deployment, so it is a cluster wide decision.
 
 | Mode | Sharing | Intended workloads | Suitable for time critical workloads |
 |:---- |:-------- |:------- |:------- |
 | shared-dev-num == 1 | No, 1 container per GPU | Workloads using all GPU capacity, e.g. AI training | Yes |
 | shared-dev-num > 1 | Yes, >1 containers per GPU | (Batch) workloads using only part of GPU resources, e.g. inference, media transcode/analytics, or CPU bound GPU workloads | No |
-| shared-dev-num > 1 && resource-management | Yes and no, 1>= containers per GPU | Any. For best results, all workloads should declare their expected GPU resource usage (memory, millicores). Requires [GAS](https://github.com/intel/platform-aware-scheduling/tree/master/gpu-aware-scheduling). See also [fractional use](#fractional-resources-details) | Yes. 1000 millicores = exclusive GPU usage. See note below. |
+| shared-dev-num > 1 && resource-management | Depends on resource requests | Any. For requirements and usage, see [fractional resource management](./fractional.md) | Yes. 1000 millicores = exclusive GPU usage. See note below. |
 
 > **Note**: Exclusive GPU usage with >=1000 millicores requires that also *all other GPU containers* specify (non-zero) millicores resource usage.
 
-## Installation
-
-The following sections detail how to obtain, build, deploy and test the GPU device plugin.
-
-Examples are provided showing how to deploy the plugin either using a DaemonSet or by hand on a per-node basis.
-
-### Prerequisites
-
-Access to a GPU device requires firmware, kernel and user-space
-drivers supporting it. Firmware and kernel driver need to be on the
-host, user-space drivers in the GPU workload containers.
-
-Intel GPU devices supported by the current kernel can be listed with:
-```
-$ grep i915 /sys/class/drm/card?/device/uevent
-/sys/class/drm/card0/device/uevent:DRIVER=i915
-/sys/class/drm/card1/device/uevent:DRIVER=i915
-```
-
-#### Drivers for discrete GPUs
-
-> **Note**: Kernel (on host) and user-space drivers (in containers)
-> should be installed from the same repository as there are some
-> differences between DKMS and upstream GPU driver uAPI.
-
-##### Kernel driver
-
-###### Intel DKMS packages
-
-`i915` GPU driver DKMS[^dkms] package is recommended for Intel
-discrete GPUs, until their support in upstream is complete. DKMS
-package(s) can be installed from Intel package repositories for a
-subset of older kernel versions used in enterprise / LTS
-distributions:
-https://dgpu-docs.intel.com/installation-guides/index.html
-
-[^dkms]: [intel-gpu-i915-backports](https://github.com/intel-gpu/intel-gpu-i915-backports).
-
-###### Upstream kernel
-
-Upstream Linux kernel 6.2 or newer is needed for Intel discrete GPU
-support. For now, upstream kernel is still missing support for a few
-of the features available in DKMS kernels (e.g. Level-Zero Sysman API
-GPU error counters).
+## Installing driver and firmware for Intel GPUs
 
-##### GPU Version
+In case your host's operating system lacks support for Intel GPUs, see this page for help: [Drivers for Intel GPUs](./driver-firmware.md)
 
-PCI IDs for the Intel GPUs on given host can be listed with:
-```
-$ lspci | grep -e VGA -e Display | grep Intel
-88:00.0 Display controller: Intel Corporation Device 56c1 (rev 05)
-8d:00.0 Display controller: Intel Corporation Device 56c1 (rev 05)
-```
-
-(`lspci` lists GPUs with display support as "VGA compatible controller",
-and server GPUs without display support, as "Display controller".)
-
-Mesa "Iris" 3D driver header provides a mapping between GPU PCI IDs and their Intel brand names:
-https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/include/pci_ids/iris_pci_ids.h
-
-###### GPU Firmware
-
-If your kernel build does not find the correct firmware version for
-a given GPU from the host (see `dmesg | grep i915` output), latest
-firmware versions are available in upstream:
-https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/i915
-
-##### User-space drivers
-
-Until new enough user-space drivers (supporting also discrete GPUs)
-are available directly from distribution package repositories, they
-can be installed to containers from Intel package repositories. See:
-https://dgpu-docs.intel.com/installation-guides/index.html
-
-Example container is listed in [Testing and demos](#testing-and-demos).
-
-Validation status against *upstream* kernel is listed in the user-space drivers release notes:
-* Media driver: https://github.com/intel/media-driver/releases
-* Compute driver: https://github.com/intel/compute-runtime/releases
-
-#### Drivers for older (integrated) GPUs
-
-For the older (integrated) GPUs, new enough firmware and kernel driver
-are typically included already with the host OS, and new enough
-user-space drivers (for the GPU containers) are in the host OS
-repositories.
-
-### Pre-built Images
+## Pre-built Images
 
 [Pre-built images](https://hub.docker.com/r/intel/intel-gpu-plugin)
 of this component are available on the Docker hub. These images are automatically built and uploaded
 to the hub from the latest main branch of this repository.
 
 Release tagged images of the components are also available on the Docker hub, tagged with their
 release version numbers in the format `x.y.z`, corresponding to the branches and releases in this
-repository. Thus the easiest way to deploy the plugin in your cluster is to run this command
-
-> **Note**: Replace `<RELEASE_VERSION>` with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
-
-> **Note**: Add ```--dry-run=client -o yaml``` to the ```kubectl``` commands below to visualize the yaml content being applied.
+repository.
 
 See [the development guide](../../DEVEL.md) for details if you want to deploy a customized version of the plugin.
 
-#### Install to all nodes
-
-Simplest option to enable use of Intel GPUs in Kubernetes Pods.
-
-```bash
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin?ref=<RELEASE_VERSION>'
-```
-
-#### Install to nodes with Intel GPUs with NFD
-
-Deploying GPU plugin to only nodes that have Intel GPU attached. [Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery) is required to detect the presence of Intel GPUs.
+## Installation
 
-```bash
-# Start NFD - if your cluster doesn't have NFD installed yet
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref=<RELEASE_VERSION>'
+There are multiple ways to install Intel GPU plugin to a cluster. The most common methods are described below. For alternative methods, see [advanced install](./advanced-install.md) page.
 
-# Create NodeFeatureRules for detecting GPUs on nodes
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref=<RELEASE_VERSION>'
+> **Note**: Replace `<RELEASE_VERSION>` with the desired [release tag](https://github.com/intel/intel-device-plugins-for-kubernetes/tags) or `main` to get `devel` images.
 
-# Create GPU plugin daemonset
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/nfd_labeled_nodes?ref=<RELEASE_VERSION>'
-```
+> **Note**: Add ```--dry-run=client -o yaml``` to the ```kubectl``` commands below to visualize the yaml content being applied.
 
-#### Install to nodes with NFD, Monitoring and Shared-dev
+### Install with NFD
 
-Same as above, but configures GPU plugin with logging, [monitoring and shared-dev](#modes-and-configuration-options) features enabled. This option is useful when there is a desire to retrieve GPU metrics from nodes. For example with [XPU-Manager](https://github.com/intel/xpumanager/) or [collectd](https://github.com/collectd/collectd/tree/collectd-6.0).
+Deploy GPU plugin with the help of NFD ([Node Feature Discovery](https://github.com/kubernetes-sigs/node-feature-discovery)). It detects the presence of Intel GPUs and labels them accordingly. GPU plugin's node selector is used to deploy plugin to nodes which have such a GPU label.
 
 ```bash
 # Start NFD - if your cluster doesn't have NFD installed yet
@@ -208,66 +98,20 @@ $ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes
 $ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref=<RELEASE_VERSION>'
 
 # Create GPU plugin daemonset
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/monitoring_shared-dev_nfd/?ref=<RELEASE_VERSION>'
+$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/nfd_labeled_nodes?ref=<RELEASE_VERSION>'
 ```
 
-#### Install to nodes with Intel GPUs with Fractional resources
-
-With the experimental fractional resource feature you can use additional kubernetes extended
-resources, such as GPU memory, which can then be consumed by deployments. PODs will then only
-deploy to nodes where there are sufficient amounts of the extended resources for the containers.
+### Install with Operator
 
-(For this to work properly, all GPUs in a given node should provide equal amount of resources
-i.e. heteregenous GPU nodes are not supported.)
+GPU plugin can be installed with the Intel Device Plugin Operator. It allows configuring GPU plugin's parameters without kustomizing the deployment files. The general installation is described in the [install documentation](../operator/README.md#installation). For configuring the GPU Custom Resource (CR), see the [configuration options](#modes-and-configuration-options) and [operation modes](#operation-modes-for-different-workload-types).
 
-Enabling the fractional resource feature isn't quite as simple as just enabling the related
-command line flag. The DaemonSet needs additional RBAC-permissions
-and access to the kubelet podresources gRPC service, plus there are other dependencies to
-take care of, which are explained below. For the RBAC-permissions, gRPC service access and
-the flag enabling, it is recommended to use kustomization by running:
+### Install alongside with GPU Aware Scheduling
 
-```bash
-# Start NFD - if your cluster doesn't have NFD installed yet
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd?ref=<RELEASE_VERSION>'
-
-# Create NodeFeatureRules for detecting GPUs on nodes
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/nfd/overlays/node-feature-rules?ref=<RELEASE_VERSION>'
-
-# Create GPU plugin daemonset
-$ kubectl apply -k 'https://github.com/intel/intel-device-plugins-for-kubernetes/deployments/gpu_plugin/overlays/fractional_resources?ref=<RELEASE_VERSION>'
-```
+GPU plugin can be installed alongside with GPU Aware Scheduling (GAS). It allows scheduling Pods which e.g. request only partial use of a GPU. The installation is described in [fractional resources](./fractional.md) page.
 
-##### Fractional resources details
-
-Usage of these fractional GPU resources requires that the cluster has node
-extended resources with the name prefix `gpu.intel.com/`. Those can be created with NFD
-by running the [hook](/cmd/gpu_nfdhook/) installed by the plugin initcontainer. When fractional resources are
-enabled, the plugin lets a [scheduler extender](https://github.com/intel/platform-aware-scheduling/tree/master/gpu-aware-scheduling)
-do card selection decisions based on resource availability and the amount of extended
-resources requested in the [pod spec](https://github.com/intel/platform-aware-scheduling/blob/master/gpu-aware-scheduling/docs/usage.md#pods).
-
-The scheduler extender then needs to annotate the pod objects with unique
-increasing numeric timestamps in the annotation `gas-ts` and container card selections in
-`gas-container-cards` annotation. The latter has container separator '`|`' and card separator
-'`,`'. Example for a pod with two containers and both containers getting two cards:
-`gas-container-cards:card0,card1|card2,card3`. Enabling the fractional-resource support
-in the plugin without running such an annotation adding scheduler extender in the cluster
-will only slow down GPU-deployments, so do not enable this feature unnecessarily.
-
-In multi-tile systems, containers can request individual tiles to improve GPU resource usage.
-Tiles targeted for containers are specified to pod via `gas-container-tiles` annotation where the the annotation
-value describes a set of card and tile combinations. For example in a two container pod, the annotation
-could be `gas-container-tiles:card0:gt0+gt1|card1:gt1,card2:gt0`. Similarly to `gas-container-cards`, the container
-details are split via `|`. In the example above, the first container gets tiles 0 and 1 from card 0,
-and the second container gets tile 1 from card 1 and tile 0 from card 2.
-
-> **Note**: It is also possible to run the GPU device plugin using a non-root user. To do this,
-the nodes' DAC rules must be configured to device plugin socket creation and kubelet registration.
-Furthermore, the deployments `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
-
-### Verify Plugin Registration
+### Verify Plugin Installation
 
-You can verify the plugin has been registered with the expected nodes by searching for the relevant
+You can verify that the plugin has been installed on the expected nodes by searching for the relevant
 resource allocation status on the nodes:
 
 ```bash
@@ -341,17 +185,27 @@ The GPU plugin functionality can be verified by deploying an [OpenCL image](../.
 Warning FailedScheduling <unknown> default-scheduler 0/1 nodes are available: 1 Insufficient gpu.intel.com/i915.
 ```
 
-## Labels created by GPU plugin
+## Notes
+
+### Running GPU plugin as non-root
+
+It is possible to run the GPU device plugin using a non-root user. To do this,
+the nodes' DAC rules must be configured to device plugin socket creation and kubelet registration.
+Furthermore, the deployments `securityContext` must be configured with appropriate `runAsUser/runAsGroup`.
+
+More info: https://kubernetes.io/blog/2021/11/09/non-root-containers-and-devices/
+
+### Labels created by GPU plugin
 
 If installed with NFD and started with resource-management, plugin will export a set of labels for the node. For detailed info, see [labeling documentation](./labels.md).
 
-## SR-IOV use with the plugin
+### SR-IOV use with the plugin
 
 GPU plugin does __not__ setup SR-IOV. It has to be configured by the cluster admin.
 
 GPU plugin does however support provisioning Virtual Functions (VFs) to containers for a SR-IOV enabled GPU. When the plugin detects a GPU with SR-IOV VFs configured, it will only provision the VFs and leaves the PF device on the host.
 
-## Issues with media workloads on multi-GPU setups
+### Issues with media workloads on multi-GPU setups
 
 OneVPL media API, 3D and compute APIs provide device discovery
 functionality for applications and work fine in multi-GPU setups.
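The non-root note added in the hunk above only describes the `securityContext` requirement in prose; a minimal sketch of that change follows. The UID/GID values and the DaemonSet name are assumptions, and the node-side DAC rules for the kubelet device-plugin directories still need to be adjusted separately.

```bash
# Illustrative sketch only: run the plugin Pod as an assumed non-root UID/GID.
$ kubectl patch daemonset intel-gpu-plugin --type=merge \
  -p '{"spec":{"template":{"spec":{"securityContext":{"runAsUser":1000,"runAsGroup":1000}}}}}'
```
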
@@ -376,7 +230,7 @@ options are documented here:
 * QSV: https://github.com/Intel-Media-SDK/MediaSDK/wiki/FFmpeg-QSV-Multi-GPU-Selection-on-Linux
 
 
-### Workaround for QSV and VA-API
+#### Workaround for QSV and VA-API
 
 [Render device](render-device.sh) shell script locates and outputs the
 correct device file name. It can be added to the container and used
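For reference alongside the operation modes described in this README, a minimal sketch of a Pod consuming the plugin's GPU resource is shown below. The `gpu.intel.com/i915` resource name is taken from the scheduling error shown earlier in the diff; the commented-out millicores resource and the example image are assumptions that only apply when resource management and GAS are in use.

```bash
# Illustrative sketch only: request one (possibly shared) Intel GPU slot.
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: gpu-consumer-example
spec:
  restartPolicy: Never
  containers:
  - name: workload
    image: intel/opencl-icd:latest    # example image, substitute your own GPU workload
    command: ["clinfo"]
    resources:
      limits:
        gpu.intel.com/i915: 1            # resource advertised by the GPU plugin
        # gpu.intel.com/millicores: 500  # assumed name; only meaningful with resource management + GAS
EOF
```
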

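Similarly, for the Operator-based install added in this commit, a possible `GpuDevicePlugin` custom resource is sketched below. The API group, field names and the NFD node label are assumptions and should be verified against the Operator's CRD; the values mirror the configuration options table.

```bash
# Illustrative sketch only; verify field names against the Operator's GpuDevicePlugin CRD.
$ kubectl apply -f - <<EOF
apiVersion: deviceplugin.intel.com/v1
kind: GpuDevicePlugin
metadata:
  name: gpudeviceplugin-example
spec:
  image: intel/intel-gpu-plugin:<RELEASE_VERSION>   # replace <RELEASE_VERSION> as noted above
  sharedDevNum: 2
  logLevel: 2
  nodeSelector:
    intel.feature.node.kubernetes.io/gpu: "true"    # assumed NFD label for Intel GPU nodes
EOF
```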