
Commit accca70

WafaaT and Clayne Robison authored Apr 27, 2023
Sync with upstream (#1152)
* Adding README files for Intel® Data Center Flex Series GPUs (#125)
* fix incorrect links (#127)
* bump ipython to fix CVE (#128)

Signed-off-by: WafaaT <[email protected]>
Co-authored-by: Clayne Robison <[email protected]>
1 parent 8c4c7c9 commit accca70


9 files changed, +604 -42 lines changed

 

‎README.md

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
11
# Model Zoo for Intel® Architecture
22

3-
This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
3+
This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors and Intel® Data Center GPUs.
44

55
Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® Developer Catalog](https://software.intel.com/containers).
66

‎docs/general/FLEX_DEVCATALOG.md

Lines changed: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
1+
# Model Zoo for Intel® Architecture Workloads Optimized for the Intel® Data Center GPU Flex Series
2+
3+
This document provides links to step-by-step instructions on how to leverage Model Zoo docker containers to run optimized open-source Deep Learning inference workloads using Intel® Extension for PyTorch* and Intel® Extension for TensorFlow* on the [Intel® Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/data-center-gpu/flex-series/overview.html).
4+
5+
## Base Containers
6+
7+
| AI Framework | Extension | Documentation |
8+
| -----------------------------| ------------- | ----------------- |
9+
| PyTorch | Intel® Extension for PyTorch* | [Intel® Extension for PyTorch Container](https://github.com/IntelAI/models/blob/master/quickstart/ipex-tool-container/gpu/devcatalog.md) |
10+
| TensorFlow | Intel® Extension for TensorFlow* | [Intel® Extension for TensorFlow Container](https://github.com/IntelAI/models/blob/master/quickstart/tf-tool-container/gpu/devcatalog.md)|
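Both base images can be pulled directly; the tags below are the ones referenced by the container documents added in this commit:

```
docker pull intel/intel-extension-for-pytorch:xpu-flex
docker pull intel/intel-extension-for-tensorflow:gpu-flex
```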
11+
12+
## Optimized Workloads
13+
14+
The table below provides links to run each workload in a docker container. The containers are optimized for Linux*.
15+
16+
17+
| Model | Framework | Mode | Documentation | Dataset |
18+
| ----------------------------| ---------- | ----------| ------------------- | ------------ |
19+
| [ResNet 50 v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | TensorFlow | Inference| [INT8](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/devcatalog.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) |
20+
| [ResNet 50 v1.5](https://arxiv.org/pdf/1512.03385.pdf) | PyTorch | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/pytorch/resnet50v1_5/inference/gpu/DEVCATALOG_FLEX.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) |
21+
| [SSD-MobileNet v1](https://arxiv.org/pdf/1704.04861.pdf) | PyTorch | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/devcatalog.md) | [COCO 2017](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md#datasets) |
22+
| [YOLO v4](https://arxiv.org/pdf/2004.10934.pdf) | PyTorch | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/yolov4/inference/gpu/devcatalog.md) | [COCO 2017](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md#datasets) |
23+
| [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf) | TensorFlow | Inference | [INT8](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/devcatalog.md)| [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) |
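Each workload document follows the same general pattern: pull the listed container image, set the dataset and output environment variables, and launch the quickstart script inside the container. A minimal sketch using the ResNet50 v1.5 PyTorch image from this commit (see that workload's document for the full `docker run` command and the flags it requires):

```
docker pull intel/image-recognition:pytorch-flex-gpu-resnet50v1-5-inference
export DATASET_DIR=<path to the preprocessed dataset>
export OUTPUT_DIR=<path to output directory>
# Launch the workload's quickstart script inside the container,
# e.g. quickstart/inference_block_format.sh for ResNet50 v1.5 int8 inference.
```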
Lines changed: 102 additions & 0 deletions
@@ -0,0 +1,102 @@
1+
# Running ResNet50 v1.5 Inference with Int8 on Intel® Data Center GPU Flex Series using Intel® Extension for PyTorch*
2+
3+
4+
## Overview
5+
6+
This document has instructions for running ResNet50 v1.5 inference using Intel(R) Extension for PyTorch* with Intel(R) Data Center GPU Flex Series.
7+
8+
## Requirements
9+
| Item | Detail |
10+
| ------ | ------- |
11+
| Host machine | Intel® Data Center GPU Flex Series |
12+
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
13+
| Software | Docker* Installed |
14+
15+
## Get Started
16+
17+
## Download Datasets
18+
19+
The [ImageNet](http://www.image-net.org/) validation dataset is used.
20+
21+
Download and extract the ImageNet2012 dataset from http://www.image-net.org/,
22+
then move validation images to labeled subfolders, using
23+
[the valprep.sh shell script](https://github.com/raw/soumith/imagenetloader.torch/master/valprep.sh)
24+
25+
After running the data prep script, your folder structure should look something like this:
26+
27+
```
28+
imagenet
29+
└── val
30+
├── ILSVRC2012_img_val.tar
31+
├── n01440764
32+
│ ├── ILSVRC2012_val_00000293.JPEG
33+
│ ├── ILSVRC2012_val_00002138.JPEG
34+
│ ├── ILSVRC2012_val_00003014.JPEG
35+
│ ├── ILSVRC2012_val_00006697.JPEG
36+
│ └── ...
37+
└── ...
38+
```
39+
The folder that contains the `val` directory should be set as the
40+
`DATASET_DIR`
41+
(for example: `export DATASET_DIR=/home/<user>/imagenet`).
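As a minimal sketch of these steps (paths are placeholders, and it assumes the validation tarball was downloaded into `imagenet/val`):

```
cd /home/<user>/imagenet/val
# Extract the validation images downloaded from image-net.org
tar -xf ILSVRC2012_img_val.tar
# Sort the images into labeled subfolders with the valprep.sh script linked above
bash valprep.sh
# DATASET_DIR is the folder that contains the val directory
export DATASET_DIR=/home/<user>/imagenet
```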
42+
43+
## Quick Start Scripts
44+
45+
| Script name | Description |
46+
|-------------|-------------|
47+
| `inference_block_format.sh` | Runs ResNet50 inference (block format) for the specified precision (int8) |
48+
49+
## Run Using Docker
50+
51+
### Set up Docker Image
52+
53+
```
54+
docker pull intel/image-recognition:pytorch-flex-gpu-resnet50v1-5-inference
55+
```
56+
### Run Docker Image
57+
The ResNet50 v1.5 inference container includes the scripts, model, and libraries needed to run int8 inference. To run the `inference_block_format.sh` quickstart script using this container, you'll need to provide a volume mount for the ImageNet dataset and an output directory where log files will be written.
58+
59+
```
60+
export PRECISION=int8
61+
export OUTPUT_DIR=<path to output directory>
62+
export DATASET_DIR=<path to the preprocessed imagenet dataset>
63+
export SCRIPT=quickstart/inference_block_format.sh
64+
65+
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
66+
IMAGE_NAME=intel/image-recognition:pytorch-flex-gpu-resnet50v1-5-inference
67+
68+
69+
VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
70+
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')
71+
72+
test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"
73+
74+
docker run \
75+
-v <your-local-dir>:/workspace \
76+
--group-add ${VIDEO} \
77+
${RENDER_GROUP} \
78+
--device=/dev/dri \
79+
--ipc=host \
80+
--env PRECISION=${PRECISION} \
81+
--env OUTPUT_DIR=${OUTPUT_DIR} \
82+
--env DATASET_DIR=${DATASET_DIR} \
83+
--env http_proxy=${http_proxy} \
84+
--env https_proxy=${https_proxy} \
85+
--env no_proxy=${no_proxy} \
86+
--volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
87+
--volume ${DATASET_DIR}:${DATASET_DIR} \
88+
${DOCKER_ARGS} \
89+
${IMAGE_NAME} \
90+
/bin/bash $SCRIPT
91+
```
92+
93+
## Documentation and Sources
94+
95+
[GitHub* Repository](https://github.com/IntelAI/models/tree/master/dockerfiles/model_containers)
96+
97+
## Support
98+
Support for Intel® Extension for PyTorch* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for PyTorch* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-pytorch/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.
99+
100+
## License Agreement
101+
102+
LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the [license file](https://github.com/IntelAI/models/tree/master/third_party) for additional details.

‎quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/devcatalog.md

Lines changed: 36 additions & 21 deletions
@@ -1,56 +1,65 @@
1-
# ResNet50 v1.5 Inference
1+
# Running ResNet50 v1.5 Inference with Int8 on Intel® Data Center GPU Flex Series using Intel® Extension for TensorFlow*
22

3-
## Description
3+
## Overview
44

5-
This document has instructions for running ResNet50 v1.5 inference using
6-
Intel(R) Extension for TensorFlow* with Intel(R) Data Center GPU Flex Series.
5+
This document has instructions for running ResNet50 v1.5 inference using Intel(R) Extension for TensorFlow* with Intel(R) Data Center GPU Flex Series.
76

8-
## Datasets
7+
8+
## Requirements
9+
| Item | Detail |
10+
| ------ | ------- |
11+
| Host machine | Intel® Data Center GPU Flex Series |
12+
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
13+
| Software | Docker* Installed |
14+
15+
## Get Started
16+
17+
### Download Datasets
918

1019
Download and preprocess the ImageNet dataset using the [instructions here](https://github.com/IntelAI/models/blob/master/datasets/imagenet/README.md).
1120
After running the conversion script you should have a directory with the
1221
ImageNet dataset in the TF records format.
1322

1423
Set the `DATASET_DIR` to point to the TF records directory when running ResNet50 v1.5.
1524

16-
## Quick Start Scripts
25+
### Quick Start Scripts
1726

1827
| Script name | Description |
1928
|:-------------:|:-------------:|
20-
| `online_inference` | Runs online inference for int8 precision |
29+
| `online_inference` | Runs online inference for int8 precision |
2130
| `batch_inference` | Runs batch inference for int8 precision |
2231
| `accuracy` | Measures the model accuracy for int8 precision |
2332

24-
## Docker
2533

26-
Requirements:
27-
* Host machine has Intel(R) Data Center GPU Flex Series
28-
* Follow instructions to install GPU-compatible driver [419.40](https://dgpu-docs.intel.com/releases/stable_419_40_20220914.html)
29-
* Docker
34+
## Run Using Docker
3035

31-
### Docker pull command:
36+
### Set up Docker Image
3237

3338
```
34-
docker pull intel/image-recognition:tf-atsm-gpu-resnet50v1-5-inference
39+
docker pull intel/image-recognition:tf-flex-gpu-resnet50v1-5-inference
3540
```
3641

37-
The ResNet50 v1-5 inference container includes scripts,model and libraries need to run int8 inference. To run one of the inference quickstart scripts using this container, you'll need to provide volume mounts for the ImageNet dataset for running `accuracy.sh` script. For `online_inference.sh` and `batch_inference.sh` dummy dataset will be used. You will need to provide an output directory where log files will be written.
42+
### Run Docker Image
43+
The ResNet50 v1.5 inference container includes the scripts, model, and libraries needed to run int8 inference. To run the `accuracy.sh` quickstart script using this container, you'll need to provide a volume mount for the ImageNet dataset; `online_inference.sh` and `batch_inference.sh` use a dummy dataset. You will also need to provide an output directory where log files will be written.
3844

3945
```
4046
export PRECISION=int8
4147
export OUTPUT_DIR=<path to output directory>
4248
export DATASET_DIR=<path to the preprocessed imagenet dataset>
43-
IMAGE_NAME=intel/image-recognition:tf-atsm-gpu-resnet50v1-5-inference
49+
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
50+
IMAGE_NAME=intel/image-recognition:tf-flex-gpu-resnet50v1-5-inference
4451
4552
VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
4653
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')
4754
55+
test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"
56+
4857
docker run \
58+
-v <your-local-dir>:/workspace \
4959
--group-add ${VIDEO} \
5060
${RENDER_GROUP} \
5161
--device=/dev/dri \
5262
--ipc=host \
53-
--privileged \
5463
--env PRECISION=${PRECISION} \
5564
--env OUTPUT_DIR=${OUTPUT_DIR} \
5665
--env DATASET_DIR=${DATASET_DIR} \
@@ -59,16 +68,22 @@ docker run \
5968
--env no_proxy=${no_proxy} \
6069
--volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
6170
--volume ${DATASET_DIR}:${DATASET_DIR} \
62-
--rm -it \
63-
$IMAGE_NAME \
71+
${DOCKER_ARGS} \
72+
${IMAGE_NAME} \
6473
/bin/bash quickstart/<script name>.sh
6574
```
6675

6776
## Documentation and Sources
6877

69-
**Get Started**
78+
[GitHub* Repository](https://github.com/IntelAI/models/tree/master/dockerfiles/model_containers)
79+
80+
## Summary and Next Steps
81+
82+
Now you are inside the container with Python 3.9 and TensorFlow 2.10.0 preinstalled. You can run your own script on the Intel GPU.
7084

71-
[Docker* Repository](https://hub.docker.com/r/intel/image-recognition)
85+
## Support
86+
Support for Intel® Extension for TensorFlow* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for TensorFlow* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-tensorflow/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.
7287

7388
## License Agreement
7489

Lines changed: 87 additions & 0 deletions
@@ -0,0 +1,87 @@
1+
# Optimizations for Intel® Data Center GPU Flex Series using Intel® Extension for PyTorch*
2+
3+
## Overview
4+
5+
This document has instructions for running Intel® Extension for PyTorch* (IPEX) for GPU in a container.
7+
8+
## Requirements
9+
| Item | Detail |
10+
| ------ | ------- |
11+
| Host machine | Intel® Data Center GPU Flex Series |
12+
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
13+
| Software | Docker* Installed |
14+
15+
## Get Started
16+
17+
### Installing the Intel® Extension for PyTorch*
18+
#### Docker pull command:
19+
20+
`docker pull intel/intel-extension-for-pytorch:xpu-flex`
21+
22+
### Running container:
23+
24+
Run the following commands to start the IPEX GPU tools container. You can use the `-v` option to mount your local directory into the container; the `-v` argument can be omitted if you do not need access to a local directory in the container. Pass the video and render groups to your Docker container so that the GPU is accessible.
28+
```
29+
IMAGE_NAME=intel/intel-extension-for-pytorch:xpu-flex
30+
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
31+
32+
VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
33+
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')
34+
35+
test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"
36+
37+
docker run --rm \
38+
-v <your-local-dir>:/workspace \
39+
--group-add ${VIDEO} \
40+
${RENDER_GROUP} \
41+
--device=/dev/dri \
42+
--ipc=host \
43+
-e http_proxy=$http_proxy \
44+
-e https_proxy=$https_proxy \
45+
-e no_proxy=$no_proxy \
46+
${DOCKER_ARGS} \
47+
${IMAGE_NAME} \
48+
bash
49+
```
50+
51+
#### Verify if XPU is accessible from PyTorch:
52+
You are now inside the container. Run the following command to verify that the XPU is visible to PyTorch:
53+
```
54+
python -c "import torch;print(torch.device('xpu'))"
55+
```
56+
Sample output looks like this:
57+
```
58+
xpu
59+
```
60+
Then, verify that the XPU device is available to IPEX:
61+
```
62+
python -c "import intel_extension_for_pytorch as ipex;print(ipex.xpu.is_available())"
63+
```
64+
Sample output looks like this:
65+
```
66+
True
67+
```
68+
Finally, use the following command to check whether oneMKL is enabled by default:
69+
```
70+
python -c "import intel_extension_for_pytorch as ipex;print(ipex.xpu.has_onemkl())"
71+
```
72+
Sample output looks like this:
73+
```
74+
True
75+
```
76+
77+
## Summary and Next Steps
78+
Now you are inside the container with Python 3.9, PyTorch, and IPEX preinstalled. You can run your own script on the Intel GPU.
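As a quick sketch (an assumption, not part of the container documentation), you can confirm that a tensor operation actually executes on the XPU before running your own script:

```
python -c "import torch; import intel_extension_for_pytorch; x = torch.randn(2, 2).to('xpu'); print((x @ x).device)"
```

The printed device should be an XPU device (for example, `xpu:0`).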
80+
81+
## Documentation and Sources
82+
83+
[GitHub* Repository](https://github.com/intel/intel-extension-for-pytorch/tree/master/docker)
84+
85+
86+
## Support
87+
Support for Intel® Extension for PyTorch* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for PyTorch* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-pytorch/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.
Lines changed: 118 additions & 0 deletions
@@ -0,0 +1,118 @@
1+
# Running SSD-MobileNetv1 Inference on Intel® Data Center GPU Flex Series using Intel® Extension for PyTorch*
2+
3+
## Overview
4+
5+
This document has instructions for running SSD-MobileNetv1 inference using Intel(R) Extension for PyTorch* with Intel(R) Data Center GPU Flex Series.
7+
8+
## Requirements
9+
| Item | Detail |
10+
| ------ | ------- |
11+
| Host machine | Intel® Data Center GPU Flex Series |
12+
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
13+
| Software | Docker* Installed |
14+
15+
## Download Datasets
16+
17+
The [VOC2007](http://host.robots.ox.ac.uk/pascal/VOC/voc2007/) validation dataset is used.
18+
19+
Download and extract the VOC2007 dataset from http://host.robots.ox.ac.uk/pascal/VOC/voc2007/.
20+
After extracting the data, your folder structure should look something like this:
21+
22+
```
23+
VOC2007
24+
├── Annotations
25+
│ ├── 000038.xml
26+
│ ├── 000724.xml
27+
│ ├── 001440.xml
28+
│ └── ...
29+
├── ImageSets
30+
│ ├── Layout
31+
│ ├── Main
32+
│ └── Segmentation
33+
├── SegmentationClass
34+
│ ├── 005797.png
35+
│ ├── 007415.png
36+
│ ├── 006581.png
37+
│ └── ...
38+
├── SegmentationObject
39+
│ ├── 005797.png
40+
│ ├── 006581.png
41+
│ ├── 007415.png
42+
│ └── ...
43+
└── JPEGImages
44+
├── 002832.jpg
45+
├── 003558.jpg
46+
├── 004262.jpg
47+
└── ...
48+
```
49+
The folder should be set as the `DATASET_DIR`
50+
(for example: `export DATASET_DIR=/home/<user>/VOC2007`).
51+
52+
## Pretrained Model
53+
54+
Create the model folder and set the `PRETRAINED_MODEL` environment variable to point to it. If the folder is empty, the code downloads the pre-trained model.
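For example (the path is a placeholder):

```
mkdir -p /home/<user>/ssd-mobilenet-model
export PRETRAINED_MODEL=/home/<user>/ssd-mobilenet-model
```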
55+
56+
## Quick Start Scripts
57+
58+
| Script name | Description |
59+
|-------------|-------------|
60+
| `inference_with_dummy_data.sh` | Inference with dummy data, batch size 512, for int8 blocked channel first. |
61+
62+
## Run Using Docker
63+
64+
### Set up Docker Image
65+
66+
```
67+
docker pull intel/object-detection:pytorch-flex-gpu-ssd-mobilenet-inference
68+
```
69+
### Run Docker Image
70+
The SSD-MobileNet inference container includes the scripts, model, and libraries needed to run int8 inference. To run the `inference_with_dummy_data.sh` quickstart script using this container, you'll need to provide volume mounts for the VOC2007 dataset and an output directory where log files will be written.
71+
72+
```
73+
export PRECISION=int8
74+
export OUTPUT_DIR=<path to output directory>
75+
export DATASET_DIR=<path to the preprocessed voc2007 dataset>
76+
export PRETRAINED_MODEL=<path to the pretrained model folder. The code downloads the model if this folder is empty>
77+
export SCRIPT=quickstart/inference_with_dummy_data.sh
78+
export label=/workspace/pytorch-atsm-ssd-mobilenet-inference/labels/voc-model-labels.txt
79+
80+
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
81+
IMAGE_NAME=intel/object-detection:pytorch-flex-gpu-ssd-mobilenet-inference
82+
VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
83+
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')
84+
85+
test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"
86+
87+
docker run \
88+
-v <your-local-dir>:/workspace \
89+
--group-add ${VIDEO} \
90+
${RENDER_GROUP} \
91+
--device=/dev/dri \
92+
--ipc=host \
93+
--env PRECISION=${PRECISION} \
94+
--env OUTPUT_DIR=${OUTPUT_DIR} \
95+
--env DATASET_DIR=${DATASET_DIR} \
96+
--env PRETRAINED_MODEL=${PRETRAINED_MODEL} \
97+
--env label=${label} \
98+
--env http_proxy=${http_proxy} \
99+
--env https_proxy=${https_proxy} \
100+
--env no_proxy=${no_proxy} \
101+
--volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
102+
--volume ${PRETRAINED_MODEL}:${PRETRAINED_MODEL} \
103+
--volume ${DATASET_DIR}:${DATASET_DIR} \
104+
${DOCKER_ARGS} \
105+
${IMAGE_NAME} \
106+
/bin/bash $SCRIPT
107+
```
108+
109+
## Documentation and Sources
110+
111+
[GitHub* Repository](https://github.com/IntelAI/models/tree/master/dockerfiles/model_containers)
112+
113+
## Support
114+
Support for Intel® Extension for PyTorch* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for PyTorch* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-pytorch/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.
115+
116+
## License Agreement
117+
118+
LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the [license file](https://github.com/IntelAI/models/tree/master/third_party) for additional details.
Lines changed: 109 additions & 0 deletions
@@ -0,0 +1,109 @@
1+
# Running YOLOv4 inference on Intel® Data Center GPU Flex Series using Intel® Extension for PyTorch*
2+
3+
4+
## Overview
5+
6+
This document has instructions for running YOLOv4 inference using Intel(R) Extension for PyTorch* with Intel(R) Data Center GPU Flex Series.
8+
9+
## Requirements
10+
| Item | Detail |
11+
| ------ | ------- |
12+
| Host machine | Intel® Data Center GPU Flex Series |
13+
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
14+
| Software | Docker* Installed |
15+
16+
## Download Datasets
17+
18+
Download and extract the 2017 training/validation images and annotations from the
19+
[COCO dataset website](https://cocodataset.org/#download) to a `coco` folder
20+
and unzip the files. After extracting the zip files, your dataset directory
21+
structure should look something like this:
22+
```
23+
coco
24+
├── annotations
25+
│ ├── captions_train2017.json
26+
│ ├── captions_val2017.json
27+
│ ├── instances_train2017.json
28+
│ ├── instances_val2017.json
29+
│ ├── person_keypoints_train2017.json
30+
│ └── person_keypoints_val2017.json
31+
├── train2017
32+
│ ├── 000000454854.jpg
33+
│ ├── 000000137045.jpg
34+
│ ├── 000000129582.jpg
35+
│ └── ...
36+
└── val2017
37+
├── 000000000139.jpg
38+
├── 000000000285.jpg
39+
├── 000000000632.jpg
40+
└── ...
41+
```
42+
The parent of the `annotations`, `train2017`, and `val2017` directory (in this example `coco`)
43+
is the directory that should be used when setting the `image` environment
44+
variable for YOLOv4 (for example: `export image=/home/<user>/coco/val2017/000000581781.jpg`).
45+
In addition, set the `size` environment variable to match the size of the image
46+
(for example: `export size=416`)
47+
48+
## Pretrained Model
49+
50+
You need to download the pretrained weights from yolov4.pth (https://pan.baidu.com/s/1ZroDvoGScDgtE1ja_QqJVw, extraction code: xrq9) or yolov4.pth (https://drive.google.com/open?id=1wv_LiFeCRYwtpkqREPeI13-gPELBDwuJ) to any directory of your choice, and set the `PRETRAINED_MODEL` environment variable to point to it.
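Putting the environment setup for this workload together (values are the examples used in this document; adjust the paths for your system):

```
export image=/home/<user>/coco/val2017/000000581781.jpg
export size=416
export PRETRAINED_MODEL=<path to downloaded yolov4 model>
```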
51+
52+
## Quick Start Scripts
53+
54+
| Script name | Description |
55+
|-------------|-------------|
56+
| `inference_with_dummy_data.sh` | Inference with int8 batch_size 64 on dummy data |
57+
58+
## Run Using Docker
59+
60+
### Set up Docker Image
61+
62+
```
63+
docker pull intel/image-recognition:pytorch-flex-gpu-yolov4-inference
64+
```
65+
### Run Docker Image
66+
The YOLOv4 inference container includes the scripts, model, and libraries needed to run int8 inference. To run the `inference_with_dummy_data.sh` quickstart script using this container, you'll need to provide volume mounts for the COCO dataset and an output directory where log files will be written.
67+
68+
```
69+
export PRECISION=int8
70+
export OUTPUT_DIR=<path to output directory>
71+
export DATASET_DIR=<path to the preprocessed coco dataset>
72+
export SCRIPT=quickstart/inference_with_dummy_data.sh
73+
export PRETRAINED_MODEL=<path to downloaded yolov4 model>
74+
75+
IMAGE_NAME=intel/image-recognition:pytorch-flex-gpu-yolov4-inference
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
76+
VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
77+
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')
78+
79+
test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"
80+
81+
docker run \
82+
--group-add ${VIDEO} \
83+
${RENDER_GROUP} \
84+
--device=/dev/dri \
85+
--ipc=host \
86+
--env PRECISION=${PRECISION} \
87+
--env OUTPUT_DIR=${OUTPUT_DIR} \
88+
--env DATASET_DIR=${DATASET_DIR} \
89+
--env http_proxy=${http_proxy} \
90+
--env https_proxy=${https_proxy} \
91+
--env no_proxy=${no_proxy} \
92+
--volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
93+
--volume ${PRETRAINED_MODEL}:${PRETRAINED_MODEL} \
94+
--volume ${DATASET_DIR}:${DATASET_DIR} \
95+
${DOCKER_ARGS} \
96+
${IMAGE_NAME} \
97+
/bin/bash $SCRIPT
98+
```
99+
100+
## Documentation and Sources
101+
102+
[GitHub* Repository](https://github.com/IntelAI/models/tree/master/dockerfiles/model_containers)
103+
104+
## Support
105+
Support for Intel® Extension for PyTorch* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for PyTorch* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-pytorch/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.
106+
107+
## License Agreement
108+
109+
LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the [license file](https://github.com/IntelAI/models/tree/master/third_party) for additional details.

‎quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/devcatalog.md

Lines changed: 33 additions & 20 deletions
@@ -1,11 +1,20 @@
1-
# SSD-MobileNet Inference
1+
# Running SSD-MobileNet Inference on Intel® Data Center GPU Flex Series using Intel® Extension for TensorFlow*
22

3-
## Description
3+
## Overview
44

55
This document has instructions for running SSD-MobileNet inference using
66
Intel(R) Extension for TensorFlow* with Intel(R) Data Center GPU Flex Series.
77

8-
## Datasets
8+
## Requirements
9+
| Item | Detail |
10+
| ------ | ------- |
11+
| Host machine | Intel® Data Center GPU Flex Series |
12+
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
13+
| Software | Docker* Installed |
14+
15+
## Get Started
16+
17+
## Download Datasets
918

1019
Download and preprocess the COCO dataset using the [instructions here](https://github.com/IntelAI/models/blob/master/datasets/coco/README.md).
1120
After running the conversion script you should have a directory with the
@@ -17,40 +26,38 @@ Set the `DATASET_DIR` to point to the TF records directory when running SSD-Mobi
1726

1827
| Script name | Description |
1928
|:-------------:|:-------------:|
20-
| `online_inference` | Runs online inference for int8 precision |
29+
| `online_inference` | Runs online inference for int8 precision |
2130
| `batch_inference` | Runs batch inference for int8 precision |
2231
| `accuracy` | Measures the model accuracy for int8 precision |
2332

24-
## Docker
33+
## Run Using Docker
2534

26-
Requirements:
27-
* Host machine has Intel(R) Data Center GPU Flex Series
28-
* Follow instructions to install GPU-compatible driver [419.40](https://dgpu-docs.intel.com/releases/stable_419_40_20220914.html)
29-
* Docker
30-
31-
### Docker pull command:
35+
### Set up Docker Image
3236

3337
```
34-
docker pull intel/object-detection:tf-atsm-gpu-ssd-mobilenet-inference
38+
docker pull intel/object-detection:tf-flex-gpu-ssd-mobilenet-inference
3539
```
36-
37-
The SSD-MobileNet inference container includes scripts,model and libraries need to run int8 inference. To run the inference quickstart scripts using this container, you'll need to provide volume mounts for the COCO dataset for running `accuracy.sh` script. For `online_inference.sh` and `batch_inference.sh` dummy dataset will be used. You will need to provide an output directory where log files will be written.
40+
### Run Docker Image
41+
The SSD-MobileNet inference container includes the scripts, model, and libraries needed to run int8 inference. To run the `accuracy.sh` quickstart script using this container, you'll need to provide a volume mount for the COCO dataset; `online_inference.sh` and `batch_inference.sh` use a dummy dataset. You will also need to provide an output directory where log files will be written.
3842

3943
```
4044
export PRECISION=int8
4145
export OUTPUT_DIR=<path to output directory>
4246
export DATASET_DIR=<path to the preprocessed coco dataset>
43-
IMAGE_NAME=intel/object-detection:tf-atsm-gpu-ssd-mobilenet-inference
47+
IMAGE_NAME=intel/object-detection:tf-flex-gpu-ssd-mobilenet-inference
48+
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
4449
4550
VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
4651
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')
4752
53+
test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"
54+
4855
docker run \
56+
-v <your-local-dir>:/workspace \
4957
--group-add ${VIDEO} \
5058
${RENDER_GROUP} \
5159
--device=/dev/dri \
5260
--ipc=host \
53-
--privileged \
5461
--env PRECISION=${PRECISION} \
5562
--env OUTPUT_DIR=${OUTPUT_DIR} \
5663
--env DATASET_DIR=${DATASET_DIR} \
@@ -59,16 +66,22 @@ docker run \
5966
--env no_proxy=${no_proxy} \
6067
--volume ${OUTPUT_DIR}:${OUTPUT_DIR} \
6168
--volume ${DATASET_DIR}:${DATASET_DIR} \
62-
--rm --it \
63-
$IMAGE_NAME \
69+
${DOCKER_ARGS} \
70+
${IMAGE_NAME} \
6471
/bin/bash quickstart/<script name>.sh
6572
```
6673

6774
## Documentation and Sources
6875

69-
**Get Started**
76+
[GitHub* Repository](https://github.com/IntelAI/models/tree/master/dockerfiles/model_containers)
77+
78+
## Summary and Next Steps
79+
80+
Now you are inside the container with Python 3.9 and TensorFlow 2.10.0 preinstalled. You can run your own script on the Intel GPU.
7082

71-
[Docker* Repository](https://hub.docker.com/r/intel/image-recognition)
83+
## Support
84+
Support for Intel® Extension for TensorFlow* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for TensorFlow* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-tensorflow/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.
7285

7386
## License Agreement
7487

Lines changed: 95 additions & 0 deletions
@@ -0,0 +1,95 @@
1+
# Optimizations for Intel® Data Center GPU Flex Series using Intel® Extension for TensorFlow*
2+
3+
## Overview
4+
5+
This document has instructions for running TensorFlow using an Intel GPU in a container.
6+
7+
## Requirements
8+
| Item | Detail |
9+
| ------ | ------- |
10+
| Host machine | Intel® Data Center GPU Flex Series |
11+
| Drivers | GPU-compatible drivers need to be installed: [Download Driver 476.14](https://dgpu-docs.intel.com/releases/stable_476_14_20221021.html) |
12+
| Software | Docker* Installed |
13+
14+
## Get Started
15+
16+
### Installing the Intel® Extension for TensorFlow*
17+
#### Docker pull command:
18+
19+
`docker pull intel/intel-extension-for-tensorflow:gpu-flex`
20+
21+
#### Running container:
22+
23+
Run the following commands to start the TF GPU tools container. You can use the `-v` option to mount your local directory into the container.
25+
26+
```
27+
IMAGE_NAME=intel/intel-extension-for-tensorflow:gpu-flex
28+
DOCKER_ARGS=${DOCKER_ARGS:---rm -it}
29+
30+
VIDEO=$(getent group video | sed -E 's,^video:[^:]*:([^:]*):.*$,\1,')
31+
RENDER=$(getent group render | sed -E 's,^render:[^:]*:([^:]*):.*$,\1,')
32+
33+
test -z "$RENDER" || RENDER_GROUP="--group-add ${RENDER}"
34+
35+
docker run \
36+
-v <your-local-dir>:/workspace \
37+
--group-add ${VIDEO} \
38+
${RENDER_GROUP} \
39+
-e http_proxy=$http_proxy \
40+
-e https_proxy=$https_proxy \
41+
-e no_proxy=$no_proxy \
42+
${DOCKER_ARGS} \
43+
${IMAGE_NAME} \
44+
bash
45+
```
46+
47+
#### Verify if GPU is accessible from TensorFlow:
48+
You are now inside the container. Run the following command to verify that the GPU is visible to TensorFlow:
49+
50+
```
51+
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
52+
```
53+
You should be able to see the GPU devices in the list of devices. Sample output looks like this:
54+
55+
```
56+
[name: "/device:CPU:0"
57+
device_type: "CPU"
58+
memory_limit: 268435456
59+
locality {
60+
}
61+
incarnation: 9266936945121049176
62+
xla_global_id: -1
63+
, name: "/device:XPU:0"
64+
device_type: "XPU"
65+
locality {
66+
bus_id: 1
67+
}
68+
incarnation: 15031084974591766410
69+
physical_device_desc: "device: 0, name: INTEL_XPU, pci bus id: <undefined>"
70+
xla_global_id: -1
71+
, name: "/device:XPU:1"
72+
device_type: "XPU"
73+
locality {
74+
bus_id: 1
75+
}
76+
incarnation: 17448926295332318308
77+
physical_device_desc: "device: 1, name: INTEL_XPU, pci bus id: <undefined>"
78+
xla_global_id: -1
79+
]
80+
```
81+
## Documentation and Sources
82+
83+
[GitHub* Repository](https://github.com/intel/intel-extension-for-tensorflow/tree/main/docker)
84+
85+
## Summary and Next Steps
86+
87+
Now you are inside the container with Python 3.9 and TensorFlow 2.10.0 preinstalled. You can run your own script on the Intel GPU.
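As a quick sketch (an assumption, not part of the container documentation), you can confirm that an operation is placed on the XPU before running your own script:

```
python -c "import tensorflow as tf; x = tf.random.uniform((2, 2)); print(tf.matmul(x, x).device)"
```

When the GPU is visible, the printed device string should name an XPU device.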
89+
90+
## Support
91+
Support for Intel® Extension for TensorFlow* is found via the [Intel® AI Analytics Toolkit.](https://www.intel.com/content/www/us/en/developer/tools/oneapi/ai-analytics-toolkit.html#gs.qbretz) Additionally, the Intel® Extension for TensorFlow* team tracks both bugs and enhancement requests using [GitHub issues](https://github.com/intel/intel-extension-for-tensorflow/issues). Before submitting a suggestion or bug report, please search the GitHub issues to see if your issue has already been reported.
92+
93+
## License Agreement
94+
95+
LEGAL NOTICE: By accessing, downloading or using this software and any required dependent software (the “Software Package”), you agree to the terms and conditions of the software license agreements for the Software Package, which may also include notices, disclaimers, or license terms for third party software included with the Software Package. Please refer to the [license file](https://github.com/IntelAI/models/tree/master/third_party) for additional details.
