10 files changed: +21 −21

Directories touched:
- datasets/data_connector/samples
- image_recognition/tensorflow/resnet50v1_5
- language_modeling/tensorflow/bert_large
- language_translation/tensorflow/transformer_mlperf
- recommendation/tensorflow/dien/training/cpu

First requirements file:
```diff
-mlflow == 2.2.2
+mlflow == 2.3.1  # upgraded to resolve Snyk critical vulnerability
 scikit-learn == 1.2.2
 xlrd == 2.0.1
```
Second requirements file (also pins pandas-gbq):

```diff
-mlflow == 2.2.2
+mlflow == 2.3.1  # upgraded to resolve Snyk critical vulnerability
 scikit-learn == 1.2.2
 xlrd == 2.0.1
 pandas-gbq == 0.19.1
```
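Pins like these can silently regress in later edits. Below is a minimal sketch, not part of this PR, of a check that a requirements list keeps mlflow at or above the patched version; the `MIN_VERSIONS` table and function names are made up for illustration, with only the mlflow floor taken from this diff.

```python
import re

# Hypothetical version floors; only the mlflow entry comes from this PR.
MIN_VERSIONS = {"mlflow": (2, 3, 1)}

def parse_pins(text):
    """Parse 'name == x.y.z' lines into {name: (x, y, z)}."""
    pins = {}
    for line in text.splitlines():
        m = re.match(r"\s*([A-Za-z0-9_.-]+)\s*==\s*([0-9.]+)", line)
        if m:
            pins[m.group(1)] = tuple(int(p) for p in m.group(2).split("."))
    return pins

def below_floor(pins, floors=MIN_VERSIONS):
    """Return the names pinned below their required floor."""
    return [n for n, v in floors.items() if n in pins and pins[n] < v]

reqs = "mlflow == 2.2.2\nscikit-learn == 1.2.2\nxlrd == 2.0.1\n"
print(below_floor(parse_pins(reqs)))  # → ['mlflow']
```

Tuple comparison gives correct ordering for plain numeric versions; a real check would use `packaging.version` to handle pre-releases.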
Workload table README:

```diff
@@ -16,9 +16,9 @@ The table below provides links to run each workload in a docker container. The c
 | Model | Framework | Mode | Documentation | Dataset |
 | ---------------------------- | ---------- | ---------- | ------------------- | ------------ |
-| [ResNet 50 v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | TensorFlow | INT8 Inference | [single card](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/devcatalog.md) [multi-card](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/DEVCATALOG_MULTI_CARD.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) |
-| [ResNet 50 v1.5](https://arxiv.org/pdf/1512.03385.pdf) | PyTorch | INT8 Inference | [single card](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/pytorch/resnet50v1_5/inference/gpu/devcatalog.md) [multi-card](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/pytorch/resnet50v1_5/inference/gpu/DEVCATALOG_MULTI_CARD.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) |
-| [SSD-MobileNet v1](https://arxiv.org/pdf/1704.04861.pdf) | PyTorch | INT8 Inference | [single card](https://github.com/IntelAI/models/blob/master/quickstart/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/devcatalog.md) [multi-card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/DEVCATALOG_MULTI_CARD.md) | [COCO 2017](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md#datasets) |
-| [YOLO v4](https://arxiv.org/pdf/1704.04861.pdf) | PyTorch | INT8 Inference | [single card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/yolov4/inference/gpu/devcatalog.md) [multi-card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/yolov4/inference/gpu/DEVCATALOG_MULTI_CARD.md) | [COCO 2017](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md#datasets) |
-| [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf) | TensorFlow | INT8 Inference | [single card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/devcatalog.md) [multi-card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/DEVCATALOG_MULTI_CARD.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) |
+| [ResNet 50 v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | TensorFlow | INT8 Inference | [single card](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/DEVCATALOG_FLEX.md) [multi-card](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/DEVCATALOG_MULTI_CARD.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) |
+| [ResNet 50 v1.5](https://arxiv.org/pdf/1512.03385.pdf) | PyTorch | INT8 Inference | [single card](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/pytorch/resnet50v1_5/inference/gpu/DEVCATALOG_FLEX.md) [multi-card](https://github.com/IntelAI/models/blob/master/quickstart/image_recognition/pytorch/resnet50v1_5/inference/gpu/DEVCATALOG_MULTI_CARD.md) | [ImageNet 2012](https://github.com/IntelAI/models/tree/master/datasets/imagenet/README.md) |
+| [SSD-MobileNet v1](https://arxiv.org/pdf/1704.04861.pdf) | PyTorch | INT8 Inference | [single card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/DEVCATALOG.md) [multi-card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/DEVCATALOG_MULTI_CARD.md) | [COCO 2017](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md#datasets) |
+| [YOLO v4](https://arxiv.org/pdf/1704.04861.pdf) | PyTorch | INT8 Inference | [single card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/yolov4/inference/gpu/DEVCATALOG.md) [multi-card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/yolov4/inference/gpu/DEVCATALOG_MULTI_CARD.md) | [COCO 2017](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md#datasets) |
+| [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf) | TensorFlow | INT8 Inference | [single card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/DEVCATALOG.md) [multi-card](https://github.com/IntelAI/models/blob/master/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/DEVCATALOG_MULTI_CARD.md) | [COCO 2017 validation dataset](https://github.com/IntelAI/models/tree/master/datasets/coco#download-and-preprocess-the-coco-validation-images) |
```
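The removed "single card" targets carried two recurring problems: a doubled `quickstart/` path segment and a lowercase `devcatalog.md` page name. A hedged sketch of a lint pass one could run over such table rows to flag both patterns; the patterns come from this diff, while `suspect_links` and the regex are invented for illustration.

```python
import re

# Extract link targets from markdown [text](url) syntax.
LINK_RE = re.compile(r"\[[^\]]*\]\(\s*(\S+)\s*\)")

def suspect_links(row):
    """Flag link targets that show either problem fixed in this diff:
    a repeated path segment, or a lowercase devcatalog page name."""
    bad = []
    for url in LINK_RE.findall(row):
        segments = url.split("/")
        if any(a == b for a, b in zip(segments, segments[1:])):
            bad.append((url, "repeated path segment"))
        elif url.endswith("devcatalog.md"):
            bad.append((url, "lowercase devcatalog name"))
    return bad

row = ("| [SSD-MobileNet v1](https://github.com/IntelAI/models/blob/master/"
       "quickstart/quickstart/object_detection/pytorch/ssd-mobilenet/"
       "inference/gpu/devcatalog.md) |")
for url, why in suspect_links(row):
    print(why)  # → repeated path segment
```

A check like this catches only the two shapes above; it does not verify that the linked files actually exist in the repository.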
ResNet50 v1.5 inference README:

````diff
@@ -5,7 +5,7 @@ This document has instructions for running ResNet50 v1.5 inference using Intel-o
 ## Pull Command
 ```
-docker pull intel/image-recognition:spr-resnet50v1-5-inference
+docker pull intel/image-recognition:tf-spr-resnet50v1-5-inference
 ```
 
 <table>
@@ -72,7 +72,7 @@ docker run --rm \
   --privileged --init -it \
   --shm-size 8G \
   -w /workspace/tf-spr-resnet50v1-5-inference \
-  intel/image-recognition:spr-resnet50v1-5-inference \
+  intel/image-recognition:tf-spr-resnet50v1-5-inference \
   /bin/bash quickstart/${SCRIPT}
 ```
````
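Before running a renamed image, it can help to print the assembled command first. A hypothetical dry-run sketch: the tag and workdir come from this PR, while `inference.sh` is only a placeholder default; the real quickstart script names come from each model's README.

```shell
# Dry run: assemble and print the corrected command instead of executing it.
IMAGE="intel/image-recognition:tf-spr-resnet50v1-5-inference"
SCRIPT="${SCRIPT:-inference.sh}"   # assumption: actual script names vary per workload
CMD="docker run --rm --privileged --init -it --shm-size 8G \
-w /workspace/tf-spr-resnet50v1-5-inference ${IMAGE} /bin/bash quickstart/${SCRIPT}"
echo "${CMD}"
```

Dropping the `echo` (or piping the string to `sh`) would execute the container as the README does.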
ResNet50 v1.5 training README:

````diff
@@ -5,7 +5,7 @@ This document has instructions for running ResNet50 v1.5 training using Intel-op
 ## Pull Command
 ```
-docker pull intel/image-recognition:spr-resnet50v1-5-training
+docker pull intel/image-recognition:tf-spr-resnet50v1-5-training
 ```
 
 <table>
@@ -61,7 +61,7 @@ docker run --rm \
   --privileged --init -it \
   --shm-size 8G \
   -w /workspace/tf-spr-resnet50v1-5-training \
-  intel/image-recognition:spr-resnet50v1-5-training \
+  intel/image-recognition:tf-spr-resnet50v1-5-training \
   /bin/bash quickstart/training.sh
 ```
````
BERT Large inference README:

````diff
@@ -6,7 +6,7 @@ Intel-optimized TensorFlow.
 ## Pull Command
 ```
-docker pull intel/language-modeling:spr-bert-large-inference
+docker pull intel/language-modeling:tf-spr-bert-large-inference
 ```
 
 <table>
@@ -79,7 +79,7 @@ docker run --rm \
   --privileged --init -it \
   --shm-size 8G \
   -w /workspace/tf-spr-bert-large-inference \
-  intel/language-modeling:spr-bert-large-inference \
+  intel/language-modeling:tf-spr-bert-large-inference \
   /bin/bash quickstart/${SCRIPT}
 ```
````
BERT Large pretraining README:

````diff
@@ -5,7 +5,7 @@ This document has instructions for running BERT Large Pretraining using Intel-op
 ## Pull Command
 ```
-docker pull intel/language-modeling:spr-bert-large-pretraining
+docker pull intel/language-modeling:tf-spr-bert-large-pretraining
 ```
 
 <table>
@@ -68,7 +68,7 @@ docker run --rm \
   --privileged --init -it \
   --shm-size 8G \
   -w /workspace/tf-spr-bert-large-pretraining \
-  intel/language-modeling:spr-bert-large-pretraining \
+  intel/language-modeling:tf-spr-bert-large-pretraining \
   /bin/bash quickstart/pretraining.sh
 ```
````
Transformer MLPerf inference README:

````diff
@@ -6,7 +6,7 @@ Intel-optimized TensorFlow.
 ## Pull Command
 ```
-docker pull intel/language-translation:spr-transformer-mlperf-inference
+docker pull intel/language-translation:tf-spr-transformer-mlperf-inference
 ```
 
 <table>
@@ -64,7 +64,7 @@ docker run --rm \
   --privileged --init -it \
   --shm-size 8G \
   -w /workspace/tf-spr-transformer-mlperf-inference \
-  intel/language-translation:spr-transformer-mlperf-inference \
+  intel/language-translation:tf-spr-transformer-mlperf-inference \
   /bin/bash quickstart/${SCRIPT}
 ```
````
Transformer MLPerf training README:

````diff
@@ -6,7 +6,7 @@ using Intel-optimized TensorFlow.
 ## Pull Command
 ```
-docker pull intel/language-translation:spr-transformer-mlperf-training
+docker pull intel/language-translation:tf-spr-transformer-mlperf-training
 ```
 
 <table>
@@ -55,7 +55,7 @@ docker run --rm \
   --privileged --init -it \
   --shm-size 8G \
   -w /workspace/tf-spr-transformer-mlperf-training \
-  intel/language-translation:spr-transformer-mlperf-training \
+  intel/language-translation:tf-spr-transformer-mlperf-training \
   /bin/bash quickstart/training.sh
 ```
````
DIEN training README:

````diff
@@ -5,7 +5,7 @@ This document has instructions for running DIEN training using Intel-optimized T
 ## Pull Command
 ```
-docker pull intel/recommendation:spr-dien-training
+docker pull intel/recommendation:tf-spr-dien-training
 ```
 
 <table>
@@ -69,7 +69,7 @@ docker run --rm \
   --privileged --init -it \
   --shm-size 8G \
   -w /workspace/tf-spr-dien-training \
-  intel/recommendation:spr-dien-training \
+  intel/recommendation:tf-spr-dien-training \
   /bin/bash quickstart/training.sh
 ```
````
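All of the image references in these READMEs change the same way: a `tf-` framework prefix is inserted after the repository name, matching the `-w /workspace/tf-spr-...` workdirs that already carried it. A small sketch of that rewrite; the function name is invented, and only the before/after tags come from this diff.

```python
def add_framework_prefix(image, prefix="tf-"):
    """Insert a framework prefix into a 'repo:tag' image reference,
    e.g. intel/recommendation:spr-dien-training -> ...:tf-spr-dien-training."""
    repo, _, tag = image.partition(":")
    if not tag or tag.startswith(prefix):
        return image  # untagged, or already prefixed: nothing to do
    return f"{repo}:{prefix}{tag}"

print(add_framework_prefix("intel/recommendation:spr-dien-training"))
# → intel/recommendation:tf-spr-dien-training
```

The already-prefixed check makes the rewrite idempotent, so rerunning it over a partially updated tree is safe.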