Errors when using the tensorflow or tensorflow-gpu packages #216
What is the recommended development environment? An error occurs when running a simple example.
Hi @marunguy, thanks for bringing this to our attention! We're looking into it and will get back to you with an update as soon as possible. We do intend to support the `tensorflow` and `tensorflow-gpu` packages.
I'm getting a similar error to the above. My environment:

My testing command:

```
python squeezenet.py --mode train --tb_profile --cifar10
```

My output:

```
2022-07-13 14:14:38.916979: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdirectml.0de2b4431c6572ee74152a7ee0cd3fb1534e4a95.so
```
@anammari Please use `tensorflow-cpu` instead of `tensorflow`.
I am getting a similar error with the DirectML plugin for TensorFlow 2.9.1. I want to use the GPU, not the CPU. I am using the latest 2022 Dell XPS 15 with an i9 and an NVIDIA 3050 Ti.
@mehfuzh You can still use the GPU if you install `tensorflow-cpu` instead:

```
pip uninstall tensorflow
pip install tensorflow-cpu
pip install tensorflow-directml-plugin
```
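Once those are installed, a quick sanity check (a minimal sketch, assuming a standard TensorFlow 2.9 setup with the plugin) is to list the devices the plugin registered:

```python
import tensorflow as tf

# The DirectML plugin registers its adapter as a PluggableDevice
# under the "GPU" device type, so it should appear in this list.
print(tf.config.list_physical_devices('GPU'))
```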
@PatriceVignola I'll test it. But I don't want to use the built-in Intel Iris Xe GPU; I want to take advantage of the RTX GPU, which mostly means CUDA for TensorFlow.
@mehfuzh `tensorflow-directml-plugin` runs on any DirectX 12 capable GPU, including your RTX card. The `tensorflow-cpu` package only provides the base framework; GPU execution happens through DirectML.
@PatriceVignola This makes sense. However, after installing these packages I am getting another error.
@mehfuzh Could you open another issue for this error and include the complete output? It would also help if you could provide a simple script that reproduces it.
I ran `pip install tensorflow-cpu==2.9` and then my test script:

```
(tfdml_plugin) tomie@TomieNW:~/ai$ python test.py
2022-09-08 00:23:04.444358: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdirectml.0de2b4431c6572ee74152a7ee0cd3fb1534e4a95.so
2022-09-08 00:23:04.444429: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdxcore.so
2022-09-08 00:23:04.445555: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libd3d12.so
2022-09-08 00:23:06.878862: I tensorflow/c/logging.cc:34] DirectML device enumeration: found 1 compatible adapters.
2022-09-08 00:23:07.208456: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-09-08 00:23:07.209431: I tensorflow/c/logging.cc:34] DirectML: creating device on adapter 0 (AMD Radeon RX 5700)
```

I have an RX 5700. It says it's using that adapter, but why is my CPU temperature ramping up instead of my GPU's? I tried this, though:

```python
import tensorflow as tf

tf.debugging.set_log_device_placement(True)

# Place the tensors on the GPU
with tf.device('/GPU:0'):
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    # Run the matmul on the GPU
    c = tf.matmul(a, b)

print(c)
```

The result is:

```
2022-09-08 00:39:04.278269: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdirectml.0de2b4431c6572ee74152a7ee0cd3fb1534e4a95.so
2022-09-08 00:39:04.278357: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libdxcore.so
2022-09-08 00:39:04.279080: I tensorflow/c/logging.cc:34] Successfully opened dynamic library libd3d12.so
2022-09-08 00:39:06.682054: I tensorflow/c/logging.cc:34] DirectML device enumeration: found 1 compatible adapters.
2022-09-08 00:39:06.930454: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-09-08 00:39:06.931500: I tensorflow/c/logging.cc:34] DirectML: creating device on adapter 0 (AMD Radeon RX 5700)
2022-09-08 00:39:08.256615: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-09-08 00:39:08.256685: W tensorflow/core/common_runtime/pluggable_device/pluggable_device_bfc_allocator.cc:28] Overriding allow_growth setting because force_memory_growth was requested by the device.
2022-09-08 00:39:08.256737: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6939 MB memory) -> physical PluggableDevice (device: 0, name: DML, pci bus id: <undefined>)
2022-09-08 00:39:08.268485: I tensorflow/core/common_runtime/eager/execute.cc:1323] Executing op _EagerConst in device /job:localhost/replica:0/task:0/device:GPU:0
2022-09-08 00:39:08.268829: I tensorflow/core/common_runtime/eager/execute.cc:1323] Executing op _EagerConst in device /job:localhost/replica:0/task:0/device:GPU:0
2022-09-08 00:39:08.269389: I tensorflow/core/common_runtime/eager/execute.cc:1323] Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
tf.Tensor(
[[22. 28.]
 [49. 64.]], shape=(2, 2), dtype=float32)
```

So it is indeed working. It is what it is xD. I don't know what I'm doing, I'm just playing around. I'm following a series (https://www.youtube.com/watch?v=z1PGJ9quPV8) for fun and educational purposes.
Like you said, this is a very simple model, so the overhead of the CPU initializing TensorFlow and copying the tensors to/from the GPU outweighs the benefits of having a GPU. You should see GPU activity ramping up once you start running bigger models ^^
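For instance, here is a minimal sketch of a heavier workload (the matrix sizes and loop count are arbitrary) that should make GPU activity visible:

```python
import tensorflow as tf

# Large random matrices give the GPU real work to do,
# unlike the tiny 2x3 example above.
with tf.device('/GPU:0'):
    a = tf.random.normal([4096, 4096])
    b = tf.random.normal([4096, 4096])
    for _ in range(100):
        c = tf.matmul(a, b)

print(c[0, 0])
```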
The TensorFlow team has no plan at this moment to allow plugins that define devices with the "GPU" string to be used together with the `tensorflow` and `tensorflow-gpu` packages, which already register their own GPU device. We just released version 0.1.0.dev220928, which adds support for these packages.
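If you want to try that build, the install would look like this (assuming the dev release is published on PyPI under that exact version string):

```
pip install tensorflow-directml-plugin==0.1.0.dev220928
```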