[OPENCL][TEXTURE] Improved texture memory planning #17571
Conversation
Except for the relay backend changes, everything is reusable for Texture support in Relax.
@tvm-bot rerun
@tvm-bot rerun
include/tvm/runtime/device_api.h
 * \param mem_scope The memory scope of allocated tensor.
 * \return The allocated device pointer.
 */
virtual void* AllocDataSpaceView(Device dev, void* data, ShapeTuple shape, DLDataType dtype,
Given this is specific to OpenCL, let us still strive to keep it within the OpenCL allocator interface instead of going through the DeviceAPI.
I saw this as the clean way to keep changes minimal in the other modules (graph runtime, ndarray, memory manager, etc.).
In Relax too, I am mapping alloc_storage to allocate a cl_buffer, and alloc_tensor creates a view over it through this device API call. (WIP ref: srkreddy1238@a6376b9#diff-847ee73fb0b77db96cce920da6cbae223f6bdb026ea125514122e96630356c9b)
Later, this also opens an easy path for CLML memory management to go through the TVM memory_manager interface, and for features like GMEM (on-chip memory of the Adreno GPU) support in TVM, etc.
Let me know if you have different advice; I can explore the possibilities.
It would be great to start by thinking along the direction of a special allocator: https://github.com/apache/tvm/blob/main/include/tvm/runtime/memory/memory_manager.h
My reading is that the main issue lies in the need to get a Tensor from an existing Buffer in a customized fashion; perhaps we can extend the Allocator interface to enable such a view.
My reading is that the main issue lies in the need to get a Tensor from an existing Buffer in a customized fashion; perhaps we can extend the Allocator interface to enable such a view.

True. The backing buffer is either used as-is, or many image views are created over it based on the memory plan.
Whether the view goes over NDArray or through a special Allocator, we need to reach the OpenCL device API for the final view creation, which happens via the OpenCL call clCreateImage: we create a new cl_mem (image) from an existing cl_mem (buffer) that acts as the backing buffer (a minimal sketch of this call follows this comment).
The current flow is:
storage_pool populated by: Allocator->Empty => NDArray
data_entry_ populated by: NDArray => NDArray::CreateView => DeviceAPI::AllocDataSpaceView => NDArray
We can change this to the Allocator interface by:
A special Allocator (extended from Allocator with a new call for View), registered from the OpenCL Device API at init.
storage_pool: Allocator->Alloc => StorageObj
data_entry_: StorageObj => AllocNDArrayWithScope => Allocator::CreateView (access OpenCLWorkspace and create the view) => NDArray
Is my understanding correct here?
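A minimal OpenCL sketch of the view creation described above, assuming OpenCL 2.0 (or the cl_khr_image2d_from_buffer extension); the helper name, channel format, and flags are illustrative, not the PR's actual code:

#include <CL/cl.h>

// Create an image view (a new cl_mem) over an existing cl_mem buffer so that
// both alias the same device memory. row_pitch_bytes must respect the
// device's image pitch alignment.
cl_mem CreateImageViewOverBuffer(cl_context ctx, cl_mem backing_buffer,
                                 size_t width, size_t height,
                                 size_t row_pitch_bytes, cl_int* err) {
  cl_image_format fmt;
  fmt.image_channel_order = CL_RGBA;
  fmt.image_channel_data_type = CL_HALF_FLOAT;  // a common Adreno texture layout
  cl_image_desc desc = {};
  desc.image_type = CL_MEM_OBJECT_IMAGE2D;
  desc.image_width = width;
  desc.image_height = height;
  desc.image_row_pitch = row_pitch_bytes;
  desc.buffer = backing_buffer;  // the existing buffer backs the new image
  // flags = 0 inherits the access flags from the backing buffer.
  return clCreateImage(ctx, 0, &fmt, &desc, nullptr, err);
}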
In the case of the VM (Relax):
alloc_storage: Allocator->Alloc (always scope "global") => DeviceAPI::AllocDataSpace => StorageObj
alloc_tensor: StorageObj::AllocNDArrayScoped => DeviceAPI::AllocDataSpaceView => NDArray
Ref. AllocNDArrayScoped with a destructor that calls FreeDataSpaceView for cleanup:
srkreddy1238@a6376b9#diff-847ee73fb0b77db96cce920da6cbae223f6bdb026ea125514122e96630356c9b
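A simplified, self-contained sketch of the ownership model described above; the stub types are illustrative placeholders, not TVM's actual NDArray / StorageObj / DeviceAPI classes. The point is that the scoped tensor owns only the view and releases it through FreeDataSpaceView, while the backing buffer stays owned by the storage object:

#include <cstdio>

// Stand-in for the OpenCL DeviceAPI: AllocDataSpaceView would create an image
// (clCreateImage) over the backing buffer, FreeDataSpaceView would release
// just that image (clReleaseMemObject).
struct DeviceAPIStub {
  void* AllocDataSpaceView(void* backing, const char* mem_scope) {
    std::printf("create %s view over %p\n", mem_scope, backing);
    return backing;  // real code would return the new image handle
  }
  void FreeDataSpaceView(void* view) {
    std::printf("release view %p\n", view);
  }
};

// Stand-in for the NDArray returned by alloc_tensor.
struct ScopedTensorView {
  DeviceAPIStub* api;
  void* view;
  ScopedTensorView(DeviceAPIStub* a, void* backing, const char* scope)
      : api(a), view(a->AllocDataSpaceView(backing, scope)) {}
  ~ScopedTensorView() { api->FreeDataSpaceView(view); }  // frees only the view
};

int main() {
  DeviceAPIStub api;
  int backing = 0;  // pretend this is the cl_buffer owned by alloc_storage
  {
    ScopedTensorView t(&api, &backing, "global.texture");
  }  // leaving scope releases the image view; the backing buffer lives on
  return 0;
}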
We might be able, in this case, to have StorageObj refer back to a raw pointer of the allocator, in which case:
StorageObj => AllocNDArrayWithScope => allocator_->CreateView(Storage storage)
The main thing is that you need an allocator-specific dispatch to create such a view, whereas previously we did not have to. That does mean the allocator needs a new virtual function (CreateView).
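A rough sketch of that direction; the class and method names below are illustrative, not the actual tvm::runtime::memory declarations. It only shows the shape of the new virtual dispatch:

#include <cstddef>
#include <string>

struct BufferStub { void* data; size_t nbytes; };  // stand-in for memory::Buffer

class AllocatorSketch {
 public:
  virtual ~AllocatorSketch() = default;
  virtual BufferStub Alloc(size_t nbytes) = 0;
  // New virtual hook: create a scoped view (e.g. an OpenCL image) over
  // existing storage. Default behavior: the tensor simply aliases the buffer.
  virtual void* CreateView(const BufferStub& storage, const std::string& mem_scope) {
    return storage.data;
  }
};

class OpenCLAllocatorSketch : public AllocatorSketch {
 public:
  BufferStub Alloc(size_t nbytes) override {
    // Real code: clCreateBuffer(...), sized with the device pitch alignment.
    return BufferStub{nullptr, nbytes};
  }
  void* CreateView(const BufferStub& storage, const std::string& mem_scope) override {
    // Real code: for texture scopes, clCreateImage with desc.buffer set to the
    // backing cl_mem, returning the image handle.
    return storage.data;
  }
};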
Up for another review: we now have a new allocator for OpenCL, and the allocation goes through the suggested flow.
I have added a packed helper function like DeviceAllocator.opencl, through which the memory manager can query whether the device has a specialized Allocator for a given type, like Naive or Pooled.
We are now clear of DeviceAPI interface modifications. Let me know how it looks now.
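A hedged sketch of how such a helper could be registered and queried; TVM_REGISTER_GLOBAL / Registry::Get are TVM's standard global packed-function mechanism, but the exact signature (allocator type in, opaque handle out) is an assumption sketched from the description above, not the PR's actual code:

#include <tvm/runtime/registry.h>

// Registered from the OpenCL runtime so the generic memory manager can ask
// whether this device provides a specialized allocator for a given kind
// (e.g. Naive / Pooled). A null handle would mean "fall back to the default".
TVM_REGISTER_GLOBAL("DeviceAllocator.opencl")
    .set_body_typed([](int allocator_type) -> void* {
      // Real code would return (or lazily construct) the OpenCL-specific
      // allocator instance for allocator_type.
      return nullptr;
    });

// Query side (inside the memory manager), also sketched:
//   const tvm::runtime::PackedFunc* f =
//       tvm::runtime::Registry::Get("DeviceAllocator.opencl");
//   if (f != nullptr) { void* alloc = (*f)(static_cast<int>(type)); ... }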
@tvm-bot rerun
@tvm-bot rerun
Failed to re-run CI in https://github.com/apache/tvm/actions/runs/13300072214 with response
@tvm-bot rerun
Failed to re-run CI in https://github.com/apache/tvm/actions/runs/13300093925 with response
Motivated by the fact that textures can be allocated over a clBuffer object, and that the size of the backing clBuffer can be computed from the hardware image pitch alignment. This optimizes the overall memory allocation on the device and greatly helps models with large memory requirements. The graph memory planner is improved to not differentiate buffer and texture storage tokens and to reuse them interchangeably. The texture pool in the OpenCL runtime is rebranded as a memory pool that handles allocation for both buffer and image objects. The NDArray-to-DeviceAPI interface is extended with AllocDataSpaceView and FreeDataSpaceView; these new APIs accommodate accessing the same physical memory as clBuffer / clImage objects.
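A minimal sketch of the sizing idea, assuming OpenCL 2.0 headers; the helper is illustrative and not the PR's exact code. Each image row is padded up to the device's pitch alignment so that an image view created over the buffer later is valid:

#include <CL/cl.h>
#include <cstddef>

// Bytes needed for a cl_mem buffer that can back a width x height image view.
// CL_DEVICE_IMAGE_PITCH_ALIGNMENT reports the required row alignment in pixels.
size_t BackingBufferBytes(cl_device_id dev, size_t width_px, size_t height_px,
                          size_t bytes_per_pixel) {
  cl_uint pitch_align_px = 1;
  clGetDeviceInfo(dev, CL_DEVICE_IMAGE_PITCH_ALIGNMENT, sizeof(pitch_align_px),
                  &pitch_align_px, nullptr);
  size_t row_px = (width_px + pitch_align_px - 1) / pitch_align_px * pitch_align_px;
  return row_px * bytes_per_pixel * height_px;  // rows padded to the alignment
}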
@tqchen good for review now :)
Thanks @srkreddy1238, merging so we can have the changes in before relay phases out.