Conversation

steffenlarsen (Contributor)

This commit adds functions for handling inter-process communication (IPC) of device USM memory allocations. Support for these functions is implemented for both Level Zero adapters and the CUDA adapter.

Signed-off-by: Larsen, Steffen <[email protected]>
@steffenlarsen (Contributor, Author)

Draft SYCL extension building on this: #20018

@@ -439,3 +440,10 @@ struct ur_mem_handle_t_ : ur::cuda::handle_base {
}
}
};

struct ur_exp_ipc_mem_handle_t_ {
umf_memory_pool_handle_t UMFPool;
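// Per the answer below: the pool captured when the handle is created (or
// rebuilt from data), later used by urIPCOpenMemHandleExp to obtain the
// UMF IPC handler.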
@steffenlarsen (Contributor, Author)

Asked by @vinser52 in #20018 (comment):

> Why do you need UMFPool to be part of the handle?

@steffenlarsen (Contributor, Author)

The use case for IPC handles is two-fold (a hedged sketch of each flow follows its list):

Owner:

  1. Create a handle with urIPCGetMemHandleExp. This uses the pool tied to the USM pointer argument.
  2. The pool should not be used after this, though theoretically an implementation could allow a call to urIPCOpenMemHandleExp, even if doing so in the same process makes little sense.
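
A minimal owner-side sketch of these two steps, assuming UR-style signatures: only the urUSMDeviceAlloc call and the *Exp function names are taken from UR and this thread, while the parameter lists of the *Exp entry points and the CHECK macro are assumptions.

```cpp
#include <ur_api.h>

// Hypothetical error-checking macro for brevity.
#define CHECK(call)                                                            \
  do {                                                                         \
    ur_result_t Res = (call);                                                  \
    if (Res != UR_RESULT_SUCCESS)                                              \
      return Res;                                                              \
  } while (0)

ur_result_t shareAllocation(ur_context_handle_t hContext,
                            ur_device_handle_t hDevice) {
  // Step 1: allocate device USM; urIPCGetMemHandleExp uses the pool tied
  // to this pointer.
  void *Ptr = nullptr;
  CHECK(urUSMDeviceAlloc(hContext, hDevice, /*pUSMDesc=*/nullptr,
                         /*pool=*/nullptr, /*size=*/4096, &Ptr));

  // Step 2: create the IPC handle from the USM pointer (assumed signature).
  ur_exp_ipc_mem_handle_t hIPCMem = nullptr;
  CHECK(urIPCGetMemHandleExp(hContext, Ptr, &hIPCMem));

  // The handle's serialized bytes would now be shipped to the consumer
  // process over an OS channel (pipe, socket, ...); the query that exposes
  // those bytes is not shown in this thread and is elided here.
  return UR_RESULT_SUCCESS;
}
```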

Consumer:

  1. Create a handle from the passed data using urIPCCreateMemHandleFromDataExp. This uses the default device memory pool associated with the given context and device.
  2. Open the handle using urIPCOpenMemHandleExp. This uses the pool retrieved in the previous step to get the UMF IPC handler. Without it, I believe we would need to add the device to this API as well, and then the device could differ between the two steps.
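
And a matching consumer-side sketch; again, the parameter lists are assumptions in UR style, and only the function names come from this thread.

```cpp
#include <ur_api.h>

ur_result_t openSharedAllocation(ur_context_handle_t hContext,
                                 ur_device_handle_t hDevice,
                                 const void *pIPCData, size_t dataSize,
                                 void **ppMem) {
  // Step 1: rebuild an IPC handle from the received bytes. Per the answer
  // above, this binds the handle to the default device memory pool of the
  // given context and device.
  ur_exp_ipc_mem_handle_t hIPCMem = nullptr;
  ur_result_t Res = urIPCCreateMemHandleFromDataExp(hContext, hDevice,
                                                    pIPCData, dataSize,
                                                    &hIPCMem);
  if (Res != UR_RESULT_SUCCESS)
    return Res;

  // Step 2: open the handle. The pool captured in step 1 supplies the UMF
  // IPC handler, which is why no device parameter is needed at this point.
  return urIPCOpenMemHandleExp(hContext, hIPCMem, ppMem);
}
```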
