always detach ArrayBuffer on grow #795
So one potential concern with behaving differently in the face of a shared buffer is that programs built assuming ArrayBuffer would behave differently if later passed a SAB. But since this would normally mean they make the weaker assumption that they need to go back to the module to get a new reference to the memory, this might be ok. Though it would make things trickier for JS code from the outside that wants to manipulate the buffer, since it couldn't just burn in references to the buffer.

I would assume we do want the behavior where all modules that were set up with that memory are updated by the grow. So I think there will still need to be book-keeping around the memory being referenced from two places. Plus the modules themselves are effectively a second, or n-th, reference to the same memory (you store via an exported function and "surprisingly" a buffer somewhere else changes).

But I'm not firmly committed either way. What do others think?
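A minimal sketch of the "go back for a fresh reference" pattern being discussed, assuming a WebAssembly.Memory-style JS API with grow() and a buffer accessor (roughly what JS.md describes); the readByte helper is hypothetical:

```js
// Hypothetical sketch: outside JS avoids burning in a reference to
// memory.buffer and instead re-fetches it on each access, so a grow
// (which may detach the previous ArrayBuffer) never leaves it holding
// a stale view.
const memory = new WebAssembly.Memory({ initial: 1 });

function readByte(offset) {
  // Fresh view every time: robust against grow/detach.
  return new Uint8Array(memory.buffer)[offset];
}

memory.grow(1);           // under detach-on-grow, the old buffer is now dead
console.log(readByte(0)); // still fine: memory.buffer was fetched again
```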
I think that we'll probably want to have memory imports and definitions declare shared-ness and we won't allow mixing shared and unshared. That way, if shared memory adds any overhead (size, codegen, metadata, less aggressive memory access optimizations due to memory model, etc.), this would avoid penalizing the unshared case. If we did, however, allow the mixing, I bet you're right that it wouldn't be a problem in practice b/c the unshared code is dealing with the more restrictive API. As to your second point: you're right that, in some configurations,
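For concreteness, a hedged sketch of declared shared-ness in the JS API; the shared flag below is an assumption in the spirit of this comment, not something JS.md specifies:

```js
// Unshared memory: its buffer is a plain ArrayBuffer.
const unshared = new WebAssembly.Memory({ initial: 1, maximum: 10 });
console.log(unshared.buffer instanceof ArrayBuffer);      // true

// Assumed `shared` flag: a maximum is required (so the engine can
// pre-reserve), and the buffer is a SharedArrayBuffer instead.
const shared = new WebAssembly.Memory({ initial: 1, maximum: 10, shared: true });
console.log(shared.buffer instanceof SharedArrayBuffer);  // true

// A module whose memory import is declared unshared would fail to link
// against `shared`, and vice versa, so the two kinds are never mixed.
```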
Would be great to get @lars-t-hansen's view on this. On racy detach: I think resize should first suspend all wasm threads in the module. That doesn't prevent races (e.g. check allocated size, suspend, use allocated size), but it prevents damage to the VM due to races.
Just to be clear, I'm not proposing racy detach: regardless of what we do for
@lukewagner To clarify, how do you expect grow_memory to work in the following cases:
b) If max is omitted, what will happen when grow_memory is called?
c) If the same memory is shared across more than one module and one of the modules calls grow_memory, what will happen when the other module tries to access memory?
d) If grow_memory is called on an imported memory, is the max from the exporting module supposed to be used?
Thanks!
a) Yes.
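To illustrate case (c) under the always-detach proposal (a sketch, not settled behavior; moduleA and moduleB are placeholders): a grow performed through one module is visible to every module set up with that memory, and only JS code that cached the previous buffer is affected:

```js
const memory = new WebAssembly.Memory({ initial: 1 });
const imports = { env: { memory } };
// const a = await WebAssembly.instantiate(moduleA, imports); // placeholder modules
// const b = await WebAssembly.instantiate(moduleB, imports); // sharing `memory`

const cached = memory.buffer;          // JS reference taken before the grow
memory.grow(1);                        // as if module A executed grow_memory
console.log(cached.byteLength);        // 0: the previous buffer is detached
console.log(memory.buffer.byteLength); // 131072: both modules see 2 pages
```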
Any other comments/objections?
@rossberg-chromium or @titzer?
LGTM.
When memory is grown, a new ArrayBuffer is created to alias the new, bigger memory. This is necessary since ArrayBuffers' lengths are immutable (ish, modulo detachment). So the question is: what happens to the previous buffer? Right now in JS.md:

- with no max specified, the previous buffer is detached
- with max specified, the previous buffer keeps aliasing the same bytes

The latter case was motivated by thinking forward to when we have threads: shared memory will require max (so it can pre-reserve and never do a moving realloc), and the "keep aliasing" semantics avoids what would otherwise be completely racy detachment.

However, another option I don't think we considered is to have the "keep aliasing" semantics only apply to shared memories and to have all unshared memories detach on grow (regardless of max). I think this would be preferable for a few reasons:

- having multiple ArrayBuffers aliasing the same memory doesn't happen anywhere else in JS that I know of, and this could break expectations (SharedArrayBuffer, OTOH, is entirely designed for having multiple JS objects aliasing the same underlying memory)
- having multiple ArrayBuffers aliasing the same memory ends up requiring an extra GC edge (from the dependents to the owner), which means an extra field or weak map, so an iota more complexity/size
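To make the two semantics concrete, a minimal sketch assuming the WebAssembly.Memory JS API; the "keep aliasing" alternative is described in comments since it is the behavior this issue argues should be reserved for shared memories:

```js
const PAGE = 64 * 1024; // wasm page size

// Detach-on-grow (proposed here for all unshared memories, max or not):
const mem = new WebAssembly.Memory({ initial: 1, maximum: 4 });
const oldBuf = mem.buffer;
mem.grow(1);
console.log(oldBuf.byteLength);            // 0: the previous buffer is detached
console.log(mem.buffer.byteLength / PAGE); // 2: a new, bigger buffer

// "Keep aliasing" (current JS.md behavior when max is specified) would
// instead leave oldBuf usable at its old one-page length, so oldBuf and
// mem.buffer would be two ArrayBuffers over the same underlying bytes:
// the multiple-aliasing situation the reasons above want to reserve for
// SharedArrayBuffer.
```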