
Discuss dynamic linking #53


Closed
jfbastien opened this issue May 12, 2015 · 5 comments

Comments

@jfbastien
Member

We had a meeting with Moz folks today about dynamic linking. I'm writing up meeting notes, and we made good progress. I'm opening this bug as a self-reminder to extract documentation from what we discussed.

Note: we're still targeting dynamic linking as a post-V1 feature, but it can affect some of our decisions, so early agreement on the general direction will help us save time in the medium term and avoid warts in the finalized wasm.

@jfbastien jfbastien self-assigned this May 12, 2015
@jfbastien
Member Author

I'm checking with our security team what we should recommend developers do to get cache hits on download and compile. Something like hosted libraries seems like the right thing.

@jfbastien
Member Author

I discussed this with Justin Schuh from our security team, here are some notes.

Justin asked if we could just wrap wasm in JS and piggyback off of its support for CORS. We want to avoid having any JS-isms in wasm, so that's a no-go.

Luke suggested using the Streams API, treating wasm as a bytecode with macro compression on top, plus LZHAM (which compresses better than regular HTTP or HTTP/2 compression). The stream would decompress and compile during download.

Justin strongly encourages us to support subresource integrity (SRI), so that developers can use Google's (or another CDN's) caching service without having to trust the CDN. The developer would provide a hash of the shared library; the hash would be calculated while the stream is downloading and checked when the download completes, before executing anything.

Note that developers don't have to use SRI; it is optional. This comes in handy if we want to offer developers the ability to say "give me the latest libAwesome.1.x.pso". This would be done through a URL convention such as wasm.example.com/libAwesome.1.latest.pso. Applications already check certs through HTTPS, so trust in the CDN is established, but SRI doesn't work in that case because developers are asking for libraries that may not have existed when they created the application.
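
For concreteness, here's a minimal sketch of the check described above. The function name, URL handling, and digest parameter are illustrative; fetch() and crypto.subtle.digest() are standard web APIs. A real implementation would hash the stream incrementally during download rather than after it, as Justin describes.

```ts
// Sketch: download a shared library and verify a developer-supplied SHA-384
// digest before anything is compiled or run. Names here are hypothetical.
async function fetchLibraryWithIntegrity(
  url: string,
  expectedDigestBase64: string, // digest the developer ships with the app
): Promise<ArrayBuffer> {
  const response = await fetch(url, { mode: "cors" });
  const bytes = await response.arrayBuffer();

  // Hash the complete download; a production version would hash chunk by
  // chunk so verification finishes as the download does.
  const digest = await crypto.subtle.digest("SHA-384", bytes);
  const actual = btoa(String.fromCharCode(...new Uint8Array(digest)));
  if (actual !== expectedDigestBase64) {
    throw new Error(`integrity check failed for ${url}`);
  }
  return bytes; // only now is it safe to hand the bytes to the compiler
}
```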

Justin says the best bet is to use CORS. Off-origin includes from something similar to hosted libraries would work, but Justin suggests using a dedicated subdomain such as wasm.googleapis.com.
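
As a rough illustration of what such an off-origin include could look like from the client side (the subdomain and filename just follow the conventions mentioned above):

```ts
// Hypothetical cross-origin library fetch; the CDN must respond with an
// appropriate Access-Control-Allow-Origin header for this to succeed.
const response = await fetch(
  "https://wasm.googleapis.com/libAwesome.1.latest.pso",
  { mode: "cors" },
);
```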

@kg
Contributor

kg commented May 13, 2015

What would the Streams API and in-content LZHAM actually look like in production? Would it still provide load-time gains, or would we get nailed by it? It seems to imply multiple round trips between native and JS before the bytecode actually gets to a VM, even once the polyfill is gone. (Is the idea that the polyfill provides all those functions, and then they all move into the JS VM as a single functional unit?)

@lukewagner
Member

@jfbastien Great to hear all that information. I think if we make network fetches as indistinguishable from JS (at the module loader pipeline level) as we can, we can basically draft off of existing work for JS on these issues. I haven't heard anything about SRI for module imports, but hopefully we could keep that an orthogonal problem and solve it for all module loads.

@kg The Streams API should deliver chunks of bits (as ArrayBuffers), and the decoded data would be fed to the native decoder as chunks of bits (also ArrayBuffers). My plan is to integrate lzham as an option in the polyfill so that we can do minimal copying: basically a loop fusion of the lzham loop and the wasm-to-asm.js decode loop, no worse (other than raw lzham-decode time) than the polyfill today. The polyfill would thus fetch data using either XHR or Streams and feed the result to the browser as either wasm or asm.js, via Blob or Stream.

In the final state, we have built-in stream decoders for popular compression formats, so we can (from code, not via HTTP Content-Encoding) create a stream from the network into native decompression into the native decoder; WebAssembly thus stays orthogonal to generic compression.
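
A rough sketch of that fused loop, with the lzham binding and the incremental wasm-to-asm.js decoder stubbed out as hypothetical declarations; only the ReadableStream reader calls are the standard fetch API, everything else is a stand-in for the polyfill pieces described above:

```ts
// Hypothetical stand-ins for the polyfill's lzham build and its incremental
// wasm-to-asm.js decoder; the data flow below is the point, not the names.
declare const lzham: { decodeChunk(compressed: Uint8Array): Uint8Array };
declare function makeWasmDecoder(): {
  consume(bytes: Uint8Array): void; // decode as bytes arrive
  finish(): void;                   // hand the result to the engine
};

// Fuse the network read, lzham decompression, and wasm decode into one loop,
// so no stage ever buffers the whole module.
async function streamCompile(url: string): Promise<void> {
  const reader = (await fetch(url)).body!.getReader();
  const decoder = makeWasmDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    decoder.consume(lzham.decodeChunk(value));
  }
  decoder.finish();
}
```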

@jfbastien
Member Author

We have dynamic linking.
