Loads, stores, memory types, and conversions #326
Comments
A variation on approach 2 would be most consistent with the conversion operators as they are presently described in AstSemantics:
The convention used by the conversion operators is
I like solution 1 that you propose, overall. A few issues though:
IIUC, the differences between Approach 2 and what is currently in AstSemantics.md are:
and the point of the second bullet is that otherwise we're being asymmetric between int and float types. That seems fair. I also like proposal 1 better for the reasons you gave. For the "far" conversions (

Agreed with @AndrewScheidecker that, even if it's technically redundant, for symmetry with other op names, the result local type should prefix the opcode's name. I'd suggest you use the syntax in Proposal 2, e.g.,

Lastly, both proposals imply adding
@lukewagner globals can already be
Ah, I hadn't noticed that; I had inferred from recent discussion that they had local type, sorry.
Ok, starting with solution 1, ignoring globals (which I hope we can remove from MVP anyway (#154)), and applying the convention of prefixing with result type (mentioned above) produces this set of load/store ops.
Look good to everyone (incl. @sunfishcode, @titzer)? I like that this achieves a reasonable minimalism: to remove the 8->32 and 16->32 coercive ops, we'd have to introduce int8/int16 local types, which is more work.
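For concreteness, here is a sketch of what such a result-type-prefixed set could look like. This is my own reconstruction, written in the i32/i64/f32/f64 naming and .wat syntax the design later settled on, not the list from the comment above:

```wat
;; Sketch only: narrow accesses are separate opcodes, prefixed by the result's
;; local type, e.g. i32.load8_s/u, i32.load16_s/u, i32.load, i64.load8_s/u,
;; i64.load16_s/u, i64.load32_s/u, i64.load, f32.load, f64.load, and the
;; corresponding i32.store8/16, i32.store, i64.store8/16/32, i64.store,
;; f32.store, f64.store.
(module
  (memory 1)
  (func (export "demo") (param $addr i32) (result i32)
    ;; store the low 8 bits of -1, then load them back sign-extended to i32
    (i32.store8 (local.get $addr) (i32.const -1))
    (i32.load8_s (local.get $addr))))
```

The narrow loads come in signed and unsigned variants, while the narrow stores simply wrap, which is why the store side needs fewer opcodes.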
@lukewagner this mostly lgtm, except:
lgtm
lgtm
Let's just look at the PR, which I'll write in a bit; it should be a minimal change to the Linear Memory section.
@lukewagner it's not clear that we'll want

I think you're suggesting we go with the latter?

Indexing: I mean base+offset on load/store.

I'm asking about these other issues because @rossberg-chromium wants to revisit load/store. I want to make sure we do so for real! I'm OK with these, and want to make sure we're keeping them knowingly.

Assuming all of the above, the corresponding PR would lgtm.
@jfbastien I'd like to keep the specific topic @rossberg-chromium filed this issue about, and which we discussed and agreed on, separate from float16 and indexing.
Regarding
The 'indexing' in these memory access operations might need some more elaboration. When bounds checking is considered, it can become important (for performance reasons) to distinguish whether the pre-offset value is expected to be positive. For example, some code generates a signed index (say, as the natural result of a computation), then adds an offset to make its range positive, and then looks up this index in a table which might be at a fixed address. Wasm has no concept of pointers, so it does not know whether an added offset is applied to a potentially signed index or to a pointer that is expected to be positive. It could hurt performance if a wasm code generator folded these two offsets together; perhaps the load/store index could be defined as expecting a positive source argument, otherwise a slow path might be taken in bounds checking.
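To illustrate the folding concern, here is a hedged sketch of my own, written with the constant offset immediates of current .wat syntax (which postdates this discussion). Assume a 256-entry i32 table at linear-memory address 1024, indexed by a signed value that the producer biases by 128 to make it non-negative:

```wat
(module
  (memory 1)
  ;; Keeping the bias in the dynamic operand: the address operand is always
  ;; non-negative, so a simple unsigned bounds check on it is cheap.
  (func $lookup (param $i i32) (result i32)
    (i32.load offset=1024
      (i32.mul (i32.const 4) (i32.add (local.get $i) (i32.const 128)))))
  ;; Folding the table base and the +128 bias into the constant offset
  ;; (1024 + 4*128 = 1536) leaves the raw signed index as the dynamic operand.
  ;; Interpreted as unsigned, a negative $i looks enormous, so the engine may
  ;; need a slower bounds-check path (and with wrapping i32 arithmetic the two
  ;; forms are not even equivalent for negative $i).
  (func $lookup_folded (param $i i32) (result i32)
    (i32.load offset=1536
      (i32.mul (i32.const 4) (local.get $i)))))
```

Whether a producer should keep the bias in the dynamic operand, rather than folding it into the immediate, is exactly the kind of expectation the comment above suggests spelling out.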
I think this issue is resolved now.
The current AstSemantics is somewhat baroque and inconsistent regarding loads and stores:
The way I see it, there would be two possible approaches for a more consistent design:
1. Loads and stores with memory types, but minimal set of conversions
In this approach,
That is, there would be opcodes
In the (presumably rarer) case that other combinations are needed, the respective conversion operators are readily available.
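As a rough, hedged illustration of the shape of this approach (the opcode names below are hypothetical, invented only for the example; the actual opcode list from this proposal is not shown above):

```
;; Hypothetical names, for illustration only.
;; A signed byte load yields the natural local type for that memory type:
(int32.load8_s (local.get $addr))     ;; int8 memory -> int32 local
;; Loading the same byte into an int64 composes the load with an ordinary
;; conversion operator instead of requiring a dedicated int64 byte-load opcode:
(int64.extend_s/int32 (int32.load8_s (local.get $addr)))
```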
2. Loads and stores with local types + memory sizes, all conversions built-in
This is a variation of what was discussed in #82.
In this approach
The opcodes would be
This scheme requires more opcodes, but saves extra use of conversions.
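For contrast, a similarly hedged sketch of approach 2 (again with hypothetical names): every combination of local type, memory size, and signedness gets its own opcode, so the byte-into-int64 case from the previous sketch needs no separate conversion:

```
;; Hypothetical names, for illustration only.
(int64.load8_s (local.get $addr))     ;; int8 memory -> int64 local, one opcode
;; The cost is a larger opcode set: int32.load8_s, int32.load8_u,
;; int32.load16_s, ..., int64.load32_u, float32.load, float64.load,
;; plus the matching store variants.
```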
Additional types
In both schemes it would be easy to add support for additional types, e.g. a float16 memory type. In approach 1, it would introduce opcodes
In approach 2:
Comparison
Approach 1 needs fewer opcodes (especially when adding more types) and has less redundancy with the conversion operators. Approach 2 is a bit more explicit about the sizes and closer to what is currently in AstSemantics. Personally, I think approach 1 is more attractive and less bloated.