Fix error when float variable value is greater than 10^15-1 #48
Conversation
Force-pushed from 29f7757 to 908a842
Force-pushed from 908a842 to 9006535
Thanks. LGTM.
2^53 or 10^15? Sure, outside the [-2^53; 2^53] range the gap between adjacent representable values is greater than 1.0 (not all integer numbers are representable outside this range), but the GraphQL standard does not obligate or instruct us to impose a particular precision. Moreover, it links to the IEEE 754 standard, which allows such double-precision floats. Regarding
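The boundary being discussed is easy to check empirically; a minimal Python sketch (mine, not from the thread), assuming `float` is an IEEE 754 double:

```python
import math

# 2^53 is the last point up to which every integer is exactly
# representable as an IEEE 754 double.
assert float(2**53) == 2**53

# One step past the boundary, adjacent doubles are 2.0 apart,
# so 2^53 + 1 rounds back onto 2^53.
assert float(2**53 + 1) == float(2**53)

# Far larger values are still perfectly valid finite doubles:
# the exponent takes the range up to roughly 1.8 * 10^308.
assert math.isfinite(2.0**54)
assert math.isfinite(1e308)
```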
I already summarized it above.
What do you suggest?
I don't understand why, say, 2^54 is not considered a double-precision floating-point number. Can you clarify?
@Totktonada Here (http://spec.graphql.org/October2021/#sec-Float) the spec points to https://en.wikipedia.org/wiki/IEEE_754, where a double-precision floating-point value may have up to 53 significand bits, a.k.a. the mantissa. Correct me if I'm wrong.
Playing with exponent values allows representing numbers on the order of 10^308. 53 binary digits in the mantissa does not mean that the range is [-2^53; 2^53-1]. It is just the range where we have enough precision to represent all integer values (without holes). You can look at how double numbers are represented in memory and make sure that numbers above 2^53 are actually representable:

```c
#include <stdio.h>

/* Map a logical big-endian byte index to the actual in-memory index. */
static int idx(int n)
{
    return __FLOAT_WORD_ORDER__ == __ORDER_LITTLE_ENDIAN__ ? 7 - n : n;
}

int main()
{
    double x = 18014398509481984.0; /* 2^54 */
    printf("Decimal representation: %lf\n", x);
    unsigned char *cs = (unsigned char *)&x;
    printf("In memory: %02hhX %02hhX %02hhX %02hhX %02hhX %02hhX %02hhX %02hhX\n",
           cs[idx(0)], cs[idx(1)], cs[idx(2)], cs[idx(3)],
           cs[idx(4)], cs[idx(5)], cs[idx(6)], cs[idx(7)]);
    return 0;
}
```

```
$ cc --std=c11 -Wall -Wextra double-repr.c && ./a.out
Decimal representation: 18014398509481984.000000
In memory: 43 50 00 00 00 00 00 00
```
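As a cross-check, the bytes printed above can be decoded back into the original value; a small Python sketch (illustrative, not part of the thread):

```python
import struct

# The C program prints the double most-significant byte first,
# so ">d" (big-endian double) reverses the transformation.
raw = bytes.fromhex("4350000000000000")
(value,) = struct.unpack(">d", raw)
assert value == 18014398509481984.0 == 2.0**54

# The leading bits 0x435 are the sign bit (0) plus the biased
# exponent 0x435 = 1077; subtracting the bias 1023 yields 54,
# and the all-zero mantissa means exactly 1.0 * 2^54.
assert 0x435 - 1023 == 54
```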
@Totktonada I can't see any principal difference in your solution except for the removal of the 2^53 check for cdata. All other things are a matter of personal taste. Let's close this PR and merge yours.
Closed in favour of #49
According to http://www.lua.org/pil/2.3.html, before this patch, if the mantissa is greater than 100,000,000,000,000, tonumber converts the value not to a number but to cdata (int64_t or uint64_t), while the previous algorithm checked the value type only against "number". So if a value was greater than 10^15-1, there was an error because type(value) ~= 'number'. This patch fixes that misbehavior.
closes: #47
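For context on the 10^15-1 threshold mentioned above: it is well inside the range where IEEE 754 doubles represent every integer exactly, which can be verified with a short Python check (mine, not from the patch):

```python
# 10^15 - 1 is comfortably below 2^53 (about 9.007 * 10^15),
# the exactness boundary for integers stored in an IEEE 754 double.
assert 10**15 - 1 < 2**53

# Hence every integer up to the threshold round-trips through
# a double without loss.
assert float(10**15 - 1) == 10**15 - 1
assert int(float(10**15 - 1)) == 10**15 - 1
```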