So this is a continuation of the discussion here: #2750.
I'm trying to implement the plan discussed there, but I'm currently a bit confused about a few things.
- To my current understanding, it is `format_float` that produces the actual digits for the input floating-point number; it does not print the decimal dot, and it does not attempt to choose between the fixed and scientific forms. It just prints digits into a buffer, and later on, in another function up the call stack, the output is adjusted according to the precise spec. Is that the right understanding? (See the first sketch below for how I currently picture this.)
- It seems that the `precision` parameter of `format_float` is different from the precision given in the input spec. What exactly is that number? Why, for example, do we decide on shortest-roundtrip output when `precision` is negative? Ultimately, what I need is the number of digits to print, counted from the first nonzero digit. How can I get that number? (See the second sketch below for my working assumption.)
- The case where the input is zero, either positive or negative, returns early at the beginning of `format_float` regardless of the formatting specs, right? So can I assume that the input is not zero if it passed the `if (value <= 0)` branch?
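
For context, here is a minimal sketch of how I currently picture the two-stage design. All names, signatures, and conventions below are my own, not fmt's actual internals, and `snprintf` stands in for the real digit generator:

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>

// Stage 1: emit only the significant digits of `value` (no sign, no dot)
// into `digits`, and return exp10 such that value == 0.<digits> * 10^exp10.
// precision < 0 is what I understand to mean "shortest roundtrip"; this
// stand-in just caps it at 17 digits instead of running a real shortest
// algorithm.
int generate_digits(double value, int precision, std::string& digits) {
  assert(value > 0);  // zero/sign handled earlier by the caller (question 3)
  if (precision < 0) precision = 17;   // shortest-roundtrip stand-in
  char buf[64];
  std::snprintf(buf, sizeof buf, "%.*e", precision - 1, value);
  const char* e = std::strchr(buf, 'e');
  int exp10 = std::atoi(e + 1) + 1;    // convert d.ddd form to 0.ddd form
  for (const char* p = buf; p != e; ++p)
    if (*p != '.') digits += *p;
  return exp10;
}

int main() {
  std::string digits;
  int exp10 = generate_digits(0.00123, 3, digits);
  // Stage 2 (another function up the stack) would now choose the fixed or
  // scientific form and insert the dot based on digits.size() and exp10.
  std::printf("digits=%s exp10=%d\n", digits.c_str(), exp10);  // 123, -2
}
```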
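And on the second question, my working assumption is that "digits counted from the first nonzero digit" follows the usual printf mapping from the spec's presentation type. A hypothetical helper (this does not exist in fmt) to make that concrete:

```cpp
// How many significant digits the format spec asks for, under standard
// printf semantics (my assumption about what format_float ultimately
// needs). exp10 is the decimal exponent from the digit-generation stage,
// i.e. value == 0.<digits> * 10^exp10.
int significant_digits(char type, int spec_precision, int exp10) {
  switch (type) {
    case 'e':  // precision counts digits after the dot, plus one leading digit
      return spec_precision + 1;
    case 'f':  // precision counts digits after the dot, so the significant
               // count depends on magnitude and can be <= 0 for tiny values
      return spec_precision + exp10;
    case 'g':  // precision counts total significant digits (0 acts as 1)
      return spec_precision == 0 ? 1 : spec_precision;
    default:   // no precision given: shortest roundtrip, signaled by -1(?)
      return -1;
  }
}
```

Is that roughly the mapping `format_float` expects, or is its `precision` something else entirely?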