
Try a new formatting algorithm for float/double with a small given precision #3262

Closed
@jk-jeon

Description

This is a continuation of the discussion here: #2750.

I'm trying to implement the plan we discussed, but I'm currently a bit confused about a few things.

  1. To my current understanding, it is format_float that produces the actual digits for the input floating-point number; it does not print the decimal dot, and it does not attempt to choose between the fixed and scientific forms. It just prints digits into a buffer, and later on, in another function up the call stack, the output is adjusted according to the spec (see the sketch after this list). Is that the right understanding?
  2. It seems that the precision parameter of format_float is different from the precision given in the input spec. What exactly is that number? Why, for example, do we produce shortest round-trip output when precision is negative? Ultimately, what I need is the number of digits to print, counted from the first nonzero digit. How can I get that number?
  3. The case where the input is zero, either positive or negative zero, returns early at the beginning of format_float regardless of the format spec, right? So can I assume that the input is not zero once it has passed the if (value <= 0) branch?
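
To make item 1 concrete, here is a minimal sketch of how I currently picture the split between digit generation and final layout. Everything here is hypothetical: digit_result, generate_digits, and layout_digits are illustrative names, not fmt's actual API, and the snprintf-based digit generator is only a stand-in for whatever format_float really does.

```cpp
// Hypothetical sketch (not fmt's actual code) of the split described in
// item 1: a digit generator produces only the significant digits plus a
// decimal exponent; a later layout step inserts the decimal point and
// picks fixed vs. scientific form. Non-negative inputs assumed.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>

struct digit_result {
  std::string digits;  // significant digits only: no sign, no decimal point
  int exp10;           // value ~= 0.<digits> * 10^exp10
};

// Placeholder digit generator: abuses snprintf("%e") purely to obtain
// num_digits significant digits for illustration; the real format_float
// uses its own digit-generation algorithms, not this.
digit_result generate_digits(double value, int num_digits) {
  char buf[64];
  std::snprintf(buf, sizeof buf, "%.*e", num_digits - 1, value);
  digit_result r;
  for (const char* p = buf; *p != 'e'; ++p)
    if (*p >= '0' && *p <= '9') r.digits += *p;  // keep digits, skip '.'
  r.exp10 = std::atoi(std::strchr(buf, 'e') + 1) + 1;
  return r;
}

// Layout step: insert the dot and choose the form, done after digit
// generation. Fixed form assumes 0 < exp10 <= digits.size() for brevity.
std::string layout_digits(const digit_result& d, bool scientific) {
  if (scientific) {
    std::string s(1, d.digits[0]);
    if (d.digits.size() > 1) s += '.' + d.digits.substr(1);
    return s + 'e' + std::to_string(d.exp10 - 1);
  }
  std::string s = d.digits.substr(0, d.exp10);
  if (d.exp10 < static_cast<int>(d.digits.size()))
    s += '.' + d.digits.substr(d.exp10);
  return s;
}

int main() {
  digit_result d = generate_digits(1234.5678, 4);          // digits "1235", exp10 = 4
  std::printf("%s\n", layout_digits(d, false).c_str());    // fixed:      1235
  std::printf("%s\n", layout_digits(d, true).c_str());     // scientific: 1.235e3
}
```

Relatedly, for question 2 my working assumption is that an 'e'-style precision p corresponds to p + 1 significant digits, while an 'f'-style precision corresponds to exp10 + p digits in the convention above, so the count depends on the value itself; I'm not sure whether that is what format_float's precision parameter is supposed to encode.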
