How to handle the precision of BigFloats #10040

Closed
@dpsanders

Description

Currently, the precision of BigFloats is determined by a global value stored in the array DEFAULT_PRECISION, which is manipulated with set_bigfloat_precision.
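
For reference, the current workflow looks something like the following sketch (written with the modern name setprecision, which later replaced set_bigfloat_precision; both manipulate the same process-wide default):

    # Sketch of the current global-precision workflow. Every BigFloat
    # constructed reads the single process-wide default.
    setprecision(BigFloat, 100)
    a = BigFloat(3.1)            # constructed with 100 bits
    setprecision(BigFloat, 200)
    b = BigFloat(3.1)            # constructed with 200 bits; `a` keeps its 100
    precision(a), precision(b)   # -> (100, 200)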

This is not very Julian. I have been thinking about two possibilities:

  1. The precision is given explicitly, e.g. as a second argument to the various BigFloat constructors and to the big function:

    a = BigFloat(3.1, 100)
    b = big(3.1, 200)

Here, though, the precision is still hidden inside the object.
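
For illustration, option 1 could be layered on the existing machinery via the function form of setprecision, which installs a precision only for the duration of a closure. A minimal sketch (bigf is a hypothetical name, not existing API):

    # Hypothetical helper implementing option 1: the precision is an
    # explicit second argument rather than the global default.
    bigf(x, prec::Integer) = setprecision(() -> BigFloat(x), prec)

    a = bigf(3.1, 100)           # a 100-bit BigFloat
    b = bigf(3.1, 200)           # a 200-bit BigFloat
    precision(a), precision(b)   # -> (100, 200)

(Later Julia versions added essentially this as a keyword argument, BigFloat(3.1, precision=100).)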

  2. An arguably more Julian approach, which I would favour, is that the precision be a parameter of the type, BigFloat{prec}, so that we could write

    a = BigFloat{100}(3.1)
    b = BigFloat{200}(3.1)

This version would have the advantage that operations on BigFloats could explicitly track the precision of the objects being operated on (which MPFR explicitly states it does not do). E.g., a + b in this example should have only 100 bits of precision. (If I understand correctly, MPFR allows any precision to be specified for the result, but bits beyond the 100th will be incorrect.)
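
As a minimal sketch (in current Julia syntax) of how such precision tracking could look, using a hypothetical wrapper type PFloat in place of the proposed BigFloat{prec}, with the min-precision rule just described:

    # Hypothetical wrapper carrying the precision as a type parameter.
    struct PFloat{prec}
        val::BigFloat
        PFloat{prec}(x) where {prec} =
            new(setprecision(() -> BigFloat(x), prec))
    end

    Base.precision(::PFloat{prec}) where {prec} = prec

    # Track precision through arithmetic: the result cannot be more
    # accurate than the coarser operand, so round to min(P, Q) bits.
    function Base.:+(a::PFloat{P}, b::PFloat{Q}) where {P,Q}
        R = min(P, Q)
        PFloat{R}(setprecision(() -> a.val + b.val, R))
    end

    a = PFloat{100}(3.1)
    b = PFloat{200}(3.1)
    precision(a + b)             # -> 100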

Is there a consensus about whether either of these would be useful? Or am I missing a reason why this would not be possible?

c.c. @simonbyrne, @andrioni, @lbenet
