
float division by zero #395


Closed
tiehuis opened this issue Jun 17, 2017 · 8 comments
Labels
breaking: Implementing this issue could cause existing code to no longer compile or have different behavior.
enhancement: Solving this issue will likely involve adding new logic or components to the codebase.
Milestone
0.1.0

Comments

@tiehuis
Member

tiehuis commented Jun 17, 2017

Currently, float division by zero is caught at runtime and the program aborts when it is encountered. However, most other languages allow float division by zero and instead produce an infinity or NaN, with the FPU's divide-by-zero flag set.
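
For reference, here is a minimal C sketch of that behavior (plain standard C, default floating-point settings on a typical hosted implementation): the division completes, yields an infinity, and only raises the FPU's divide-by-zero flag, which fenv can report.

```c
#include <fenv.h>
#include <stdio.h>

int main(void) {
    volatile double one = 1.0, zero = 0.0; /* volatile: force a runtime division */

    feclearexcept(FE_ALL_EXCEPT);
    double q = one / zero;                 /* no trap, no abort: q becomes +inf */

    printf("1.0 / 0.0 = %f\n", q);
    if (fetestexcept(FE_DIVBYZERO))
        puts("FE_DIVBYZERO flag is set");
    return 0;
}
```

(Strictly speaking, `#pragma STDC FENV_ACCESS ON` is required for well-defined flag inspection, but the point here is just that the division itself is not an error.)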

Division by zero is currently relied on in a few places in the math library, since the code is based on musl, which uses division by zero to signal NaNs; as a result, division-by-zero errors occur even with valid inputs. The implementation here is not a huge concern, though, since I can manually raise whatever exceptions we need and rewrite the code to avoid dividing by zero internally.

The question is more about which behavior is considered correct and which we should follow so that the user gets the least surprise. It is probably more correct to treat float division by zero as valid and not panic when it is encountered.

Thoughts?

@thejoshwolfe
Contributor

Relevant: https://en.wikipedia.org/wiki/IEEE_floating_point#Exception_handling

Under IEEE 754, x/0.0 is Infinity, -Infinity, or NaN depending on whether x > 0, x < 0, or neither, respectively. It also looks like IEEE 754 has multiple "modes" for exception handling. What does that mean for Zig's standard library?
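
To make the three cases concrete, here is a small C sketch (any IEEE 754 `double` behaves this way; the divisor is `volatile` only so the compiler does not fold the divisions at compile time):

```c
#include <stdio.h>

int main(void) {
    volatile double zero = 0.0;

    printf(" 1.0 / 0.0 = %f\n",  1.0 / zero);  /* +inf */
    printf("-1.0 / 0.0 = %f\n", -1.0 / zero);  /* -inf */
    printf(" 0.0 / 0.0 = %f\n",  0.0 / zero);  /* nan  */
    return 0;
}
```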

@tiehuis
Member Author

tiehuis commented Jun 17, 2017

Floating point math is certainly a tricky subject.

There was some discussion in tiehuis/zig-fmath#7, where it was decided, at least for the moment, not to expose an exception-testing interface. This could be added at a later time without much issue, as far as I can see.

For the moment, the system C library should be usable in cases where fenv access is required.

To me it seems we should allow float division by zero to occur, and overflow/underflow as well, even in debug mode. Rust does this by default, for example, and if we are trying to conform to IEEE by default then it makes sense.

andrewrk added the breaking label on Jun 17, 2017
andrewrk added this to the 0.1.0 milestone on Jun 17, 2017
@andrewrk
Member

I agree that there is value in conforming to IEEE. But I also think that dividing by zero is almost always a bug. What do you think about using setFloatMode to control this?

Currently FloatMode.Optimized is the default; this would give a divide-by-zero error, and probably also an error if the arguments to a floating-point operation are NaN or Inf. Of course, in release-fast mode these checks all disappear and the LLVM equivalent of -ffast-math is applied.

If the programmer sets FloatMode.Strict then we can remove all these checks and do IEEE by the book.
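
As a rough illustration of what an optimized/fast-math mode is allowed to assume (this sketch uses C with GCC/Clang's -ffast-math rather than Zig's FloatMode, but the idea is the same): once the compiler may assume NaN and Inf never occur, explicit NaN checks can be folded away, which is exactly why code that needs IEEE behavior by the book has to opt into a strict mode.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    volatile double zero = 0.0;
    double y = zero / zero;   /* NaN under IEEE semantics */

    /* Built normally this prints "got NaN"; built with -ffast-math the
       compiler may assume NaN cannot occur and fold the check to false. */
    if (isnan(y))
        puts("got NaN");
    else
        puts("NaN check was optimized away");
    return 0;
}
```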

andrewrk added the enhancement label on Jun 17, 2017
@tiehuis
Member Author

tiehuis commented Jun 17, 2017

I think that sounds reasonable. Given that FloatMode.Optimized currently assumes arguments are non-NaN and non-Inf, I think we can assume any division by zero that produces Infs or NaNs would be fairly useless in any case.

@thejoshwolfe
Contributor

Is setting modes the best design? I understand there's a precedent for it already, but it might be worth at least discussing before diving in.

What's the scope of the state? Is it compile time or runtime? Is it local to the thread or to the library? Generally, setting a mode is a design smell, so I just wanted to get some confirmation that this is really what we want.

@tiehuis
Member Author

tiehuis commented Jun 18, 2017

As far as I am aware, this is a compile-time option specific to the scope that is passed to the builtin. I'm actually unsure which other scope options are valid (besides this), so I can't comment on that. It would be similar to enabling fast-math in gcc with the optimize("-ffast-math") attribute or pragma; it changes what the compiler is allowed to assume and which math transformations it may perform.

This is a pretty good overview of the exact things that it allows the compiler to assume.
https://gcc.gnu.org/wiki/FloatingPointMath
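
For reference, the gcc mechanism mentioned above looks roughly like this (a sketch only; the attribute is gcc-specific, and the function names are just illustrative):

```c
/* Fast-math assumptions apply to this one function only, roughly analogous
   to setting a float mode for a single scope. */
__attribute__((optimize("-ffast-math")))
double dot_fast(const double *a, const double *b, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];   /* may be reassociated/vectorized more aggressively */
    return sum;
}

/* The rest of the translation unit keeps the default IEEE-conforming rules. */
double dot_strict(const double *a, const double *b, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}
```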

I think this is okay to set via a mode, since it would only need to be adjusted when a user explicitly needs IEEE behavior for NaN/Inf, or is dealing with subnormals or another lesser-known feature. I would expect that to be the minority compared to ordinary float usage.

Also, what is the original rationale for preferring fast-math in the general case? I'm inclined to see it as the better option for typical floating-point usage, but extra justification and reassurance is always good.

@andrewrk
Member

andrewrk commented Jun 19, 2017

Is setting modes the best design? i understand there's a precedent already for it, but it might be worth at least discussing before diving in.

What's the alternative proposal here? Separate operators, as with wrapping integer arithmetic?

To be clear, the existing precedent is setting the floating-point mode at compilation-unit scope using command-line options. Setting the floating-point mode in the source, at block scope, brings something new to the table.

Also, what is the original rationale for preferring fast-math in the general case?

  • we want users' code to go fast
  • we want to help users not have bugs
  • dividing by zero or using infinities in floating-point operations is usually a bug, and if we assume that it is, we can catch the bug in debug and release-safe builds and make the code go faster in release-fast builds.

@tiehuis
Member Author

tiehuis commented Jun 22, 2017

My initial question on this is at least answered. I think the runtime error on division by zero does make sense, as long as we are consistent across all modes (release vs. debug).

I've also made changes in the math library to not rely on this behavior at all, so okay on that end, too.
