Consider fuzzing syntax changes #16448

Open
@brianrourkeboll

Description

Proposal

I wonder whether it might be worth considering setting up some kind of process to fuzz (along the lines of this) any changes to the compiler that could affect either:

  1. The treatment of existing syntax
  2. The treatment of new syntax

I've noticed a pattern of longstanding compiler oddities and inconsistencies, as well as regressions and unintended features accidentally introduced alongside new syntax. Both might have been caught earlier by fuzzing, that is, by generating random programs (FsCheck? Hedgehog?) and asserting whether and how they should compile.

Even some kind of tool that could be run by hand on demand feels like it might be better than nothing.
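To make the idea concrete, here is a minimal, language-agnostic sketch of the approach (in Python rather than F#, purely for illustration): generate random nested programs from a tiny grammar and assert a property that every generated program should satisfy. In a real setup, the oracle would invoke the compiler and assert whether and how the program should compile; the grammar, property, and all names below are hypothetical.

```python
import random

# Hypothetical toy grammar: atoms combined by binary operators,
# nested arbitrarily, standing in for F#'s composable expressions.
ATOMS = ["x", "y", "1", "2"]
OPS = ["+", "-", "*"]

def gen_expr(depth: int, rng: random.Random) -> str:
    """Generate a random expression, nesting up to `depth` levels."""
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(ATOMS)
    left = gen_expr(depth - 1, rng)
    right = gen_expr(depth - 1, rng)
    return f"({left} {rng.choice(OPS)} {right})"

def property_holds(src: str) -> bool:
    """Stand-in oracle. A real harness would feed `src` to the
    compiler and check that it is accepted or rejected as expected;
    here we just check that parentheses stay balanced."""
    return src.count("(") == src.count(")")

def fuzz(trials: int = 1000, seed: int = 0) -> list[str]:
    """Run the generator `trials` times and return any generated
    programs that violate the property (counterexamples)."""
    rng = random.Random(seed)
    return [s for s in (gen_expr(4, rng) for _ in range(trials))
            if not property_holds(s)]
```

Libraries like FsCheck and Hedgehog add the important extras this sketch omits, notably shrinking a failing program down to a minimal counterexample, which is what makes the resulting bug reports actionable.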

Problem examples

Here is a quick, non-exhaustive list of examples that fuzzing might have helped catch sooner:

A language like F# is perhaps more prone to this kind of problem than many other programming languages, since most of F#'s features are designed to be composable. The language's expression- and pattern-based nature, and the fact that expressions, patterns, and types can be nested inside themselves and each other, mean that its syntagmatic surface area is already quite vast, and every new or updated construct makes it bigger still.

I think it's inevitable that we humans will fail to consider all of the potential syntactic implications of any given change to the parser or addition of a new syntactic construct.

Some questions

  1. How do other languages handle this kind of problem?
  2. Is it feasible to build and maintain such a tool, or is it known not to be a worthwhile undertaking?
