
Suggestion - Make Machine Learning Models explainable by design with ML.NET #511

Closed
@tauheedul

Description


It's often difficult to understand how machine learning applications arrive at a decision. Many developers reuse sample models without knowing how they work, so the model remains a black box to them.

This is an opportunity for ML.NET to stand out and automatically make models explainable.

  • The ML.NET framework could keep a stack trace of some kind that records an audit of each decision
  • Including how confident the model was in that decision (a rating or percentage)
  • Along with a fairness rating that evaluates the bias contained in the data supplied to the model
  • This could be output to the application on request, much like you can output the trace of an Exception
  • Extend these inspection abilities into Visual Studio so you can examine what third-party models are doing (just like ReSharper's decompile capability for libraries)

A framework that automatically keeps a self-audit of its decisions would be well ahead of the rest and could help developers understand what a model is doing under the hood, especially if they are relying on models supplied by third parties.
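As a rough illustration of the per-prediction audit being proposed, the sketch below uses explainability pieces that already exist in ML.NET 1.x: a calibrated classifier exposes a probability (the "confidence" in a decision), the CalculateFeatureContribution transform surfaces per-feature contributions for individual predictions, and PermutationFeatureImportance gives a global view of which inputs the model leans on. The column names, trainer choice, and class name are placeholder assumptions; this is only a sketch of the direction, not the automatic, framework-wide audit requested here.

```csharp
// Minimal sketch, assuming ML.NET 1.x. Column names ("Age", "Income", "Label")
// are hypothetical; the idea is that a calibrated classifier already emits a
// per-prediction Probability ("confidence"), and per-feature contributions can
// be surfaced with the feature-contribution transform.
using Microsoft.ML;

public static class ExplainabilitySketch
{
    public static void Explain(MLContext mlContext, IDataView trainingData)
    {
        // Featurize: assume trainingData has float columns "Age" and "Income"
        // and a boolean "Label" column.
        var featurized = mlContext.Transforms
            .Concatenate("Features", "Age", "Income")
            .Fit(trainingData)
            .Transform(trainingData);

        // Train a simple linear classifier; linear models support
        // feature-contribution calculation out of the box.
        var model = mlContext.BinaryClassification.Trainers
            .SdcaLogisticRegression(labelColumnName: "Label", featureColumnName: "Features")
            .Fit(featurized);

        // Per-prediction explanation: adds a "FeatureContributions" column that
        // records how much each input pushed the score up or down, alongside the
        // calibrated Probability column the model itself produces.
        var explained = mlContext.Transforms
            .CalculateFeatureContribution(model, normalize: false)
            .Fit(featurized)
            .Transform(model.Transform(featurized));

        // Global view: permutation feature importance, a rough proxy for which
        // inputs the model leans on (a starting point for a bias/fairness review).
        var pfi = mlContext.BinaryClassification.PermutationFeatureImportance(
            model, featurized, labelColumnName: "Label", permutationCount: 10);
    }
}
```

The gap this suggestion points at is that none of this happens automatically, and none of it works against opaque third-party models.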

This could boost ML development with ML.NET, and it is exactly the kind of thing that made .NET such an easy framework to work with.

    Labels

    enhancement (New feature or request), usability (Smoothing user interaction or experience)
