
Add an evaluator for Bias and Fairness metrics #1911


Closed
rogancarr opened this issue Dec 18, 2018 · 2 comments
Labels
enhancement New feature or request

Comments

@rogancarr
Contributor

We currently have model evaluators that produce metrics on the predicted label. For practical use of machine learning, it is necessary to have a sense of any biases the model may propagate and any fairness issues the model has. To that end, it would be great to have standard evaluators for bias and fairness metrics.

Related to #511
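
For a concrete picture of what such an evaluator could report, here is a minimal sketch of one common fairness metric, the demographic parity difference: the gap in positive-prediction rates between two groups defined by a sensitive attribute. This is plain C#, not a proposed ML.NET API; the class and method names are illustrative only.

```csharp
// Minimal sketch (illustrative, not an ML.NET API) of one fairness metric
// such an evaluator could report: demographic parity difference.
using System;
using System.Linq;

class FairnessSketch
{
    // predicted: the model's binary predictions; group: sensitive attribute per row.
    static double DemographicParityDifference(bool[] predicted, string[] group)
    {
        // Positive-prediction rate within one group.
        double RateFor(string g) =>
            predicted.Where((p, i) => group[i] == g).Average(p => p ? 1.0 : 0.0);

        var groups = group.Distinct().ToArray();
        // For two groups: the absolute gap in positive-prediction rates.
        return Math.Abs(RateFor(groups[0]) - RateFor(groups[1]));
    }

    static void Main()
    {
        var predicted = new[] { true, false, true, true, false, false };
        var group = new[] { "A", "A", "A", "B", "B", "B" };
        // Group A's positive rate is 2/3, group B's is 1/3, so the gap is ~0.33.
        Console.WriteLine(DemographicParityDifference(predicted, group));
    }
}
```

An evaluator along these lines would presumably take a prediction column and a sensitive-attribute column and emit such gaps alongside the existing accuracy-style metrics.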

@tauheedul
Contributor

tauheedul commented Dec 19, 2018

@rogancarr If you need ideas on how this could be tackled in ML.NET, you may find it useful to research pymetrics' open-source audit-AI library.

It is used for bias testing of generalized machine learning applications.
Source: https://github.com/pymetrics/audit-ai
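
audit-AI is a Python library, but one check it is associated with, the "four-fifths rule" for adverse impact, is easy to sketch in plain C#. This is not audit-AI's actual API; the names and threshold handling below are illustrative assumptions. The rule says a group's selection rate should be at least 80% of the highest group's selection rate.

```csharp
// Hedged sketch of a four-fifths (80%) adverse-impact check, written in
// plain C# for illustration; not the audit-AI API.
using System;
using System.Linq;

class FourFifthsSketch
{
    // selected: binary outcomes (e.g., positively classified rows);
    // group: the demographic group of each row.
    static bool PassesFourFifths(bool[] selected, string[] group)
    {
        var rates = group.Distinct()
            .Select(g => selected.Where((s, i) => group[i] == g)
                                 .Average(v => v ? 1.0 : 0.0))
            .ToArray();
        // The lowest selection rate must be at least 80% of the highest.
        return rates.Min() / rates.Max() >= 0.8;
    }

    static void Main()
    {
        var selected = new[] { true, true, false, true, false, false };
        var group = new[] { "A", "A", "A", "B", "B", "B" };
        // Rates are 2/3 and 1/3; the ratio 0.5 fails the 0.8 threshold.
        Console.WriteLine(PassesFourFifths(selected, group));
    }
}
```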

@codemzs
Member

codemzs commented Jun 30, 2019

Thanks, but this is not on the roadmap for the near future.

@codemzs codemzs closed this as completed Jun 30, 2019
@ghost ghost locked as resolved and limited conversation to collaborators Mar 26, 2022