Bias and fairness in predictions are major concerns when deploying ML models, and bias can work its way into a model through the dataset it is trained on**. It would be helpful for modelers to have tools that assist in computing standard bias metrics over training data.

** Bias in the training data doesn't always translate evenly into bias and fairness of the model's predictions, so we need separate evaluation metrics for the bias and fairness of the predictions themselves (captured in #1911).
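As a concrete illustration of the kind of metric such tooling might compute, here is a minimal, dependency-free sketch of two common dataset-level measures: per-group positive-label rates and the disparate impact ratio. The column names `group` and `label`, and the helper functions, are hypothetical and chosen only for this example; they are not part of any existing API.

```python
from collections import defaultdict

def label_rates_by_group(rows, group_key, label_key, positive=1):
    """Fraction of positive labels within each protected-group value."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for row in rows:
        g = row[group_key]
        counts[g][0] += int(row[label_key] == positive)
        counts[g][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def disparate_impact(rates, privileged, unprivileged):
    """Ratio of positive-label rates; values below ~0.8 are often
    treated as a signal of potential bias (the 'four-fifths rule')."""
    return rates[unprivileged] / rates[privileged]

# Toy training set: group A receives positive labels far more often.
data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 1}, {"group": "A", "label": 0},
    {"group": "B", "label": 1}, {"group": "B", "label": 0},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
rates = label_rates_by_group(data, "group", "label")
print(rates)                              # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates, "A", "B"))  # 0.333... -> well below 0.8
```

Note this only measures label imbalance in the data; as the footnote above says, it does not directly predict how biased the trained model's predictions will be.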
Related to #511
Related to #1911
Related to #1912