Create functional tests for all V1 Explainability scenarios #2573

Closed
rogancarr opened this issue Feb 15, 2019 · 0 comments

rogancarr commented Feb 15, 2019

As laid out in #2498, we need functional tests to cover the explainability scenarios we want fully supported in V1 (a rough sketch of how these map onto the API follows the list):

  • I can get near-free (local) feature importance for scored examples (Feature Contributions)
  • I can view the overall importance of each feature (Permutation Feature Importance, GetFeatureWeights)
  • I can train interpretable models (linear model, GAM)
  • I can view how much each feature contributed to each prediction for trees and linear models (Feature Contributions)      
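For reference, here is a minimal sketch of how these scenarios could look end-to-end. This is not the test plan itself, just a hedged illustration assuming the v1 API surface (`Sdca`, `PermutationFeatureImportance`, `CalculateFeatureContribution`, `Gam`); the file path, schema, and column names are hypothetical.

```csharp
using System;
using Microsoft.ML;
using Microsoft.ML.Data;

public static class ExplainabilityScenarios
{
    // Hypothetical schema: a label plus a 3-slot feature vector.
    public class Row
    {
        [LoadColumn(0)]
        public float Label { get; set; }

        [LoadColumn(1, 3), VectorType(3)]
        public float[] Features { get; set; }
    }

    public static void Main()
    {
        var mlContext = new MLContext(seed: 1);

        // "data.tsv" is a placeholder path.
        var data = mlContext.Data.LoadFromTextFile<Row>("data.tsv");

        // Train an interpretable linear model; a GAM would be
        // mlContext.Regression.Trainers.Gam() from Microsoft.ML.FastTree.
        var model = mlContext.Regression.Trainers.Sdca().Fit(data);
        var scored = model.Transform(data);

        // Overall importance via the linear model's per-feature weights.
        var weights = model.Model.Weights;

        // Overall importance via Permutation Feature Importance:
        // the drop in each metric when a feature's values are permuted.
        var pfi = mlContext.Regression.PermutationFeatureImportance(
            model, data, permutationCount: 3);
        for (int i = 0; i < pfi.Length; i++)
            Console.WriteLine(
                $"Feature {i}: dRMSE = {pfi[i].RootMeanSquaredError.Mean:F4}");

        // Per-example (local) importance via Feature Contribution
        // Calculation; appends a "FeatureContributions" column.
        var withContributions = mlContext.Transforms
            .CalculateFeatureContribution(model)
            .Fit(scored)
            .Transform(scored);
    }
}
```

Each bullet above would then get a functional test asserting, for example, that `FeatureContributions` contains one value per feature slot and that the PFI metric deltas are finite.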
@rogancarr rogancarr self-assigned this Feb 15, 2019
@rogancarr rogancarr added the API (Issues pertaining the friendly API) and test (related to tests) labels Feb 15, 2019
@shauheen shauheen added this to the 0219 milestone Feb 19, 2019
@ghost ghost locked as resolved and limited conversation to collaborators Mar 24, 2022