Getting only TIMEOUT for PredefinedSplit #1274
Comments
Hi @mereldawu, sorry to read about this issue. I meant to reply earlier; apologies that I did not. I'm not sure what would cause this, as passing a `PredefinedSplit` should not cause a timeout on its own.
Hi @mereldawu, sorry that I'm only getting to this now. I'm looking into it and getting the same timeout issue. Two things I had to fix:

```python
# Build an integer fold indicator: 1 for selected rows, 0 otherwise
selected = (X_train.to_numpy()[:, 4] < np.mean(X_train.to_numpy()[:, 4])).astype(int)
# The fold array must have one entry per training row
assert len(X_train) == len(selected)
```

However, even after these fixes I get a timeout error, so I will keep looking for the reason. EDIT: I'm surprised that example runs correctly given this fact... EDIT 2: There's actually nothing in that example that says it works, given that the final accuracy presented is so low.
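For readers unfamiliar with how `PredefinedSplit` interprets a 0/1 array like the one above: each distinct non-negative label defines one fold, so a 0/1 indicator yields two train/test splits (not a single holdout). A minimal sketch, with a made-up array standing in for the real `selected`:

```python
import numpy as np
from sklearn.model_selection import PredefinedSplit

# Hypothetical stand-in for the `selected` array built from X_train
rng = np.random.default_rng(0)
col = rng.normal(size=20)
selected = (col < col.mean()).astype(int)  # 1 below the mean, 0 otherwise

splitter = PredefinedSplit(test_fold=selected)
# One split per distinct non-negative label: labels {0, 1} -> 2 splits
print(splitter.get_n_splits())  # -> 2

for train_idx, test_idx in splitter.split():
    # every row lands in exactly one side of each split
    assert len(train_idx) + len(test_idx) == len(selected)
```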
So here is a sample script with those fixes applied:

```python
import os
import pickle
import pandas as pd
import numpy as np
from sklearn.model_selection import PredefinedSplit, train_test_split
from sklearn.datasets import load_iris
from autosklearn.classification import AutoSklearnClassifier


# Using a public credit-card dataset to demonstrate the problem
def user_data():
    df = pd.read_csv("https://github.com/raw/irenebenedetto/default-of-credit-card-clients/master/dataset/credit_cards_dataset.csv")
    df.drop(columns="ID", inplace=True)
    X_train, X_test = train_test_split(df, test_size=0.2, random_state=42)
    y_train = X_train.pop(X_train.columns[-1])
    selected = (X_train.to_numpy()[:, 4] < np.mean(X_train.to_numpy()[:, 4])).astype(int)
    strategy = PredefinedSplit(test_fold=selected)
    model_name = "user"
    return X_train, y_train, strategy, model_name, 240


"""
def sample_data():
    df = load_iris(as_frame=True)["frame"]
    X_train, X_test = train_test_split(df, test_size=0.2, random_state=42)
    y_train = X_train.pop(X_train.columns[-1])
    selected = (X_train.to_numpy()[:, 3] < np.mean(X_train.to_numpy()[:, 3])).astype(int)
    strategy = PredefinedSplit(test_fold=selected)
    model_name = "sample"
    return X_train, y_train, strategy, model_name, 30
"""

X_train, y_train, strategy, model_name, time = user_data()

model = None
if os.path.exists(model_name):
    with open(model_name, "rb") as f:
        model = pickle.load(f)
else:
    model = AutoSklearnClassifier(
        time_left_for_this_task=time,
        resampling_strategy=strategy,
    )
    model.fit(X_train, y_train)
    with open(model_name, "wb") as f:
        pickle.dump(model, f)

print(model.sprint_statistics())
print(model.leaderboard(detailed=True, ensemble_only=False))
```
An update with the 10 minute version: it appears our example was simply incorrect, and I'm not sure why sklearn does not complain given that array.

Please let me know if this fixed your issue. I will update the example in the meantime and do a small investigation into why we had silent errors and whether we can catch them. You may also want to perform a refit as specified in that example.
Some further looking shows that there's not much we can do to detect a bad `test_fold` array. The only way I could think to test for this would be to require that the lengths of both splits add up to the original length before splitting. While this is generally the case, I don't think we should enforce it.

```python
import numpy as np
from sklearn.model_selection import PredefinedSplit

x = np.ones((100, 9))  # 100 rows, 9 features
y = np.ones((100,))    # 100 targets

splitter_good = PredefinedSplit(test_fold=[1] * 50 + [0] * 50)
splitter_bad = PredefinedSplit(test_fold=list(range(0, 51)))

# Correctly creates a 50/50 split
print(next(splitter_good.split(x, y)))
# (array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16,
#         17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
#         34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49]),
#  array([50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66,
#         67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83,
#         84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]))

# Test split only has element 0 for some reason
print(next(splitter_bad.split(x, y)))
# (array([ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17,
#         18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
#         35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50]),
#  array([0]))

# The check we could enforce, though I think it's a bad idea:
train_idxs, test_idxs = next(splitter_bad.split(x, y))
assert len(train_idxs) + len(test_idxs) == len(x)  # fails here: 51 != 100
```
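As an aside for anyone who actually wants a single predefined train/validation holdout rather than one fold per label: sklearn's documented convention is to mark always-train samples with `-1` in `test_fold`, which makes `PredefinedSplit` generate exactly one split. A minimal sketch:

```python
import numpy as np
from sklearn.model_selection import PredefinedSplit

x = np.ones((100, 9))

# -1 means "always in the training set"; 0 marks the single test fold
test_fold = [-1] * 50 + [0] * 50
splitter = PredefinedSplit(test_fold=test_fold)

print(splitter.get_n_splits())  # -> 1, a single holdout split
train_idx, test_idx = next(splitter.split())
assert len(train_idx) + len(test_idx) == len(x)
assert set(test_idx) == set(range(50, 100))
```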
Hi @mereldawu, this was resolved by PR #1340, which updated our example on how to use PredefinedSplit that gave you the errors. There is nothing we can do about automatically detecting bad splits returned by a custom splitter.
Describe the bug
When passing PredefinedSplit as a resampling strategy, the result shows only TIMEOUT, even for a small dataset. With the default configuration, auto-sklearn creates successful trials in a couple of seconds.
To Reproduce
This is the minimal code I can come up with, based on the example here.
By commenting out the resampling_strategy line, the trials run successfully.
I've also tried increasing time_left_for_this_task and per_run_time_limit to 6000, and still only got TIMEOUT.
I also tried running the example code, and it ran with successfully generated trials.
I'm not sure if the issue is the dataset, how I'm using PredefinedSplit, or something else?
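One quick sanity check in situations like this (a suggestion, not something from the original report) is to iterate the splitter directly before handing it to auto-sklearn and eyeball the split sizes; a degenerate split with one side nearly empty is a red flag. The `test_fold` array below is a made-up stand-in for the real one:

```python
import numpy as np
from sklearn.model_selection import PredefinedSplit

# Stand-in for the real test_fold array built from the training data
test_fold = np.array([0, 1, 1, 0, 1, 0, 0, 1])
splitter = PredefinedSplit(test_fold=test_fold)

for i, (train_idx, test_idx) in enumerate(splitter.split()):
    print(f"split {i}: {len(train_idx)} train / {len(test_idx)} test")
    # every sample should land on exactly one side of each split
    assert len(train_idx) + len(test_idx) == len(test_fold)
```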
Expected behavior
Generate multiple successful trials.
Actual behavior, stacktrace or logfile
Result from sprint statistics:

```
auto-sklearn results:
  Dataset name: 1e6334d4-3831-11ec-9a9c-0255ac100090
  Metric: accuracy
  Number of target algorithm runs: 12
  Number of successful target algorithm runs: 0
  Number of crashed target algorithm runs: 0
  Number of target algorithms that exceeded the time limit: 12
  Number of target algorithms that exceeded the memory limit: 0
```
Logfile uploaded.