ConvergenceWarning while training #965
Comments
Thanks for reporting these warnings. They are actually nothing to worry about. Auto-sklearn tries all kinds of configurations, and if one does not work (because an algorithm fails to converge) it will look in other areas of the configuration space. The reason this happens so often is the way we train the SGD: we first train it for 2 iterations, and then iteratively double the number of iterations until we reach 1024 iterations. It is therefore very likely that such a warning is emitted after 2 iterations (and again after 4, and so on). I'm leaving this open so that we actually hide this warning.
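A minimal sketch of the doubling schedule described above, not auto-sklearn's actual implementation: a plain scikit-learn SGDClassifier is fit with budgets of 2, 4, 8, ... up to 1024 iterations on a synthetic dataset, and scikit-learn emits a ConvergenceWarning whenever a small budget is exhausted before the loss converges.

```python
import warnings

from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning
from sklearn.linear_model import SGDClassifier

# Synthetic data just for illustration.
X, y = make_classification(n_samples=500, random_state=0)

n_iter = 2
while n_iter <= 1024:
    clf = SGDClassifier(max_iter=n_iter, tol=1e-3, random_state=0)
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        clf.fit(X, y)
    # At small budgets the optimizer usually hits max_iter before converging,
    # which is exactly the warning reported in this issue.
    if any(issubclass(w.category, ConvergenceWarning) for w in caught):
        print(f"max_iter={n_iter}: ConvergenceWarning emitted")
    n_iter *= 2
```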
I understand this is how your algorithm works, but sklearn doesn't know this and issues warnings. =) By the way, are you planning to write a popular-science article on how your algorithm works?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs for the next 7 days. Thank you for your contributions.
Hello,
auto-sklearn works great for me, but during training I often get messages like this:
Sometimes these lines are repeated about 100-200 times one after the other.
Is there any way to increase this 'max_iter' for the stochastic_gradient component used by auto-sklearn?
I tried increasing the time budget via "time_left_for_this_task" and "per_run_time_limit", but it didn't help.
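As a possible workaround while the warning is still visible (this is an assumption about the user's setup, not an official auto-sklearn option), scikit-learn's ConvergenceWarning can be filtered out in the calling process before starting the search; warnings printed from worker processes may still get through.

```python
import warnings

from sklearn.exceptions import ConvergenceWarning

# Suppress scikit-learn convergence warnings in this process only.
warnings.filterwarnings("ignore", category=ConvergenceWarning)
```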