Add CMAES optimizer from nevergrad #591


Open · wants to merge 18 commits into main

Conversation

@gauravmanmode (Collaborator) commented Apr 23, 2025

Hi @janosg
I am wrapping the CMA-ES optimizer from nevergrad and will be adding tests and docs shortly.
Referring to the discussion in the existing PRs and issues, some things I have experimented with are:

  1. Refactored the code (a helper function nevergrad_internal simplifies the wrapper)
  2. Tried using a custom executor that calls problem.batch_fun inside.
     Benchmarking with the CustomExecutor shows gains for time-consuming objective functions (benchmark screenshot attached), whereas with lightweight objective functions n_cores = 1 seemed preferable.

Is this in the right direction?

@janosg (Member) commented Apr 28, 2025

Hi @gauravmanmode, thanks for the PR.

I definitely like the idea of your nevergrad_internal function. We currently have several independent nevergrad PRs open and a function like this is good to avoid code duplication.

Regarding the Executor: There was an argument brought forward by @r3kste that suggests it would be better to use the low-level ask-and-tell interface if we want to support parallelism. While I still think the solution with the custom Executor can be made to work, I think that the ask-and-tell interface is simpler and more readable for this.
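The ask-and-tell pattern can be sketched with a toy stand-in. Nevergrad's real low-level API is `optimizer.ask()` / `optimizer.tell(candidate, loss)`; everything below, including the random-search "optimizer" and the `minimize` helper, is a hypothetical illustration of how batches map onto `n_cores` workers, not optimagic or nevergrad code:

```python
import random


class ToyAskTell:
    """Stand-in for a nevergrad optimizer exposing ask/tell (illustration only)."""

    def __init__(self, dim, seed=0):
        self.rng = random.Random(seed)
        self.dim = dim
        self.best = None  # (loss, candidate)

    def ask(self):
        # Propose a new candidate point.
        return [self.rng.uniform(-1.0, 1.0) for _ in range(self.dim)]

    def tell(self, candidate, loss):
        # Report the evaluated loss back to the optimizer.
        if self.best is None or loss < self.best[0]:
            self.best = (loss, candidate)


def minimize(fun, dim, budget, n_cores):
    """Ask for n_cores candidates at a time, evaluate them as a batch, tell back."""
    opt = ToyAskTell(dim)
    for _ in range(budget // n_cores):
        batch = [opt.ask() for _ in range(n_cores)]
        # In the real wrapper this would be problem.batch_fun, evaluated in parallel.
        losses = [fun(x) for x in batch]
        for x, loss in zip(batch, losses):
            opt.tell(x, loss)
    return opt.best
```

The point of the pattern is that the batch size is explicit, so parallel evaluation via `batch_fun` needs no custom Executor.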

@janosg (Member) commented Apr 28, 2025

Currently your tests fail because nevergrad is not compatible with numpy 2.0 and higher. You can pin numpy in the environment file for now.

@janosg (Member) commented Apr 28, 2025

Or better: Install nevergrad via pip instead of conda. The conda version is outdated. Then you don't need to pin any numpy versions.
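In a conda environment file, installing nevergrad through pip while keeping the rest from conda might look like this (a sketch; the surrounding dependency list is hypothetical):

```yaml
# environment.yml fragment: take nevergrad from pip, not conda-forge,
# so the outdated conda package (and its numpy pin) is avoided.
dependencies:
  - numpy
  - pip
  - pip:
      - nevergrad
```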

codecov bot commented Apr 30, 2025

Codecov Report

Attention: Patch coverage is 96.73913% with 3 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| src/optimagic/config.py | 60.00% | 2 Missing ⚠️ |
| src/optimagic/optimizers/nevergrad_optimizers.py | 98.14% | 1 Missing ⚠️ |

| Files with missing lines | Coverage Δ |
|---|---|
| src/optimagic/algorithms.py | 85.94% <100.00%> (+0.16%) ⬆️ |
| src/optimagic/optimizers/nevergrad_optimizers.py | 98.14% <98.14%> (ø) |
| src/optimagic/config.py | 69.44% <60.00%> (-0.71%) ⬇️ |

@gauravmanmode (Collaborator, Author) commented May 5, 2025

Hi @janosg,
installing nevergrad with pip solved the failing tests.

Here is the list of parameter names I have referred to

nevergrad_cmaes

| Old Name | Proposed Name | From optimizer in optimagic |
|---|---|---|
| tolx | xtol | scipy |
| tolfun | ftol | scipy |
| budget | stopping_maxfun | scipy |
| CMA_rankmu | learning_rate_rank_mu_update | pygmo_cmaes |
| CMA_rankone | learning_rate_rank_one_update | pygmo_cmaes |
| popsize | population_size | pygmo_cmaes |
| fcmaes | use_fast_implementation | needs review |
| diagonal | diagonal | needs review |
| elitist | elitist | needs review |
| seed | seed | |
| scale | scale | needs review |
| num_workers | n_cores | optimagic |
| high_speed | high_speed | needs review |
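The renaming in the table could be applied with a small translation helper. This is only a sketch: the map keys use the proposed names from the table, and the helper function itself is hypothetical, not part of optimagic:

```python
# Hypothetical mapping from the proposed optimagic-style names (left column of
# the table) back to the original nevergrad/cma option names.
NAME_MAP = {
    "xtol": "tolx",
    "ftol": "tolfun",
    "stopping_maxfun": "budget",
    "learning_rate_rank_mu_update": "CMA_rankmu",
    "learning_rate_rank_one_update": "CMA_rankone",
    "population_size": "popsize",
    "use_fast_implementation": "fcmaes",
    "n_cores": "num_workers",
}


def to_nevergrad_options(options):
    """Rename optimagic-style keys to their nevergrad equivalents.

    Keys without an entry in NAME_MAP (e.g. seed, scale) pass through unchanged.
    """
    return {NAME_MAP.get(key, key): value for key, value in options.items()}
```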

What kind of tests should I have for the internal helper function?
Should I have tests for ftol and stopping_maxfun?
Also, in nevergrad, recommendation.loss returns None for some optimizers like CMA. Is this a nevergrad issue or am I missing something? (screenshot attached)
For reference, I have attached a notebook I used while exploring here

@gauravmanmode (Collaborator, Author) commented

Hi @janosg,
I am thinking of refactoring the code for the already added nevergrad_pso optimizer along with nevergrad_cmaes in this PR. Does this sound good?
Also, I would like your thoughts on this:

  1. Currently I am passing the optimizer object to the helper function _nevergrad_internal (screenshot attached).
  2. Another approach is to pass the optimizer name as a string, as in pygmo (screenshots attached).

Which would be the better choice?
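The two approaches can be sketched side by side with a dummy optimizer. All names here are hypothetical, for illustration only; neither function is the actual optimagic helper:

```python
class DummyOptimizer:
    """Stand-in for a configured nevergrad optimizer (illustration only)."""

    def __init__(self, step=0.1):
        self.step = step

    def minimize(self, fun, x0):
        # One crude derivative-free step, just to make the sketch executable.
        candidates = [x0, x0 - self.step, x0 + self.step]
        return min(candidates, key=fun)


# Approach 1: the algorithm class configures the optimizer object and passes it in.
def internal_with_object(fun, x0, configured_optimizer):
    return configured_optimizer.minimize(fun, x0)


# Approach 2: pass the optimizer name as a string and look it up in a registry,
# leaving configuration to the helper.
REGISTRY = {"dummy": DummyOptimizer}


def internal_with_string(fun, x0, name, **options):
    return REGISTRY[name](**options).minimize(fun, x0)
```

With approach 1 the configuration logic stays in the algorithm class; with approach 2 it leaks into the shared helper, which is why the object-passing variant composes better.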

@janosg (Member) commented May 10, 2025

Hi @gauravmanmode, yes please go ahead and refactor the code for pso as well.

I would stick to approach one, i.e. passing the configured optimizer object to the internal function. It is more in line with the design philosophy shown here.

@janosg (Member) commented May 10, 2025

> Here is the list of parameter names I have referred to […] What kind of tests should I have for the internal helper function? Should I have tests for ftol, stopping_maxfun? […]

About the names:

  • xtol and ftol are convergence criteria, so the name would be convergence_xtol. Ideally you would also find out whether this is an absolute or relative tolerance and add the corresponding abbreviation (e.g. convergence_xtol_rel). You can find examples of the naming scheme here.
  • The other names are good.

I would mainly add a test for stopping_maxfun. Other convergence criteria are super hard to test.

If you cannot get a loss out of nevergrad for some optimizers you can evaluate problem.fun at the solution for now and create an issue with a minimal example at nevergrad to get feedback. I wouldn't frame it as a bug report (unless you are absolutely sure) but rather frame it as a question whether you are using the library correctly.
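The suggested fallback, evaluating the objective at the solution when nevergrad reports no loss, could look like this (a sketch; the function and parameter names are hypothetical):

```python
def best_loss(recommendation_loss, recommendation_value, fun):
    """Return the optimizer-reported loss, falling back to re-evaluating fun.

    Some nevergrad optimizers (e.g. CMA) return None for recommendation.loss,
    in which case we evaluate the objective at the recommended point instead.
    """
    if recommendation_loss is not None:
        return recommendation_loss
    return fun(recommendation_value)
```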
