Search Algorithms

class chocolate.Bayes(connection, space, crossvalidation=None, clear_db=False, n_bootstrap=10, utility_function='ucb', kappa=2.756, xi=0.1)[source]

Bayesian minimization method with a Gaussian process regressor.

This method uses scikit-learn’s implementation of Gaussian processes, with the addition of a conditional kernel when the provided space is conditional [Lévesque2017]. Two acquisition functions are available: the Upper Confidence Bound (UCB) and the Expected Improvement (EI).
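As an illustration of the two acquisition criteria, here is a minimal sketch of UCB and EI for a minimization problem using their standard closed forms; the exact expressions used internally, and the names `mu` and `sigma` for the posterior mean and standard deviation, are assumptions, not chocolate's code:

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ucb(mu, sigma, kappa=2.756):
    # Lower confidence bound for minimization, negated so that the
    # most promising point has the largest utility.
    return -(mu - kappa * sigma)

def ei(mu, sigma, best, xi=0.1):
    # Expected improvement over the best observed loss (minimization).
    if sigma == 0.0:
        return 0.0
    z = (best - mu - xi) / sigma
    return (best - mu - xi) * norm_cdf(z) + sigma * norm_pdf(z)
```

A larger kappa (UCB) or xi (EI) shifts the trade-off toward exploration of uncertain regions.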

Parameters:
  • connection – A database connection object.
  • space – The search space to explore, containing only discrete dimensions.
  • crossvalidation – A cross-validation object that handles experiment repetition.
  • clear_db – If set to True and a conflict arises between the provided space and the space in the database, completely clear the database and set the space to the provided one.
  • n_bootstrap – The number of random iterations performed before using Gaussian processes.
  • utility_function (str) – The acquisition function used for the Bayesian optimization. Two functions are implemented: “ucb” and “ei”.
  • kappa – Kappa parameter for the UCB acquisition function.
  • xi – xi parameter for the EI acquisition function.
[Lévesque2017] Lévesque, Durand, Gagné and Sabourin. Bayesian Optimization for Conditional Hyperparameter Spaces. 2017.
next()

Retrieve the next point to evaluate based on available data in the database.

Returns: A tuple containing a unique token and a fully qualified parameter set.
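The next()/update() pair forms a simple ask-and-tell loop. The sketch below illustrates that protocol with a hypothetical stand-in sampler; RandomSampler and its dict token are inventions for illustration only, since a real chocolate sampler is backed by the database connection:

```python
import random

class RandomSampler:
    """Illustrative stand-in for a chocolate search algorithm: next()
    yields a (token, params) pair and update() records the loss."""

    def __init__(self, space):
        self.space = space    # {name: (low, high)} bounds, for illustration
        self.results = {}     # token id -> reported loss
        self._id = 0

    def next(self):
        token = {"_id": self._id}    # unique token identifying this trial
        self._id += 1
        params = {name: random.uniform(low, high)
                  for name, (low, high) in self.space.items()}
        return token, params

    def update(self, token, values):
        self.results[token["_id"]] = values

sampler = RandomSampler({"x": (-5.0, 5.0)})
for _ in range(3):
    token, params = sampler.next()
    loss = params["x"] ** 2    # the objective being minimized
    sampler.update(token, loss)
```

Because the token round-trips from next() to update(), evaluations can be distributed across workers that share the same database.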
update(token, values)

Update the loss of the parameters associated with token.

Parameters:
  • token – A token generated by the sampling algorithm for the current parameters.
  • values – The loss of the current parameter set. The values can be a single Number, a Sequence or a Mapping. When a sequence is given, the column name is set to “_loss_i” where “i” is the index of the value. When a mapping is given, each key is prefixed with the string “_loss_”.
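The column-naming rule for values can be sketched as follows; loss_columns is a hypothetical helper, and the plain "_loss" name used for a single number is an assumption based on the "_loss_" prefix described above:

```python
from numbers import Number
from collections.abc import Mapping, Sequence

def loss_columns(values):
    # Hypothetical helper mirroring the naming rule described above.
    if isinstance(values, Number):
        return {"_loss": values}
    if isinstance(values, Mapping):
        return {"_loss_" + str(key): val for key, val in values.items()}
    if isinstance(values, Sequence):
        return {"_loss_%d" % i: val for i, val in enumerate(values)}
    raise TypeError("values must be a Number, a Sequence or a Mapping")
```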
class chocolate.CMAES(connection, space, crossvalidation=None, clear_db=False, **params)[source]

Covariance Matrix Adaptation Evolution Strategy minimization method.

A CMA-ES strategy that combines the \((1 + \lambda)\) paradigm [Igel2007], the mixed integer modification [Hansen2011] and the active covariance update [Arnold2010]. It generates a single new point per iteration and adds a random step mutation to dimensions that undergo too small a modification. Even though it includes the mixed integer modification, CMA-ES does not handle dimensions without variance well, and should therefore be used with care on search spaces with conditional dimensions.
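The random step mutation on integer dimensions can be sketched as follows, assuming the rule from [Hansen2011] that a step too small to change an integer value is replaced by a random unit step; the exact rule chocolate applies internally is an assumption:

```python
import random

def mixed_integer_step(x, step, is_integer):
    # For each integer dimension, round the proposed step; if rounding
    # makes it vanish, substitute a random +/-1 step so the dimension
    # still moves. Continuous dimensions take the step unchanged.
    out = []
    for xi, si, integer in zip(x, step, is_integer):
        if integer:
            si = round(si)
            if si == 0:
                si = random.choice((-1, 1))
        out.append(xi + si)
    return out
```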

Parameters:
  • connection – A database connection object.
  • space – The search space to explore.
  • crossvalidation – A cross-validation object that handles experiment repetition.
  • clear_db – If set to True and a conflict arises between the provided space and the space in the database, completely clear the database and set the space to the provided one.
  • **params

    Additional parameters to pass to the strategy as described in the following table, along with default values.

    Parameter  Default value          Details
    d          1 + ndim / 2           Damping for step-size.
    ptarg      1 / 3                  Target success rate.
    cp         ptarg / (2 + ptarg)    Step size learning rate.
    cc         2 / (ndim + 2)         Cumulation time horizon.
    ccovp      2 / (ndim**2 + 6)      Covariance matrix positive learning rate.
    ccovn      0.4 / (ndim**1.6 + 1)  Covariance matrix negative learning rate.
    pthresh    0.44                   Threshold success rate.
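The defaults in the table can be evaluated for a given dimensionality; cmaes_defaults is a hypothetical helper that simply computes the expressions above:

```python
def cmaes_defaults(ndim):
    # Evaluate the default expressions from the table above for a
    # search space with ndim dimensions.
    ptarg = 1.0 / 3.0
    return {
        "d": 1.0 + ndim / 2.0,            # step-size damping
        "ptarg": ptarg,                   # target success rate
        "cp": ptarg / (2.0 + ptarg),      # step size learning rate
        "cc": 2.0 / (ndim + 2.0),         # cumulation time horizon
        "ccovp": 2.0 / (ndim ** 2 + 6.0),     # positive covariance update
        "ccovn": 0.4 / (ndim ** 1.6 + 1.0),   # negative covariance update
        "pthresh": 0.44,                  # threshold success rate
    }
```

Any of these keys can instead be supplied explicitly through **params to override the default.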

Note

To reduce sampling, the constraint to the search space bounding box is enforced by repairing the individuals and adjusting the step taken. This leads to a slight oversampling of the boundaries when local optima are close to them.
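A one-dimensional sketch of the repair described above, assuming it amounts to clipping the candidate to the bounds and shrinking the recorded step by the clipped amount; the actual repair is multi-dimensional and internal to the strategy:

```python
def repair(x, step, low, high):
    # Clip the candidate into [low, high] and shrink the recorded step
    # by the clipped amount, so the strategy adapts to the step actually
    # taken rather than the one proposed.
    clipped = min(max(x, low), high)
    return clipped, step - (x - clipped)
```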

[Igel2007] Igel, Hansen and Roth. Covariance matrix adaptation for multi-objective optimization. 2007.
[Arnold2010] Arnold and Hansen. Active covariance matrix adaptation for the (1 + 1)-CMA-ES. 2010.
[Hansen2011] Hansen. A CMA-ES for Mixed-Integer Nonlinear Optimization. [Research Report] RR-7751, INRIA. 2011.
next()

Retrieve the next point to evaluate based on available data in the database.

Returns: A tuple containing a unique token and a fully qualified parameter set.
update(token, values)

Update the loss of the parameters associated with token.

Parameters:
  • token – A token generated by the sampling algorithm for the current parameters.
  • values – The loss of the current parameter set. The values can be a single Number, a Sequence or a Mapping. When a sequence is given, the column name is set to “_loss_i” where “i” is the index of the value. When a mapping is given, each key is prefixed with the string “_loss_”.
class chocolate.MOCMAES(connection, space, mu, crossvalidation=None, clear_db=False, **params)[source]

Multi-Objective Covariance Matrix Adaptation Evolution Strategy.

A CMA-ES strategy for multi-objective optimization. It combines the improved step size adaptation [Voss2010] and the mixed integer modification [Hansen2011]. It generates a single new point per iteration and adds a random step mutation to dimensions that undergo too small a modification. Even though it includes the mixed integer modification, MO-CMA-ES does not handle dimensions without variance well, and should therefore be used with care on search spaces with conditional dimensions.

Parameters:
  • connection – A database connection object.
  • space – The search space to explore.
  • crossvalidation – A cross-validation object that handles experiment repetition.
  • mu – The number of parents used to generate the candidates. The higher this number, the better the Pareto front coverage, but the longer convergence will take.
  • clear_db – If set to True and a conflict arises between the provided space and the space in the database, completely clear the database and set the space to the provided one.
  • **params

    Additional parameters to pass to the strategy as described in the following table, along with default values.

    Parameter  Default value             Details
    d          1 + ndim / 2              Damping for step-size.
    ptarg      1 / 3                     Target success rate.
    cp         ptarg / (2 + ptarg)       Step size learning rate.
    cc         2 / (ndim + 2)            Cumulation time horizon.
    ccov       2 / (ndim**2 + 6)         Covariance matrix learning rate.
    pthresh    0.44                      Threshold success rate.
    indicator  mo.hypervolume_indicator  Indicator function used for ranking candidates.
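The hypervolume indicator ranks candidates by the area (in two dimensions) they dominate relative to a reference point: candidates contributing more dominated area are preferred. A minimal 2-D sketch for minimized losses, not chocolate's mo.hypervolume_indicator:

```python
def hypervolume_2d(front, ref):
    # Area dominated by a front of 2-D loss vectors (minimization)
    # relative to the reference point ref. Points are swept by the
    # first objective; dominated points contribute nothing.
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):
        if y < prev_y:
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```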

Note

To reduce sampling, the constraint to the search space bounding box is enforced by repairing the individuals and adjusting the step taken. This leads to a slight oversampling of the boundaries when local optima are close to them.

[Voss2010] Voss, Hansen and Igel. Improved Step Size Adaptation for the MO-CMA-ES. In proc. GECCO’10, 2010.
[Hansen2011] Hansen. A CMA-ES for Mixed-Integer Nonlinear Optimization. [Research Report] RR-7751, INRIA. 2011.
next()

Retrieve the next point to evaluate based on available data in the database.

Returns: A tuple containing a unique token and a fully qualified parameter set.
update(token, values)

Update the loss of the parameters associated with token.

Parameters:
  • token – A token generated by the sampling algorithm for the current parameters.
  • values – The loss of the current parameter set. The values can be a single Number, a Sequence or a Mapping. When a sequence is given, the column name is set to “_loss_i” where “i” is the index of the value. When a mapping is given, each key is prefixed with the string “_loss_”.