Nelder-Mead

Algorithm Description

Nelder-Mead is a derivative-free simplex method, used to solve optimization problems of the form

\[\min_{x \in X} f(x)\]

where \(f\) need not be convex or differentiable.

The updating rule for Nelder-Mead is described below; a minimal code sketch of one full update follows the list. Let \(x^{(i)}\) denote the simplex values at stage \(i\) of the algorithm.

  1. Sort the simplex vertices \(x\) in order of function values, from smallest to largest:

    \[f(x^{(i)}(1,:)) \leq f(x^{(i)}(2,:)) \leq \cdots \leq f(x^{(i)}(n+1,:))\]
  2. Calculate the centroid of the first \(n\) (sorted) vertices:

    \[\bar{x} = \frac{1}{n} \sum_{j=1}^n x^{(i)}(j,:)\]

    and compute the reflection point:

    \[x^r = \bar{x} + \alpha (\bar{x} - x^{(i)}(n+1,:))\]

    where \(\alpha\) is set by par_alpha.

    If \(f(x^{(i)}(1,:)) \leq f(x^r) < f(x^{(i)}(n,:))\), then

    \[x^{(i+1)}(n+1,:) = x^r, \ \ \textbf{ and go to Step 1.}\]

    Otherwise continue to Step 3.

  3. If \(f(x^r) \geq f(x^{(i)}(1,:))\) then go to Step 4; otherwise compute the expansion point:

    \[x^e = \bar{x} + \gamma (x^r - \bar{x})\]

    where \(\gamma\) is set by par_gamma.

    Set

    \[x^{(i+1)}(n+1,:) = \begin{cases} x^e & \text{ if } f(x^e) < f(x^r) \\ x^r & \text{ else } \end{cases}\]

    and go to Step 1.

  4. If \(f(x^r) \geq f(x^{(i)}(n,:))\) then compute the outside or inside contraction:

    \[x^{c} = \begin{cases} \bar{x} + \beta(x^r - \bar{x}) & \text{ if } f(x^r) < f(x^{(i)}(n+1,:)) \\ \bar{x} - \beta(x^r - \bar{x}) & \text{ else} \end{cases}\]

    If \(f(x^c) < f(x^{(i)}(n+1,:))\), then

    \[x^{(i+1)}(n+1,:) = x^c, \ \ \textbf{ and go to Step 1.}\]

    Otherwise go to Step 5.

  5. Shrink the simplex toward \(x^{(i)}(1,:)\):

    \[x^{(i+1)}(j,:) = x^{(i)}(1,:) + \delta (x^{(i)}(j,:) - x^{(i)}(1,:)), \ \ j = 2, \ldots, n+1\]

    where \(\delta\) is set by par_delta. Go to Step 1.
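
To make the control flow concrete, here is a minimal sketch of one full update (Steps 1 through 5) in plain C++. It is illustrative only: nm_update and its layout are hypothetical, not part of OptimLib, and the library's internal implementation may differ in detail. The default parameter values match those listed under Optimization Control Parameters below.

#include <algorithm>
#include <cmath>
#include <functional>
#include <vector>

using vec_t = std::vector<double>;

// One Nelder-Mead update over an (n+1)-vertex simplex (Steps 1-5 above).
std::vector<vec_t> nm_update(std::vector<vec_t> simplex,
                             const std::function<double(const vec_t&)>& f,
                             double alpha = 1.0, double gamma = 2.0,
                             double beta = 0.5, double delta = 0.5)
{
    const std::size_t n = simplex[0].size(); // the simplex has n+1 vertices

    // Step 1: sort vertices by objective value, smallest first
    std::sort(simplex.begin(), simplex.end(),
              [&f](const vec_t& a, const vec_t& b) { return f(a) < f(b); });

    // Step 2: centroid of the n best vertices
    vec_t xbar(n, 0.0);
    for (std::size_t j = 0; j < n; ++j)
        for (std::size_t k = 0; k < n; ++k)
            xbar[k] += simplex[j][k] / n;

    // helper: from + t * (to - from)
    auto affine = [n](const vec_t& from, const vec_t& to, double t) {
        vec_t out(n);
        for (std::size_t k = 0; k < n; ++k)
            out[k] = from[k] + t * (to[k] - from[k]);
        return out;
    };

    // reflection point: x^r = xbar + alpha * (xbar - x_{n+1})
    vec_t xr = affine(xbar, simplex[n], -alpha);

    const double f_r  = f(xr);
    const double f_1  = f(simplex[0]);     // best vertex
    const double f_n  = f(simplex[n - 1]); // second-worst vertex
    const double f_n1 = f(simplex[n]);     // worst vertex

    if (f_1 <= f_r && f_r < f_n) {
        simplex[n] = xr;                    // Step 2: accept the reflection
    } else if (f_r < f_1) {
        vec_t xe = affine(xbar, xr, gamma); // Step 3: expansion point
        simplex[n] = (f(xe) < f_r) ? xe : xr;
    } else {
        // Step 4: outside (+beta) or inside (-beta) contraction
        vec_t xc = affine(xbar, xr, (f_r < f_n1) ? beta : -beta);
        if (f(xc) < f_n1) {
            simplex[n] = xc;
        } else {
            // Step 5: shrink the non-best vertices toward the best one
            for (std::size_t j = 1; j <= n; ++j)
                simplex[j] = affine(simplex[0], simplex[j], delta);
        }
    }

    return simplex;
}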

The algorithm stops when at least one of the following conditions is met (a code sketch of the two relative-change tests follows the list):

  1. the relative change in the simplex of function values, defined as:

    \[\dfrac{\max \{ | f(x^{(i+1)}(1,:)) - f(x^{(i)}(1,:)) |, | f(x^{(i+1)}(n+1,:)) - f(x^{(i)}(1,:)) | \} }{ \max_j | f(x^{(i+1)}(j,:)) | + \epsilon}\]

    is less than rel_objfn_change_tol.

  2. the relative change between \(x^{(i+1)}\) and \(x^{(i)}\) is less than rel_sol_change_tol;

  3. the total number of iterations exceeds iter_max.
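
As a sketch of how the first two tests might be computed (the helper functions below are hypothetical and not part of the OptimLib API; the \(\epsilon\) default is illustrative):

#include <algorithm>
#include <cmath>
#include <vector>

// Test 1: relative change in the simplex of function values.
// f_new holds f at every vertex after the update.
bool objfn_converged(double f_best_new, double f_best_old, double f_worst_new,
                     const std::vector<double>& f_new,
                     double rel_objfn_change_tol, double eps = 1e-10)
{
    double num = std::max(std::abs(f_best_new - f_best_old),
                          std::abs(f_worst_new - f_best_old));
    double den = 0.0;
    for (double fj : f_new) den = std::max(den, std::abs(fj));
    return num / (den + eps) < rel_objfn_change_tol;
}

// Test 2: relative change in the simplex vertices themselves, using the
// formula given under Optimization Control Parameters below.
bool sol_converged(const std::vector<std::vector<double>>& x_old,
                   const std::vector<std::vector<double>>& x_new,
                   double rel_sol_change_tol, double eps = 1e-10)
{
    double num = 0.0, den = 0.0;
    for (std::size_t j = 0; j < x_old.size(); ++j)
        for (std::size_t k = 0; k < x_old[j].size(); ++k) {
            num = std::max(num, std::abs(x_new[j][k] - x_old[j][k]));
            den = std::max(den, std::abs(x_old[j][k]));
        }
    return num / (den + eps) < rel_sol_change_tol;
}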


Function Declarations

bool nm(ColVec_t &init_out_vals, std::function<fp_t(const ColVec_t &vals_inp, ColVec_t *grad_out, void *opt_data)> opt_objfn, void *opt_data)

The Nelder-Mead Simplex-based Optimization Algorithm.

Parameters
  • init_out_vals – a column vector of initial values, which will be replaced by the solution upon successful completion of the optimization algorithm.

  • opt_objfn – the function to be minimized, taking three arguments:

    • vals_inp a vector of inputs;

    • grad_out a vector to store the gradient (not used by this derivative-free method); and

    • opt_data additional data passed to the user-provided function.

  • opt_data – additional data passed to the user-provided function.

Returns

a boolean value indicating successful completion of the optimization algorithm.

bool nm(ColVec_t &init_out_vals, std::function<fp_t(const ColVec_t &vals_inp, ColVec_t *grad_out, void *opt_data)> opt_objfn, void *opt_data, algo_settings_t &settings)

The Nelder-Mead Simplex-based Optimization Algorithm.

Parameters
  • init_out_vals – a column vector of initial values, which will be replaced by the solution upon successful completion of the optimization algorithm.

  • opt_objfn – the function to be minimized, taking three arguments:

    • vals_inp a vector of inputs;

    • grad_out a vector to store the gradient (not used by this derivative-free method); and

    • opt_data additional data passed to the user-provided function.

  • opt_data – additional data passed to the user-provided function.

  • settings – parameters controlling the optimization routine.

Returns

a boolean value indicating successful completion of the optimization algorithm.


Optimization Control Parameters

The basic control parameters, illustrated in a configuration sketch at the end of this section, are:

  • fp_t rel_objfn_change_tol: the error tolerance on the relative change in the simplex of function values, defined as:

    \[\dfrac{\max \{ | f(x^{(i+1)}(1,:)) - f(x^{(i)}(1,:)) |, | f(x^{(i+1)}(n+1,:)) - f(x^{(i)}(1,:)) | \} }{ \max_j | f(x^{(i+1)}(j,:)) | + \epsilon}\]

    ‘Convergence’ is declared once this quantity falls below rel_objfn_change_tol.

  • fp_t rel_sol_change_tol: the error tolerance value controlling how small the proportional change in the solution vector should be before ‘convergence’ is declared.

    The relative change is computed using:

    \[\dfrac{\max_{j,k}|x^{(i+1)}(j,k) - x^{(i)}(j,k)|}{ \max_{j,k}|x^{(i)}(j,k)| + \epsilon }\]

    where \(\epsilon\) is a small number added for numerical stability.

  • size_t iter_max: the maximum number of iterations/updates before the algorithm exits.

  • bool vals_bound: whether the search space of the algorithm is bounded. If true, then

    • ColVec_t lower_bounds: defines the lower bounds of the search space.

    • ColVec_t upper_bounds: defines the upper bounds of the search space.

  • struct nm_settings_t, which defines several parameters that control the behavior of the simplex.

    • bool adaptive_pars = true: scale the contraction, expansion, and shrinkage parameters using the dimension of the optimization problem.

    • fp_t par_alpha = 1.0: reflection parameter.

    • fp_t par_beta = 0.5: contraction parameter.

    • fp_t par_gamma = 2.0: expansion parameter.

    • fp_t par_delta = 0.5: shrinkage parameter.

    • bool custom_initial_simplex = false: whether to use user-defined values for the initial simplex matrix.

    • Mat_t initial_simplex_points: user-defined values for the initial simplex (optional). Dimensions: \((n + 1) \times n\).

In addition to these:

  • int print_level: Set the level of detail for printing updates on optimization progress.

    • Level 1: Print the iteration count and current error values.

    • Level 2: Level 1 plus the current candidate solution values.

    • Level 3: Level 2 plus the simplex matrix, \(x^{(i)}\), and value of the objective function at each vertex of the simplex.
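
The second overload of nm accepts these controls through algo_settings_t. The sketch below shows one plausible configuration using the Armadillo wrappers; the top-level field names follow the listing above, but the assumption that the simplex-specific parameters live in a nm_settings member of algo_settings_t should be verified against the library headers.

#define OPTIM_ENABLE_ARMA_WRAPPERS
#include "optim.hpp"

int main()
{
    arma::vec x = arma::ones(5,1); // initial values (1,1,...,1)

    // objective: the sphere function, f(x) = x'x (nm ignores the gradient)
    auto sphere_fn = [](const arma::vec& vals_inp, arma::vec* grad_out, void* opt_data) -> double {
        (void) grad_out; (void) opt_data;
        return arma::dot(vals_inp, vals_inp);
    };

    optim::algo_settings_t settings;

    settings.rel_objfn_change_tol = 1e-10; // tighter objective-change tolerance
    settings.iter_max = 2000;              // allow more iterations
    settings.print_level = 1;              // print iteration count and error values

    // simplex-specific controls (assumed member name: nm_settings)
    settings.nm_settings.adaptive_pars = false;
    settings.nm_settings.par_alpha = 1.0;  // reflection parameter

    // optional: supply a custom (n+1) x n initial simplex
    // settings.nm_settings.custom_initial_simplex = true;
    // settings.nm_settings.initial_simplex_points = ...;

    bool success = optim::nm(x, sphere_fn, nullptr, settings);

    arma::cout << "nm: solution with custom settings:\n" << x << arma::endl;

    return success ? 0 : 1;
}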


Examples

Sphere Function

This example minimizes the sphere function, \(f(x) = x^\top x\), whose unique minimizer is the zero vector. Code to run this example is given below.

Armadillo

#define OPTIM_ENABLE_ARMA_WRAPPERS
#include "optim.hpp"

inline
double
sphere_fn(const arma::vec& vals_inp, arma::vec* grad_out, void* opt_data)
{
    double obj_val = arma::dot(vals_inp,vals_inp);

    if (grad_out) {
        *grad_out = 2.0*vals_inp;
    }

    return obj_val;
}

int main()
{
    const int test_dim = 5;

    arma::vec x = arma::ones(test_dim,1); // initial values (1,1,...,1)

    bool success = optim::nm(x, sphere_fn, nullptr);

    if (success) {
        std::cout << "nm: sphere test completed successfully." << "\n";
    } else {
        std::cout << "nm: sphere test completed unsuccessfully." << "\n";
    }

    arma::cout << "nm: solution to sphere test:\n" << x << arma::endl;

    return 0;
}

Eigen

#define OPTIM_ENABLE_EIGEN_WRAPPERS
#include "optim.hpp"

inline
double
sphere_fn(const Eigen::VectorXd& vals_inp, Eigen::VectorXd* grad_out, void* opt_data)
{
    double obj_val = vals_inp.dot(vals_inp);

    if (grad_out) {
        *grad_out = 2.0*vals_inp;
    }

    return obj_val;
}

int main()
{
    const int test_dim = 5;

    Eigen::VectorXd x = Eigen::VectorXd::Ones(test_dim); // initial values (1,1,...,1)

    bool success = optim::nm(x, sphere_fn, nullptr);

    if (success) {
        std::cout << "nm: sphere test completed successfully." << "\n";
    } else {
        std::cout << "nm: sphere test completed unsuccessfully." << "\n";
    }

    std::cout << "nm: solution to sphere test:\n" << x << std::endl;

    return 0;
}

Booth’s Function

This example minimizes Booth’s function, \(f(x) = (x_1 + 2 x_2 - 7)^2 + (2 x_1 + x_2 - 5)^2\), whose minimizer is \((1, 3)\). Code to run this example is given below.

Armadillo Code

#define OPTIM_ENABLE_ARMA_WRAPPERS
#include "optim.hpp"

inline
double
booth_fn(const arma::vec& vals_inp, arma::vec* grad_out, void* opt_data)
{
    double x_1 = vals_inp(0);
    double x_2 = vals_inp(1);

    double obj_val = std::pow(x_1 + 2*x_2 - 7.0,2) + std::pow(2*x_1 + x_2 - 5.0,2);

    if (grad_out) {
        (*grad_out)(0) = 2*(x_1 + 2*x_2 - 7.0) + 2*(2*x_1 + x_2 - 5.0)*2;
        (*grad_out)(1) = 2*(x_1 + 2*x_2 - 7.0)*2 + 2*(2*x_1 + x_2 - 5.0);
    }

    return obj_val;
}

int main()
{
    arma::vec x_2 = arma::zeros(2,1); // initial values (0,0)

    bool success_2 = optim::nm(x_2, booth_fn, nullptr);

    if (success_2) {
        std::cout << "nm: Booth test completed successfully." << "\n";
    } else {
        std::cout << "nm: Booth test completed unsuccessfully." << "\n";
    }

    arma::cout << "nm: solution to Booth test:\n" << x_2 << arma::endl;

    return 0;
}

Eigen Code

#define OPTIM_ENABLE_EIGEN_WRAPPERS
#include "optim.hpp"

inline
double
booth_fn(const Eigen::VectorXd& vals_inp, Eigen::VectorXd* grad_out, void* opt_data)
{
    double x_1 = vals_inp(0);
    double x_2 = vals_inp(1);

    double obj_val = std::pow(x_1 + 2*x_2 - 7.0,2) + std::pow(2*x_1 + x_2 - 5.0,2);

    if (grad_out) {
        (*grad_out)(0) = 2*(x_1 + 2*x_2 - 7.0) + 2*(2*x_1 + x_2 - 5.0)*2;
        (*grad_out)(1) = 2*(x_1 + 2*x_2 - 7.0)*2 + 2*(2*x_1 + x_2 - 5.0);
    }

    return obj_val;
}

int main()
{
    Eigen::VectorXd x_2 = Eigen::VectorXd::Zero(2); // initial values (0,0)

    bool success_2 = optim::nm(x_2, booth_fn, nullptr);

    if (success_2) {
        std::cout << "nm: Booth test completed successfully." << "\n";
    } else {
        std::cout << "nm: Booth test completed unsuccessfully." << "\n";
    }

    std::cout << "nm: solution to Booth test:\n" << x_2 << std::endl;

    return 0;
}