Box Constraints

This section provides implementation details for how OptimLib handles box constraints.

The approach is to transform the constrained problem

\[\min_{x \in X} f(x)\]

where \(X\) is a subset of \(\mathbb{R}^d\), into the unconstrained problem

\[\min_{y \in \mathbb{R}^d} f(g^{-1}(y))\]

using a smooth, invertible mapping \(g: X \to \mathbb{R}^d\).

OptimLib allows the user to specify lower and upper bounds for each element of the input vector, \(x_j \in [a_j, b_j]\), and uses the following specification for \(g\) (which is defined on the interior of the interval, \(a_j < x_j < b_j\)):

\[g(x_j) = \ln \left( \frac{x_j - a_j}{b_j - x_j} \right)\]

with corresponding inverse:

\[g^{-1}(y_j) = \frac{a_j + b_j \exp(y_j)}{1 + \exp(y_j)}\]

By the chain rule, the gradient of the transformed objective is then:

\[\nabla_y f(g^{-1}(y)) = J(y) [\nabla_{x = g^{-1}(y)} f]\]

where \(J(y)\) is a \(d \times d\) diagonal matrix with typical element:

\[J_{j,j} = \frac{d}{d y_j} g^{-1}(y_j) = \frac{\exp( y_j ) (b_j - a_j)}{(1 + \exp(y_j))^2}\]