EigenOpt 1.0.0
EigenOpt::simplex Namespace Reference

Contains functions to solve linear programming problems with the Simplex method. More...

Namespaces

namespace  internal
 Utility functions used by the Simplex algorithm.
 

Functions

template<class Scalar , class D1 , class D2 , class D3 , class D4 , class D5 , class D6 >
bool minimize (const Eigen::MatrixBase< D1 > &f, const Eigen::MatrixBase< D2 > &A, const Eigen::MatrixBase< D3 > &b, const Eigen::MatrixBase< D4 > &C, const Eigen::MatrixBase< D5 > &d, Eigen::DenseBase< D6 > &x, std::string &halt_reason, const Scalar &small_number, const Scalar &large_number=-1)
 Solve a constrained linear optimization problem.
 
template<class Scalar , class D1 , class D2 , class D3 , class D4 >
bool minimize (const Eigen::DenseBase< D1 > &f, const Eigen::DenseBase< D2 > &C, const Eigen::DenseBase< D3 > &d, Eigen::DenseBase< D4 > &x, std::string &halt_reason, const Scalar &small_number, const Scalar &large_number=-1)
 Solve an inequality-constrained linear optimization problem.
 
template<class Scalar , class D1 , class D2 , class D3 , class D4 , class D5 , class D6 >
bool maximize (const Eigen::MatrixBase< D1 > &f, const Eigen::MatrixBase< D2 > &A, const Eigen::MatrixBase< D3 > &b, const Eigen::MatrixBase< D4 > &C, const Eigen::MatrixBase< D5 > &d, Eigen::DenseBase< D6 > &x, std::string &halt_reason, const Scalar &small_number, const Scalar &large_number=-1)
 Solve a constrained linear optimization problem.
 
template<class Scalar , class D1 , class D2 , class D3 , class D4 >
bool maximize (const Eigen::DenseBase< D1 > &f, const Eigen::DenseBase< D2 > &C, const Eigen::DenseBase< D3 > &d, Eigen::DenseBase< D4 > &x, std::string &halt_reason, const Scalar &small_number, const Scalar &large_number=-1)
 Solve an inequality-constrained linear optimization problem.
 

Detailed Description

Contains functions to solve linear programming problems with the Simplex method.

Function Documentation

◆ maximize() [1/2]

template<class Scalar , class D1 , class D2 , class D3 , class D4 >
bool EigenOpt::simplex::maximize ( const Eigen::DenseBase< D1 > &  f,
const Eigen::DenseBase< D2 > &  C,
const Eigen::DenseBase< D3 > &  d,
Eigen::DenseBase< D4 > &  x,
std::string &  halt_reason,
const Scalar &  small_number,
const Scalar &  large_number = -1 
)
inline

Solve an inequality-constrained linear optimization problem.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

See also
minimize()

Definition at line 161 of file simplex.hpp.

◆ maximize() [2/2]

template<class Scalar , class D1 , class D2 , class D3 , class D4 , class D5 , class D6 >
bool EigenOpt::simplex::maximize ( const Eigen::MatrixBase< D1 > &  f,
const Eigen::MatrixBase< D2 > &  A,
const Eigen::MatrixBase< D3 > &  b,
const Eigen::MatrixBase< D4 > &  C,
const Eigen::MatrixBase< D5 > &  d,
Eigen::DenseBase< D6 > &  x,
std::string &  halt_reason,
const Scalar &  small_number,
const Scalar &  large_number = -1 
)

Solve a constrained linear optimization problem.

This function uses the Simplex method to solve the problem:

\begin{equation} \max_{\bm{x}} \bm{f}^T \bm{x} \quad\text{subject to:}\quad \begin{cases} \bm{A} \bm{x} = \bm{b} \\ \bm{C} \bm{x} \leq \bm{d} \end{cases} \end{equation}

See also
minimize()

Definition at line 140 of file simplex.hpp.

◆ minimize() [1/2]

template<class Scalar , class D1 , class D2 , class D3 , class D4 >
bool EigenOpt::simplex::minimize ( const Eigen::DenseBase< D1 > &  f,
const Eigen::DenseBase< D2 > &  C,
const Eigen::DenseBase< D3 > &  d,
Eigen::DenseBase< D4 > &  x,
std::string &  halt_reason,
const Scalar &  small_number,
const Scalar &  large_number = -1 
)

Solve an inequality-constrained linear optimization problem.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

See also
minimize()

Definition at line 15 of file simplex.hxx.

◆ minimize() [2/2]

template<class Scalar , class D1 , class D2 , class D3 , class D4 , class D5 , class D6 >
bool EigenOpt::simplex::minimize ( const Eigen::MatrixBase< D1 > &  f,
const Eigen::MatrixBase< D2 > &  A,
const Eigen::MatrixBase< D3 > &  b,
const Eigen::MatrixBase< D4 > &  C,
const Eigen::MatrixBase< D5 > &  d,
Eigen::DenseBase< D6 > &  x,
std::string &  halt_reason,
const Scalar &  small_number,
const Scalar &  large_number = -1 
)

Solve a constrained linear optimization problem.

This function uses the Simplex method to solve the problem:

\begin{equation} \min_{\bm{x}} \bm{f}^T \bm{x} \quad\text{subject to:}\quad \begin{cases} \bm{A} \bm{x} = \bm{b} \\ \bm{C} \bm{x} \leq \bm{d} \end{cases} \end{equation}

where \( \bm{f} \) is a weight vector, the matrix \( \bm{A} \) and vector \( \bm{b} \) define a set of equality constraints, and the matrix \( \bm{C} \) and vector \( \bm{d} \) define a set of inequality constraints.

To obtain the solution, several steps are needed:

  1. Equality constraints are "removed" using \( \bm{x} = \bm{x}_{eq} + \bm{Z}\bm{y} \), where \( \bm{x}_{eq} \) is a particular solution of the equalities and \( \bm{Z} \) is a basis of the kernel of \( \bm{A} \). Substituting into the objective and the inequality constraints, the problem becomes \( \min_{\bm{y}} \bm{f}_y^T \bm{y} \) subject to \( \bm{C}_y \bm{y} \leq \bm{d}_y \), where \( \bm{f}_y = \bm{Z}^T\bm{f} \), \( \bm{C}_y = \bm{C}\bm{Z} \) and \( \bm{d}_y = \bm{d} - \bm{C}\bm{x}_{eq} \).
  2. New variables are then introduced:
    • For each coordinate \( y_i \), two variables \( y_i^{(+)} \geq 0 \) and \( y_i^{(-)} \geq 0 \) are introduced, and the substitution \( y_i = y_i^{(+)} - y_i^{(-)}\) is performed.
    • Each inequality \( \bm{c}_i^T \bm{y} \leq d_i \) with \( d_i \geq 0 \) is transformed into an equality by introducing a slack variable \( s_i \geq 0 \). The new constraint is \( \bm{c}_i^T \bm{y} + s_i = d_i \).
    • Each inequality \( \bm{c}_i^T \bm{y} \leq d_i \) with \( d_i < 0 \) is transformed into an equality by introducing a slack variable \( s_i \geq 0 \) and an artificial variable \( p_i \geq 0 \). The new constraint is \( - \bm{c}_i^T \bm{y} - s_i + p_i = -d_i \).
  3. The problem is now in the canonical form expected by the Simplex method, i.e., all decision variables are non-negative and all constraints are equalities, which can be written as \( \bm{M}\bm{x}_{n} + \bm{x}_{b} = \bm{\delta} \), where \( \bm{x}_{b} \) is a subset of the new decision variables, labelled as "basic variables" ( \( \bm{x}_{n} \) are the remaining, "non-basic" variables). Initially, the basic set consists of all artificial variables plus the slack variables of the constraints with a positive right-hand side. Pivoting operations can now be performed to solve the problem.
Parameters
[in]  f  Vector of objective coefficients.
[in]  A  Equality constraints matrix.
[in]  b  Equality constraints vector.
[in]  C  Inequality constraints matrix.
[in]  d  Inequality constraints vector.
[out]  x  Solution vector (modified only if a solution is actually found).
[out]  halt_reason  A short feedback message explaining why the optimization routine halted.
[in]  small_number  A small positive tolerance, used to detect near-zero values. A number n is considered positive if n > small_number, negative if n < -small_number, and treated as zero if -small_number <= n <= small_number.
[in]  large_number  A "large number". When the constraints are not trivially feasible ( \( \bm{x}=\bm{0} \) does not satisfy the inequality \( \bm{C}\bm{x}\leq\bm{d} \) ), a feasible solution must be found first. Two methods are commonly employed:
  • A "two-step" method ignores the original objective and first searches for a feasible point; only after one is found does it optimize the original problem.
  • A "penalty" method simultaneously optimizes the original objective and a penalty term that discourages violating the constraints. If a feasible solution exists, this method converges to the optimum of the original problem, given a large enough penalty.
The two-step method is more accurate but slightly slower; to select it, set large_number to a negative value. The penalty method is faster but requires a user-supplied constant; to select it, set large_number to a positive value several orders of magnitude larger than the entries of the objective and constraints.
Returns
true if an optimal solution was found, and false otherwise. Failure indicates either an unbounded problem or an infeasible constraint set; the message returned in halt_reason provides more details about why the optimization failed.

Definition at line 152 of file simplex.hxx.