
Non-linear least-squares method.

Using a given optimization algorithm (the default is conjugate gradient), this class solves the minimization problem

$$ \min \{\, r(x) : x \in \mathbb{R}^n \,\} $$

where $ r(x) = \|f(x)\|^2 $ is the squared Euclidean norm of $ f(x) $ for some vector-valued function $ f $ from $ \mathbb{R}^n $ to $ \mathbb{R}^m $, $$ f = (f_1, \ldots, f_m), $$ with $ f_i(x) = b_i - \phi(x, t_i) $, where $ b $ is the vector of target data and $ \phi $ is a scalar function.

Assuming $ f $ is differentiable, the gradient of $ r $ is given by $$ \nabla r(x) = f'(x)^{\mathrm{T}} f(x). $$
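
To make these definitions concrete, here is a small standalone sketch (TypeScript; the exponential model $ \phi(x, t) = x_0 e^{x_1 t} $, the sample data, and every name in it are illustrative assumptions, not part of this class) that evaluates the residual vector $ f $, the objective $ r(x) = \|f(x)\|^2 $, and the direction $ f'(x)^{\mathrm{T}} f(x) $ used by gradient-based optimizers:

```ts
// Model phi(x, t) = x[0] * exp(x[1] * t); the model and all names here are
// hypothetical illustrations, not part of the NonLinearLeastSquare API.
function phi(x: number[], t: number): number {
  return x[0] * Math.exp(x[1] * t);
}

// Residual vector f with components f_i(x) = b_i - phi(x, t_i).
function residuals(x: number[], t: number[], b: number[]): number[] {
  return t.map((ti, i) => b[i] - phi(x, ti));
}

// Objective r(x) = ||f(x)||^2, the squared Euclidean norm of the residuals.
function r(x: number[], t: number[], b: number[]): number {
  return residuals(x, t, b).reduce((s, fi) => s + fi * fi, 0);
}

// Direction f'(x)^T f(x) from the text, using the analytic Jacobian of f:
// df_i/dx0 = -exp(x[1] * t_i), df_i/dx1 = -x[0] * t_i * exp(x[1] * t_i).
function gradientDirection(x: number[], t: number[], b: number[]): number[] {
  const f = residuals(x, t, b);
  const g = [0, 0];
  t.forEach((ti, i) => {
    const e = Math.exp(x[1] * ti);
    g[0] -= e * f[i];
    g[1] -= x[0] * ti * e * f[i];
  });
  return g;
}

// Target data b sampled from the model at x = [2, 0.5]; at that point the
// residuals, the objective, and the gradient direction all vanish.
const t = [0, 1, 2, 3];
const b = t.map(ti => phi([2, 0.5], ti));
console.log(r([1, 1], t, b), gradientDirection([1, 1], t, b));
console.log(r([2, 0.5], t, b)); // 0
```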

Hierarchy

  • NonLinearLeastSquare

Index

Constructors

constructor

Properties

Private _accuracy

_accuracy: Real

Required accuracy of the solver

Private _bestAccuracy

_bestAccuracy: Real

Best accuracy reached during the optimization

Private _c

Private _exitFlag

_exitFlag: Integer

Exit flag of the optimization process

Private _initialValue

_initialValue: Real[]

Initial guess for the solution vector

Private _maxIterations

_maxIterations: Size

Maximum number of iterations

Private _nbIterations

_nbIterations: Size

Actual number of iterations performed

Private _om

Optimization method

Private _resnorm

_resnorm: Real

Least-squares residual norm

Private _results

_results: Real[]

Solution vector

Methods

exitFlag

  • Returns Integer

iterationsNumber

  • Returns Integer

lastValue

  • Returns Real

perform

residualNorm

  • residualNorm(): Real
  • Returns Real

results

  • Returns Real[]

setInitialValue

  • setInitialValue(initialValue: Real[]): void
  • Parameters

    • initialValue: Real[]

  • Returns void
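
Taken together, the methods above imply a simple calling sequence: set a starting point, run the optimizer, then inspect the outcome. The sketch below is a hypothetical outline only; the shape of the problem object and the argument to perform are assumptions modeled on QuantLib's C++ NonLinearLeastSquare (this page documents only the other calls), and Real/Integer are rendered as number:

```ts
// Hypothetical usage outline; `problem` and the perform() argument are
// assumptions modeled on QuantLib's C++ API, not confirmed by this page.
declare const problem: unknown;          // assumed least-squares problem object
declare const nlls: {
  setInitialValue(initialValue: number[]): void;
  perform(p: unknown): void;             // argument shape is an assumption
  exitFlag(): number;
  iterationsNumber(): number;
  residualNorm(): number;
  results(): number[];
};

nlls.setInitialValue([1.0, 1.0]);        // starting guess for x
nlls.perform(problem);                   // run the chosen optimization method

// Exit-flag semantics are port-specific; a non-negative value is assumed OK.
if (nlls.exitFlag() >= 0) {
  console.log('solution x*   :', nlls.results());
  console.log('residual norm :', nlls.residualNorm());
  console.log('iterations    :', nlls.iterationsNumber());
}
```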