
The process is as follows:

M individuals are used to explore the N-dimensional parameter space. X_{i}^k = (X_{i, 1}^k, X_{i, 2}^k, \ldots, X_{i, N}^k) is the position at the kth iteration for the ith individual. X is updated via the rule

X_{i, j}^{k+1} = X_{i, j}^k + V_{i, j}^{k+1}

with V being the "velocity" that updates the position:

V_{i, j}^{k+1} = \chi\left(V_{i, j}^k + c_1 r_{i, j}^k (P_{i, j}^k - X_{i, j}^k) + c_2 R_{i, j}^k (G_{i, j}^k - X_{i, j}^k)\right)

where c_1 and c_2 are constants, r and R are uniformly distributed random numbers in the range [0, 1], P_{i, j}^k is the personal best parameter set for individual i up to iteration k, and G_{i, j}^k is the global best parameter set for the swarm up to iteration k. c_1 is the self-recognition coefficient and c_2 is the social-recognition coefficient.
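The update rule above can be sketched in TypeScript. The `Particle` shape and `updateParticle` name are illustrative, not part of the documented class:

```typescript
// One PSO-Co update step for a single individual (illustrative sketch).
interface Particle {
  x: number[];      // current position X_i
  v: number[];      // current velocity V_i
  pBest: number[];  // personal best position P_i
}

function updateParticle(
  p: Particle,
  gBest: number[],  // global best position G
  chi: number,      // constriction factor
  c1: number,       // self-recognition coefficient
  c2: number        // social-recognition coefficient
): void {
  for (let j = 0; j < p.x.length; j++) {
    const r = Math.random(); // r_{i,j}^k ~ U[0, 1]
    const R = Math.random(); // R_{i,j}^k ~ U[0, 1]
    p.v[j] = chi * (p.v[j]
      + c1 * r * (p.pBest[j] - p.x[j])
      + c2 * R * (gBest[j] - p.x[j]));
    p.x[j] += p.v[j]; // position update X^{k+1} = X^k + V^{k+1}
  }
}
```

Note that when the particle sits exactly on both its personal best and the global best, the attraction terms vanish and the velocity is simply scaled by \chi.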

This version is known as the PSO with constriction factor (PSO-Co). PSO with inertia factor (PSO-In) updates the velocity according to

V_{i, j}^{k+1} = \omega V_{i, j}^k + \hat{c}_1 r_{i, j}^k (P_{i, j}^k - X_{i, j}^k) + \hat{c}_2 R_{i, j}^k (G_{i, j}^k - X_{i, j}^k)

and is accessible from PSO-Co by setting \omega = \chi and \hat{c}_{1,2} = \chi c_{1,2}.
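The mapping from PSO-Co parameters to equivalent PSO-In parameters is a one-liner; the helper name below is a hypothetical illustration:

```typescript
// Derive PSO-In coefficients from PSO-Co parameters,
// per omega = chi and cHat_{1,2} = chi * c_{1,2}.
function coToIn(chi: number, c1: number, c2: number):
    { omega: number; c1Hat: number; c2Hat: number } {
  return { omega: chi, c1Hat: chi * c1, c2Hat: chi * c2 };
}
```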

These two versions of PSO are normally referred to as canonical PSO.

Convergence of PSO-Co is improved if \chi is chosen as

\chi = \frac{2}{\vert 2-\phi-\sqrt{\phi^2 - 4\phi}\vert}, \quad \phi = c_1 + c_2.

Stable convergence is achieved if \phi \geq 4. Clerc and Kennedy recommend c_1 = c_2 = 2.05, i.e. \phi = 4.1.
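The constriction factor can be computed directly from the two coefficients; with the recommended c_1 = c_2 = 2.05 this yields the commonly quoted value \chi \approx 0.7298:

```typescript
// Constriction factor chi from c1 and c2 (Clerc & Kennedy formula),
// meaningful for phi = c1 + c2 >= 4.
function constrictionFactor(c1: number, c2: number): number {
  const phi = c1 + c2;
  return 2 / Math.abs(2 - phi - Math.sqrt(phi * phi - 4 * phi));
}
```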

Different topologies can be chosen for G: e.g., instead of the best of the whole swarm, G can be the best among a particle's nearest neighbours, or some other neighbourhood.
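A nearest-neighbour topology can be sketched as a ring: each particle's G is the best personal best among itself and its two ring neighbours. The function below is an illustrative assumption, not part of the documented class:

```typescript
// Ring-topology "local best": for particle i, return the personal-best
// position with the lowest objective value among {i-1, i, i+1} (mod n).
function ringBest(pBests: number[][], values: number[], i: number): number[] {
  const n = values.length;
  const candidates = [(i - 1 + n) % n, i, (i + 1) % n];
  let best = candidates[0];
  for (const c of candidates) {
    if (values[c] < values[best]) best = c;
  }
  return pBests[best];
}
```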

In the canonical PSO, the inertia function is trivial: it is simply a constant (the inertia) multiplying the previous iteration's velocity. The value of the inertia constant determines the weight of global search relative to local search. As with the topology, other inertia functions are possible, e.g. one that interpolates between a high inertia at the beginning of the optimization (prioritizing global search) and a low inertia towards the end (prioritizing local search).
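An interpolating inertia schedule of the kind described above can be sketched as a linear decay; the function name and default bounds are illustrative assumptions:

```typescript
// Linearly decaying inertia: high omega early (global exploration),
// low omega late (local refinement).
function linearInertia(
  iter: number,
  maxIter: number,
  omegaStart = 0.9,  // assumed starting inertia
  omegaEnd = 0.4     // assumed final inertia
): number {
  return omegaStart + (omegaEnd - omegaStart) * (iter / maxIter);
}
```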

The optimization stops either when the maximum number of iterations is reached or when the stationary-function-value limit is reached.
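The two stopping criteria can be sketched as a single check; the names below are assumptions for illustration, not the class's actual parameters:

```typescript
// Stop when the iteration budget is exhausted, or when the best function
// value has stayed stationary for the allowed number of iterations.
function shouldStop(
  iter: number, maxIter: number,
  stationaryIters: number, maxStationaryIters: number
): boolean {
  return iter >= maxIter || stationaryIters >= maxStationaryIters;
}
```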


Accessors

isDisposed

  • get isDisposed(): boolean

Methods

dispose

  • dispose(): void

init1

init2

minimize