{\bf Particle swarm optimization} (PSO) is a population-based stochastic approach for tackling continuous and discrete optimization
problems.

In particle swarm optimization, the solution space of an optimization problem represents the environment where simple software agents,
called \emph{particles}, can move around. Accordingly, the position of a particle represents a candidate solution to the
optimization problem at hand. Particles search for better positions in the solution space by changing their velocity according to
rules originally inspired by behavioral models of bird flocking.

Particle swarm optimization belongs to the class of {\bf swarm intelligence} techniques that are used to solve optimization problems.

\section*{Contents}
\begin{itemize}
\item History
\item Standard PSO algorithm
\item Main PSO variants
\begin{itemize}
\item Fully informed PSO
\item Bare bones PSO
\item Discrete PSO
\item Multiobjective PSO
\end{itemize}
\item Applications of PSO
\item Current Research Issues
\item References
\item External Links
\item See Also
\end{itemize}

\section{History}

Particle swarm optimization was introduced by James Kennedy and Russell Eberhart in 1995~\cite{Kennedy95}. It
has roots in the simulation of social behaviors using tools and ideas taken from computer graphics and social
psychology research.

Within the field of computer graphics, the first antecedents of particle swarm optimization date back to 1983, when
Reeves~\cite{Reeves83} proposed particle systems to model objects that are dynamic and cannot easily be
represented by polygons or surfaces. Examples of such objects are fire, smoke, water and clouds. In these models,
particles are independent of each other and their movement is governed by a set of rules.

In 1987, Reynolds~\cite{Reynolds87} used a particle system to simulate the collective behavior of a flock
of birds. In a similar kind of simulation, Heppner and Grenander~\cite{Heppner90} included a ``roost'' that
was attractive to the simulated birds. Both models inspired the set of rules that were used in the original
particle swarm optimization algorithm.

Social psychology research was another source of influence in the development of the first particle swarm optimization
algorithm. The rules that govern the movement of the particles in a problem's solution space can also be seen
as a model of {\bf social cognition} in which successful individuals tend to be imitated by less successful ones~\cite{Kennedy01}.

The name ``particle swarm'' comes from the fact that particles update their position by modifying their velocity (even
though they are massless) and because the collective behavior of the particles adheres to the principles described
by Millonas~\cite{Millonas94}.

\section{Standard PSO algorithm}

\subsection{Preliminaries}
The problem of minimizing\footnote{Without loss of generality, the presentation considers only minimization problems.}
the function $f: \Theta \to \mathbb{R}$ with $\Theta \subseteq \mathbb{R}^n$ can be stated as finding the set
\begin{displaymath}
\Theta^* = \argmin_{\bvec{\theta} \in \Theta} f(\bvec{\theta}) = \{ \bvec{\theta}^* \in \Theta \colon f(\bvec{\theta}^*) \leq f(\bvec{\theta}) \,\,\,\,\,\,\forall \bvec{\theta} \in \Theta\}\,,
\end{displaymath}
where $\bvec{\theta}$ is an $n$-dimensional vector that belongs to the set of feasible solutions $\Theta$ (also called the solution space).
The elements of the set $\Theta^*$ are equivalent with respect to the function $f$.

Let ${\cal P} = \{p_{1},p_{2},\ldots,p_{k}\}$ be the population of particles (also referred to as the \emph{swarm}).
At any time step $t$, a particle $p_i$ has a position $\bvec{x}^{\,t}_i$ and a velocity $\bvec{v}^{\,t}_i$ associated
to it. The best position that particle $p_i$ has visited up to time step $t$ is represented by the vector $\bvec{b}^{\,t}_i$
(also known as the particle's \emph{personal best}). Moreover, a particle $p_i$ receives information from its \emph{neighborhood},
which is defined as the set ${\cal N}_i \subseteq {\cal P}$. Note that a particle can belong to its own neighborhood. In the
standard particle swarm optimization algorithm, the particles' neighborhood relations
are commonly represented as a graph $G=\{V,E\}$, where each vertex in $V$ corresponds to a particle in the swarm
and each edge in $E$ establishes a neighbor relation between a pair of particles. The resulting graph is commonly referred to as
the swarm's \emph{population topology}. Some example topologies are shown in Figure~\ref{figure:topologies}.

\begin{figure}[htbp!]
\centering
\begin{tabular}{cc}
\subfigure[Fully connected]{\label{subfig:fc}\includegraphics[width=4cm, angle=-90]{Topology/fc.pdf}} &
\subfigure[von Neumann]{\label{subfig:vN}\includegraphics[width=4cm, angle=-90]{Topology/square5x4.pdf}}\\
\multicolumn{2}{c}{\subfigure[Ring]{\label{subfig:r}\includegraphics[width=4cm, angle=-90]{Topology/ring.pdf}}}
\end{tabular}
\caption{Example population topologies. Figure~\subref{subfig:fc} depicts a fully connected topology, that is,
${\cal N}_i = {\cal P} \setminus \{p_i\}\,\,\forall p_i \in {\cal P}$. Figure~\subref{subfig:vN} depicts a so-called von Neumann topology,
in which $|{\cal N}_i| = 4\,\,\forall p_i \in {\cal P}$. Figure~\subref{subfig:r} depicts a ring topology, in which each particle
is a neighbor of exactly two other particles.}
\label{figure:topologies}
\end{figure}
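
To make the notion of a population topology concrete, the following Python sketch builds the neighbor index sets for the
fully connected and ring topologies of Figure~\ref{figure:topologies}. The function names are illustrative, not part of
any standard PSO library.
\begin{verbatim}
# Sketch: neighbor index sets for two common topologies.
def fully_connected(k):
    """N_i contains every other particle in the swarm."""
    return [[j for j in range(k) if j != i] for i in range(k)]

def ring_topology(k):
    """N_i contains the two particles adjacent to p_i on a ring."""
    return [[(i - 1) % k, (i + 1) % k] for i in range(k)]
\end{verbatim}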

\subsection{The algorithm}
Initially, the particles' positions and velocities are randomly initialized within a space
$\Theta^\prime \subseteq \Theta$. During the main loop of the algorithm, the particles' velocities and positions
are iteratively updated until a stopping criterion is met.

The update rules are:
\begin{equation} \label{equation:velocityUpdate}
\bvec{v}^{\,t+1}_i = w\bvec{v}^{\,t}_i + \varphi_1\bvec{U}^{\,t}_1(\bvec{b}^{\,t}_i - \bvec{x}^{\,t}_i) + \varphi_2\bvec{U}^{\,t}_2(\bvec{l}^{\,t}_i - \bvec{x}^{\,t}_i) \,,
\end{equation}

\begin{equation} \label{equation:positionUpdate}
\bvec{x}^{\,t+1}_i = \bvec{x}^{\,t}_i +\bvec{v}^{\,t+1}_i \,,
\end{equation}
where $w$ is a parameter called the \emph{inertia weight}, $\varphi_1$ and $\varphi_2$ are two parameters called \emph{acceleration
coefficients}, and $\bvec{U}^{\,t}_1$ and $\bvec{U}^{\,t}_2$ are two $n \times n$ diagonal matrices whose diagonal elements are
distributed in the interval $[0,1)$ uniformly at random. These matrices are regenerated at every iteration, that is,
$\bvec{U}^{\,t+1}_{1,2} \neq \bvec{U}^{\,t}_{1,2}$. The vector $\bvec{l}^{\,t}_i$ is the best position ever found by any particle
in the neighborhood of particle $p_i$, that is, $f(\bvec{l}^{\,t}_i) \leq f(\bvec{b}^{\,t}_j) \,\,\, \forall p_j \in {\cal N}_i$.

A pseudocode version of the standard PSO algorithm is shown in Algorithm~\ref{algorithm:originalPSO}.

\begin{algorithm}[th!]
\caption{Pseudocode version of the standard PSO algorithm}
\label{algorithm:originalPSO}
\begin{algorithmic}
\REQUIRE Objective function $f:\Theta \to \mathbb{R}$, the initialization domain $\Theta^\prime \subseteq \Theta$, the set of particles ${\cal P} \colon |{\cal P}| =k$, the parameters $w$, $\varphi_1$, and $\varphi_2$, and the stopping criterion $S$.
\STATE
\STATE \COMMENT{Initialization}
\STATE Set $t=0$
\FOR{$i=1$ to $k$}
\STATE Initialize ${\cal N}_i$ to a subset of ${\cal P}$
\STATE Initialize $\bvec{x}^{\,t}_i$ and $\bvec{v}^{\,t}_i$ randomly within $\Theta^\prime$
\STATE Set $\bvec{b}^{\,t}_i = \bvec{x}^{\,t}_i$
\ENDFOR
\STATE
\STATE \COMMENT{Main Loop}
\WHILE{$S$ is not satisfied}
\FOR{$i=1$ to $k$}
\IF{$f(\bvec{x}^{\,t}_i) \leq f(\bvec{b}^{\,t}_i)$}
\STATE Set $\bvec{b}^{\,t}_i = \bvec{x}^{\,t}_i$
\ENDIF
\STATE Set $min = \infty$
\FORALL{$p_j \in {\cal N}_i$}
\IF{$f(\bvec{b}^{\,t}_j) < min$}
\STATE Set $\bvec{l}^{\,t}_i = \bvec{b}^{\,t}_j$
\STATE Set $min = f(\bvec{b}^{\,t}_j)$
\ENDIF
\ENDFOR
\STATE Generate $\bvec{U}^{\,t}_1$ and $\bvec{U}^{\,t}_2$
\STATE Set $\bvec{v}^{\,t+1}_i = w\bvec{v}^{\,t}_i + \varphi_1\bvec{U}^{\,t}_1(\bvec{b}^{\,t}_i - \bvec{x}^{\,t}_i) + \varphi_2\bvec{U}^{\,t}_2(\bvec{l}^{\,t}_i - \bvec{x}^{\,t}_i)$
\STATE Set $\bvec{x}^{\,t+1}_i = \bvec{x}^{\,t}_i +\bvec{v}^{\,t+1}_i$
\ENDFOR
\STATE Set $t = t+1$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
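
To complement the pseudocode, here is a minimal, self-contained Python (NumPy) sketch of the standard algorithm with a
fully connected topology and an iteration budget as the stopping criterion $S$. The parameter values ($w = 0.7$,
$\varphi_1 = \varphi_2 = 1.5$, $k = 20$) and the function name \texttt{pso} are illustrative choices, not prescribed by
the article.
\begin{verbatim}
import numpy as np

def pso(f, lower, upper, k=20, w=0.7, phi1=1.5, phi2=1.5,
        max_iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    n = len(lower)
    # Initialization within Theta' (here taken to be the whole box)
    x = rng.uniform(lower, upper, size=(k, n))                # positions
    v = rng.uniform(-(upper - lower), upper - lower, (k, n))  # velocities
    b = x.copy()                                              # personal bests
    fb = np.array([f(bi) for bi in b])
    neighbors = [[j for j in range(k) if j != i] for i in range(k)]
    for _ in range(max_iters):      # stopping criterion S: iteration budget
        fx = np.array([f(xi) for xi in x])
        improved = fx <= fb         # update personal bests
        b[improved], fb[improved] = x[improved], fx[improved]
        for i in range(k):
            best_j = min(neighbors[i], key=lambda j: fb[j])
            l_i = b[best_j]                 # neighborhood best l_i
            u1 = rng.uniform(0.0, 1.0, n)   # diagonal of U_1
            u2 = rng.uniform(0.0, 1.0, n)   # diagonal of U_2
            v[i] = (w * v[i] + phi1 * u1 * (b[i] - x[i])
                             + phi2 * u2 * (l_i - x[i]))
            x[i] = x[i] + v[i]
    best = int(np.argmin(fb))
    return b[best], fb[best]
\end{verbatim}
For example, \texttt{pso(lambda th: np.sum(th**2), [-5.0]*10, [5.0]*10)} would search for the minimum of the
ten-dimensional sphere function.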

\section{Main PSO variants}

The original particle swarm optimization algorithm has undergone a number of changes since it was first proposed. Most of
those changes affect the way the particles' velocity is updated. In the following paragraphs, we briefly describe some of
the most important developments. For a more complete description of many of the existing particle swarm optimization variants,
we refer the reader to~\cite{Engelbrecht05}, \cite{Clerc06} and~\cite{Poli07}.

\subsection{Fully informed PSO}

In the standard particle swarm optimization algorithm, a particle is attracted toward its best neighbor. A variant in which a particle
uses the information provided by all its neighbors in order to update its velocity is called the \emph{fully informed
particle swarm} (FIPS)~\cite{Mendes04}.

In the fully informed particle swarm optimization algorithm, the velocity-update rule is
\begin{equation}
\bvec{v}^{\,t+1}_i = w\bvec{v}^{\,t}_i + \frac{\varphi}{|{\cal N}_i|}\sum_{p_j \in {\cal N}_i}{\cal W}(\bvec{b}^{\,t}_j)\bvec{U}^{\,t}_j(\bvec{b}^{\,t}_j-\bvec{x}^{\,t}_i) \,,
\end{equation}
where $w$ is a parameter called the \emph{inertia weight}, $\varphi$ is a parameter called the \emph{acceleration coefficient},
and ${\cal W} \colon \Theta \to [0,1]$ is a function that weights the contribution of a particle's personal best position
to the movement of the target particle based on its relative quality.
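
As an illustration, the following Python sketch computes the FIPS velocity update for a single particle. For simplicity
it assumes the constant weighting ${\cal W}(\cdot) = 1$, which is one admissible choice; the article only requires
${\cal W} \colon \Theta \to [0,1]$. The function name is illustrative.
\begin{verbatim}
import numpy as np

def fips_velocity(v_i, x_i, nbr_bests, w, phi, rng):
    """nbr_bests: list of personal-best vectors b_j of the neighbors."""
    total = np.zeros_like(v_i)
    for b_j in nbr_bests:
        u_j = rng.uniform(0.0, 1.0, size=len(v_i))  # diagonal of U_j
        total += u_j * (b_j - x_i)                  # W(b_j) = 1 assumed
    return w * v_i + (phi / len(nbr_bests)) * total
\end{verbatim}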

\subsection{Bare bones PSO}

A particle swarm optimization algorithm in which the velocity- and position-update rules are substituted by a procedure that
samples a parametric probability density function is called the \emph{bare-bones particle swarm}~\cite{Kennedy03}.

In the bare bones particle swarm optimization algorithm, a particle's position in the $j$th dimension is updated according to
\begin{equation}
x^{t+1}_{ij} \sim N\left(\mu^{t},\sigma^{t}\right)\,,
\end{equation}
where $N$ is a normal distribution with
\begin{eqnarray*}
\mu^{t} &=& \frac{b^{t}_{ij} + l^{t}_{ij}}{2} \,, \\
\sigma^{t} &=& |b^{t}_{ij} - l^{t}_{ij}| \,.
\end{eqnarray*}

Although a normal distribution was chosen when the model was first proposed, any other probability density function can be used with
the bare bones model.
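
A minimal Python sketch of this sampling-based update (the function name is illustrative; NumPy's normal sampler is used):
\begin{verbatim}
import numpy as np

def bare_bones_position(b_i, l_i, rng):
    """Sample the new position of particle p_i componentwise."""
    mu = (b_i + l_i) / 2.0        # mean: midpoint of b_i and l_i
    sigma = np.abs(b_i - l_i)     # standard deviation per component
    return rng.normal(mu, sigma)  # new position x_i^{t+1}
\end{verbatim}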

% The idea of replacing the particles' movement rules with a sampling method came after realizing that a particle's next position is
% a stochastic variable with a specific probability distribution (see Section~\ref{Section:Research} for more information).

\subsection{Discrete PSO}

Although particle swarm optimization algorithms are normally designed to search in continuous domains, there are a number of variants
that operate in discrete spaces. One such algorithm is the binary particle swarm optimization algorithm~\cite{Kennedy97b}. In this
algorithm, the $j$th component of a particle's velocity vector is used to compute the probability that the $j$th component of the
particle's position vector takes a value of 1. A particle's velocity is updated using Equation~\ref{equation:velocityUpdate};
however, the $j$th component of a particle's position is updated using the following rule:
\begin{equation}
x^{t+1}_{ij} =
\begin{cases}
1 & \text{if $r < sig(v^{t+1}_{ij})$,}\\
0 & \text{otherwise,}
\end{cases}
\end{equation}
where $r$ is a uniformly distributed random number in the range $[0,1)$ and
\begin{equation}
sig(x) = \frac{1}{1+e^{-x}}\,.
\end{equation}
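
A short Python sketch of this position update (the function name is illustrative, and a fresh random number $r$ is drawn
for each component): the sigmoid of each velocity component gives the probability of the corresponding bit being set to 1.
\begin{verbatim}
import numpy as np

def binary_position_update(v_next, rng):
    """v_next: updated velocity vector v_i^{t+1}; returns x_i^{t+1}."""
    prob_one = 1.0 / (1.0 + np.exp(-v_next))      # sig(v)
    r = rng.uniform(0.0, 1.0, size=v_next.shape)  # one r per component
    return (r < prob_one).astype(int)             # bits in {0,1}
\end{verbatim}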

Other approaches to tackling discrete problems include rounding off the components of the particles' position vectors~\cite{Laskari02},
transforming the continuous domain into discrete sets of intervals~\cite{Fukuyama99}, redefining the mathematical operators used in the
velocity- and position-update rules to suit a chosen problem representation~\cite{Clerc04}, and stochastically instantiating solution
values of the discrete problem based on probabilities defined by the traditional update rules~\cite{Pugh06}.

\subsection{Multiobjective PSO}

The particle swarm optimization algorithm has also been adapted to tackle {\bf multiobjective optimization problems}~\cite{Reyes06}.
The main differences between traditional single-objective particle swarm optimization algorithms and their multiobjective counterparts
are:

\begin{itemize}
\item Leader selection. In multiobjective optimization problems there is no single best solution that dominates all the others.
The challenge, therefore, is to select from among equally good solutions one that can guide a particle toward the true Pareto
front of the problem.
% Some of the approaches used to tackle this issue include selection based on Pareto dominance with quality measures~\cite{Ray02}, clustering-based swarm division~\cite{Toscano04}, and~\cite{}.

\item Output set generation. The output of a multiobjective optimizer is the set of nondominated solutions found during the
whole run of the algorithm. The common approach used in multiobjective particle swarm optimization algorithms is to maintain
a \emph{solution archive} in which the set of nondominated solutions is stored (a minimal sketch of this archive-maintenance
step is shown after this list).

\item Diversity mechanisms. To find diverse solutions that spread across the problem's Pareto front, multiobjective optimization
algorithms must maintain a high level of diversity so as to avoid convergence to a single solution. In multiobjective particle
swarm optimizers, an extra mutation operator is usually added for this purpose.
\end{itemize}
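
The following Python sketch shows the archive-maintenance step referred to above, assuming componentwise minimization of
the objective vectors. The function names are illustrative.
\begin{verbatim}
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def update_archive(archive, candidate):
    """Insert a candidate objective vector, keeping only nondominated points."""
    if any(dominates(a, candidate) for a in archive):
        return archive                    # candidate is dominated: discard
    kept = [a for a in archive if not dominates(candidate, a)]
    return kept + [candidate]             # drop points candidate dominates
\end{verbatim}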

\section{Applications of PSO}

Particle swarm optimization algorithms have been applied in many domains, including neural networks (the first application of
these algorithms), telecommunications, data mining, combinatorial optimization, power systems, signal processing, and many others.
There are hundreds of publications reporting applications of particle swarm optimization algorithms. For a review, see~\cite{Poli08a}.

\section{Current Research Issues} \label{Section:Research}

Current research in particle swarm optimization proceeds along several lines. Some of them are:

\begin{description}
\item[Theory.] Understanding how particle swarm optimizers work from a theoretical point of view has been the subject of active
research in recent years. The first efforts were directed toward understanding the effects of the different parameters on the
behavior of the standard particle swarm optimization algorithm~\cite{Ozcan99,Clerc02,Trelea03}. More recently, there
has been particular interest in studying the properties of the pair of stochastic equations that govern the movement of a particle~\cite{Blackwell07,Poli08b,Pena08}.

\item[New variants.] This line of research has been the most active since the proposal of the first particle swarm algorithm.
New particle position-update mechanisms are continuously proposed in an effort to design ever better performing particle swarms.
Some efforts are directed toward understanding on which classes of problems particular particle swarm algorithms can be top
performers~\cite{Poli05a,Poli05b,Langdon07}. Also of interest is the study of hybridizations between particle swarms and other
high-performing optimization algorithms~\cite{Lovbjerg01,Naka03}. Recently, some parallel implementations have been studied~\cite{Schutte04,Koh06},
and given the wider availability of parallel computers, we should expect more research along this line.

\item[Performance evaluation.] Due to the great number of new variants that are frequently proposed, performance comparisons have
always been of interest. Comparisons between the standard particle swarm and other optimization techniques, such as those
in~\cite{Eberhart98,Hassan05}, have triggered the interest of many researchers in particle swarm optimization. Comparisons between
different particle swarm optimization variants, as well as empirical parameter-setting studies, help improve our understanding of
the technique and often trigger further empirical and theoretical work~\cite{Mendes04,Schutte05,MdeO06}.

\item[Applications.] It is expected that in the future more applications of the particle swarm optimization algorithm will be considered.
Much of the work done on other aspects of the paradigm will hopefully allow us to solve practically relevant problems in many domains.

\end{description}

\section{External Links}

\section{See also}

{\bf Optimization}, {\bf Stochastic Optimization}, {\bf Swarm Intelligence}, {\bf Ant Colony Optimization}

\bibliographystyle{apalike}

\bibliography{PSO.bib}
